Adri Bhattacharya, Barun Gorain, Partha Sarathi Mandal
2023-05-10T11:35:08Z
http://arxiv.org/abs/2305.06067v1
# Pebble guided Treasure Hunt in Plane

###### Abstract

We study the problem of treasure hunt in a Euclidean plane by a mobile agent with the guidance of pebbles. The initial position of the agent and the position of the treasure are modeled as special points in the Euclidean plane. The treasure is situated at a distance at most \(D>0\) from the initial position of the agent. The agent has a perfect compass, but an adversary controls the speed of the agent. Hence, the agent cannot measure how much distance it has traveled in a given time. The agent can find the treasure only when it reaches the exact position of the treasure. The cost of the treasure hunt is defined as the total distance traveled by the agent before it finds the treasure. The agent has no prior knowledge of the position of the treasure or of the value of \(D\). An Oracle, which knows the treasure's position and the agent's initial location, places some pebbles to guide the agent towards the treasure. Once it has decided to move along some specified angular direction, the agent can change its direction only when it encounters a pebble or a special point. We ask the following central question in this paper: "For a given \(k\geq 0\), what is the cheapest treasure hunt algorithm if at most \(k\) pebbles are placed by the Oracle?" We show that for \(k=1\), there does not exist any treasure hunt algorithm that finds the treasure with finite cost. We show the existence of an algorithm with cost \(O(D)\) for \(k=2\). For \(k>8\), we design an algorithm that uses \(k\) pebbles to find the treasure with cost \(O(k^{2})+D(\sin\theta^{\prime}+\cos\theta^{\prime})\), where \(\theta^{\prime}=\frac{\pi}{2^{k-8}}\). The second result shows the existence of an algorithm with cost arbitrarily close to \(D\) for sufficiently large values of \(D\).

**Keywords** Treasure Hunt, Mobile agent, Pebbles, Euclidean plane, Deterministic algorithms.

## 1 Introduction

The treasure hunt problem is the task of finding an inert target by a mobile agent in an unknown environment. The unknown environment can be modeled as a network or a plane. Initially placed at a point of the unknown environment, the mobile agent has to find an inert target, called the treasure. The treasure may be, for example, a miner lost in a cave: the cave may be uninhabitable for human searchers or inundated with toxic water, and hence the lost person must be found as fast as possible. In computer science applications, a software agent may have to visit the computers connected by a local area network in order to find the computer affected by malware.

In this paper, we study the problem of treasure hunt in the Euclidean plane under a very weak scenario, which assumes very little knowledge and control power on the part of the mobile agent. Specifically, the agent has no prior knowledge about the position of the treasure or its distance from the treasure. Moreover, the agent has no control over the speed of its movement; an adversary controls it completely. In practice, for software agents in a network, the movement speed depends on various factors, such as congestion in the network. In the case of hardware mobile robots, the speed depends on many mechanical characteristics as well as environmental factors. The agent is equipped with a perfect compass, which helps it rotate and move in a prescribed direction. The agent is initially placed at a point \(P\) in the plane.
The treasure \(T\) is located at distance at most \(D>0\) (unknown to the agent) from \(P\). The agent finds the treasure only when it reaches the exact position of the treasure. The agent's initial position is considered a special point, and the agent can detect this point whenever it visits \(P\). In the absence of control over its movement speed, once the agent decides to move along a particular angle, it is very important for the agent to learn when to stop its movement. Otherwise, the adversary can make the speed arbitrarily high, and the agent ends up traversing an arbitrarily large distance. In order to give the agent some control over its movement, an Oracle, knowing the position of the treasure and the initial position of the agent, places some stationary pebbles in the plane. We assume a restriction on the pebble placement by the Oracle: any two pebbles must be separated by a constant distance, i.e., no two pebbles are placed arbitrarily close to each other (Footnote 1). For simplicity, we assume that any two pebbles are placed at least distance 1 apart. The agent can detect the existence of a pebble only when it reaches the exact position where the pebble is placed by the Oracle.

Footnote 1: This restriction is needed when the sensing capability of the agent is weak: two pebbles placed very close to each other may not be distinguishable by the agent.

This pebble placement helps the agent control its movement and rules out the possibility of traversing arbitrarily large distances. Starting from some position in the plane and moving along a specific direction, the agent stops or changes its direction once it encounters a pebble along its path. Thus, the movement algorithm of the agent instructs it to move along a specific angle \(\alpha\) until it encounters a special point (i.e., the initial position \(P\) or the position of the treasure \(T\)) or hits a pebble. Formally, for a given integer \(k\geq 0\), the Oracle is a function \(f_{k}:(E\times E)\to E^{k}\), where \(E\) is the set of all points of the Euclidean plane. The function takes two points as input, the first being the initial position of the agent and the second the position of the treasure, and outputs \(k\) points of the plane, representing the placement of a pebble at each of these \(k\) points. The central question studied in this paper is: "For a given \(k\geq 0\), what is the minimum cost of treasure hunt if at most \(k\) pebbles are placed in the plane?"

### Contribution

Our contributions in this paper are summarized below.

* For \(k=1\) pebble, we show that it is not possible to design a treasure hunt algorithm that finds the treasure with finite cost.
* For \(k=2\) pebbles, we propose an algorithm that finds the treasure with cost at most \(4.5D\), where \(D\) is the distance between the initial position of the agent and the treasure.
* For \(k>8\), we design an algorithm that finds the treasure using \(k\) pebbles with cost \(O(k^{2})+D\left(\sin\theta^{\prime}+\cos\theta^{\prime}\right)\), where \(\theta^{\prime}=\frac{\pi}{2^{k-8}}\). For sufficiently large values of \(D\) and \(k\in o(\sqrt{D})\), the cost of this algorithm is arbitrarily close to \(D\), the cost of the optimal solution when the position of the treasure is known to the agent.

### Related Work

The task of searching for an inert target by a mobile agent has been rigorously studied in the literature under various scenarios.
The underlying environment or topology may be either a discrete or a continuous domain, i.e., a graph or a plane. The search strategy can be either deterministic or randomized. The book by Alpern et al. [2] discusses randomized algorithms for searching for an inert target as well as for the rendezvous problem, where the target and the agent are both dynamic and cooperate to meet. The papers by Miller et al. [19] and Ta-Shma et al. [22] relate the rendezvous and treasure hunt problems in graphs. The problem of searching for an inert target on a line was initiated by Beck et al. [3], who gave an algorithm with optimal competitive ratio 9. Demaine et al. [7] modified the problem so that a cost is incurred for each change of direction of the searcher. In [17], the author surveys the searching problem in terms of search games where the target is either static or mobile, and the search domain is a graph, a bounded domain, or an unbounded set. Fricke et al. [15] generalized the search problem in the plane to multiple searchers. The paradigm of _algorithms with advice_ has been introduced mainly for networks; such advice enhances the efficiency of solutions to various problems, as discussed in [1, 4, 5, 6, 8, 10, 11, 12, 13, 14, 16, 18, 20]. In this paradigm, the authors of [13, 14] mainly studied the minimum amount of advice required to solve a problem efficiently. In [9, 11], online versions of problems with advice were studied. The authors of [5] considered the treasure hunt problem and gave an optimal-cost algorithm in a model where the agent gets a hint after each movement. Pelc et al. [21] gave insight into the amount of information required to solve treasure hunt in a geometric terrain in \(O(L)\) time, where \(L\) is the length of the shortest path from the initial point to the treasure. Bouchard et al. [6] studied how different kinds of initial knowledge impact the cost of treasure hunt in a tree network. The two papers closest to the present work are [18, 20]. Pelc et al. [20] provided a trade-off between cost and information for solving the treasure hunt problem in the plane, showing optimal and almost optimal results for different ranges of the vision radius. Gorain et al. [18] gave an optimal treasure hunt algorithm in graphs with pebbles, termed advice. In [4], the authors studied a trade-off between the number of pebbles and the cost of the treasure hunt algorithm in an undirected port-labeled graph.

**Organization:** The paper is organized as follows. Section 2 discusses the feasibility of the treasure hunt problem when a certain number of pebbles are placed. Section 3 is subdivided into three subsections: Subsection 3.1 describes the high-level idea of the algorithm, Subsection 3.2 describes the pebble placement strategy, and Subsection 3.3 presents the treasure hunt algorithm. Section 4 discusses correctness and complexity. Section 5 concludes with possible future work.

## 2 Feasibility of Treasure hunt

In this section, we discuss the feasibility of the treasure hunt problem when the oracle places one and two pebbles, respectively.

Theorem 2.1: _It is not possible to design a treasure hunt algorithm using at most one pebble that finds the treasure at a finite cost._

Proof: The agent is initially placed at \(P\), and the pebble is placed somewhere in the plane by the oracle.
Since the agent has no prior information about the location of the treasure, the treasure can be positioned anywhere in the plane by the adversary. The only possible initial instruction for the agent is to move along a certain angle from \(P\). Along its movement, the agent must encounter a pebble; otherwise, it continues to move in this direction for an infinite distance, as it has no sense of distance. After encountering the pebble, there are three possibilities: the agent may return to \(P\) and move at a certain angle from \(P\); it may return to \(P\) along the same path it traversed to reach the pebble; or it may move at a certain angle from the pebble itself. In each case, the adversary may place the treasure at a location away from every path that the agent may possibly traverse. Hence, it is not possible to find the treasure at a finite cost.

Next, we discuss the pebble placement strategy and the corresponding traversal of the agent towards the treasure when two pebbles are placed by the oracle.

**Pebble Placement:** Based on the location of the treasure, two pebbles are placed as follows. Let the coordinates of the treasure \(T\) be \((x_{T},y_{T})\). If either \(x_{T}\) or \(y_{T}\) is positive, place one pebble at \((z+1,z+1)\), where \(z=\max\{|x_{T}|,|y_{T}|\}\), and place the other pebble at \((x_{T},z+1)\). Otherwise, if both \(x_{T}\) and \(y_{T}\) are negative, place one pebble at \((1,1)\) and the other pebble at \((x_{T},1)\).

**Treasure Hunt by the agent:** The agent, initially at \(P\), moves at an angle \(\frac{\pi}{4}\) with the positive \(x\)-axis until it encounters the treasure or a pebble (i.e., \(p_{1}\)). If a pebble is encountered, then from this position the agent moves along an angle \(\pi-\frac{\pi}{4}\) until it encounters the treasure or reaches a pebble (i.e., \(p_{2}\)). If a pebble is encountered, then from \(p_{2}\) the agent further moves along an angle \(\frac{\pi}{2}\) until it reaches the treasure \(T\).

Theorem 2.2: _The agent finds the treasure with cost \(O(D)\) using the above algorithm._

Proof: According to the proposed algorithm, the cost of finding the treasure is the length of the path \(Pp_{1}+p_{1}p_{2}+p_{2}T\) (refer Fig. 1), where \(p_{1}\) and \(p_{2}\) are the positions of the first and the second pebble, respectively.

Figure 1: Movement of the agent when the treasure is located in the upper half of the plane

For \(i=1,\cdots,4\), let \(f_{i}(\theta)\) be the cost function for the case in which the treasure lies in the \(i\)-th quadrant. We analyze these cases as follows.

1. If the treasure is in the first quadrant, then let \(A\) and \(B\) be the feet of the perpendiculars drawn from \(T\) and \(p_{1}\) on the \(x\)-axis, respectively. Let \(\angle TPA=\theta\) (refer Fig. 1(a)). So \(PA=D\cos\theta\) and \(AT=D\sin\theta\). We have the following cases:
   * When \(x_{T}\geq y_{T}\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((x_{T}+1,x_{T}+1)\) and \((x_{T},x_{T}+1)\), respectively. So \(PB=D\cos\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\cos\theta+1)\). Moreover, in this case \(p_{1}p_{2}=1\) and \(p_{2}T=p_{2}A-TA=D\cos\theta+1-D\sin\theta\). So the total cost is \(\sqrt{2}(D\cos\theta+1)+1+(D\cos\theta+1-D\sin\theta)\), which is linear in \(D\).
   * When \(y_{T}>x_{T}\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((y_{T}+1,y_{T}+1)\) and \((x_{T},y_{T}+1)\), respectively. So \(Bp_{1}=D\sin\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\sin\theta+1)\). Moreover, in this case \(p_{1}p_{2}=(D\sin\theta+1)-D\cos\theta\) and \(p_{2}T=p_{2}A-TA=D\sin\theta+1-D\sin\theta=1\). So the total cost is \(\sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)-D\cos\theta+1\), which is again linear in \(D\).

   So, \(f_{1}(\theta)=\max\{\min_{\theta\in[0,2\pi]}\{\sqrt{2}(D\cos\theta+1)+1+(D\cos\theta+1-D\sin\theta),\ \sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)-D\cos\theta+1\}\}\).

2. If the treasure is in the second quadrant, let \(C\) be the mirror image of \(T\) in the first quadrant (refer Fig. 1(b)), and let \(A\) and \(B\) be the feet of the perpendiculars drawn from \(C\) and \(p_{1}\) on the \(x\)-axis, respectively. Let \(\angle TPD=\theta\), and hence \(\angle CPA=\theta\). So \(PA=D\cos\theta\) and \(AC=D\sin\theta\). We have the following cases:
   * When \(|x_{T}|\geq y_{T}\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((-x_{T}+1,-x_{T}+1)\) and \((x_{T},-x_{T}+1)\), respectively. So \(PB=D\cos\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\cos\theta+1)\). Moreover, in this case \(p_{1}p_{2}=D\cos\theta+1+D\cos\theta\) and \(p_{2}T=p_{2}A-TA=D\cos\theta+1-D\sin\theta\). So the total cost is \(\sqrt{2}(D\cos\theta+1)+(D\cos\theta+1-D\sin\theta)+(2D\cos\theta+1)\), which is linear in \(D\).
   * When \(y_{T}>|x_{T}|\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((y_{T}+1,y_{T}+1)\) and \((x_{T},y_{T}+1)\), respectively. So \(Bp_{1}=D\sin\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\sin\theta+1)\). We have \(p_{1}p_{2}=(D\sin\theta+1)+D\cos\theta\) and \(p_{2}T=p_{2}A-TA=D\sin\theta+1-D\sin\theta=1\). So the total cost is \(\sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)+D\cos\theta+1\), which is again linear in \(D\).

   So, \(f_{2}(\theta)=\max\{\min_{\theta\in[0,2\pi]}\{\sqrt{2}(D\cos\theta+1)+(D\cos\theta+1-D\sin\theta)+(2D\cos\theta+1),\ \sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)+D\cos\theta+1\}\}\).

3. If the treasure is in the third quadrant, let \(A\) and \(B\) be the feet of the perpendiculars drawn from \(p_{2}\) and \(p_{1}\) on the \(x\)-axis. Let \(\angle TPA=\theta\) (refer Fig. 2(a)); the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((1,1)\) and \((x_{T},1)\), respectively. So \(Pp_{1}=\sqrt{2}\), \(p_{1}p_{2}=1+D\cos\theta\), and \(p_{2}T=p_{2}A+AT=1+D\sin\theta\). So the total cost is \(\sqrt{2}+1+D\cos\theta+D\sin\theta\), which is again linear in \(D\). Hence, \(f_{3}(\theta)=\min_{\theta\in[0,2\pi]}\{\sqrt{2}+1+D\cos\theta+D\sin\theta\}\).

4. If the treasure is in the fourth quadrant, then let \(A\) and \(B\) be the feet of the perpendiculars drawn from \(T\) and \(p_{1}\) on the \(x\)-axis, respectively. Let \(\angle TPA=\theta\) (refer Fig. 2(b)). So \(PA=D\cos\theta\) and \(AT=D\sin\theta\). We have the following cases:
   * When \(x_{T}\geq|y_{T}|\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((x_{T}+1,x_{T}+1)\) and \((x_{T},x_{T}+1)\), respectively. So \(PB=D\cos\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\cos\theta+1)\). In this case, \(p_{1}p_{2}=1\) and \(p_{2}T=p_{2}A+TA=D\cos\theta+1+D\sin\theta\). So the total cost is \(\sqrt{2}(D\cos\theta+1)+1+(D\cos\theta+1+D\sin\theta)\), which is linear in \(D\).
   * When \(|y_{T}|>x_{T}\), the pebbles \(p_{1}\) and \(p_{2}\) are placed at \((-y_{T}+1,-y_{T}+1)\) and \((x_{T},-y_{T}+1)\), respectively. So \(Bp_{1}=D\sin\theta+1\) and \(PB=Bp_{1}\) (since \(\Delta p_{1}PB\) is an isosceles triangle), which implies \(Pp_{1}=\sqrt{2}(D\sin\theta+1)\). Hence, \(p_{1}p_{2}=(D\sin\theta+1)-D\cos\theta\) and \(p_{2}T=p_{2}A+TA=D\sin\theta+1+D\sin\theta=2D\sin\theta+1\). So the total cost is \(\sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)-D\cos\theta+2D\sin\theta+1\), which is again linear in \(D\).

   So, \(f_{4}(\theta)=\max\{\min_{\theta\in[0,2\pi]}\{\sqrt{2}(D\cos\theta+1)+1+(D\cos\theta+1+D\sin\theta),\ \sqrt{2}(D\sin\theta+1)+(D\sin\theta+1)-D\cos\theta+2D\sin\theta+1\}\}\).

Figure 2: Movement of the agent when the treasure is located in the lower half of the plane

Further, the cumulative cost is \(f_{5}(\theta)=\max_{i\in\{1,\cdots,4\}}\{f_{i}(\theta)\}\), which is approximately \(4.5D+(\sqrt{2}+2)\). Hence, from all the above cases, we conclude that the cost of the algorithm is linear in \(D\).
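As a quick, informal check of this bound (not part of the paper; the angular movements themselves are not simulated, only the path through the pebble positions), the following Python sketch computes the two pebble positions for a given treasure location and the length of the path \(P\to p_{1}\to p_{2}\to T\), and compares it against the estimate \(4.5D+(\sqrt{2}+2)\) obtained above.

```python
import math
import random

def place_pebbles(xT, yT):
    """Two-pebble placement from Section 2: returns (p1, p2)."""
    if xT > 0 or yT > 0:
        z = max(abs(xT), abs(yT))
        return (z + 1, z + 1), (xT, z + 1)
    # both coordinates non-positive
    return (1, 1), (xT, 1)

def tour_cost(xT, yT):
    """Length of the path P -> p1 -> p2 -> T traversed by the agent."""
    P, T = (0.0, 0.0), (xT, yT)
    p1, p2 = place_pebbles(xT, yT)
    return math.dist(P, p1) + math.dist(p1, p2) + math.dist(p2, T)

random.seed(1)
for _ in range(10000):
    xT, yT = random.uniform(-50, 50), random.uniform(-50, 50)
    D = math.hypot(xT, yT)
    # the cost stays within the ~4.5*D + (sqrt(2)+2) estimate
    assert tour_cost(xT, yT) <= 4.5 * D + math.sqrt(2) + 2 + 1e-9
```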
## 3 Improved solution for treasure hunt

In this section, we propose a faster algorithm, which requires at least 9 pebbles to perform the treasure hunt.

### High level idea

Before giving the details of the pebble placement algorithm, we describe its high-level idea. Intuitively, depending on the number of available pebbles, the Oracle divides the plane into multiple sectors, as described in Section 3.2. It then identifies the sector number \(m\) in which the treasure is located and 'encodes' this number by placing the pebbles. The agent, looking at the pebble placements, 'decodes' this number and moves within the corresponding sector to find the treasure. There are certain challenges that need to be overcome to implement this idea.

**Sector Detection:** The first difficulty is how different placements of pebbles enable the agent to differentiate between the bit 0 and the bit 1. Since the agent has no sense of time or distance, two different pebble placements may look identical to the agent. On the other hand, since the agent has no prior information about the encoded integer, its movement must be planned so that the same movement strategy detects the bit 0 in some instances and the bit 1 in others. The agent's capability of detecting the initial point \(P\) as a special point is used to overcome this difficulty. First, a pebble \(p_{1}\) is placed at the point (1,0), and two additional fixed pebbles \(p_{2}\) at (1,1) and \(p_{3}\) at (2,1) are placed. The rest of the pebbles are placed depending on whether a particular bit of the encoding is 0 or 1. Consider first the specific scenario of encoding a single bit, 0 or 1. The idea is to place a particular pebble \(p\) at one of two possible positions on the \(x\)-axis such that the agent, starting from \(P\), reaches \(p\), and then, moving at a certain fixed angle \(\alpha\) from \(p\), reaches \(p_{2}\) for one position and \(p_{3}\) for the other. The agent cannot distinguish between \(p_{2}\) and \(p_{3}\), but moving at a particular angle \(\beta\) from \(p_{2}\) it reaches \(P\), whereas from \(p_{3}\) it reaches \(p_{1}\). These two different scenarios are interpreted as 1 and 0, respectively. To implement this idea, the pebble \(p\) is placed at the point (3,0) to encode 1 and at (4,0) to encode 0. The advantage of this specific placement is that, when \(p\) is placed at (3,0), the agent moving from \(P\) to \(p\) and then at an angle \(\arctan\left(\frac{-1}{2}\right)\) reaches \(p_{2}\), and then, moving at an angle \(\arctan\left(3\right)\), it reaches \(P\). On the other hand, when \(p\) is placed at (4,0), the same movement brings the agent to \(p_{1}\). Hence, it interprets these two different observations as the two different bits 1 and 0, respectively (see Fig. 3).
Figure 3: Placement of pebbles by the oracle when the first bit is 1

We extend the above idea to encode an arbitrary binary string \(\mu\) as follows. In addition to the pebbles \(p_{1}\), \(p_{2}\), and \(p_{3}\), one additional pebble is placed for each bit of \(\mu\). To be specific, for \(1\leq i\leq|\mu|\), a pebble \(p_{b_{i}}\) is placed at \((2i+1,0)\) if the \(i\)-th bit is 1, and at \((2i+2,0)\) otherwise. Starting from \(P\), moving to \(p_{b_{i}}\), then moving at an angle \(\arctan\left(\frac{-1}{2i}\right)\) until a pebble is reached, and then moving at an angle \(\arctan\left(\frac{2i+1}{2i-1}\right)\), the agent reaches either \(P\) or \(p_{1}\), depending on whether the \(i\)-th bit is 1 or 0, respectively.

A difficulty that remains to be overcome is how the agent detects the end of the encoding. This is important because, if the termination is not indicated, the agent may keep moving to find further pebbles \(p_{b_{j}}\), \(j>|\mu|\), and continue its movement for an infinite distance. We use two additional pebbles, named \(p_{t_{1}}\) and \(p_{t_{2}}\), for termination detection. The positions of these two pebbles are as follows. If the 1st bit of the binary string \(\mu\) is 1, i.e., \(p_{b_{1}}\) is placed at \((3,0)\), then the pebbles \(p_{t_{1}}\) and \(p_{t_{2}}\) are placed at \((4,1)\) and \((2|\mu|+6,0)\), respectively. Otherwise, if the 1st bit is 0, these two pebbles are placed at \((5,1)\) and \((2|\mu|+7,0)\), respectively. After visiting the pebble \(p_{b_{|\mu|}}\) for the last bit of \(\mu\), the agent returns to \(P\) and moves as usual to find a pebble, expecting to learn more bits of the binary string. From \(P\), once it reaches \(p_{t_{2}}\), it moves at an angle \(\arctan\left(\frac{-1}{2(|\mu|+1)}\right)\) until a pebble is reached. Note that the two pebbles \(p_{t_{1}}\) and \(p_{t_{2}}\) are placed in such a way that the angle \(\angle Pp_{t_{2}}p_{t_{1}}=\arctan\left(\frac{-1}{2(|\mu|+1)}\right)\). Hence, moving from \(p_{t_{2}}\) at angle \(\arctan\left(\frac{-1}{2(|\mu|+1)}\right)\), the agent reaches \(p_{t_{1}}\), and from \(p_{t_{1}}\), moving at angle \(\arctan\left(\frac{2(|\mu|+1)+1}{2(|\mu|+1)-1}\right)\), it reaches \(p_{b_{1}}\). Since this specific movement brings the agent to a pebble, it initially assumes that it has learned the bit 0. But moving west from \(p_{b_{1}}\), it reaches another pebble (namely, the pebble \(p_{1}\)) instead of the origin. This special occurrence indicates the termination of the encoding to the agent. In this way, the agent learns the binary string \(\mu\), and hence the integer \(\Delta\) whose binary representation is \(\mu\) (an illustrative sketch of this pebble layout is given below).

**Finding the treasure inside the sector:** One more pebble \(p_{T}\) is placed at the foot of the perpendicular drawn from \(T\) on \(L_{j+1}\) (refer Fig. 5). After learning the encoding of \(\mu_{j}\), the agent decodes the integer \(j\) and correctly identifies the two lines \(L_{j}\) and \(L_{j+1}\) bounding the sector; the pebble inside the sector then helps the agent locate the exact position of the treasure. A difficulty arises while placing the pebble \(p_{T}\) inside the sector, as some pebbles already placed for the encoding of the sector number may be very close (at distance \(<1\)) to the intended position of \(p_{T}\).
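For concreteness, the following Python sketch (illustrative only; the function name and dictionary keys are not from the paper) lists the pebble positions described above for a given binary string \(\mu\), using the positive-\(x\)-axis layout of this subsection, and checks that every pair of pebbles is at least distance 1 apart, as required by the placement restriction.

```python
import math
from itertools import combinations

def encoding_pebbles(mu: str):
    """Pebble positions encoding the binary string mu (Section 3.1 layout).

    Returns a dict name -> (x, y) with the fixed pebbles p1, p2, p3,
    one data pebble per bit of mu, and the termination pebbles pt1, pt2.
    """
    pos = {"p1": (1, 0), "p2": (1, 1), "p3": (2, 1)}
    for i, bit in enumerate(mu, start=1):
        # i-th bit: the odd slot (2i+1) encodes 1, the even slot (2i+2) encodes 0
        pos[f"pb{i}"] = (2 * i + 1 if bit == "1" else 2 * i + 2, 0)
    if mu[0] == "1":
        pos["pt1"], pos["pt2"] = (4, 1), (2 * len(mu) + 6, 0)
    else:
        pos["pt1"], pos["pt2"] = (5, 1), (2 * len(mu) + 7, 0)
    return pos

# Every pair of pebbles must be at least distance 1 apart.
for mu in ["1", "0", "101", "0011010"]:
    pts = list(encoding_pebbles(mu).values())
    assert min(math.dist(a, b) for a, b in combinations(pts, 2)) >= 1
```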
To resolve this, the pebbles for the encoding are placed on the positive \(x\)-axis if the treasure is in the left half plane, and on the negative \(x\)-axis if the treasure is in the right half plane. To instruct the agent which way it must move to find the pebbles for learning the encoding, one additional pebble \(p_{0}\) is placed at \(P\). Some specific cases need to be handled separately: if the treasure is at a position \((x,y)\) with \(-1\leq x\leq 1\) and \(y\geq-1\), this may again create a problem in placing \(p_{T}\) at its prescribed position inside the sector. The details of these cases are discussed in the pebble placement strategy in Section 3.2.

### Pebble placement

The agent is initially placed at \(P\), and the treasure is placed at \(T\). The Oracle, knowing the initial position \(P\) of the agent and the position \(T=(x_{T},y_{T})\) of the treasure, places \(k\) pebbles in the Euclidean plane. Let \(B\) be the square region bounded by the lines \(x=1\), \(x=-1\), \(y=1\), and \(y=-1\). Based on the position of the treasure, the pebble placement is described using two different cases.

**Case 1:** If \(x_{T}>0\) and \(T\not\in B\), then the pebbles are placed as follows.

1. Place a pebble \(p_{0}\) at \(P\).
2. Draw \(2^{k-8}\) half-lines \(L_{0},\cdots,L_{2^{k-8}-1}\) starting at the initial position \(P\) of the agent, such that \(L_{0}\) goes North and the angle between consecutive half-lines is \(\pi/2^{k-8}\). For \(i=0,\cdots,2^{k-8}-1\), the sector \(S_{i}\) is defined as the set of points of the plane between \(L_{i}\) and \(L_{(i+1)\bmod 2^{k-8}}\), including the points on \(L_{i}\) and excluding the points on \(L_{(i+1)\bmod 2^{k-8}}\). If \(T\in S_{j}\) for some \(j\in\{0,1,\cdots,2^{k-8}-1\}\), then place pebbles as follows.
   * Place the pebbles \(p_{1}\) at (-1,0), \(p_{2}\) at (-1,-1), and \(p_{3}\) at (-2,-1).
   * Let \(\mu_{j}\) be the binary representation of the integer \(j\) with \(\lfloor\log k\rfloor-\lfloor\log j\rfloor\) many leading zeros. If \(0\leq x_{T}\leq 1\) and \(y_{T}>1\), then \(\mu_{j}=0\cdot\mu_{j}\), else \(\mu_{j}=1\cdot\mu_{j}\). For \(1\leq\ell\leq|\mu_{j}|\), if the \(\ell\)-th bit of \(\mu_{j}\) is 1, then place a pebble at \((-2\ell-1,0)\), else place a pebble at \((-2\ell-2,0)\).
   * If the 1st bit of \(\mu_{j}\) is 1, then place a pebble \(p_{t_{1}}\) at (-4,-1), else place \(p_{t_{1}}\) at (-5,-1).
   * If the 1st bit of \(\mu_{j}\) is 1, then place a pebble \(p_{t_{2}}\) at \((-2|\mu_{j}|-6,0)\), else place \(p_{t_{2}}\) at \((-2|\mu_{j}|-7,0)\).
3. If \(x_{T}<0\) and \(T\not\in B\), then for each pebble placed at \((m,n)\), where \(m\neq 0\) or \(n\neq 0\) in the case above, place the corresponding pebble at \((-m,-n)\); also, place no pebble at \(P\).
4. If the first bit of \(\mu_{j}\) is 0, then let \(F\) be the foot of the perpendicular drawn from \(T\) on \(L_{j}\); else let \(F\) be the foot of the perpendicular drawn from \(T\) on \(L_{j+1}\). Place a pebble \(p_{T}\) at \(F\) (Lemma 1 ensures that the pebbles are placed at distance at least 1 in this scenario).

Figure 4: Pebble placement when the treasure is inside \(B\)

**Case 2:** If \(x_{T}>0\) and \(T\in B\), then the pebbles are placed as follows.

* Place a pebble \(p_{1}\) at (1,0) (refer Fig. 4).
* Let \(m_{1}=\tan\left(\pi-\arctan\left(\frac{-1}{2}\right)-\arctan(3)\right)\) and \(m_{2}=\tan\left(\pi-\arctan\left(\frac{-1}{2}\right)\right)\).
Draw a line \(Q_{1}\) through \(T\) with slope \(m_{1}\), and draw a line \(Q_{2}\) through the point \((2,0)\) with slope \(m_{2}\). Let \(s=(q_{1},q_{2})\) be the point of intersection of these two lines. Let \(s^{\prime}\) be the point on the line \(Q_{1}\) whose \(y\)-coordinate is \(q_{2}+1\). Draw the line \(Q^{\prime}_{2}\) parallel to \(Q_{2}\) passing through \(s^{\prime}\). Let \(h\) be the point of intersection of the line \(Q^{\prime}_{2}\) with the \(x\)-axis. Two additional pebbles \(p_{2}\) and \(p_{3}\) are placed as follows.
  * If \(q_{2}<1\), then place \(p_{2}\) at \(h\) and \(p_{3}\) at \(s^{\prime}\).
  * Otherwise, place \(p_{2}\) at \((2,0)\) and \(p_{3}\) at \(s\).

If \(x_{T}<0\) and \(T\in B\), then the placement of the pebbles is done as follows.

* Place the pebbles \(p_{0}\) at \(P\) and \(p_{1}\) at (-1,0).
* Let \(m_{1}=-\tan\left(\pi-\arctan\left(\frac{-1}{2}\right)-\arctan(3)\right)\) and \(m_{2}=-\tan\left(\pi-\arctan\left(\frac{-1}{2}\right)\right)\). Draw a line \(Q_{1}\) through \(T\) with slope \(m_{1}\), and draw a line \(Q_{2}\) through the point \((-2,0)\) with slope \(m_{2}\). Let \(r=(r_{1},r_{2})\) be the point of intersection of these two lines. Let \(r^{\prime}\) be the point on the line \(Q_{1}\) whose \(y\)-coordinate is \(r_{2}+1\). Draw the line \(Q^{\prime}_{2}\) parallel to \(Q_{2}\) passing through \(r^{\prime}\). Let \(n\) be the point of intersection of the line \(Q^{\prime}_{2}\) with the \(x\)-axis. Two additional pebbles \(p_{2}\) and \(p_{3}\) are placed as follows.
  * If \(r_{2}<1\), then place \(p_{2}\) at \(n\) and \(p_{3}\) at \(r^{\prime}\).
  * Otherwise, place \(p_{2}\) at \((-2,0)\) and \(p_{3}\) at \(r\).

### Treasure hunt

Starting from \(P\), the agent finds the treasure with the help of the pebbles placed at different points of the plane. At a high level, the agent performs three major tasks: (1) learn the direction of its initial movement, (2) learn the encoding of the sector number in which the treasure is located, and (3) move inside the designated sector and find the treasure. The agent learns the direction of its initial movement by observing whether a pebble is placed at \(P\). If a pebble is placed, then it learns that the direction of its initial movement is west and that the pebble placement encoding the sector number is done on the negative \(x\)-axis. Otherwise, it learns that the direction of its initial movement is east and that the pebble placement encoding the sector number is done on the positive \(x\)-axis. Then, for each \(j=1,2,\cdots\), it continues its movement along a specific path (depending on the value of \(j\)) and learns the \(j\)-th bit of the encoding, until it detects the termination of the encoding. To be specific, the \(j\)-th bit of the encoding is learned by the agent using the following movements, in order, from \(P\):

* Starting from \(P\), move along the \(x\)-axis until the \((j+1)\)-th pebble is reached.
* Move at an angle \(\arctan\left(\frac{-1}{2j}\right)\), and continue moving in this direction until a pebble is reached.
* Move at an angle \(\arctan\left(\frac{2j+1}{2j-1}\right)\) until \(P\) or a pebble is found.
* If \(P\) is found in the previous step, then the bit is 1.
* If a pebble is found, then move along the \(x\)-axis towards \(P\). If \(P\) is encountered, then the bit is 0.
* If a pebble is encountered instead of \(P\) in the previous step, then the agent learns that the encoding is complete.
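As a small sanity check on this layout (illustrative only, and not the agent's procedure: the agent cannot read coordinates and must use the geometric probes above), the data-pebble positions determine the encoded string unambiguously, since the \(i\)-th data pebble occupies the odd slot \(2i+1\) exactly when the \(i\)-th bit is 1.

```python
def data_pebble_xs(mu: str):
    """x-coordinates of the data pebbles p_b_i encoding mu (positive-x-axis layout)."""
    return [2 * i + 1 if b == "1" else 2 * i + 2 for i, b in enumerate(mu, start=1)]

def bits_from_positions(xs):
    """Recover the bits from the occupied slots: odd slot -> 1, even slot -> 0.
    An omniscient check only; the agent learns the same bits via its probes."""
    return "".join("1" if x % 2 == 1 else "0" for x in sorted(xs))

for mu in ["1", "0", "1101", "0010111"]:
    assert bits_from_positions(data_pebble_xs(mu)) == mu
```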
```
1   Draw \(2^{k-8}\) half lines \(L_{0},\cdots,L_{2^{k-8}-1}\) starting from \(P\), where the angle between two consecutive half-lines is \(\frac{\pi}{2^{k-8}}\). Let sector \(S_{i}\) be the sector bounded by the half lines \(L_{i}\) and \(L_{i+1}\), and let \(T\in S_{\Delta}\), \(\Delta\in\{0,1,\cdots,2^{k-8}-1\}\).
2   if \(x_{T}\geq 0\) then
3       if \(0\leq x_{T}\leq 1\) and \(-1\leq y_{T}\leq 1\) then
4           SquarePlacement(2)
5       else
6           Place a pebble \(p_{0}\) at \(P\).
7           if \(x_{T}\leq 1\) and \(y_{T}>1\) then
8               NonSquarePlacement(1, 0)
9               Place a pebble \(p_{T}\) at the foot of the perpendicular drawn from \(T\) on \(L_{\Delta}\).
10          else
11              NonSquarePlacement(1, 1)
12              Place a pebble \(p_{T}\) at the foot of the perpendicular drawn from \(T\) on \(L_{\Delta+1}\).
13  else
14      if \(-1\leq x_{T}\leq 0\) and \(-1\leq y_{T}\leq 1\) then
15          Place a pebble \(p_{0}\) at \(P\).
16          SquarePlacement(1)
17      else
18          if \(-1\leq x_{T}\leq 0\) and \(y_{T}>1\) then
19              NonSquarePlacement(2, 0)
20              Place a pebble \(p_{T}\) at the foot of the perpendicular drawn from \(T\) on \(L_{\Delta}\).
21          else
22              NonSquarePlacement(2, 1)
23              Place a pebble \(p_{T}\) at the foot of the perpendicular drawn from \(T\) on \(L_{\Delta+1}\).
```
**Algorithm 1** PebblePlacement

```
1   Place a pebble \(p_{1}\) at \(((-1)^{count},0)\).
2   if \(q_{2}<1\) then
3       Place the pebbles \(p_{2}\) at \(h\) and \(p_{3}\) at \(s^{\prime}\), respectively.
4   else
5       Place the pebbles \(p_{2}\) at \(((-1)^{count}\cdot 2,0)\) and \(p_{3}\) at \(s\), respectively.
```
**Algorithm 2** SquarePlacement(\(count\))

```
1   Initially \(\ell=2\).
2   Place the pebbles \(p_{1}\) at \(((-1)^{count},0)\), \(p_{2}\) at \(((-1)^{count},(-1)^{count})\) and \(p_{3}\) at \(((-1)^{count}\cdot 2,(-1)^{count})\), respectively.
3   Let \(\mu_{j}\) be the binary representation of the integer \(j\) with \(\lfloor\log k\rfloor-\lfloor\log j\rfloor\) many leading zeroes.
4   \(\mu_{j}=bit\cdot\mu_{j}\)            ▷ concatenation of the value \(bit\) with \(\mu_{j}\)
5   if \(bit=1\) then
6       Place a pebble at \(((-1)^{count}\cdot 3,0)\).
7   else
8       Place a pebble at \(((-1)^{count}\cdot 4,0)\).
9   while \(\ell\leq k+1\) do
10      if the \(\ell\)-th bit of \(\mu_{j}\) is 1 then
11          Place a pebble at \(((-1)^{count}\cdot(2\ell+1),0)\).
12      else
13          Place a pebble at \(((-1)^{count}\cdot(2\ell+2),0)\).
14      \(\ell=\ell+1\)
15  if the 1st bit of \(\mu_{j}\) is 1 then
16      Place the pebbles \(p_{t_{1}}\) at \(((-1)^{count}\cdot 4,(-1)^{count})\) and \(p_{t_{2}}\) at \(((-1)^{count}\cdot(2|\mu_{j}|+6),0)\), respectively.
17  else
18      Place the pebbles \(p_{t_{1}}\) at \(((-1)^{count}\cdot 5,(-1)^{count})\) and \(p_{t_{2}}\) at \(((-1)^{count}\cdot(2|\mu_{j}|+7),0)\), respectively.
```
**Algorithm 3** NonSquarePlacement(\(count\), \(bit\))
```
1   If a pebble is found at \(P\), then set \(angle=\pi\); otherwise set \(angle=0\).
2   \(t=2\), \(\mu=\epsilon\)
3   Start moving at an angle \(angle\) with the positive \(x\)-axis.
4   if the treasure is found then
5       Terminate
6   else
7       Continue moving in the same direction until the \(t\)-th pebble or the treasure is found.
8       if the treasure is found then
9           Terminate
10      else
11          \(\ell\) = FindBit(\(t\), \(angle\))
12          if \(\ell\in\{0,1\}\) then
13              \(\mu=\mu\cdot\ell\)
14              \(t=t+1\)
15              Go to Step 3
16          else
17              FindTreasure(\(\mu\), \(angle\))
```
**Algorithm 4** AgentMovement

```
1   Move at an angle \(\pi-\theta_{t}\), where \(\theta_{t}=\arctan(\frac{-1}{2t})\), until the treasure or a pebble is found.
2   if the treasure is found then
3       Terminate
4   else
5       Move at an angle \(\pi-\beta_{t}\), where \(\beta_{t}=\arctan(\frac{2t+1}{2t-1})\).
6       if the treasure is found then
7           Terminate
8       else if \(P\) is found then
9           return 1
10      else if a pebble is found at a point other than \(P\) then
11          if \(angle=0\) then
12              Move at an angle \(\pi+\frac{\pi}{4}\).
13          else
14              Move at an angle \(\pi-\frac{\pi}{4}\).
15          if \(P\) is found then
16              return 0
17          else
18              Continue the movement until \(P\) is reached.
19              return 2
```
**Algorithm 5** FindBit(\(t\), \(angle\))

```
1   Let \(\Delta\) be the integer whose binary representation is \(\mu\).
2   if \(angle=\pi\) then
3       if \(\mu_{1}=0\) then
4           \(val=\Delta\)
5           SectorTravel(\(val\), 1, 2)
6       else
7           \(val=\Delta+1\)
8           SectorTravel(\(val\), 1, 1)
9   else
10      if \(\mu_{1}=0\) then
11          \(val=\Delta\)
12          SectorTravel(\(val\), 2, 1)
13      else
14          \(val=\Delta+1\)
15          SectorTravel(\(val\), 2, 2)
```
**Algorithm 6** FindTreasure(\(\mu\), \(angle\))

Let \(\mu\) be the binary string learned by the agent in the above process, and let \(\Delta\) be the integer whose binary representation is \(\mu\). If the first bit of \(\mu\) is \(1\), then the agent starts moving along \(L_{\Delta+1}\) from \(P\) until it hits a pebble or reaches the treasure. Once the pebble is reached, the agent changes its direction by the angle \(\frac{\pi}{2}\) if its initial direction of movement was west; otherwise, the agent changes its direction by the angle \(\frac{3\pi}{2}\). It continues its movement in the current direction until the treasure is reached.

The following lemma ensures that the pebbles are placed at distance at least \(1\) from each other in step 4 of Case 1 of the above pebble placement strategy.

Lemma 1: _If \(T=(x_{T},y_{T})\in B^{\prime}\), where \(B^{\prime}=\{(x,y)\mid 0\leq x\leq 1\ \text{and}\ y>1\}\), then the foot of the perpendicular \(F\) on \(L_{j}\) lies outside the square \(B\)._

Proof: Let the position of \(F\) be \((h,k)\). Let \(m_{1}\) be the slope of the line \(PF\) and \(m_{2}\) be the slope of the line \(FT\) (see Fig. 7). Since \(PF\perp FT\), we have \(m_{1}\cdot m_{2}=-1\). The slope \(m_{1}=\frac{k}{h}\) is positive, as \(k>0\) and \(h>0\); hence \(m_{2}=\frac{y_{T}-k}{x_{T}-h}\) must be negative to satisfy the above condition. Now, \(m_{2}\) can be negative only if one of the following cases is true.
* Case 1: \(y_{T}>k\) and \(x_{T}<h\);
* Case 2: \(y_{T}<k\) and \(x_{T}>h\).

If Case 1 is true, then the point \(F\) must be on the right side of the line \(x=x_{T}\), which is not possible. Therefore, Case 2 must be true, i.e., \(1<y_{T}<k\) and \(x_{T}>h\). This implies that \(F\) is outside \(B\).

The execution of the algorithm is explained with the help of an example.

**Example 1:** Given 11 pebbles, the oracle divides the plane into \(2^{11-8}=8\) sectors. Suppose the treasure is placed in sector 5 (as depicted in Fig. 5), and suppose that the position of the treasure is outside the square \(B\). The oracle places the pebbles following the algorithm _PebblePlacement_ 1, so that the agent, by following the algorithm _AgentMovement_ 4, learns the direction of its initial movement and then learns the encoding of the sector number (i.e., 101 in this case) in which the treasure is located, in the following manner. An iteration of algorithm 4 is defined as a cycle consisting of the agent's movement starting from \(P\) and returning to \(P\).

In the first iteration, the agent, initially at \(P\), does not find a pebble at \(P\). Algorithm 4 instructs the agent to move towards the east until it encounters the second pebble, \(p_{b_{1}}\), along the positive \(x\)-axis. From \(p_{b_{1}}\) the agent moves at an angle \(\pi-\arctan\left(\frac{-1}{2}\right)\) until it encounters the pebble \(p_{2}\). From \(p_{2}\) it further moves at an angle \(\pi-\arctan\left(\frac{2+1}{2-1}\right)\) until it reaches the origin \(P\). So, after completion of the first iteration (i.e., the path traversed is \(P\to p_{b_{1}}\to p_{2}\to P\)), the agent learns that the first bit is 1.

In the second iteration, the agent again moves towards the east until it reaches the third pebble, \(p_{b_{2}}\). From \(p_{b_{2}}\), the agent moves at an angle \(\pi-\arctan\left(\frac{-1}{2\cdot 2}\right)\) until it encounters the pebble \(p_{2}\); from \(p_{2}\) it further moves at an angle \(\pi-\arctan\left(\frac{2\cdot 2+1}{2\cdot 2-1}\right)\) until it reaches the origin \(P\). So, after completion of the second iteration (i.e., the path traversed is \(P\to p_{b_{2}}\to p_{2}\to P\)), the agent learns that the second bit is again 1.

In the third iteration, after a similar movement towards the east, the agent reaches the fourth pebble, \(p_{b_{3}}\), along the positive \(x\)-axis. From \(p_{b_{3}}\), it moves at an angle \(\pi-\arctan\left(\frac{-1}{2\cdot 3}\right)\) until it reaches the pebble \(p_{3}\). From \(p_{3}\), it moves along an angle \(\pi-\arctan\left(\frac{2\cdot 3+1}{2\cdot 3-1}\right)\) until it reaches the pebble \(p_{1}\). From \(p_{1}\), the agent finally moves at an angle \(\pi+\frac{\pi}{4}\) until it reaches \(P\). So, after completion of the third iteration (i.e., the path traversed is \(P\to p_{b_{3}}\to p_{3}\to p_{1}\to P\)), the agent learns that the third bit is 0.

In the fourth iteration, with a similar movement the agent reaches the pebble \(p_{b_{4}}\); from this position the agent moves at an angle \(\pi-\arctan\left(\frac{-1}{2\cdot 4}\right)\) until it reaches \(p_{2}\), and from \(p_{2}\) it further moves at an angle \(\pi-\arctan\left(\frac{2\cdot 4+1}{2\cdot 4-1}\right)\) until it reaches \(P\). So, in the fourth iteration (i.e., the path traversed is \(P\to p_{b_{4}}\to p_{2}\to P\)), the agent learns that the fourth bit is 1.

In the fifth iteration, the agent reaches the next pebble along the positive \(x\)-axis, i.e., \(p_{t_{2}}\) (refer Fig. 5).
From \(p_{t_{2}}\) it moves at an angle \(\pi-\arctan\left(\frac{-1}{2\cdot 5}\right)\) until it reaches the pebble \(p_{t_{1}}\); from this position it further moves at an angle \(\pi-\arctan\left(\frac{2\cdot 5+1}{2\cdot 5-1}\right)\) until it reaches the pebble \(p_{b_{1}}\). From \(p_{b_{1}}\), the agent further moves at an angle \(\pi+\frac{\pi}{4}\) towards \(P\). Since the agent encounters the pebble \(p_{1}\) during this last movement from \(p_{b_{1}}\), before reaching \(P\), it learns that the termination of the encoding has been reached. Hence, the binary string obtained by the agent is \(\mu=1101\).

Figure 5: Demonstration of Example 1

Then, following the algorithm _AgentMovement_ 4, the agent decodes that the treasure is located somewhere in sector 5. Further, since \(\mu_{1}=1\), the agent follows the algorithms _FindTreasure_ 6 and _SectorTravel_ 7 to finally reach the treasure: it traverses the half-line \(L_{6}\), encounters the pebble \(p_{T}\), and from there, moving at an angle \(\pi+\frac{\pi}{2}\), it ultimately reaches the treasure \(T\).

## 4 Complexity

In this section, we prove the correctness of the proposed algorithms and give an upper bound on the cost of finding the treasure. The following two lemmas show the correctness of the algorithm when the treasure is inside \(B\) and give an upper bound on the cost of the treasure hunt in this case.

Lemma 2: _With 3 pebbles and the treasure located inside \(B\), the agent successfully finds the treasure._

Proof: When the treasure is inside the square \(B\), the oracle places a pebble \(p_{0}\) at \(P\) if the treasure is located to the left of the \(y\)-axis; otherwise, no pebble is placed at \(P\), as discussed in Case 2 of Section 3.2 (also refer to lines 3 and 14 of algorithm 1). So the agent starts its movement from \(P\) along an angle \(\pi\), i.e., along the negative \(x\)-axis, if it finds a pebble \(p_{0}\) at \(P\) (refer to lines 1 and 2 of algorithm 4); otherwise, the agent moves along an angle \(0\), i.e., along the positive \(x\)-axis, if no pebble is found at \(P\) (refer to line 4 of algorithm 4). We now have the following cases, depending on the presence of a pebble at \(P\).

* _Pebble not found at \(P\)_: In this case, the agent, while moving along the positive \(x\)-axis, either finds the treasure, and the algorithm terminates (refer to lines 7 and 8 of algorithm 4), or it finds the pebble \(p_{1}\) placed at \((1,0)\), which it ignores as instructed in the algorithm _FindBit_ 5, and continues to move until it reaches the treasure or encounters a pebble. If the treasure is not found, then the pebble \(p_{2}\) has been placed by the oracle at either \(h\) or \((2,0)\) (refer to lines 10 and 13 of algorithm 2). The agent, after encountering this second pebble, moves along an angle \(\pi-\theta_{1}\), where \(\theta_{1}=\arctan\left(\frac{-1}{2}\right)\), until the treasure or a pebble is found. If a pebble is found, then we have the following cases:
  * \(p_{2}\) _placed at (2,0)_: In this case, the agent finds the pebble \(p_{3}\) at \(s\) (refer to line 14 of algorithm 2), from which it further moves along an angle \(\pi-\beta_{1}\), where \(\beta_{1}=\arctan(3)\), and finds the treasure.
  * \(p_{2}\) _placed at \(h\)_: In this case, the agent finds the pebble \(p_{3}\) at \(s^{\prime}\) (refer to line 11 of algorithm 2), from which it further moves along an angle \(\pi-\beta_{1}\), where \(\beta_{1}=\arctan(3)\), and finds the treasure.
* _Pebble found at \(P\)_: In this case, the agent moves along the negative \(x\)-axis and performs a task similar to the one described above, since the pebbles are placed in the same manner, only mirrored to the other side (i.e., to the left of the \(y\)-axis), as discussed for \(x_{T}<0\) in Case 2 of Section 3.2.

Lemma 3: _When the treasure is located inside \(B\), the agent starting from \(P\) successfully finds the treasure at cost \(O(D)\)._

Proof: The treasure is located at \((x_{T},y_{T})\); let the coordinates of \(h\) be \((x^{\prime},0)\). The worst possible placement of the pebbles \(p_{2}\) and \(p_{3}\) by the oracle is at \(h\) and \(s^{\prime}\), respectively, where \(s^{\prime}=(q_{1}^{\prime},q_{2}+1)\) (refer Fig. 6 and lines 10 and 11 of algorithm 2). So the traversal of the agent to reach the treasure is along the path \(Ph\to hs^{\prime}\to s^{\prime}T\). The total cost of this traversal is \(x^{\prime}+\sqrt{(x^{\prime}-q_{1}^{\prime})^{2}+(q_{2}+1)^{2}}+\sqrt{(q_{1}^{\prime}-x_{T})^{2}+(q_{2}+1-y_{T})^{2}}\), where \(|Ph|=x^{\prime}\), \(|hs^{\prime}|=\sqrt{(x^{\prime}-q_{1}^{\prime})^{2}+(q_{2}+1)^{2}}\), and \(|s^{\prime}T|=\sqrt{(q_{1}^{\prime}-x_{T})^{2}+(q_{2}+1-y_{T})^{2}}\). Since \(|PT|=D\), we have \(D=|PT|<|Ph|+|hs^{\prime}|+|s^{\prime}T|\). Hence, in this case, the cost of reaching the treasure is \(O(D)\).

Figure 6: Traversal of the agent when the treasure is inside the square

Lemma 4, Lemma 5, Lemma 6, and Lemma 7 show the correctness of the algorithm when the treasure is located outside \(B\).

Lemma 4: _When the treasure is outside \(B\), the agent successfully finds the \(j\)-th bit of the binary string \(\mu\) at cost \(O(j)\)._

Proof: To obtain the \(j\)-th bit of \(\mu\), the movement of the agent is as follows. When the treasure is outside the square \(B\), the oracle places a pebble \(p_{0}\) at \(P\) if the treasure is located to the right of the \(y\)-axis; otherwise, no pebble is placed at \(P\), as discussed in Case 1 of Section 3.2 (refer to line 6 of algorithm 1). The movement of the agent from \(P\) is as follows:

* \(p_{0}\) **found at \(P\)**: In this case, the agent moves at an angle \(\pi\), i.e., along the negative \(x\)-axis (refer to lines 1, 2 and 6 of algorithm 4). Further, it ignores the first \(j\) pebbles along the negative \(x\)-axis (refer to line 10 of algorithm 4) and moves until it either finds the treasure or encounters the \((j+1)\)-th pebble, \(p_{b_{j}}\) or \(p_{t_{2}}\), placed at \((-2j-1,0)\), \((-2j-2,0)\), \((-2j-6,0)\), or \((-2j-7,0)\). If the treasure is not found, the cost of reaching this pebble is at most \(2j+7\). From the current position, the agent is instructed to move at an angle \(\pi-\theta_{j}\) (refer to line 1 of the algorithm _FindBit_ 5), where \(\theta_{j}=\arctan\left(\frac{-1}{2j}\right)\), until the treasure or a pebble \(p_{2}\), \(p_{3}\), or \(p_{t_{1}}\) is encountered. If a pebble is found, then this pebble is either \(p_{2}\) placed at (-1,-1), or \(p_{3}\) placed at (-2,-1), or \(p_{t_{1}}\) placed at either (-4,-1) or (-5,-1). So the cost of this traversal from the \((j+1)\)-th pebble to \(p_{2}\), \(p_{3}\), or \(p_{t_{1}}\) is at most \(\sqrt{(2j+2)^{2}+1}\).
From either of these pebbles, the agent is further instructed to move along an angle \(\pi-\beta_{j}\), where \(\beta_{j}=\arctan\left(\frac{2j+1}{2j-1}\right)\) (refer to line 5 of algorithm 5), until it encounters the treasure, encounters a pebble, or reaches \(P\), with \(O(1)\) cost. Now we have the following cases: * _If treasure found_: In this case, the agent has reached its goal, and the whole process terminates. * _If pebble found_: In this case, the pebble found is either \(p_{1}\) or \(p_{b_{1}}\). In either case, the agent is further instructed to move along an angle \(\pi+\frac{\pi}{4}\) or \(\pi-\frac{\pi}{4}\) (refer to lines 12 and 14 of algorithm 5) until it reaches \(P\) or a pebble is found. Hence we have two cases: * _If \(P\) reached_: The agent gains the information that the \(j\)-th bit of \(\mu\) is 0 (refer to lines 15 and 16 of algorithm 5 and lines 14, 15 and 16 of algorithm 4). So, the path traveled to gain this information is \(P\to p_{b_{j}}\to p_{3}\to p_{1}\to P\), and the cost of this traversal is at most \((2j+2)+\sqrt{\left(2j\right)^{2}+1}+O(1)\), which is \(O(j)\). * _If pebble found_: In this case, the agent continues to move until \(P\) is reached, in which case the agent learns that termination has been achieved, i.e., the \((j-1)\)-th bit is the terminating bit of \(\mu\). The agent then moves on to execute algorithm _FindTreasure_ (refer to line 18 of algorithm 5, and to lines 15 and 20 of algorithm 4). So, the path traveled to gain this information is \(P\to p_{t_{1}}\to p_{t_{2}}\to p_{b_{1}}\to p_{1}\to P\), and the cost of this traversal is at most \((2j+7)+\sqrt{\left(2j+2\right)^{2}+1}+O(1)\), which is \(O(j)\). * _If \(P\) is reached_: In this case, the agent gains the information that the \(j\)-th bit of \(\mu\) is 1 (refer to lines 8 and 9 of algorithm 5 and lines 14, 15 and 16 of algorithm 4). So, the path traveled to gain this information is \(P\to p_{b_{j}}\to p_{2}\to P\), and the cost of this traversal is at most \((2j+1)+\sqrt{\left(2j\right)^{2}+1}+O(1)\), which is again \(O(j)\). * **No pebble found at \(P\)**: In this case, the agent moves in a similar manner, since the pebbles are placed symmetrically: for each pebble placed at \((m,n)\), where \(m\neq 0\) and \(n\neq 0\), in the above case, the oracle places the corresponding pebble at \((-m,-n)\). Hence, the cost to obtain the \(j\)-th bit of the binary string is again \(O(j)\). Therefore, in each case, the cost of finding the \(j\)-th bit of \(\mu\) is \(O(j)\). Lemma 5: _Given \(k\) pebbles and the treasure located outside \(B\), the agent successfully finds the binary string \(\mu\) at cost \(O(k^{2})\)._ Proof: According to lemma 4, the agent successfully determines the \(j\)-th bit of \(\mu\) at \(O(j)\) cost. Now, as the binary string \(\mu\) is of length \(k\), the total cost to obtain \(\mu\) is \(\sum_{j=1}^{k}O(j)\), i.e., \(O(k^{2})\). Lemma 6: _When the treasure is located outside \(B\), the agent, after gaining the binary string \(\mu\), successfully finds the treasure by executing the algorithm FindTreasure._ Proof: After termination of algorithm _AgentMovement_ 4, the agent performs the algorithm _FindTreasure_ 6 with the already acquired binary string \(\mu\) to finally reach the treasure \(T\), if not already reached. 
The treasure is either located somewhere on the region \(x\geq 0\) (i.e., right half of \(y\)-axis) or \(x\leq 0\) (i.e., left half of \(y\)-axis) and accordingly, the oracle divides the whole left half or right half of \(y\)-axis into \(2^{k-8}\) sectors (refer to line 1 of algorithm 1), where a sector \(S_{i}\) is bounded by half-lines \(L_{i}\) and \(L_{i+1}\) and angle between consecutive half lines is \(\frac{\pi}{2^{k-8}}\). Suppose the treasure is located somewhere in sector \(S_{\Delta}\), so \(\mu\) is the binary representation of \(\Delta\). The agent decodes this value \(\Delta\) after executing the algorithm _AgentMovement_\(4\). The whole aim of the oracle is to align the agent either along the half-line \(L_{\Delta}\) or \(L_{\Delta+1}\). The alignment of the agent along the half-lines \(L_{\Delta}\) or \(L_{\Delta+1}\) depends on the first bit value of \(\mu\), i.e., on \(\mu_{1}\) (refer to line 3 in algorithm 6) in the following manner: * _Case_\(\mu_{1}=0\): If a pebble is found at \(P\) (i.e., \(angle=\pi\) refer to line 2 of algorithm 4) then the agent is instructed to move along an angle \(\frac{\pi}{2}-\frac{\pi\Delta}{2^{k-8}}\), i.e., along the half-line \(L_{\Delta}\) until the treasure or a pebble is found (refer to the lines 4 and 5 of algorithm 6 and line 1 of algorithm 7). Otherwise, if no pebble is found at \(P\) (i.e., \(angle=0\) refer to line 4 of algorithm 4) then the agent is instructed to move along an angle \(\frac{\pi}{2}+\frac{\pi\Delta}{2^{k-8}}\) (refer to lines 11 and 12 of algorithm 6 and to line 1 of algorithm 7) until it finds the treasure or a pebble. * _If treasure found_: Then the algorithm terminates as we have reached our goal (refer to the lines 2 and 3 of algorithm 7). * _If pebble found_: The agent is further instructed to move along an angle \(\pi+\frac{\pi}{2}\) or \(\pi-\frac{\pi}{2}\) depending on the angle \(\pi\) or \(0\) (refer to line 5 of algorithm 7) until treasure is found. * _Case_\(\mu_{1}=1\): If a pebble is found at \(P\) (i.e., \(angle=\pi\)) then the agent is instructed to move along an angle \(\frac{\pi}{2}-\frac{\pi(\Delta+1)}{2^{k-8}}\), i.e., along the half-line \(L_{\Delta+1}\) until the treasure or a pebble is found (refer to the lines 7 and 8 of algorithm 6 and line 1 of Algorithm 7). Otherwise, if no pebble is found at \(P\) (i.e., \(angle=0\)), then the agent is instructed to move along an angle \(\frac{\pi}{2}+\frac{\pi(\Delta+1)}{2^{k-8}}\) (refer to the lines 14 and 15 of algorithm 6 and line 1 of algorithm 7) until it finds the treasure or a pebble. * _If treasure found_: Then the algorithm terminates as we have reached our goal (refer to the lines 2 and 3 of algorithm 7). * _If pebble found_: The agent is further instructed to move along an angle \(\pi-\frac{\pi}{2}\) or \(\pi+\frac{\pi}{2}\) depending on the angle \(\pi\) or \(0\) (refer to line 5 of algorithm 7) until the treasure is found. Hence in each case, the agent successfully finds the treasure after executing the algorithm _FindTreasure_. Lemma 7: _When the treasure is located outside \(B\), the agent after gaining the binary string \(\mu\) finds the treasure at cost \(D(\sin\theta^{\prime}+\cos\theta^{\prime})\), where \(\theta^{\prime}=\frac{\pi}{2^{k^{\prime}}}\) and \(k^{\prime}=k-8\)._ Proof: The agent, after gaining the binary string \(\mu\), executes the algorithms _AgentMovement_ and _FindTreasure_, and successfully reaches the treasure by following the path \(PF\to FT\) from \(P\) (refer Fig. 8). 
The angle between \(L_{\Delta}\) and \(L_{\Delta+1}\) is \(\frac{\pi}{2^{k-8}}\). Hence, \(\angle FPT\) is at most \(\frac{\pi}{2^{k-8}}\) (refer to Fig. 8), which is \(\theta^{\prime}\) (say). Also, \(\angle TFP=\frac{\pi}{2}\) (as \(F\) is the foot of the perpendicular from \(T\) to \(L_{\Delta+1}\) if \(\mu_{1}=1\); otherwise, if \(\mu_{1}=0\), then \(F\) is the foot of the perpendicular from \(T\) to \(L_{\Delta}\)) and \(|PT|\leq D\). So we have \(PF=D\cos\theta^{\prime}\) and \(FT=D\sin\theta^{\prime}\). Hence, the cost of traveling along the sector \(S_{\Delta}\) from \(P\) to reach \(T\) is \(PF+FT\), i.e., \(D(\sin\theta^{\prime}+\cos\theta^{\prime})\). Combining Lemma 6 and Lemma 7, we have the final result of this section, summarized by the following theorem. Theorem 4.1: _Given \(k\) pebbles, the agent starting from \(P\) successfully finds the treasure at cost \(O(k^{2})+D(\sin\theta^{\prime}+\cos\theta^{\prime})\), where \(\theta^{\prime}=\frac{\pi}{2^{k^{\prime}}}\) and \(k^{\prime}=k-8\)._ Remark 1: Consider the function \(f(k)=O(k^{2})+D(\sin\theta^{\prime}+\cos\theta^{\prime})\), where \(\theta^{\prime}=\frac{\pi}{2^{k-8}}\). Note that for \(D,k\rightarrow\infty\) and \(k\in o(\sqrt{D})\), the value of \(\frac{f(k)}{D}\to 1\). In order to demonstrate this fact, we plot the value of \(\frac{f(k)}{D}\) for increasing values of \(D\) in the range \([1000,100000000]\) and for \(k=\lfloor D^{\frac{1}{3}}\rfloor\). Fig. 9 shows the values of \(\frac{f(k)}{D}\) for different values of \(D\) in the above-mentioned range and for the fixed value of \(k\) chosen for each \(D\). This figure shows that for large values of \(D\), the value of \(\frac{f(k)}{D}\) is very close to 1. ## 5 Conclusion We propose an algorithm for the treasure hunt that finds the treasure in the Euclidean plane using \(k\geq 9\) pebbles at cost \(O(k^{2})+D(\sin\theta^{\prime}+\cos\theta^{\prime})\), where \(\theta^{\prime}=\frac{\pi}{2^{k-8}}\). Proving a matching lower bound remains an open problem to consider in the future. It can be noted that if the agent has some visibility, the problem becomes trivial even with only one pebble: place a pebble on the line from \(P\) to \(T\) within a distance of \(r\) from \(P\), where \(r\) is the visibility radius of the agent. Starting from \(P\), the agent sees the position of the pebble, moves to the pebble, and then continues until it hits the treasure. But the problem becomes challenging if the compass of the agent is not perfect, i.e., if the agent does not have the ability to measure an angle accurately. This seems to be a nice future problem as an extension of the current work. Figure 9: The curve represents the ratio \(\frac{f(k)}{D}\) for different values of \(D\).
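As an illustration of Remark 1 (and of the trend shown in Fig. 9), the following is a small numerical sketch of the ratio \(\frac{f(k)}{D}\) for \(k=\lfloor D^{\frac{1}{3}}\rfloor\). Since the constant hidden in the \(O(k^{2})\) term is not specified above, the sketch assumes an illustrative constant of 1, i.e., \(f(k)\approx k^{2}+D(\sin\theta^{\prime}+\cos\theta^{\prime})\); the convergence of the ratio towards 1 for large \(D\) does not depend on this choice as long as \(k\in o(\sqrt{D})\).

```
import math

def ratio(D, c=1.0):
    # f(k)/D for k = floor(D^(1/3)), assuming f(k) = c*k^2 + D*(sin t + cos t)
    # with t = pi / 2^(k-8); the constant c is illustrative only.
    k = int(D ** (1.0 / 3.0) + 1e-9)   # floor, guarded against floating-point error
    theta = math.pi / (2 ** (k - 8))
    return (c * k * k + D * (math.sin(theta) + math.cos(theta))) / D

for D in [10**3, 10**4, 10**5, 10**6, 10**7, 10**8]:
    print(f"D = {D:>9}: f(k)/D ~ {ratio(D):.4f}")
```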
2305.01708
Exploring Xenophobic Events through GDELT Data Analysis
This study explores xenophobic events related to refugees and migration using the GDELT 2.0 database and APIs through visualizations. We conducted two case studies -- the first being an analysis of refugee-related news following the death of a two-year-old Syrian boy, Alan Kurdi, and the second a surge in news articles in March 2021 based on the data obtained from GDELT API. In addition to the two case studies, we present a discussion of our exploratory data analysis steps and the challenges encountered while working with GDELT data and its tools.
Himarsha R. Jayanetti, Erika Frydenlund, Michele C. Weigle
2023-05-02T18:21:00Z
http://arxiv.org/abs/2305.01708v1
# Exploring Xenophobic Events Through GDELT Data Analysis ###### Abstract This study explores xenophobic events related to refugees and migration using the GDELT 2.0 database and APIs through visualizations. We conducted two case studies - the first being an analysis of refugee-related news following the death of a two-year-old Syrian boy, Alan Kurdi, and the second a surge in news articles in March 2021 based on the data obtained from GDELT API. In addition to the two case studies, we present a discussion of our exploratory data analysis steps and the challenges encountered while working with GDELT data and its tools. Xenophobia, Refugees and Migrants, GDELT, Big Data, Data Science for Social Good 1 Footnote 1: [https://www.unhcr.org/en-us/](https://www.unhcr.org/en-us/) Footnote 2: [https://www.iom.int/](https://www.iom.int/) ## 1 Introduction People move around the world in pursuit of better opportunities or to flee conflicts and natural disasters. There are 281 million international migrants, or one in every 30 people worldwide (International Organization for Migration 2021), and more than 82 million of them have been forcefully displaced (UNHCR 2022). These migrants make an effort to fit in with the host communities. However, widespread xenophobic and racist violence makes it difficult to uphold societal order and provide equal access to opportunities, resources, and even human dignity. Hence, it is imperative to study such xenophobic incidents and examine the underlying factors contributing to hostile behavior towards refugees in order to fight xenophobia. The United Nations High Commissioner for Refugees (UNHCR)1 and the International Organization for Migration, (IOM)2 which are responsible for promoting secure and well-organized migration, also have the responsibility to combat against xenophobia. In our study, we use a massive and regularly updated dataset of online and TV and news reporting from GDELT3 to explore xenophobic events. We leveraged Google BigQuery4 and GDELT APIs to extract and access large amounts of data of online and TV news coverage related to refugee and migration topics. BigQuery is a cloud-based data warehousing tool that enables us to query large datasets like GDELT quickly and efficiently. The GDELT API is an open API that provides us access to the massive GDELT database. By using both tools, we were able to extract valuable insights from the vast amounts of data available, allowing us to better understand the dynamics of xenophobic events and how they are portrayed in the media. Footnote 3: [https://www.gdeltproject.org/](https://www.gdeltproject.org/) Footnote 4: [https://cloud.google.com/bigquery](https://cloud.google.com/bigquery) The objective of this paper is to describe how we conducted an exploratory data analysis phase, where we concentrated on particular case study events involving refugees and migration. These events were selected based on our prior research and expertise in the subject matter. Through our analysis of news article data, we were able to identify patterns surrounding specific events by examining the time periods before and after the occurrence of the event. By focusing on these case study events, we were able to see how the media coverage surrounding the topics of refugees, migration, and xenophobia significantly increased around the time of the event. 
We gained a deeper understanding of how the media portrays these issues using different factors such as the type of event, the overall tone of the news media coverage, and the different actors and countries involved. These insights allow us to better identify and analyze patterns and themes in the data, which can help inform future research in developing better and more suitable interactive visualizations that monitor xenophobic violence against refugees and migrants. ## 2 Background and Related Work In this study, we used Global Data on Events, Location, and Tone (GDELT), which is a digital news database of geolocated events worldwide from 1979 to the present. GDELT, which has billions of records and is continuously updated in real time, is a prime example of big data. The GDELT databases use the Conflict and Mediation Event Observations (CAMEO) taxonomy, a framework for coding event-related data, to automatically code data for use in research (Gerner, Schrodt, Yilmaz, and Abu-Jabr 2002). Researchers in the past have used GDELT data for a variety of studies, such as studying the effects of civil unrest, complexity in terms of political activities, and capturing peace through the Global Peace Index (GPI) (Yonamine 2013, Fang, Gao, Fan, and Yang 2016, Voukelatou, Miliou, Giannotti, and Pappalardo 2022). Vargo et al. studied the power of fake news from 2014 to 2016 in online news media using the GDELT dataset (Vargo, Guo, and Amazeen 2018). Their research revealed that although the prevalence of fake news has risen, these websites do not possess undue influence. Various researchers also used social media platforms (like Twitter5) as well as newspaper articles in opinion mining about a range of topics from online education to industrial production (Fu, Yan, Meng, Wang, Hu, Li, Wang, He, and Wang 2020, Tilly, Ebner, and Livan 2021). Footnote 5: [https://twitter.com/](https://twitter.com/) Various studies have examined forced migration and policy implications in countries that support migration (Frydenlund, Jones, and Padilla 2019, Frydenlund and Padilla 2022, Frydenlund, Yilmaz Sener, Gore, Boshuijzen-van Burken, Bozdag, and De Kock 2019). Yesilbas et al. utilized GDELT to build a large dataset of global news to study the tone, volume, and topics of media coverage of refugees (Yesilbas, Padilla, and Frydenlund 2021). They uncovered that the reason for the negative tone was both because of the anti-migrant sentiment as well as sorrow and empathy for the refugees. Although our research shares similarities with previous studies in that we aim to analyze events related to refugee and migrant communities, as well as the sentiment towards them, our approach differs from previous analyses and datasets. Specifically, our objec tive is to develop a monitoring system in real-time for xenophobic events utilizing GDELT data to identify potential hotspots of violence, thereby enabling us to predict and prevent any escalation. While combating xenophobia is within the mandates of international organizations such as UNHCR and IOM, there is no worldwide tool for tracking these events. The Internal Displacement Monitoring Centre6 has designed a hand-coded data synthesis tool to monitor migration caused by natural disasters. This tool has been widely adopted by major humanitarian organizations. ACLED7 similarly tracks protest and violence across the world. Xenowatch8 is an online heatmap using data that researchers have hand-coded from user-submitted news articles of xenophobic events in South Africa. 
These tools have had immeasurable impacts on research and policy-level decision-making, but no such tool exists on the global scale for xenophobic events and actions against migrants. The ultimate aim of this study is to construct such a tool; however, this paper will solely discuss our preliminary exploratory analysis. Footnote 6: [https://www.internal-displacement.org](https://www.internal-displacement.org) Footnote 7: [https://acleddata.com/](https://acleddata.com/) Footnote 8: [https://www.xenowatch.ac.za](https://www.xenowatch.ac.za) ## 3 Methodology In this section, we will provide an overview of the data utilized in the study, explain our data collection process, present the findings of several case studies carried out including visualizations, and lastly, discuss the challenges encountered at different stages of this preliminary research. While the long-term goal of this project is to develop a monitoring dashboard for xenophobic violence worldwide, at this point we are in the exploratory data analysis and dashboard design phase. ### Data In our study, we are using the GDELT 2.0 database, which is updated every 15 minutes and translates articles from around the world from 65 different languages into English (The GDELT Project 2015). #### 3.1.1 Understanding the Data We have dedicated a substantial amount of time on this project to fully understand the data that is available to us. Through our efforts, we were able to identify three key tables in the database that hold essential data for our analysis of xenophobia using GDELT data. 1. **Event**: This table contains data about events happening globally. Each row in the Event Table represents a single event. Each event is coded with information such as an event identification number (GLOBALEVENTID), actors (Actor1Code, Actor2Code, action, and location. 2. **Event Mentions**: This table contains a row for each mention of the event in a news article or other source. Each mention is coded with its respective GLOBALEVENTID (which allows linking to the Event Table) and information about the tone of the mention (positive or negative). Specifically, each mention row in Event Mentions contains a GLOBALEVENTID that corresponds to the GLOBALEVENTID of the event that it mentions in the Event Table. This table contains an external identifier (MentionIdentifier) for the source document, which can be utilized to uniquely identify the document. 3. **Global Knowledge Graph (GKG)** (The GDELT Project, GKG 2014): This table connects data from various sources to form an extensive interconnected network that encapsulates everything including events around the world, their corresponding contexts, associated actors, and the overall sentiment of media coverage surrounding the event. The DocumentIdentifier field in the table corresponds to the MentionIdentifier in the Event Mentions table. Additionally, this table comprises the results obtained from the Global Content Analysis Measures (GCAM) system, which employs multiple cutting-edge content analysis tools to capture over 2,230 latent dimensions for each news article monitored by GDELT (The GDELT Project, GCAM 2014). We have illustrated the database schema of the above three tables in Figure 1 to the best of our understanding thus far. As shown in Figure 1, there exists a one-to-many relationship between the GLOBALEVENTID fields in the Event table and the Event Mentions table. This is due to the fact that each mention of an event corresponds to a row in the Mentions table. 
The MentionIdentifier field in the Mentions table can be utilized to merge the Mentions table with the GKG table. Our understanding of the connections between these tables facilitated our ability to aggregate, filter, and merge database tables as necessary to obtain the desired output. #### 3.1.2 Data Collection Methods and Criteria The GDELT database is a large, open-source database and is supported by Google in the form of cloud computing resources that help users access the data using BigQuery, which uses SQL-like queries. We used Google BigQuery to analyze the data as it can handle large amounts of data without requiring any additional setup or configuration. We identified two distinct criteria to collect data for a specific time frame. We used these two criteria to obtain two separate datasets for our case studies, as discussed in Section 3.2. 1. Criteria 1: Events where Actor2Code is REF (indicative of Actor 1 performing the act on 'refugee'). 2. Criteria 2: Events where Actor2Code is REF and that have GKG themes that pertain to refugees. As our first dataset, we queried the data satisfying criteria 1, where the Actor2Code was REF, by connecting to Google BigQuery through a Jupyter Notebook. We were able to download the data in a comma-separated values (CSV) file. This will be further explained in Section 3.2.1. Figure 1: The relationship among the three primary database tables: Events, Event Mentions, and GKG. As our second dataset, we queried the data that met criteria 2. We identified eight main GKG themes that are relevant to refugees: * DISCRIMINATION_IMMIGRATION_XENOPHOBIA * DISCRIMINATION_IMMIGRATION_ANTIIIMMIGRANTS * DISCRIMINATION_IMMIGRATION_OPPOSED_TO_IMMIGRANTS * DISCRIMINATION_IMMIGRATION_AGAINST_IMMIGRANTS * DISCRIMINATION_IMMIGRATION_ATTACKS_ON_IMMIGRANTS * DISCRIMINATION_IMMIGRATION_ATTACKS_AGAINST_IMMIGRANTS * DISCRIMINATION_IMMIGRATION_XENOPHOBE * DISCRIMINATION_IMMIGRATION_XENOPHOBES From now on, we will use the term "GKGthemes_REF" to refer to these eight themes. We made use of the GDELT API9 to gain insight into the data before querying for the data itself. We used the GDELT 2.0 Doc API Client,10 a Python client to fetch data from the GDELT API. By using the timelinevolraw option in this Python library, we obtained the number of articles matching the theme filter and the total number of news articles monitored by GDELT over time. Footnote 9: [https://blog.gdeltproject.org/gdelt-doc-2-0-api-debutts/](https://blog.gdeltproject.org/gdelt-doc-2-0-api-debutts/) Footnote 10: [https://github.com/alex9smith/gdelt-doc-api](https://github.com/alex9smith/gdelt-doc-api) Figure 2 illustrates the variation of the number of articles with xenophobic themes as a percentage of the total article count over time (monthly data from January 2017 to December 2022). We noticed several spikes in the chart, which indicate a surge in the number of articles compared to the other months. Figure 2: The number of articles with xenophobic themes as a percentage of total article count over time (from January 2017 to December 2022). This data was obtained using the GDELT API. For our second case study, we decided to download the data from March 2021 (spike highlighted in green in Figure 2), where Actor2Code is REF and the event is categorized by GDELT under a "GKGthemes_REF" theme (V2Themes like DISCRIMINATION_IMMIGRATION). This will be further discussed in Section 3.2.2. 
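To make the two collection criteria concrete, the following is a minimal sketch of how such data can be pulled with BigQuery and the GDELT DOC API client. The public table names (`gdelt-bq.gdeltv2.events`, `gdelt-bq.gdeltv2.eventmentions`, `gdelt-bq.gdeltv2.gkg`), the selected columns, and the `gdeltdoc` interface (`Filters`, `timeline_search`) are assumptions based on public GDELT documentation and the client's README, the date ranges and column subset are illustrative, and the exact queries we used are not reproduced here.

```
from google.cloud import bigquery
from gdeltdoc import GdeltDoc, Filters

client = bigquery.Client()

# Criteria 1: events where Actor2Code is REF, for a chosen time window.
criteria1_sql = """
SELECT GLOBALEVENTID, SQLDATE, Actor1CountryCode, EventRootCode, AvgTone
FROM `gdelt-bq.gdeltv2.events`
WHERE Actor2Code = 'REF'
  AND SQLDATE BETWEEN 20150301 AND 20160331
"""
criteria1 = client.query(criteria1_sql).to_dataframe()
criteria1.to_csv("actor2_ref_events.csv", index=False)

# Criteria 2: the same events, restricted to mentions whose GKG record carries
# one of the refugee-related ("GKGthemes_REF") themes. In practice the date
# restriction also keeps the amount of scanned data (and cost) under control.
criteria2_sql = """
SELECT e.GLOBALEVENTID, e.SQLDATE, e.Actor1CountryCode, e.EventRootCode, e.AvgTone
FROM `gdelt-bq.gdeltv2.events` AS e
JOIN `gdelt-bq.gdeltv2.eventmentions` AS m
  ON e.GLOBALEVENTID = m.GLOBALEVENTID
JOIN `gdelt-bq.gdeltv2.gkg` AS g
  ON m.MentionIdentifier = g.DocumentIdentifier
WHERE e.Actor2Code = 'REF'
  AND g.V2Themes LIKE '%DISCRIMINATION_IMMIGRATION%'
  AND e.SQLDATE BETWEEN 20210301 AND 20210331
"""
criteria2 = client.query(criteria2_sql).to_dataframe()

# Timeline of article volume (as in Figure 2): the "timelinevolraw" mode returns
# both the count of matching articles and the total articles monitored by GDELT.
f = Filters(theme="DISCRIMINATION_IMMIGRATION_XENOPHOBIA",
            start_date="2017-01-01", end_date="2022-12-31")
timeline = GdeltDoc().timeline_search("timelinevolraw", f)
```

From the two raw counts returned by the timelinevolraw mode, the monthly percentage of theme-matching articles follows by a simple division.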
### Data Download and Analysis Results In this exploratory data analysis phase, we focused on certain case study events that we knew from earlier research and subject matter expertise that caused significant coverage of refugees, migration, and xenophobic sentiments. Our methodology involved first selecting a specific incident related to refugees that received significant attention. We then developed a hypothesis that we aim to test using data analysis. We downloaded the data as described above from the GDELT database around the chosen incident, covering a period ranging from a few months before the event to a few months after. Next, we conducted a feature identification process, in which we identified the characteristics that influence or contribute to the observed outcomes. Finally, we compared the findings from the analysis with our original hypothesis to assess whether our hypothesis was confirmed or disproved. This approach enabled us to gain a deeper understanding of the factors that impact refugee-related issues. #### 3.2.1 Case Study 1: Alan Kurdi Incident Our first case study was set around the death of a two-year-old Syrian boy, Alan Kurdi, born as Alan Shenu and initially reported as Aylan Kurdi (The New York Times 2020). In September 2015, Alan and his family, who were refugees from Syria, attempted to travel to Europe from Turkey. Tragically, Alan, along with his mother and brother lost their life by drowning in the Mediterranean Sea while undertaking this perilous journey. The photograph of Alan's body lying face down on a Turkish beach brought the incident to the forefront of international attention. We downloaded the data six months before and after the incident (March 2015 to March 2016) where Actor2Code is REF.11 Figure 3 illustrates the timeline of the number of news articles, which confirmed a significant surge in attention to refugee-related news around reports of Alan Kurdi's death. Footnote 11: [https://github.com/himarshaj/GDELT_ExploratoryAnalysis_XenophobicEvents/blob/main/Data/AK_before_after.zip](https://github.com/himarshaj/GDELT_ExploratoryAnalysis_XenophobicEvents/blob/main/Data/AK_before_after.zip) We also examined the AvgTone of the news articles around the time to understand the sentiment of news articles. Figure 4 shows the variation of AvgTone over time in an area and line chart. The line in blue shows the median value whereas the green area covers the min and max values for AvgTone over time. Figure 4 shows that the sentiment of the news articles remained consistently negative over time without any abrupt changes in AvgTone. To explore further, we extended the timeline further back in time (to March 2014) and visualized the variation of AvgTone over time, which is presented in Figure 5. Despite the median of AvgTone still remaining consistent on the negative side, we noticed a shift in the range (a higher gap between min and max) before and after the start of January 2015. #### 3.2.2 Case Study 2: Spike in Number of Articles around March 2021 In this case study, we focused on investigating the surge in the number of articles in March 2021, which we observed through the GDELT API, as described in Section 3.1.2. Our initial hypothesis was that this surge is due to the shootings that targeted three separate spas in Atlanta, Georgia on March 16, 2021 (The New York Times 2021). The fact that the majority of the victims of the shootings were women of Asian descent led to widespread outrage. 
This incident brought attention to the concerning increase in hate crimes and discrimination aimed at Asian communities in the United States. We downloaded the data for March 2021 where Actor2Code is REF and the theme was in the "GKGthemes_REF" set.12 Figure 6 shows the variation in the number of news articles during the month. It was surprising to observe that none of these peaks coincided with the date of the shooting incident. Footnote 12: [https://github.com/himarshaj/GDELT_ExploratoryAnalysis_XenophobicEvents/blob/main/Actor2_REF_mar2021.csv](https://github.com/himarshaj/GDELT_ExploratoryAnalysis_XenophobicEvents/blob/main/Actor2_REF_mar2021.csv) To gain insight into the distribution of countries involved (via country code of Actor 1), we examined the frequency of the top 20 most prevalent Actor1CountryCode as shown in Figure 7. We observed that the highest frequency of articles based on Actor2CountryCode was ESP (the three-digit CAMEO code for Spain), followed by USA and then ITA. We used a choropleth map (Figure 8) to visualize the location-based data more effectively, providing a user-friendly perception. We incorporated a tooltip that displays the country and the corresponding frequency of articles into the map. We also included a checkbox filter for EventRootCode along with its description to increase interactivity and uncover more patterns in the data. For example, in Figure 8 the checkbox filter was turned on for EventRootCode of value 01 which refers to "Make Public Statement". We used the machine-readable CAMEO event code that GDELT has made available alongside human-friendly event descriptions (The GDELT Project ). Upon examining the raw data, we were able to determine that the high number of articles with Actor2CountryCode as ESP was due to a significant number of articles reporting on the increase of African migrants arriving on the Canary Islands, which is an autonomous community of Spain, reported around March 26, 2021. The data analysis revealed that the last spike in the number of articles shown in Figure 6 was linked to this event. Figure 3: The number of news articles from March 2015 to March 2016 where Actor2Code is REF. ### Discussion In this section, we will discuss the lessons we have learned thus far and the challenges we have encountered during the course of this study. One of the significant challenges we encountered was in comprehending the available data, even though there is documentation publicly available. Some of the reasons that contribute to the complexity of understanding the data and how we attempted to overcome those challenges are outlined below. 1. The complexity of the database and the vast amount of data it contains made it very challenging for us to get started with the study. As discussed in Section 3.1.1, we spent a significant amount of time understanding the different tables and how they connect with each other. This proved to be beneficial in enabling us to proceed with our research more efficiently. 2. A large number of columns in each database table made it difficult to understand what fields were important to consider. We spent time understanding each column in the database tables. Given our primary objective of identifying the factors that can be utilized to detect instances of xenophobic violence, all columns in the tables are crucial to us. 
Therefore, we refrained from excluding any columns in our queries, except in instances where multiple versions of the same data were available, in which case we selected only the most recent version. 3. Data quality issues arise since the database utilizes data from multiple sources. When scrutinizing the data, we identified that some of the fields may contain null or incorrectly labeled values, thereby introducing noise. We have decided to acknowledge such noise and recognize that it is an inherent aspect of large-scale data analysis. By focusing on the larger picture and analyzing the data as a whole, we can identify meaningful patterns and insights that may not be evident from individual entries in database. During our study, we encountered a few challenges while working with certain tools. As a first-time user of BigQuery, we faced some complexities in setting up and utilizing the platform effectively. While it offers powerful data analysis capabilities, the initial learning curve was steep, and we had to spend a significant amount of time familiarizing ourselves with the interface and features. We also faced limitations while using the GDELT API. One such limitation was that the API only provides data on articles produced in 2017 and later, which somewhat restricted the scope of our initial analysis using the API. However, we made use of the GDELT API to some extent to gain insights about the data after 2017, as discussed in Section 3.1.2. During the exploration of the visualizations, we faced several challenges. We iterated through various approaches to visualize the data and continue to refine our options based on the data types. We explored multiple options and approaches in analyzing the data, but have had to revise our methods or start over as we gained a better understanding of the data. As we identified the limitations of the data, we consistently addressed those limitations to refocus our efforts on the most relevant questions. We realized that there is no universal visualization approach that works for all types of data. As it is crucial to ensure the accuracy and validity of the analysis, we had to re-evaluate our approaches to accomplish this objective. Figure 8: A choropleth map to visualize the number of articles based on location with a tooltip (highlighted in red) that displays the country and the corresponding frequency of articles. A checkbox filter (highlighted in green) for EventRootCode along with its description (highlighted in blue). Despite the challenges that we faced, which sometimes required additional time and effort, they ultimately led us to gain a deeper understanding of the data and its limitations. This understanding allowed us to make better design choices and ask more precise research questions for future investigations. ## 4 Future Work Moving forward, our study's results can be used to develop better and more suitable interactive visualizations that monitor xenophobic violence against refugees and migrants. Building upon the insights gained from this study, our next steps involve developing impactful visualizations that can aid us in addressing the following questions: 1. What countries/regions are the "hotspots" of xenophobia? We would like to explore the severity of these events based on the number of refugees and migrants living in the area. 2. How to know when xenophobic outbreaks are escalating and to prioritize them before they reach a critical stage? 3. Can we identify the underlying events that trigger an upswing in xenophobic violence? 
Our long-term goal is to expand the scope of our data visualization research beyond xenophobia and investigate its potential use in monitoring a broad spectrum of societal issues, including racism, global health disparities, and incidents of loss of life or property caused by climate change worldwide. These potential future applications could significantly enhance the understanding of these complex issues and aid in the development of effective solutions for those issues. ## 5 Conclusion Migration is a common occurrence worldwide as individuals move to new countries to seek better opportunities or to escape natural disasters and conflicts. However, xenophobic and racist violence poses a challenge for migrants as it hinders them from blending into new communities. To combat xenophobia, it is important to examine and understand xenophobic events and underlying factors contributing to hostility towards refugees. We used the GDELT 2.0 database which includes online and TV news to explore xenophobic events. We made use of the BigQuery and GDELT APIs to efficiently access extensive GDELT data and conduct exploratory data analysis, with a focus on case study events related to refugees and migration. For our initial case study, we examined the period surrounding the passing of Alan Kurdi, a two-year-old Syrian boy who died in the Mediterranean Sea while attempting to migrate from Turkey to Europe with his family. We studied the amount of news coverage on refugees and the sentiment expressed in news articles both before and after the event. We found that there was a significant increase in media attention to refugee-related news after Alan Kurdi's death, and the sentiment remained mostly negative with no abrupt changes in the average tone. However, there was a shift in the range of emotions expressed before and after January 2015. Our second case study aimed to investigate a surge in the number of news articles in March 2021 that we found through our insights using the GDELT API. Our findings indicated that the spike in news articles was linked to the increase of African migrants arriving on the Canary Islands, with Spain being the most prominent country code. We used a choropleth map to visualize the data obtained from the location-based analysis. Throughout the study, we encountered various challenges, including the intricate nature of the database, data quality concerns, and the long learning curve of working with specific tools such as BigQuery, as well as the constraints of the GDELT API. Moreover, the visualization process presented its own unique set of challenges, necessitating several approaches and revisions. Despite these challenges, our exploratory study allowed for a better understanding of the data and develop various charts to effectively utilize and visualize data concerning different events. ## Acknowledgments This research was funded under the project "Data Science for Social Good: Mining and Visualizing Worldwide News to Monitor Xenophobic Violence", through the 2022-2023 ODU Data Science Seed Funding Program.
2301.05099
Improving Inference Performance of Machine Learning with the Divide-and-Conquer Principle
Many popular machine learning models scale poorly when deployed on CPUs. In this paper we explore the reasons why and propose a simple, yet effective approach based on the well-known Divide-and-Conquer Principle to tackle this problem of great practical importance. Given an inference job, instead of using all available computing resources (i.e., CPU cores) for running it, the idea is to break the job into independent parts that can be executed in parallel, each with the number of cores according to its expected computational cost. We implement this idea in the popular OnnxRuntime framework and evaluate its effectiveness with several use cases, including the well-known models for optical character recognition (PaddleOCR) and natural language processing (BERT).
Alex Kogan
2023-01-12T15:55:12Z
http://arxiv.org/abs/2301.05099v2
# Improving Inference Performance of Machine Learning with the Divide-and-Conquer Principle ###### Abstract. Many popular machine learning models scale poorly when deployed on CPUs. In this paper we explore the reasons why and propose a simple, yet effective approach based on the well-known Divide-and-Conquer Principle to tackle this problem of great practical importance. Given an inference job, instead of using all available computing resources (i.e., CPU cores) for running it, the idea is to break the job into independent parts that can be executed in parallel, each with the number of cores according to its expected computational cost. We implement this idea in the popular OnnxRuntime framework and evaluate its effectiveness with several use cases, including the well-known models for optical character recognition (PaddleOCR) and natural language processing (BERT). ## 1. Introduction We live in the era of unprecedented attention to machine learning (ML) from researchers and practitioners alike. New ML models across a variety of domains (or modalities, such as video, images and text) are proposed nearly daily, the models grow bigger and more sophisticated, and their components are continuously revised to achieve better accuracy scores on various tasks. While lots of attention is given to training efficiency and prediction accuracy, seemingly less effort is focused on making sure those models perform well when deployed in practice, i.e., during inference [(11)]. As we demonstrate in this paper, some models scale poorly (and at times, even worse!) when the number of available cores in a CPU-based deployment is increased. Why does not the inference on CPUs scale? There are a variety of reasons, and we devote the entire section of this paper to look into some of them. Briefly, they range from the micro-level, such as the use of non-scalable operators inside ML architectures, to macro-level, such as employing ML architectures that process input iteratively. To mitigate those scalability challenges, one might consider redesigning their ML architecture or reimplementing its non-scalable operations with a more efficient version. Such approaches, however, require either substantial ML domain specific expertise, exceptional engineering skills and familiarity with ML frameworks used for inference, significant investments (e.g., to retrain a new model, with a potential risk to the accuracy metrics), or all of the above. In this paper, we take a different approach and propose to leverage the poor scalability of ML models by applying the Divide-and-Conquer Principle, a well-known algorithm design technique in Computer Science [(8)]. Specifically, instead of allocating all available computing resources (CPU cores) to the entire problem, we propose to divide the problem into smaller chunks1, let the framework decide how the computing resources should be allocated among those chunks and then run their respective computations in parallel. We argue that in many use cases, such a division is natural and requires only trivial changes in the user code. We also describe a simple mechanism that allocates computing resources based on the expected computational intensity (or weight) of each chunk. Footnote 1: We note that unlike the classical Divide-and-Conquer Principle [(8)], we divide the problem only once, although it might be possible in some cases to divide it recursively into increasingly smaller chunks that can be executed by one thread each. 
Consider, for instance, a model for solving a natural language processing (NLP) task such as tweet classification. Our approach allows efficient batching of inference requests of various sizes, eliminating the need for padding (a common, but wasteful solution to deal with batches of requests of variable size) and letting the framework allocate computing resources proportionally to the length of each sequence. We implement the aforementioned allocation mechanism in OnnxRuntime [(24)], a popular framework for training and inferencing ML models, and extend its inference API to allow user code to invoke parallel inference on multiple inputs. We demonstrate the effectiveness of our approach with several use cases, including highly popular models for image processing (PaddleOCR [(14)]) and NLP tasks (BERT [(10)]). The remainder of this paper is organized as following. In Section 2 we elaborate on various reasons for why the inference (on CPUs) commonly does not scale well. Next, we describe in Section 3 the concept and implementation details of the Divide-and-Conquer Principle as it applies to inference. Following that, we present in Section 4 several use cases of ML models where this principle can be applied, along with the performance evaluation of its benefits. We discuss related work in Section 5 and conclude the paper in Section 6. Why is Inference Slow? There are numerous reasons for this lack of scalability. In this section we survey some of them. ### Not "enough" work One reason is simply because the amount of computation required by a model during inference is not "enough" for efficient parallelization. As noted by Aminabadi et al. (Aminabadi et al., 2018), kernel implementations of various ML operations are often geared towards training, which tends to consist of sizable batches of large inputs (e.g., sentences of 512 tokens). During inference, however, the batches tend to be much smaller, and often include just one input (e.g., for real-time / interactive inference). Besides, the inputs themselves can be small, e.g., a tweet or chatbot interaction consisting of just a few words. Consider, for instance, highly popular Transformer-based (Zhu et al., 2017) models for NLP tasks, such as BERT (He et al., 2017) or GPT-3 (Bordes et al., 2018), which rely mostly (but not solely) on matrix multiplication primitives. Those primitives are known to scale well for large matrices (Han et al., 2017; Li et al., 2017; Li et al., 2018). However, when the actual input to the model during inference is short, matrix multiplications involve smaller and therefore, less amendable to efficient parallelization, matrices (Han et al., 2017; Li et al., 2018; Li et al., 2018). ### Non-Scalable Operators Another reason for poor scalability of some ML models is the use of non-scalable (and often, sequential) operators in their architecture. Typically, the overhead of those operators would be negligible compared to other, more scalable parts of the model. Yet, as the number of cores increases and following directly from the Amdahl's Law (Amdahl, 2018), their negative impact of non-scalable operators on the overall inference performance would grow. Considering again the Transformer-based (Zhu et al., 2017) models mentioned above, Dice and Kogan have observed that while matrix multiplication scales well, at least for long inputs, other operations such as layer normalization and softmax do not, contributing to the overall poor scalability of those models (Han et al., 2017). 
In this paper, we consider a vision-based model, which employs sequentially implemented functions for internal format conversions, which similarly cause the entire model not to scale. We note that some of those cases could be considered a performance bug in the underlying ML framework, which could be fixed by reimplementing the respective operators with more efficient (and parallel) alternatives. This, however, requires lots of engineering effort, which includes performance analysis and deep understanding of corresponding framework implementation details. Besides, some of the ML operators, such as layer normalization (Bordes et al., 2018), require careful coordination among computing threads (e.g., to compute variance and standard deviation of all the hidden units in a layer and then use those statistics to normalize the values of the units) and therefore do not lend themselves naturally for efficient parallelization. ### Framework Overhead Somewhat related to the prior point, an ML framework might add small but measurable overhead in invoking model operations. Most popular ML frameworks, such as PyTorch, Tensorflow or OnnxRuntime, support multiple backends for executing ML operations, targeting different hardware architectures (CPU, GPU, TPU), utilizing different BLAS libraries (MKL, OpenBLAS, oneDNN, etc.), different threading infrastructure (Intel TBB, pthreads, custom implementation, etc.), etc. Dispatching appropriate kernel (implementation) for every operator is efficient, but is sequential and requires non-trivial amount of work, especially when the model is executed _interactively_(Han et al., 2017) (the default execution mode in PyTorch). This overhead becomes substantial as the actual execution time of the kernels reduces with the increased number of cores. In addition to the above, various kernels might require specific memory layout for its input parameters (tensors), and the framework would add appropriate dummy operators for input/output conversion or data preparation (Li et al., 2018). As we demonstrate later in this paper, these operators might add substantial overhead as well. ### Model Architecture Quite often the high-level architecture of an ML model itself plays a substantial role in causing inference not to scale. For instance, some ML models, especially ones built for video and image processing (e.g., (Han et al., 2017; Li et al., 2018; Li et al., 2018)), are composed as a multi-phase pipeline. The first phase of the pipeline would typically identify the points of interest in the input (e.g., text boxes in an image or a moving object in a video), while subsequent phases would process those points (iteratively or as a batch) to solve the predefined problem (e.g., identify text in the boxes or classify the moving object in the video). The inference latency of such models might grow linearly with the number of objects identified in the first phase. Furthermore, if even one phase of the pipeline does not scale well, the scalability of the entire pipeline is impaired. ### Padding Batching multiple inputs and processing them at once is a well-known way of improving inference throughput (Han et al., 2017; Aminabadi et al., 2018; Aminabadi et al., 2018; Aminabadi et al., 2018; Aminabadi et al., 2018). In fact, multiple serving system for machine learning models (such as TensorFlow Serving (Kipf et al., 2017) or TorchServe (TorchServe, 2017)) include tunable parameters that configure how long an inference server can wait in order to batch as many input requests as possible. 
However, when inputs in a batch do not have exactly the same shape, they need to be padded to be processed efficiently, since underlying kernels typically anticipate batches of homogeneous inputs. The padding leads to reduced computational efficiency, since it is treated by kernels as the rest of the input, even though the corresponding output produced by the model is dismissed. ## 3. Divide-and-Conquer Principle Applied to Inference In this section, we describe the application of the Divide-and-Conquer Principle (Dwork et al., 2017) to the inference of ML models at the conceptual level and as a concrete realization by implementing it in the OnnxRuntime framework. We note that applying this principle does not directly address the reasons for poor scalability detailed in the previous section. In fact, the advantage of our approach is that one does not have to identify and/or fix any scalability bottlenecks in their models to rip the benefits of its underlying idea. ### Concept The basic idea is pretty straightforward -- consider a computation job \(J\), which can be broken into \(k\) independent parts, \(j_{1}\), \(j_{2}\),..., \(j_{k}\), which can be executed in parallel. Assume we have an oracle assigning relative weight \(w_{i}\in(0,1]\) corresponding to, e.g., the number of required floating point operations (FLOPs) or single-thread latency of the computation job part \(j_{i}\). Finally, assume we have \(C\) computing cores available. We strive to allocate to each part the number of cores relative to its weight, namely, we assign \(c_{i}=max\{1,\lfloor w_{i}*C\rfloor\}\) cores for the part \(j_{i}\). This effectively means allocating \(c_{i}\) worker threads for \(j_{i}\) since we later create one worker thread per core (as common in ML frameworks, including in OnnxRuntime). Note that \(\sum_{i=1}^{k}c_{i}\) might be larger than C. This is obvious when the number of job parts, \(k\), is larger than C, but it is possible even when \(k\leq C\). This does not create a problem other than implying that some job parts will be run after other job parts have finished (rather than running them all in parallel). At the same time, due to the rounding-down (floor) function intended to reduce the above possibility of oversubscription, some unallocated cores might remain. To avoid this waste of available resources, we sort all the job parts by their remaining unallocated weight, i.e., by \(w_{i}*C-\lfloor w_{i}*C\rfloor\), and assign one core to each part in the descending order, up until all cores are allocated. The C++like pseudo-code for the entire algorithm is given in Listing 1. Naturally, the idea described above raises the question of how to assign relative weight to a job part \(j_{i}\). In all our cases considered in Section 4, the weight is simply set proportionally to the size of input tensors. Specifically, let \(s_{i}\) be the size of the input tensor for job part \(j_{i}\). We set \(w_{i}\) to \(\frac{s_{i}}{\sum_{i=1}^{k}s_{i}}\), essentially assuming that the amount of computation (expressed as the number of required FLOPs) grows roughly linearly with the input tensors' size. In general, however, assigning weight can be done with the help of a profiling phase and a lightweight classification mechanism, which associates job parts of the same (or similar) shape (as the one encountered during the profiling phase) to the relative weight obtained during profiling. ### Implementation Details We extend the API of the InferenceSession class of OnnxRuntime with a new prun method. 
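Since Listing 1 is not reproduced in this text, the following is a minimal Python sketch (our reconstruction, not the actual C++ code) of the core-allocation routine described above: each job part receives \(max\{1,\lfloor w_{i}*C\rfloor\}\) threads, any cores left over after the rounding are handed out one at a time in decreasing order of the unallocated remainder \(w_{i}*C-\lfloor w_{i}*C\rfloor\), and the weights are derived from the input-tensor sizes.

```
def allocate_threads(sizes, num_cores):
    # Weights are proportional to the input-tensor sizes, as described above.
    total = float(sum(sizes))
    weights = [s / total for s in sizes]

    # Base allocation: at least one thread per job part, floor(w_i * C) otherwise.
    alloc = [max(1, int(w * num_cores)) for w in weights]

    # Hand out cores left unallocated by the rounding, one at a time,
    # to the parts with the largest unallocated remainder w_i*C - floor(w_i*C).
    leftover = num_cores - sum(alloc)
    if leftover > 0:
        by_remainder = sorted(range(len(weights)),
                              key=lambda i: weights[i] * num_cores - int(weights[i] * num_cores),
                              reverse=True)
        for i in by_remainder[:leftover]:
            alloc[i] += 1
    return alloc

# Example: three job parts with input sizes 100, 300 and 600 on a 16-core machine.
# The base allocation is [1, 4, 9]; the two remaining cores go to the parts with
# the largest remainders, giving [2, 5, 9] here.
print(allocate_threads([100, 300, 600], 16))
```

In prun, each entry of the returned list determines the size of the thread pool handed to the corresponding run invocation, as described next.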
The prun method is modeled after the existing run method used as the main entry point when running inference. The main difference is that prun accepts a list of inputs (instead of just one) and returns a list of outputs. Internally, the implementation of prun iterates over the list of inputs, calculates their size (after validating that those are tensors) and corresponding relative weight, and applies the allocation algorithm described in Listing 1 to associate the number of worker threads with each input (job part). Following that, the implementation creates one worker thread for each input, and runs them in parallel. Each worker thread, in turn, creates a thread pool of the size calculated by the allocation algorithm (the thread pool includes the worker thread itself), and invokes the run method of the InferenceSession object with that thread pool. The entire patch of the OnnxRuntime codebase to implement the prun functionality and other minor internal changes (such as having the run method accept a thread pool as an optional argument instead of always using the default pool) consisted of around 200 lines of code. On the user side, the code also has to change to make use of the new prun API. Those changes, however, are quite straightforward. Instead of invoking run for every job, a user needs to create a list of job parts and call prun. In addition, the user needs to rearrange the post-processing code to iterate over the results of prun, and apply any post-processing to each returned output (object). As an example of what the user code changes entail, we show the original Python code (edited for brevity and clarity) of the TextRecognizer class in PaddleOCR (Listing 2) alongside the modified version that makes use of the new prun API (Listing 3). ## 4. Use Cases Before we detail the use cases where the Divide-and-Conquer Principle is beneficial and report on our performance findings, we give a brief summary of our evaluation setup and methodology. We run all our experiments on a 16-core AMD-based VM in Oracle Cloud (aka OCI VM.Standard.E3.Flex). (We also ran some experiments on a newer E4 shape, but have not noticed substantial differences.) To reduce performance variability, especially as we create separate thread pools for the variants that use prun, we use thread binding (pinning) for all the evaluated variants. Every experiment was repeated \(5\) times, and we report the mean. We note that the standard deviation of all reported results, except for one specific case discussed below, was extremely low (typically, less than 1% of the mean). For our experiments, we use the latest release versions (as of the date of writing this paper) of the corresponding software, specifically OnnxRuntime v1.11.1 and PaddleOCR v2.5. ### Sequential Pipeline Our first example of where applying the Divide-and-Conquer Principle is extremely useful is PaddleOCR (PaddleOCR, 2016). PaddleOCR is a lightweight OCR system, which consists of three parts: Text Detection, Text Classification (called Detection Boxes Rectify in (PaddleOCR, 2016)) and Text Recognition. Each of those parts corresponds to a separate ML model. ```
class TextRecognizer(object):
    def __init__(self, args):
        ...
        self.predictor = ort.InferenceSession(args.file_path)
        self.postprocess_op = build_post_process(args)
        ...

    def __call__(self, img_list):
        img_num = len(img_list)
        for beg_img_no in range(0, img_num, batch_num):
            end_img_no = min(img_num, beg_img_no + batch_num)
            inputs = prepare(img_list, beg_img_no, end_img_no)
            # collect the prepared inputs of all batches instead of running each one
            all_inputs.append(inputs)
        # a single parallel call replaces the per-batch invocations of run
        all_outputs = self.predictor.prun(all_inputs)
        for outputs in all_outputs:
            preds = outputs[0]
            rec_result = self.postprocess_op(preds)
            all_results.add(rec_result)
        return all_results
``` Listing 3. Modified TextRecognizer class implementation (uses prun). Added or modified lines are in red. The OCR pipeline accepts an image file and passes it first through the text detection phase, whose objective is to locate text areas in the image. The output of this phase is a list of potential text boxes' coordinates. Next, the list is iterated over, and each item in that list (i.e., a text box) is sent to the text classification model, which decides whether the box needs to be transformed into a horizontal rectangle box before the actual text recognition takes place. Based on the classifier's decision, each box is altered accordingly. Finally, the list is iterated over again, and each item is sent to the text recognition model for inference, which recognizes the text in the given box and produces the actual character sequence based on the supplied character dictionary. This process is depicted in Figure 1, which is a redacted version of Figure 2 from (Kumar et al., 2018). In our experiments with PaddleOCR, we observe that the system does not scale well with the increase in the number of available cores. We demonstrate that in Figure 2, depicting inference latency as a function of available cores (which directly translates into the number of worker threads used by the runtime). For all experiments in this section, including the one in Figure 2, we use a subset of images from the OpenImages dataset (Kumar et al., 2018), selected according to a criterion described below. In Figure 2, we break the total latency into time spans corresponding to the three phases of the OCR pipeline discussed above. 
As one can notice, the average inference latency goes down from \(554\) ms for \(1\) thread to \(364\) ms for \(4\) cores and then back up to \(435\) ms for \(16\) cores. Interestingly, the Text Classification phase shows negative scalability, where it takes \(27\) ms to process an image, on average, with \(1\) thread, but it takes \(38\) ms to do the same with \(16\) threads -- a slowdown of \(1.4\)x. This shows an example of a system where, beyond a certain point, adding more threads not only does not help, but actually harms performance. Discussing concrete reasons for the lack of scalability of these specific models is not in the scope of this paper. For a curious reader, however, we note that a built-in OnnxRuntime profiling tool shows inflated execution times for the output reordering operators (which are inserted by the framework, along with the input reordering operator, to convert the memory layouts of input arguments for various kernels). We apply the Divide-and-Conquer Principle to the last two phases of the OCR pipeline, namely the Text Classification and Recognition. To that end, instead of invoking the corresponding models for each text box produced by Text Detection, we send all the boxes to the runtime (by invoking the prun API) and effectively let the runtime decide how many cores / worker threads to allocate each box based on its relative size. The required changes to implement this functionality in the Text Recognition phase are depicted in Listing 3; the changes to the Text Classification phase are similar. For our performance evaluation, we compare the prun implementation as discussed in Section 3 (and depicted in Listing 1), which we denote as prun-def on the charts, to a few simple variants. The first variant, denoted as prun-1, simply allocates one worker thread to each input in the list given to prun. The second variant, denoted as prun-eq, allocates an equal number of cores for each input (but at least one), i.e., sets \(c_{i}=max\{1,\lfloor k/C\rfloor\}\). Our motivation is to show that trivial solutions might also be useful in certain scenarios (as discussed below), yet they tend to underperform compared to prun-def. We note that the benefit of prun in this use case is possible only when there are at least two text boxes identified in the Text Detection phase. Otherwise, the other two phases would not be used (if no text boxes detected) or the prun-def variant will use the same (maximum) number of cores as the base (unmodified) version (if only one text box is detected). As a result, the subset of images used for performance evaluation in this section includes images with at least two identified text boxes. The pie chart in Figure 3 shows the distribution of the actual number of boxes detected in the first phase of the OCR pipeline for the entire dataset. The total number of images in the dataset was \(500\) - this number was chosen to keep the evaluation times reasonably short. (We note that we also ran evaluations on a larger dataset that includes images with less than two text boxes and confirmed that the use of prun does not create any overhead in those cases.) Figure 1. PaddleOCR 3-phase pipeline (edited version of Figure 2 from (Kumar et al., 2018)). In light of the discussion above, we break down the comparison of the latency results by the number of detected boxes, as depicted in Figure 4. The latency numbers in this figure were collected with \(16\) cores; we discuss the overall scalability trends later on. 
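To make the thread-allocation policies compared in this section concrete, the sketch below spells out one plausible reading of prun-1, prun-eq, and prun-def. It is our own illustration rather than the code of Listing 1: the proportional rounding used for prun-def (and the fact that leftover cores are simply left unused) is an assumption.

```python
def alloc_prun_1(sizes, total_cores):
    # prun-1: one worker thread per input, regardless of its size
    return [1] * len(sizes)

def alloc_prun_eq(sizes, total_cores):
    # prun-eq: an equal share of the cores per input, but at least one
    return [max(1, total_cores // len(sizes))] * len(sizes)

def alloc_prun_def(sizes, total_cores):
    # prun-def: cores proportional to the relative weight (size) of each
    # input, with at least one core per input
    total = float(sum(sizes))
    return [max(1, int(total_cores * s / total)) for s in sizes]

# Example: four detected text boxes of different sizes on a 16-core machine.
boxes = [12000, 3000, 3000, 2000]  # e.g., pixel counts of the detected boxes
for alloc in (alloc_prun_1, alloc_prun_eq, alloc_prun_def):
    print(alloc.__name__, alloc(boxes, 16))
```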
We also break down the performance in two of the phases where we have used prun, namely Text Classification (Figure 4 (a)) and Recognition (Figure 4 (b)). Considering the results in Figure 4, one can notice that, as expected, the benefit of prun increases with the number of detected text boxes. For instance, when considering the total end-to-end latency (Figure 4 (c)), with only two boxes prun-def outperforms base by \(1.28\)x. However, with \(9\) and \(10\)+ boxes, prun-def outperforms base by \(2.33\)x and \(1.81\)x, respectively.

It is interesting to compare the performance of prun-def with the other prun-based variants. As one can notice in Figure 4 (a), the prun-1 variant produces the lowest latency when the number of detected boxes is small. In fact, the base variant also performs better than prun-def in this case. We attribute this to two factors. First, this specific phase of the pipeline shows negative scalability, which can also be seen in Figure 2. Therefore, the best performance is achieved when fewer threads per box are used in this phase, which is what prun-1 effectively achieves. Second, prun-def (and prun-eq) create and destroy more threads than prun-1 in those cases, as they create thread pools containing more threads for each prun invocation. This adds small, but non-negligible overhead given that the execution time of this phase is short. In future work, we intend to experiment with reusing thread pools between prun invocations. As the number of detected boxes increases, however, all prun variants allocate fewer threads (or even just \(1\)) per box, and they allocate a similar number of threads for their pools, thus closing the gap with the prun-1 variant.

Where the Text Recognition phase is concerned (cf. Figure 4 (b)), however, it is apparent from Figure 2 that one can improve its latency by using more than one thread. We note that, quantitatively, this phase is also far more dominant than the Text Classification one. Here, prun-def manages to achieve the best or close to the best result across all counts of detected boxes, which translates to overall highly competitive end-to-end inference performance (cf. Figure 4 (c)). In general, the results in Figure 4 call for a dynamic mechanism that would choose the best thread allocation strategy based on the given workload and available resources. Devising and experimenting with such a strategy is left for future work.

Finally, we shed more light on how the scalability improves with the use of prun in Figure 5, where we vary the number of cores (and therefore, the total number of worker threads) available for OnnxRuntime. Once again, we include the latency of each of the two last phases of PaddleOCR (denoted as Rec for Text Recognition and Cls for Text Classification) along with the end-to-end (Total) latency. We include only the results of the base and prun-def variants (denoted simply as prun in Figure 5), for clarity. Overall, one can notice similar trends to the ones discussed above. In the base version, the Text Recognition phase does scale up to \(4\) threads, but then its performance suffers as the number of threads increases. The prun variant avoids this performance degradation, and in fact, continues to scale up to \(16\) threads. Indeed, when considering the Text Recognition phase only, the prun variant outperforms base by more than \(2.4\)x at \(16\) threads.
However, since both variants have an identical Text Detection phase, which according to Figure 2 subsumes a substantial part of the total latency, the end-to-end speedup of prun is only \(1.5\)x at \(16\) threads.

Figure 2. Inference latency of PaddleOCR with a varying number of threads, broken down by the three phases of the pipeline.

Figure 3. Distribution of the number of detected text boxes in the input dataset.

Figure 4. The impact of using prun in PaddleOCR.

Figure 5. Total (end-to-end) inference latency of PaddleOCR with a varying number of threads. Also shown is the latency of the Text Classification (Cls) and Text Recognition (Rec) phases.

### Batching of Heterogeneous Inputs

Our next example concerns the Transformer architecture (Krizhevsky et al., 2017), which revolutionized the domain of NLP when it was introduced in 2017 and has been applied to other domains since then (e.g., (Dong et al., 2018; Krizhevsky et al., 2017)). This architecture consists of a stack of layers, each composed of a self-attention block followed by a fully connected network (Krizhevsky et al., 2017). Past work has shown that the majority of computation cycles in Transformers is spent on (scalable) matrix multiplication operations, yet up to one third of the cycles is spent elsewhere (i.e., less scalable operations) [(11)]. It is well-known that one way to improve the inference performance (specifically, throughput) of Transformers is through _input batching_ (3; 15; 30). This strategy works well, however, when the inputs have the same length. Otherwise, one has to either give up on batching, or pad inputs to the same length. The latter results in wasted computation cycles, since special padding tokens are treated exactly as input tokens by the architecture and dismissed at the end of the computation.

This situation presents an ideal case for applying the Divide-and-Conquer Principle. Instead of padding the inputs of various lengths up to the longest input in the batch, we can run inference on those inputs (as they are, without padding) using the prun API, and let the runtime decide how many cores should be used to process each of the inputs. We modify the Transformer benchmark built into the OnnxRuntime [(25)] to implement this strategy.

To evaluate the effectiveness of the approach described above, we set up an experiment where we generate \(X\) inputs of a length chosen uniformly and randomly out of the range \([16,512]\). We then compare the pad-batch version, in which all \(X\) inputs are padded to the longest length in the given batch, with the prun version, in which the inference is invoked with prun on all inputs in the batch. We show results with the highly popular BERT model [(10)] (specifically, "bert-base-uncased"). We have also experimented with other Transformer-based models (such as "bert-large-uncased" or "roberta-base"), measuring similar qualitative results. We note that this experiment includes an inherent amount of randomness -- a batch of small sentences is as likely to be chosen as a batch of long sentences. In an attempt to reduce the anticipated high variance of the results, we opted to repeat the experiment \(1000\) times, and so for each \(X\), each data point is an average of \(1000\) results.
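The padding waste that motivates this comparison is easy to quantify with a small, self-contained sketch (ours, not the benchmark code): for a few random batches drawn from the same \([16,512]\) range, it counts how many tokens a padded batch must process versus how many actually carry content.

```python
import random

def padded_vs_actual_tokens(lengths):
    # Padding forces every sequence to the length of the longest one.
    return max(lengths) * len(lengths), sum(lengths)

random.seed(42)
for batch_size in (2, 4, 8):
    lengths = [random.randint(16, 512) for _ in range(batch_size)]
    padded, actual = padded_vs_actual_tokens(lengths)
    wasted = 100 * (padded - actual) / padded
    print(f"X={batch_size}: lengths={lengths}, padded={padded}, "
          f"useful={actual}, wasted={wasted:.0f}%")
```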
Figure 6 presents the throughput results with batches of various sizes (i.e., \(X\) varies from \(2\) to \(8\)), with error bars depicting the standard deviation of the reported mean. Even though prun outperforms the pad-batch variant across all batch sizes, the variance in the measured results remains exceptionally high. As a result, we set up two additional experiments in a more controlled way that is likely to produce more stable results. In the first, we simply preset the lengths of the various sequences in each batch. For instance, a batch denoted as "\(16\)-\(64\)-\(256\)" includes three sentences, one is \(16\), another is \(64\) and yet another is \(256\) tokens long. We show the results of this experiment in Figure 7. Here, the prun version easily outperforms the pad-batch variant, which has to pad all sequences to the longest sequence in a batch. As one might expect, the benefit from using prun increases with the number of sentences in a batch, as this variant eliminates all the redundant work associated with padding.

In the second experiment, we use a batch of \(1\) long sentence (\(256\) tokens long) and \(X\) short sequences of \(16\) tokens each, where we vary \(X\) between \(0\) and \(15\). We show the throughput results of this experiment in Figure 8, along with a curve depicting the number of threads allocated by prun for the long sequence in the batch. There are several interesting observations that can be made here. First, when \(X\)=0, i.e., the batch contains only one long sentence, both variants employ all available cores to process that batch, producing a similar result. This shows that the overhead of using prun when the input has only one chunk is negligible. Second, the throughput of the pad-batch version grows, but only modestly, with the increase in the number of short sequences. This is because, as stated above, a larger batch of (padded) sequences helps to achieve better throughput with Transformers. At the same time, the throughput growth with prun is much more dramatic up to \(3\) short sequences in a batch and then it declines, but stays well above that achieved with pad-batch. Both phenomena can be explained by the fact that inferencing a sequence of \(256\) tokens takes about the same time with \(16\) threads as it takes with \(13\). Thus, adding a few short sequences into the batch, each allocated with just \(1\) thread (as they have small relative weight), has negligible impact on the latency, but improves throughput. With more short sequences in a batch, fewer threads are allocated for the long sequence (as can be seen in Figure 8) and its inference latency grows. This causes the overall throughput to decrease.

Figure 6. Throughput of inferencing BERT on batches of sequences of sizes chosen randomly from the range \([16,512]\).

Figure 7. Throughput of inferencing BERT on batches of sequences of various preset sizes.

Figure 8. Throughput of inferencing BERT on a batch containing one long sentence of \(256\) tokens and \(X\) short sequences of \(16\) tokens each, where \(X\) varies from \(0\) to \(15\). In addition, we show how many threads are dedicated to the inference of the one long sentence in the batch in the prun variant.

### Batching of Homogeneous Inputs

Our last example follows directly from the discussion in Section 2 on the lack of scalability in ML models. As already mentioned, while Transformer models heavily use scalable matrix multiplication operations, they also employ less scalable operations. The impact of the latter grows with the increase in the number of cores. Therefore, one may benefit from the Divide-and-Conquer Principle applied to Transformers _even when the batch includes inputs of the same length_. As a concrete example, consider a batch of two inputs. Instead of using all available cores to process the batch, we will use half the cores for each input. Intuitively, the less scalable operators create less relative overhead when fewer cores are used and the input sequence is shorter (i.e., contains half the tokens compared to the entire batch). Figure 9 demonstrates this effect with batches of inputs of equal lengths. In addition to the pad-batch variant (which we simply call batch here, as no padding is required) and prun, we include a no-batch variant, which runs inference on each sequence in a given batch one at a time.
Note that we include the latter simply to demonstrate the benefits of batching in general, confirming previous findings (Bahdan et al., 2015; Li et al., 2017; Li et al., 2018). Each set of bars in Figure 9 corresponds to a batch of \(4\) sentences of the given length (from \(64\) to \(512\) tokens). Overall, the prun version yields a more modest (yet non-trivial) speedup over batch compared to the case of non-homogeneous inputs in Section 4.2. This is expected, since in this case the room for improvement (over batch) does not include wasted computation related to padding.

## 5. Related Work

As mentioned in the Introduction, the major focus of the ML community has been on improving the accuracy and training performance of proposed models, while efficient inferencing and serving of those models receive relatively less attention. Yet, there have been some notable exceptions of work focused specifically on inference performance, and we survey the most relevant results hereafter. As an aside, we note that many of the results below come from less formal blog posts published by various companies, highlighting the great practical importance of efficient inference.

Wang et al. (Wang et al., 2017) explore various factors that influence inference performance in TensorFlow, including the choice of a specific math library, a thread pool library, availability of SIMD (single instruction multiple data) support, etc. They identify data preparation as one of the causes for poor scalability of small matrix multiplication operations, something we more generally attribute to framework overhead in Section 2. They come up with a set of guidelines one can use to tune TensorFlow settings to achieve better performance compared to the one achieved with settings recommended by TensorFlow authors or Intel.

With the tremendous rise in popularity of Transformers, several papers and blog posts focus on their inference performance. Dice and Kogan investigate inference performance of Transformers on CPUs (Dice and Kogan, 2017). Their analysis shows that most inference computation cycles are spent in matrix multiplication operations. Hence, they propose an adaptive matrix multiplication optimization aimed at reducing the latency of those operations and subsequently improving the overall inference performance. Intel engineers describe an effort to optimize inference of BERT in Apache MXNet using the GluonNLP toolkit, where one of the ideas is to quantize the model for better performance with lower precision (Kogan et al., 2017). Similar quantization ideas (along with _distillation_, another common method of reducing the size of a model (Kogan et al., 2017)) were employed by Roblox to speed up their deployment of BERT on CPUs (Roblox et al., 2018). The same blog post also mentions that eliminating padding of input sentences has led to better performance (though the authors did that for batches of \(1\) input only).
A Microsoft team (Kogan et al., 2017) describes their effort on accelerating BERT with OnnxRuntime through operation fusion that helps to reduce the amount of overhead (e.g., memory copying) in invoking each kernel individually.

A few recent papers and projects have looked into the deficiency of padding of heterogeneous inputs. Fang et al. (Fang et al., 2019) propose a sequence-length-aware batch scheduler, which aims to batch requests of a similar size, thus reducing the cost of zero padding of all requests into one batch. It requires a profiling phase during which the inference cost of various batches is collected. Du et al. (Du et al., 2019) propose to carefully redesign the GPU kernels employed by Transformers to eliminate most redundant computation associated with zero padding. The Effective Transformer project by ByteDance (Bretton et al., 2017) aims to dynamically remove and restore padding during different calculation stages. All those efforts specifically target inferencing Transformers on GPUs, and it is not clear how efficient they would be on CPUs and/or with other architectures.

Beyond Transformers, Liu et al. (Liu et al., 2019) describe NeoCPU, an approach for optimizing CNN inference on CPUs. NeoCPU proposes a configurable design of an efficient convolution operation that can be tuned efficiently to popular CPUs. This design is coupled with a scheme for obtaining the best memory layout for data in different operations of a CNN model, in order to minimize the overhead of transforming the data between various individual operations.

Figure 9. Throughput of inferencing BERT with batches of \(4\) sequences of equal size.

## 6. Discussion

In this paper, we have discussed various reasons for the lack of scalability of inferencing ML models. While the reasons vary from micro to macro-levels, the common motive is that existing ML frameworks are geared towards high-performance training. This is expressed by the fact that kernels for common operations are typically optimized for large batches with long inputs, ignoring relatively small overheads in various parts of those frameworks that are immaterial to the overall training performance. However, during inference the batches tend to be much smaller and contain shorter inputs, thus making those overheads more prominent. A somewhat similar observation has been made by Aminabadi et al. (Aminabadi et al., 2019). We leverage this poor scalability and describe a simple, yet powerful approach, in which the given input is broken into chunks and each chunk is processed in parallel, instead of using all available resources for the entire input. As we demonstrate with a few well-known models, this approach improves inference scalability and ultimately can lead to over \(2\)x latency and throughput improvements.

This work offers several directions for future research. First, we want to explore more dynamic thread allocation strategies, e.g., ones that can better adjust to the cases where the weight of a work chunk does not correlate linearly with its size and/or where the underlying model performs best while running with a single thread. Second, we want to find ways to automate splitting the input into chunks that can be processed in parallel, lowering the cost (in terms of user code changes) of using prun even further. Finally, we want to explore other use cases where the use of prun would be beneficial, including other ML models that feature a pipeline-based architecture (e.g., [21, 29]).
## Acknowledgments The author would like to thank Dave Dice for valuable comments on an early draft of this paper.
2305.18259
GlyphControl: Glyph Conditional Control for Visual Text Generation
Recently, there has been an increasing interest in developing diffusion-based text-to-image generative models capable of generating coherent and well-formed visual text. In this paper, we propose a novel and efficient approach called GlyphControl to address this task. Unlike existing methods that rely on character-aware text encoders like ByT5 and require retraining of text-to-image models, our approach leverages additional glyph conditional information to enhance the performance of the off-the-shelf Stable-Diffusion model in generating accurate visual text. By incorporating glyph instructions, users can customize the content, location, and size of the generated text according to their specific requirements. To facilitate further research in visual text generation, we construct a training benchmark dataset called LAION-Glyph. We evaluate the effectiveness of our approach by measuring OCR-based metrics, CLIP score, and FID of the generated visual text. Our empirical evaluations demonstrate that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR accuracy, CLIP score, and FID, highlighting the efficacy of our method.
Yukang Yang, Dongnan Gui, Yuhui Yuan, Weicong Liang, Haisong Ding, Han Hu, Kai Chen
2023-05-29T17:27:59Z
http://arxiv.org/abs/2305.18259v2
# GlyphControl: Glyph Conditional Control for Visual Text Generation ###### Abstract Recently, there has been a growing interest in developing diffusion-based text-to-image generative models capable of generating coherent and well-formed visual text. In this paper, we propose a novel and efficient approach called GlyphControl to address this task. Unlike existing methods that rely on character-aware text encoders like ByT5 and require retraining of text-to-image models, our approach leverages additional glyph conditional information to enhance the performance of the off-the-shelf Stable-Diffusion model in generating accurate visual text. By incorporating glyph instructions, users can customize the content, location, and size of the generated text according to their specific requirements. To facilitate further research in visual text generation, we construct a training benchmark dataset called LAION-Glyph. We evaluate the effectiveness of our approach by measuring OCR-based metrics and CLIP scores of the generated visual text. Our empirical evaluations demonstrate that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR accuracy and CLIP scores, highlighting the efficacy of our method. ## 1 Introduction Denoising diffusion probabilistic models [26; 2; 27; 21; 23; 22; 12] have significantly boosted the development of general text-to-image generation by showing the capability of generating surprisingly high-quality images over the past few years. Although currently plenty of diffusion-based text-to-image generation methods could produce abundant fantastic and photo-realistic images, most existing methods still lack the ability to produce legible and readable text in generated images [21; 22] due to the complex and fine-grained structure within the visual text. Several very recent efforts have made preliminary attempts to address the task of visual text generation. Motivated by the inspiring analysis in unCLIP [21], the spelling information inside the prompts can not be accurately modeled with the raw CLIP text embedding, the follow-up efforts including eDiff-I [1] and Imagen [23] attempt to leverage the potential of large language model such as T5 [20], which is trained on the text-only corpus, as the text encoder in image generation. With the strength of T5 embedding on encoding individual objects within the prompts [1], eDiff-I produces more accurate visual text. The very recent DeepFloyd IF model further follows the design of Imagen and demonstrates impressive performance in rendering legible text. Besides, [14] found that the text encoders (both CLIP and T5) used in most existing mainstream text-to-image generation models lack sufficient character-level information for spelling due to the usage of BPE tokenizer, thus they verify that adopting the character-aware model ByT5 [28] instead could bring significant improvements. Figure 1: Illustrating selected \(512\times 512\) GlyphControl samples for different text prompts and glyph conditions. Our GlyphControl can generate coherent images with well-formed visual text. Despite efforts to modify text encoders used in generation models, layout errors such as missing or merged glyphs still exist in generated images [14], implying that merely relying on textual input prompts would not be sufficient for accurate visual text rendering. To address this problem, we propose to incorporate text glyph information into the off-the-shelf powerful text-to-image generation models for visual text generation. 
We formulate the task of visual text generation as a glyph-conditional control problem. Specifically, we propose to control the visual text generation with an additional glyph image.2 The glyph image acts as an explicit spatial layout prior to enforcing the diffusion models generating coherent and well-formed visual text. We show some qualitative results in Figure 1 to show that our method is capable of generating diverse images with well-formed visual text. Footnote 2: The glyph image is a whiteboard image where the characters are rendered with a single particular font while keeping the same content, position, and size as the realistic visual text. In our implementation, we introduce two key innovations including (i) a GlyphControl framework that can augment the off-the-shelf text-to-image generation model by exploiting the shape information encoded in the glyph image with a ControlNet branch, and (ii) a LAION-Glyph benchmark that consists of \(1\sim 10\) M text-image pairs augmented with additional OCR detection results that record the presented text information. We further create two evaluation benchmarks, including SimpleBench and CreativeBench, to assess the performance of our method and the other strong methods such as DeepFloyd IF. To demonstrate the effectiveness of our approach, we conduct thorough experiments and show that our approach consistently achieves much higher OCR accuracy than the DeepFloyd IF. For example, on SimpleBench and CreativeBench, our approach gains +\(15\%\) (\(48\%\) vs. \(33\%\)) and +\(13\%\) (\(34\%\) vs. \(21\%\)) than the very recent powerful DeepFloyd (IF-I-XL) while only requiring less than \(22\%\) parameters. We summarize our main contributions as follows: * We propose a glyph-conditional text-to-image generation model named GlyphControl for visual text generation, which outperforms DeepFloyd IF and Stable Diffusion in terms of OCR accuracy and CLIP score while saving the number of parameters by more than \(3\times\). * We introduce a visual text generation benchmark named LAION-Glyph by filtering the LAION-2B-en and selecting the images with rich visual text content by using the modern OCR system. We conduct experiments on three different dataset scales: LAION-Glyph-100K, LAION-Glyph-1M, and LAION-Glyph-10M. * We report flexible and customized visual text generation results. We empirically show that the users can control the content, locations, and sizes of generated visual text through the interface of glyph instructions. ## 2 Related Work Text-to-image Diffusion Models.Denoising Diffusion Probabilistic Model [8] and its successors [18; 21; 1; 22; 23] have demonstrated impressive performance on high-quality image synthesis with text prompts. GLIDE [18] emphasizes the necessity of the classifier-free guidance over CLIP guidance and the usage of cascaded diffusion models [9; 23; 1; 21] for high-fidelity, high-resolution generation. Imagen [23] introduces generic large language models (T5-XXL text encoder) into the text-to-image generation while demonstrating comparable or superior image quality to the CLIP text encoder. Moreover, eDiff-I [1] concatenates the CLIP text embeddings and T5 text embeddings to benefit from the strengths of both two text encoders. Unlike the aforementioned pixel-level diffusion models, Latent Diffusion [22] transforms the image into latent features and applies the diffusion model in the latent space to decrease training and inference costs. 
In this work, we adopt Stable Diffusion (v2.0), an application of the latent diffusion method in a text-to-image generation but trained with additional data and a powerful CLIP text encoder, as the base model. Controllable Image Generation.To achieve more customized image synthesis, users could apply additional conditions, such as segmentation maps or depth maps [22], onto diffusion models. Beyond this intuitive approach, multiple diffusion-based methods of image editing [16; 11; 19; 5] demonstrate promising performance in controlling the content of synthesized images. Recently, more related works [29; 17; 10] focus on flexible and composable control of image synthesis. Composer [10] decomposes the image generation into multiple factors and generates images by re-combining them. While both T2IAdapter [17] and ControlNet [29] can incorporate different conditional maps, such as segmentation maps or depth maps, as additional data into the pre-trained diffusion models, demonstrating accurate structure or color control without dampening the generation ability of the original models. Considering the fact that the glyphs of visual text essentially belong to geometric structures, we adopt ControlNet as the basic framework to generate visual text by controlling the local structure with additional glyphs. Visual Text Generation.Although diffusion models could generate high-fidelity images, current mainstream text-to-image generation models such as unCLIP [21] and Stable Diffusion have trouble rendering legible and readable text onto images. Several previous works [6; 7] demonstrate that diffusion models have the ability to generate visual text with different fonts but do not extend to general image generation. Due to the findings that CLIP embedding could not precisely perceive the spelling information in the input prompts [21; 1], both Imagen [23] and eDiff-I [1] utilize the large language model T5 [20] to achieve superior visual text generation. The recent open-sourced image generation model DeepFloyd IF [12], inspired by Imagen, takes the T5-XXL as the text encoder as well demonstrating impressive performance in visual text generation. Furthermore, [14] thoroughly exploits the strengths of character-aware language models like ByT5 [28] over character-blind counterparts such as mainly used CLIP and T5. With the usage of ByT5 in the generation, the semantic errors of rendered text decrease while the errors related to the layout of glyphs still exist, implying that the auxiliary information about glyph images would be necessary. Recently, GlyphDraw [15] successfully renders Chinese characters onto images by adding glyph images into the input of the diffusion model and also fusing extracted glyph embedding with text embedding as a condition. Based on similar insights, we utilize glyph images as conditional maps to control image synthesis. Compared to the above methods, we could specify the contents, locations, and sizes of text, which brings more customized and flexible designs. ## 3 Approach ### Preliminary Stable Diffusion [22].We have selected the "stable-diffusion-2-base" (SD 2.0-base3) as the foundational model in this work. The Stable Diffusion model is a highly capable and versatile text-to-image generative model that has been meticulously trained from scratch. The training process of basic models involves \(550\)k steps at resolution \(256\times 256\), focusing on a subset, with an aesthetic score of \(4.5\) or higher, of the LAION-5B dataset. 
What makes the difference between stable-diffusion-2-base and previous versions is that the model is continuously trained on the same dataset with a resolution of at least \(512\times 512\) pixels, which contributes to the model's ability to generate more detailed and visually appealing images. The training of stable-diffusion-2-base costs hundreds of hours with 128\(\times\) A100 GPUs. In this work, by employing the off-the-shelf "stable-diffusion-2-base" model and refining it through rigorous training processes, we aim to achieve superior results in the visual text generation domain. Footnote 3: [https://huggingface.co/stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) ControlNet [29].The ControlNet is a powerful network that enhances pre-trained large diffusion models like Stable Diffusion with additional input conditions. It learns task-specific conditions in an end-to-end manner, even with limited training data. The network has a trainable copy and a locked copy of the diffusion model's weights, enabling it to retain general capabilities while fine-tuning for specific tasks. The ControlNet incorporates "zero convolution" layers to connect the trainable and locked components, gradually adapting convolution weights from zeros to optimized parameters. This allows precise control over the model's behavior in various applications. ### GlyphControl Framework.The GlyphControl framework consists of several key components: (i) an OCR engine for detecting text information in the given image, (ii) a Glyph render for rendering the detected text in a whiteboard image at corresponding locations, (iii) an image VAE encoder that projects the input image into a latent code, and an image VAE decoder that reconstructs an output image based on the latent code, (iv) a text encoder (OpenAI CLIP text encoder) that converts the input text into a text embedding, (v) a U-Net encoder and decoder that performs the denoising diffusion process, and (vi) a Glyph ControlNet that encodes the conditional glyph information by processing the glyph image rendered by the Glyph render. More details of the GlyphControl framework can be seen in Figure 2. Furthermore, Figure 3 showcases some example images of the rendered glyph images. To incorporate glyph information, we introduce the concept of glyph input conditions by rendering glyph images and feeding them into the ControlNet branch. Unlike conventional conditions used in the original ControlNet [29], accurate visual text rendering greatly benefits from the use of rendered glyph images. We specifically chose the ControlNet architecture for its proficiency in controlling precise geometric structures. With our GlyphControl approach, we can successfully generate legible and readable visual text. This is achieved by utilizing pre-rendered glyph images as input condition maps for the ControlNet, allowing us to control the generated glyphs at the layout level. Furthermore, we specify the words in Figure 3: Illustrating of the generated glyph images based on the glyph render (LAION-Glyph-1M). Figure 2: **Illustrating the framework of GlyphControl. (a) The GlyphControl architecture comprises a pre-trained Stable Diffusion model as a “locked copy” and a randomly initialized ControlNet model as a “trainable copy.” (b) During the training process, the input image \(x\) undergoes encoding with a VAE encoder, resulting in a latent embedding \(z_{0}\). 
The diffusion process is then applied to \(z_{0}\), generating a noised latent embedding \(z_{t}\). Additionally, we utilize an OCR engine (PP-OCR [4]) to extract text from images and employ a glyph render to generate a whiteboard image. This image exclusively represents recognized characters as black regions, forming the glyph image \(g\). Consequently, both the text embedding (based on text caption \(c\)) and the noised latent embedding are fed into the U-Net (locked copy) and the Glyph ControlNet (trainable copy). This enables the estimation of the noise term \(\varepsilon(z_{t},t)\), with the crucial step involving passing the glyph image to the Glyph ControlNet to extract vital glyph information for rendering well-formed text. (c) During inference, our method supports diverse user instructions for customizing the rendering of the glyph image \(g\). Subsequently, we sample a noise latent embedding \(z_{T}\) from Gaussian noise and employ the DDIM scheme to perform the denoising process, estimating the denoised latent embedding \(z_{0}\). Finally, \(z_{0}\) is sent to the VAE decoder, resulting in the construction of the final output image \(y\).** the input text prompts (e.g., "A storefront with "GlyphControl" written on it") and leverage the CLIP text encoder to understand the semantic meaning of the words. Glyph Instructions.One major advantage of our GlyphControl approach is its ability to support customized glyph instructions (\(i\)), which enables the specification of various constraints on the rendered text in the final output image. Our GlyphControl framework provides support for three types of text information customization: * **Text character information**: GlyphControl allows for the specification of not only single words but also phrases or sentences composed of multiple words. As long as the text is intended to be placed within the same area, users can customize the text accordingly. * **Text line information**: GlyphControl provides the flexibility to assign words to multiple lines by adjusting the number of rows. This feature enhances the visual effects and allows for more versatile text arrangements. * **Text box information**: With GlyphControl, users have control over the font size of the rendered text by modifying the _width_ property of the text bounding box. The location of the text on the image can be specified using the _coordinates_ property of the top left corner. Additionally, the _yaw rotation angle_ property of the text box allows for further adjustments. By default, the text is rendered following the optimal width-height ratio, but users can define a specific _width-height ratio_ to precisely control the height of the text box. We demonstrate the effectiveness of these glyph instructions in Figure 4, where our approach successfully generates legible text according to the specified instructions. For instance, in Figure 4, we showcase examples where users can customize the positions of the rendered text, adjust the font size, or place multiple groups of text at different locations to achieve personalized designs. Additionally, users have the option to split the text into multiple rows or rotate the text boxes for improved arrangement. Our controllable text generation approach opens up possibilities for automated personalized art designs in the future. Moreover, in the experimental section, we provide empirical evidence showcasing that our method achieves significantly higher OCR accuracy compared to the recent DeepFloyd model. 
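The paper leaves the concrete data layout of these glyph instructions open; purely as an illustration, the following sketch shows one way such an interface could be represented in Python (all field names and defaults are our assumptions, not part of GlyphControl):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class GlyphInstruction:
    # One group of text to render; the fields mirror the three kinds of
    # customization described above (content, line layout, text box).
    text: str                                    # characters to render
    num_rows: int = 1                            # split the words over this many lines
    top_left: Tuple[float, float] = (0.1, 0.1)   # box origin in relative image coordinates
    width: float = 0.3                           # box width (controls the font size)
    yaw: float = 0.0                             # rotation angle of the text box
    aspect_ratio: Optional[float] = None         # optional width-height ratio of the box

@dataclass
class GlyphPrompt:
    caption: str                                 # the ordinary text prompt
    instructions: List[GlyphInstruction] = field(default_factory=list)

prompt = GlyphPrompt(
    caption='A storefront with "GlyphControl" written on it',
    instructions=[GlyphInstruction("GlyphControl", top_left=(0.3, 0.4), width=0.4)],
)
```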
Implementation.We utilize the same architecture and initial weights for the VAE and U-Net as the SD 2.0-base model. Our training process incorporates PP-OCRv3 [3] as the OCR engine. During inference, users need to provide glyph instructions to generate customized images. For rendering glyphs, we leverage the tools available in the ImageDraw module of the Python library Pillow. ### LAION-Glyph Benchmark Overview.The training process of Stable Diffusion and DeepFloyd has greatly benefited from the utilization of the extensive multi-modal dataset LAION-5B [24]. However, there is currently a Figure 4: Illustrating the qualitative results of GlyphControl in terms of flexible user controllability. The whiteboard images depict the corresponding glyph condition maps alongside the generated images on the right side. The ”[X]” symbol (in the last example) is used as a replacement for the rendered words on the T-shirt. notable absence of openly available datasets specifically tailored for visual text generation tasks. To bridge this gap, we introduce the LAION-Glyph benchmark. To construct this benchmark, we start with LAION-2B-en, which is a subset of LAION-5B [24], and selectively choose specimens that exhibit abundant visual text content using the PP-OCR engine. Pipeline.Our data construction process consists of two consecutive steps. In the first step, we apply an aesthetic score prediction model to filter out images with an aesthetic score higher than 4.5. Next, we utilize the PP-OCRv3 [3] engine for text detection and recognition. To ensure the quality of the data, we discard images where all OCR boxes are located at the image border. Additionally, we remove images with OCR areas that are less than 5% or have more than 5 bounding boxes, as these cases may lead to text recognition or image reconstruction failures. To address inaccuracies in the original captions from the LAION dataset, we generate new captions using the BLIP-2 [13] model. As a result, we have curated a high-quality LAION-Glyph dataset consisting of 10 million images. This dataset includes detailed OCR information and captions that are well-formed and accurate. Statistics.As illustrated in Fig. 5, the character count in the images is primarily concentrated within the range of 10 to 50 characters, with the majority of samples containing fewer than 150 characters. In terms of word distribution, the most common cases consist of 3 to 5 words, while instances with more than 15 words are relatively rare. Additionally, the number of bounding boxes is fairly evenly distributed, although images with only one box are less prevalent. To facilitate training and evaluation, we partitioned the LAION-Glyph dataset into three scales: LAION-Glyph-100K, LAION-Glyph-1M, and LAION-Glyph-10M, using a random division approach. ## 4 Experiment ### Training Details We train our framework on three different dataset scales: LAION-Glyph-100K, LAION-Glyph-1M, and LAION-Glyph-10M for \(60\times\) epochs, \(20\times\) epochs, and \(6\times\) epochs, respectively. The initial weights of both the SD branch and Glyph ControlNet branch are copied from the SD 2.0-base model. For both the Glyph ControlNet and Zero-Conv blocks, we set the base learning rate to \(1\mathrm{e}{-4}\). The U-Net decoder is kept frozen during training. The caption dropping rates for the SD branch and Glyph ControlNet branch are set to \(0.1\) and \(0.5\), respectively. The input images are maintained at a resolution of \(512\times 512\). 
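As noted in the implementation details above, the glyph render is built on the ImageDraw module of Pillow. Before turning to evaluation, here is a minimal sketch of that rendering step, assuming instructions given as plain dicts with relative coordinates; the whiteboard size, the font fallback, and the crude font-size search are our simplifications rather than the exact code used to build LAION-Glyph:

```python
from PIL import Image, ImageDraw, ImageFont

def render_glyph_image(instructions, size=512):
    # Black text on a white canvas, placed according to each instruction's
    # relative top-left corner and box width.
    canvas = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(canvas)
    for ins in instructions:
        x = int(ins["top_left"][0] * size)
        y = int(ins["top_left"][1] * size)
        box_w = int(ins["width"] * size)
        font = ImageFont.load_default()
        try:
            # Shrink the font until the rendered text fits the requested box width.
            font_size = box_w
            while font_size > 4:
                candidate = ImageFont.truetype("DejaVuSans.ttf", font_size)
                if draw.textlength(ins["text"], font=candidate) <= box_w:
                    font = candidate
                    break
                font_size -= 2
        except OSError:
            pass  # TrueType font not available; keep the default bitmap font
        draw.text((x, y), ins["text"], fill="black", font=font)
    return canvas

img = render_glyph_image([{"text": "GlyphControl", "top_left": (0.3, 0.4), "width": 0.4}])
img.save("glyph_condition.png")
```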
### Evaluation MetricsWe evaluate the effect of visual text generation on OCR accuracy. We measure OCR exact match accuracy, denoted as \(\mathbf{Acc}\), which assesses the word-level agreement between the OCR recognition results and the ground truth visual text. In other words, it represents the complement of the Word Error Rate (WER), i.e., \(1{-}\mathrm{WER}\). As the DeepFloyd model tends to generate visual text in all capital letters, regardless of the original form of the words in the prompt, we introduce the OCR capitalization-insensitive exact match accuracy \(\mathbf{\hat{Acc}}\). This measure allows for a fairer comparison by disregarding the case of the text. Additionally, we incorporate character-level OCR Figure 5: Illustrating the statistics on LAION-Glyph-10M. Left: Distribution of character counts in each image. Middle: Distribution of word counts in each image. Right: Distribution of detected bounding boxes in each image. accuracy by employing the Levenshtein distance for partial matching evaluation. We report the average Levenshtein distance **LD** for each word, providing insights into the accuracy at the character level. In addition to the OCR-based metrics mentioned earlier, we evaluate the image-text alignment of the generated visual text images using the CLIP score, as done in previous works [1][23]. Due to the inclusion of additional condition glyph maps in our framework, we do not report FID-30K or FID-10K scores, but instead assess fidelity through visual observation. BenchmarkWe construct two evaluation benchmarks by incorporating prompt templates from previous works on visual text generation [14][15] and embedding different words selected by us into these templates. * **SimpleBench**: A simple text prompt benchmark following [14]. The format of prompts remains the same: _"A sign that says "\(\text{<word>}\)"_. * **CreativeBench**: A creative text prompt benchmark adapted from GlyphDraw [15]. We adopt diverse English-version prompts in the original benchmark and replace the words inside quotes. As an example, the prompt may look like: _"Little panda holding a sign that says "\(\text{<word>}\)"_._ or _"A photographer wears a t-shirt with the word "\(\text{<word>}\)-" printed on it'_ In accordance with [14], we collect a pool of single-word candidates from Wikipedia. These words are then categorized into four buckets based on their frequencies: \(\textbf{Bucket}^{\textbf{1k}}_{\textbf{top}}\), \(\textbf{Bucket}^{\textbf{10k}}_{\textbf{1k}}\), \(\textbf{Bucket}^{\textbf{10k}}_{\textbf{10k}}\), and \(\textbf{Bucket}^{\textbf{100k}}_{\textbf{100k}}\). Each bucket contains words with frequencies in the respective range. To form input prompts, we randomly select 100 words from each bucket and insert them into the aforementioned templates. Consequently, we generate four images for each word during the evaluation process. ### Main Results Table 1 presents a comparison of our method with the most representative generative models, including Stable Diffusion and DeepFloyd. Our method achieves the highest OCR accuracy on both benchmarks compared to DeepFloyd and the original Stable-Diffusion model. Notably, the OCR accuracy of Stable Diffusion is almost zero, indicating that it is unable to generate legible text. In contrast, our framework, with the addition of glyph control, enables the diffusion models to render relatively accurate visual text while maintaining fewer or comparable training parameters to the original Stable Diffusion model. 
Compared to the pixel-level diffusion model DeepFloyd IF, our method achieves better OCR performance with fewer parameters. Additionally, DeepFloyd IF tends to generate capital \begin{table} \begin{tabular}{l|c|c|c|c c} Method & \#Params & Text Encoder & Training Dataset & \(\textbf{Acc}(\%)\)\(\uparrow\) & \(\textbf{Acc}(\%)\)\(\uparrow\) & \(\textbf{LD}\)\(\downarrow\) \\ \hline Stable Diffusion v2.0 & \(865\)M & CLIP(\(354\)M) & LAION 1.2B & 00\(\times\)0 & 3/2 & 4.25\(\%\).01 \\ DeepFloyd (IF-I-M) & \(2.1\)B & T5-XXL(\(4.8\)B) & LAION 1.2B & 0.30\(\cdot\)0.1 & 18/11 & 2.44\(\%\).86 \\ DeepFloyd (IF-I-L) & \(2.6\)B & T5-XXL(\(4.8\)B) & LAION 1.2B & 0.30\(\cdot\)0.7 & 26/17 & 1.97\(\%\).37 \\ DeepFloyd (IF-I-XL) & \(6.0\)B & T5-XXL(\(4.8\)B) & LAION 1.2B & 0.67\(\cdot\)1 & 33/21 & 1.63\(\%\).09 \\ \hline GlyphControl & \(1.3\)B & CLIP(\(354\)M) & LAION-Glyph-100K & 30\(\cdot\)19 & 37/24 & 1.77\(\%\).58 \\ GlyphControl & \(1.3\)B & CLIP(\(354\)M) & LAION-Glyph-1M & 40\(\cdot\)26 & 45/30 & 1.59\(\cdot\)2.47 \\ GlyphControl & \(1.3\)B & CLIP(\(354\)M) & LAION-Glyph-10M & **42/28** & **48/34** & **1.43/2.40** \\ \end{tabular} \end{table} Table 1: Comparison results of OCR-related metrics with prior methods in the field of visual text generation is shown in the table. The results are averaged over four-word frequency buckets. The results on SimpleBench/CreativeBench are presented on the left/right side of the slash, respectively. It is important to note that the total number of parameters reported in the second column of the table does not include the text encoder. The LAION 1.2B dataset comprises image-text pairs with predicted aesthetic scores of 4.5 or higher in LAION 5B. All the DeepFloyd models use IF-II-L (1.2B) and Stable \(\times\)4 as the upscale models to progressively increase the image resolutions from \(64\times 64\) to \(1024\times 1024\). \begin{table} \begin{tabular}{l|c|c|c|c|c} Method & \multicolumn{1}{l}{l} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{2-7} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-7} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 2: Illustrating the CLIP score comparison results based on the settings described in Table 1. The average results across four word frequency buckets on both benchmarks are provided. characters regardless of the original word form, resulting in lower performance on case-sensitive metrics and reduced flexibility in real-world usage. The comparison among the DeepFloyd IF models demonstrates that increasing the model parameters can enhance the accuracy of generated visual text. Similarly, for our method, training on a larger dataset can improve text generation performance. For example, the OCR accuracy **Acc** increased from \(37\%\) to \(48\%\) when trained on a larger specialized visual text-related dataset. This highlights the importance of leveraging larger datasets specifically focused on visual text in order to achieve further improvements in performance. In addition to the OCR-related evaluation, we also assess the consistency between prompts and generated images, as shown in Table 2. Our method consistently outperforms or achieves comparable results to other general text-to-image models in terms of the CLIP score. 
This indicates that our model has the capability to accurately represent the specified words in the text prompts within the generated images, while still maintaining the fundamental ability to align image and text. Furthermore, we conduct a comparison of the generation performance across different benchmarks and word buckets (see Figure 6). Generally, all methods demonstrate higher OCR accuracy on the SimpleBench compared to the more challenging CreativeBench, which features more diverse prompts. However, CLIP scores tested on the SimpleBench are slightly lower than those on the CreativeBench, which could be attributed to the richer descriptions present in the prompts of the latter. Additionally, it is observed that words with high-frequency appearances are generally easier to render onto images compared to rarely-used words. An interesting finding is that low-frequency words consistently exhibit higher CLIP scores. This might be attributed to the CLIP embedding's tendency to overlook low-frequency words in the prompts, potentially leading to overestimation. ### Qualitative Analysis As depicted in Figure 7, both DeepFloyd IF and Stable Diffusion exhibit various types of errors during text rendering, including missing glyphs (as shown in the first row of Figure 7 (b)), repeated or merged glyphs, and misshapen glyphs (as illustrated in the first row of Figure 7 (a)). In some challenging cases (as seen in the second row of Figure 7), these models even fail to generate any text at all. In contrast, our method, which incorporates additional glyph images as structure control, can accurately generate legible text that aligns well with the input text prompts by placing the pre-rendered glyph images in the appropriate positions. Furthermore, as demonstrated in the third example, our method can effectively handle cases where multiple groups of visual text need to be rendered at different locations within the images, while the other two models struggle to handle such combinations of visual text. Figure 6: Comparison to Stable Diffusion (SD) and DeepFloyd across different benchmarks and buckets. Figure 7: Qualitative comparison results. The left column presents the text prompt and the other five columns show the images generated by Stable Diffusion (SD), DALL-E 2, Stable Diffusion XL (SD XL), DeepFloyd IF (IF), and our GlyphControl. The results demonstrate that both DeepFloyd IF and Stable Diffusion exhibit limitations in generating text, including typos and incomplete text generation. ### Ablation Experiment To assess the generalization capability of our approach on different training datasets, we curate a specialized OCR-related training dataset called TextCaps 5K. This dataset consists of images related to signs, books, and posters, which are extracted from the training set of the TextCaps v0.1 Dataset [25]. The original purpose of this dataset was image captioning with an understanding of visual text content. We fine-tune our model on the TextCaps 5K dataset for an additional 40 epochs, using the same training settings as those applied to the model trained on LAION-Glyph-100K (as shown in Table 1). This fine-tuning process aims to evaluate how well our model performs on a different training dataset and further validate its robustness and adaptability. Through our experiments, we discover that unlocking the frozen U-Net decoder in the original Stable Diffusion model can significantly enhance the OCR accuracy of visual text rendering, as demonstrated in Table (a)a. 
This improvement can be attributed to the decoder's improved adaptation to the smaller TextCaps 5K dataset during training. As training progresses, the accuracy of text generation gradually improves, as shown in Table (b)b. However, it is worth noting that the generated images tend to resemble the samples in TextCaps 5K. As depicted in Figure 8, the visual text regions in the center of the generated images appear more like conventional signs (Figure 8 (d)), without seamlessly merging with the background, while the images generated by the model pre-trained on LAION-Glyph-100K exhibit greater creativity and diversity (Figure 8 (a)). Therefore, in order to generate photo-realistic images that accurately render visual text, it is essential to not only have a training dataset with a large number of high-quality samples containing abundant visual text content, but also to include more realistic images with diverse scenes and creative elements. ## 5 Conclusion This paper presents the GlyphControl method, which is remarkably simple yet highly effective in generating legible and well-formed visual text. The success of our approach can be attributed to two key contributions: (i) the utilization of the glyph ControlNet, which encodes text shape information based on rendered glyph images, and (ii) the development of the LAION-Glyph benchmark, which provides the advantages of large-scale training data. By combining the GlyphControl method with the LAION-Glyph benchmark, our approach achieves remarkable performance and consistently outperforms even the recent DeepFloyd IF model in terms of OCR accuracy. We believe that our work can serve as a valuable baseline for future research endeavors in building robust visual text generation models. \begin{table} \begin{tabular}{c|c|c c c|c} Training Dataset & \begin{tabular}{c} Training Epochs \\ \end{tabular} & \begin{tabular}{c} **Acc**(\%)\(\uparrow\) \\ \end{tabular} & \begin{tabular}{c} **A**ec**(\%)\(\uparrow\) \\ \end{tabular} & \begin{tabular}{c} **LD**\(\downarrow\) \\ \end{tabular} & \begin{tabular}{c} CLIP Score\(\uparrow\) \\ \end{tabular} \\ \hline \multirow{4}{*}{TextCaps 5K} & pre-trained & 30/19 & 37/24 & 1.77/2.58 & 33.7/36.2 \\ & \(10\) & 48/28 & 56/34 & 1.31/2.32 & 33.8/35.5 \\ & \(20\) & 50/34 & 61/41 & 1.03/2.01 & **34.3**/35.7 \\ & 40 & **61/43** & **68/49** & **0.76**/**1.38** & \(34.2\)**/**36.3** \\ \end{tabular} \end{table} Table 3: Ablation experiments on TextCaps 5K. We report the average results on two benchmarks.
2306.01258
On-shell action of $\text{T}\bar{\text{T}}$-deformed Holographic CFTs
In this work, we study the holographic dual of the $\text{T}\bar{\text{T}}$ deformation following the mixed boundary condition proposal. We point out that a boundary term should be included in the gravity action in the holographic dictionary. In particular, we consider the deformed CFT defined on a sphere (dS) or AdS background and explain the difference between the holographic results and field theory results.
Jia Tian
2023-06-02T03:49:18Z
http://arxiv.org/abs/2306.01258v2
# On-shell action of TT-deformed Holographic CFTs ###### Abstract In this work, we study the holographic dual of the \(\mathrm{T}\overline{\mathrm{T}}\) deformation following the mixed boundary condition proposal. We point out that a boundary term should be included in the gravity action in the holographic dictionary. In particular, we consider the deformed CFT defined on a sphere (dS) or AdS background and explain the difference between the holographic results and field theory results. ###### Contents * 1 Introduction * 2 Holographic dictionary of \(\mathrm{T}\overline{\mathrm{T}}\) deformed theories * 2.1 Setup * 2.2 The On-shell action * 3 Examples * 3.1 Torus background: the thermal state in flat spacetime * 3.2 Conical background: the primary state in flat spacetime * 3.3 Non-flat boundary * 3.4 Sphere background in the vacuum state * 3.5 Poincare disk background in the vacuum state * 4 Conclusion and Discussion * A Convention and Notation * B Sphere partition functions ## 1 Introduction The \(\mathrm{T}\overline{\mathrm{T}}\) deformations [1, 2] have been studied extensively in recent years due to their remarkable properties [3, 4]. From the perspective of quantum gravity, the two most appealing properties are that the deformed theory is conjectured to be non-local but UV complete and that the deformations have interesting applications to AdS/CFT holography [5, 6, 7]. The holographic dual of the \(\mathrm{T}\overline{\mathrm{T}}\) deformation of a holographic CFT is a bulk gravity theory with mixed boundary conditions for the bulk fields [8]. The holographic dictionary for the \(\mathrm{T}\overline{\mathrm{T}}\) -deformed theories can be summarized as [3]1 Footnote 1: the notation will become clear in a moment. \[Z_{\mathrm{T}\overline{\mathrm{T}}\,,\mathrm{CFT}}[\gamma^{[\mu]}_{\alpha \beta}]=Z_{\mathrm{grav}}\left[g^{(0)}_{\alpha\beta}+\frac{\mu}{16\pi G_{N} }g^{(2)}_{\alpha\beta}+\frac{\mu^{2}}{(16\pi G_{N})^{2}}g^{(4)}_{\alpha\beta} =\gamma^{[\mu]}_{\alpha\beta}\right], \tag{1}\] where \(\mu\) is the deformation parameter. Another well-used holographic proposal is the geometric bulk cut-off proposal [9, 10] (or glue-on proposal [11]) which has some limita tions as explained in [8]. However, the cut-off proposal is simpler to apply. For example, it is proposed that the entanglement entropy can be computed by the Ryu-Takayanagi (RT) formula [12] in the cut-off geometry, and for vacuum states the result agrees with the field theory calculations [13]. But a disagreement has also been found in [14] when they considered the deformed entanglement entropy of a thermal state. In this paper, we want to argue that the RT formula does not give the full answer of the entanglement entropy because there is an extra term in the gravity action. The key observation of us is that in the dictionary (1) we should add a proper boundary term to the gravity action2. The presence of this boundary term is natural considering that the double trace deformation not only shifts the source but also shifts the generating function [16, 17]. We will compute several examples explicitly to confirm this observation. Shifting the action by a universal constant will only change the entanglement entropy by some constant which can be absorbed into the UV cut-off. However, if the entanglement entropy does not depend on the UV cut-off or the constant is not universal but state-dependent then this constant should have physical meaning and the correct RT formula should capture it. 
Footnote 2: Perhaps this is already implicit in [8]. Usually, the \(\mathrm{T}\overline{\mathrm{T}}\) deformation is only well defined for theories on flat spacetime. For example, the important factorization property of the \(\mathrm{T}\overline{\mathrm{T}}\) operator is lost on a general curved background [18]. However, for holographic CFTs, in the large \(N\) limit, composite operators always factorize and the dictionary (1) applies to arbitrary 2-dimensional (2d) backgrounds, even though constructing the general bulk dual is very challenging. By restricting our considerations to the vacuum state and a curved background with constant Ricci curvature, we find that the construction of the bulk dual can be reduced to solving the well-known and integrable Liouville equation. The partition function of the \(\mathrm{T}\overline{\mathrm{T}}\) -deformed CFT on a sphere in the vacuum state has been derived in [13] by solving the flow equation directly. Our holographic result differs from it by a universal Weyl anomaly constant, which is proportional to the Euler character of the background, and a \(\mu\)-dependent constant. From a deformation perspective, our holographic result is favored because it has an undeformed limit. In contrast, the partition function derived in [13] diverges in the limit \(\mu\to 0\). The Weyl anomaly constant is also important: it turns out that it cancels the UV divergence in the entanglement entropy. From our point of view, this cancellation explains why in [13] the entanglement entropy is UV finite. We further consider the AdS background. Because the AdS space is not compact, the on-shell action is divergent; however, the holographic entanglement entropy can be computed in the same way and the result is similar to the one in the sphere background case.

## 2 Holographic dictionary of \(\mathrm{T}\overline{\mathrm{T}}\) deformed theories

In this section, we briefly review the derivation of the holographic dictionary (1). The derivation [8] is based on the variational principle and the holographic dual of the double-trace deformation [16, 17]. Our convention and notation are summarized in Appendix A.

### Setup

The \(\mathrm{T}\overline{\mathrm{T}}\) deformation of a 2d field theory with action \(S_{\rm CFT}\) is defined by the flow equation \[\frac{dS_{\rm CFT}^{[\mu]}}{d\mu}=\int d^{2}x\sqrt{\gamma}\,{\rm T}\bar{\rm T}^{[\mu]}, \tag{2.1}\] where in principle the 2d background can be arbitrary. At linear order, the deformation is just a double trace deformation \[S_{CFT}^{[\mu]}=S_{CFT}+\mu\int d^{2}x\sqrt{\gamma}\,{\rm T}\overline{\rm T}\ +{\cal O}(\mu^{2}). \tag{2.2}\] The double trace deformation does two things to the generating function. It shifts the source by the expectation value of the dual operator and it shifts the generating function (or the on-shell action) by _subtracting_ the double trace operator, _i.e._ \[W^{[\mu]}=W_{0}-\mu\int d^{2}x\sqrt{\gamma}\,{\rm T}\overline{\rm T}.
\tag{2.3}\] Using the defining property of the generating function \(\delta W=\frac{1}{2}\int d^{2}x\sqrt{\gamma}T_{\alpha\beta}\delta\gamma^{ \alpha\beta}\), we can obtain a flow equation \[\frac{1}{2}\partial_{\mu}\left(\int d^{2}x\sqrt{\gamma^{[\mu]}}T_{\alpha\beta }^{[\mu]}\delta\gamma_{\alpha\beta}^{[\mu]}\right)=-\delta\left(\int d^{2}x \sqrt{\gamma^{[\mu]}}{\rm T}\bar{\rm T}^{[\mu]}\right), \tag{2.4}\] which can be solved by [8] \[\gamma_{\alpha\beta}^{[\mu]} =\gamma_{\alpha\beta}^{[0]}+\frac{1}{2}\mu\hat{T}_{\alpha\beta}^ {[0]}+\frac{1}{16}\mu^{2}\hat{T}_{\alpha\rho}^{[0]}\hat{T}_{\sigma\beta}^{[0]} \gamma^{[0]\rho\sigma}, \tag{2.5}\] \[\hat{T}_{\alpha\beta}^{[\mu]} =\hat{T}_{\alpha\beta}^{[0]}+\frac{1}{4}\mu\hat{T}_{\alpha\rho}^{[ 0]}\hat{T}_{\sigma\beta}^{[0]}\gamma^{[0]\rho\sigma}, \tag{2.6}\] where \(\hat{T}_{\alpha\beta}=T_{\alpha\beta}-\gamma_{\alpha\beta}T\). The proposal of [8] is that \(\gamma^{[\mu]}_{\alpha\beta}\) and \(T^{[\mu]}_{\alpha\beta}\) are the sources and dual operators of the deformed holographic theory which is still an asymptotic AdS\({}_{3}\) gravity theory. The general asymptotic AdS\({}_{3}\) solution can be written in the Fefferman-Graham gauge like [20] \[ds^{2}=g_{\alpha\beta}(\rho,x^{\alpha})dx^{\alpha}dx^{\beta}+ \frac{d\rho^{2}}{4\rho^{2}},\quad g_{\alpha\beta}(\rho,x^{\alpha})=\frac{g^{(0 )}_{\alpha\beta}}{\rho}+g^{(2)}_{\alpha\beta}+\rho g^{(4)}_{\alpha\beta}, \tag{2.7}\] where \(g^{(2)}\) corresponds to the initial expectation value of the CFT operator \[\hat{T}^{[0]}_{\alpha\beta}=\frac{1}{8\pi G_{N}}g^{(2)}_{\alpha \beta}. \tag{2.8}\] The solution (2.5) implies that the deformed boundary metric is \[\gamma^{[\mu]}_{\alpha\beta}=g^{(0)}_{\alpha\beta}+\frac{\mu}{16 \pi G_{N}}g^{(2)}_{\alpha\beta}+\frac{\mu^{2}}{(16\pi G_{N})^{2}}g^{(4)}_{ \alpha\beta}=\rho_{c}g_{\alpha\beta}(\rho_{c}),\quad\rho_{c}=\frac{\mu}{16\pi G _{N}}, \tag{2.9}\] which gives the holographic dictionary (1) as proposed in [8]. The dictionary seems to suggest that the on-shell action of the holographic theory is simply \[I^{[\mu]}_{\rm Euclidean}=I^{[0]}_{\rm Euclidean}\left( \gamma^{[\mu]}_{\alpha\beta}=g^{(0)}_{\alpha\beta}+\frac{\mu}{16\pi G_{N}}g^{(2 )}_{\alpha\beta}+\frac{\mu^{2}}{(16\pi G_{N})^{2}}g^{(4)}_{\alpha\beta}\right). \tag{2.10}\] But we will show that it is not correct. The proper on-shell action should be \[I^{[\mu]}_{\rm on-shell} = I^{[0]}_{\rm Euclidean}\left(\gamma^{[\mu]}_{\alpha\beta}=g^{(0 )}_{\alpha\beta}+\frac{\mu}{16\pi G_{N}}g^{(2)}_{\alpha\beta}+\frac{\mu^{2}}{( 16\pi G_{N})^{2}}g^{(4)}_{\alpha\beta}\right)-\mu\int\sqrt{\gamma^{[\mu]}}{ \rm T}{\rm\bar{T}}^{[\mu]} \tag{2.11}\] \[\equiv I^{[\mu]}_{\rm bulk}+I^{[\mu]}_{\rm bdy},\] which is similar to the one (2.3) of double trace deformation. The second term is a surface integral and it will not modify the bulk equation of motion. But it contributes to the on-shell action and it is necessary to include it if we want to compute the correlation functions or entanglement entropy correctly. For example, it will potentially modify the description of the RT surface which we also comment on below. Using (2.5) and (2.6) one can show that \(\sqrt{\gamma}{\rm 1\bar{T}}\) is invariant under the flow so the boundary term in (2.11) can also be written as \[-\mu\int\sqrt{\gamma^{[\mu]}}{\rm T}{\rm\bar{T}}^{[\mu]}=-\mu\int \sqrt{g^{[0]}}{\rm T}{\rm\bar{T}}^{[0]}. 
\tag{2.12}\] ### The On-shell action Choosing the conformal gauge, the boundary metric can be written as \[g^{(0)}_{ij}dx^{i}dx^{j}=e^{\phi}dyd\bar{y}, \tag{2.13}\] and complete 3d metric is given by [20]3 Footnote 3: The AdS radius \(l\) is set to be 1. \[ds^{2}=\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}e^{\phi}dyd\bar{ y}+\frac{1}{2}\mathcal{T}_{\phi}dy^{2}+\frac{1}{2}\bar{\mathcal{T}}_{\phi}d \bar{y}^{2}+\frac{1}{4}R_{\phi}dyd\bar{y}\] \[+\frac{1}{4}\rho e^{-\phi}(\mathcal{T}_{\phi}dy+\frac{1}{4}R_{\phi }d\bar{y})(\bar{\mathcal{T}}_{\phi}d\bar{y}+\frac{1}{4}R_{\phi}dy), \tag{2.14}\] where \[\mathcal{T}_{\phi}=\partial_{y}^{2}\phi-\frac{1}{2}(\partial_{y} \phi)^{2}+L(y),\quad\bar{\mathcal{T}}_{\phi}=\bar{\partial}_{y}^{2}\phi-\frac{ 1}{2}(\bar{\partial}_{y}\phi)^{2}+\bar{L}(\bar{y}),\quad R_{\phi}=4\partial_{ y}\bar{\partial}_{y}\phi \tag{2.15}\] and the boundary Ricci scalar for the metric \(g^{(0)}_{ij}\) is \(R^{(0)}=-e^{-\phi}R_{\phi}\). \(L(y)\) and \(\bar{L}(\bar{y})\) are two arbitrary functions that characterize different states of the holographic CFT. This bulk solution is the most general AdS\({}_{3}\) solution and it is a generalization of the well-known Banados geometry which has a flat boundary metric. The vacuum state corresponds to the case when \(L(y)=\bar{L}(\bar{y})=0\) and the vacuum metric can be mapped to the Poincare metric by 4 Footnote 4: For non-trivial \(L(y)\) and \(\bar{L}(\bar{y})\) the metric can also be mapped to Poincaré metric since AdS\({}_{3}\) gravity has no local degrees of freedom but the transformation is much more complicated. \[\frac{1}{\eta}=\rho^{-1/2}e^{\phi/2}+\frac{1}{4}\rho^{1/2}e^{- \phi/2}|\partial_{y}\phi|^{2},\quad z=y+\frac{1}{2}\frac{\rho e^{-\phi}\bar{ \partial}_{y}\phi}{1+\frac{1}{4}\rho e^{-\phi}|\partial_{y}\phi|^{2}}, \tag{2.16}\] with the resulting metric \[ds^{2}=\frac{d\eta^{2}+dzd\bar{z}}{\eta^{2}}. \tag{2.17}\] The 3d Euclidean AdS Einstein gravity has the action \[I^{[0]}_{E}=\frac{1}{16\pi G_{N}}\left(-\int_{B}\sqrt{h}(R+2)-2 \int_{\partial B}\sqrt{\gamma}K+2\int_{\partial B}\sqrt{\gamma}\right). \tag{2.18}\] Here \(\partial B\) is the UV regulator surface which is usually chosen to be at \[\rho_{UV}=\delta^{2}, \tag{2.19}\] which implies \[\sqrt{h}=\sqrt{g^{(0)}}\left(\frac{1}{2\rho^{2}}+\frac{e^{-\phi}R_{ \phi}}{8\rho}+\frac{e^{-2\phi}}{128}(R_{\phi}^{2}-16\mathcal{T}_{\phi}\bar{ \mathcal{T}}_{\phi})\right), \tag{2.20}\] \[\sqrt{\gamma}=\sqrt{g^{(0)}}\left(\frac{1}{\delta^{2}}+\frac{e^{ -\phi}R_{\phi}}{4}+\frac{e^{-2\phi}}{64}(R_{\phi}^{2}-16\mathcal{T}_{\phi}\bar {\mathcal{T}}_{\phi})\delta^{2}\right),\] (2.21) \[\sqrt{\gamma}K=\sqrt{g^{(0)}}\left(\frac{2}{\delta^{2}}-\frac{e^{ -2\phi}}{32}(R_{\phi}^{2}-16\mathcal{T}_{\phi}\bar{\mathcal{T}}_{\phi})\delta^ {2}\right). \tag{2.22}\] The 3d metric degenerates at the positions where \(\sqrt{h}=0\) which has two solutions \[\rho_{\pm}=\frac{8e^{\phi}}{-R_{\phi}\pm 4\sqrt{\mathcal{T}_{\phi}\bar{ \mathcal{T}}_{\phi}}}. \tag{2.23}\] When \(\rho_{+}>0\), we should only include the spacetime below the curve \(\rho_{H}=\rho_{+}\). Then the on-shell action is equal to \[I_{E}^{[0]}=-\frac{1}{16\pi G_{N}}\int_{\partial B}\sqrt{g^{(0)}}\left(\frac{ 1}{2}R^{(0)}(1+\log\rho_{H})+2\sqrt{\mathcal{T}_{\phi}\bar{\mathcal{T}}_{\phi} }\right)+\frac{c}{6}\chi(\partial B)\log\delta, \tag{2.24}\] where we have used the relation \(c=\frac{3}{2G_{N}}\) and \(\chi(\partial B)\) denotes the Euler character of the boundary manifold \(\partial B\). 
The second term is divergent but universal which captures the Weyl anomaly. Sometimes in the literature, the second term is ignored so that the on-shell action is finite. But later we will see that this term will contribute to the entanglement entropy. When the boundary metric is flat and \(\chi=0\), the on-shell action reduces to \[I_{E}^{[0]}=-\frac{c}{12\pi}\int\sqrt{g^{(0)}}\sqrt{\mathcal{T}_{\phi}\bar{ \mathcal{T}}_{\phi}}. \tag{2.25}\] To compute the first term in the deformed on-shell action (2.11), we only need to choose the physical metric \(\gamma^{[\mu]}\) on which the deformed theory is defined, solve the bulk metric according to the holographic dictionary and then substitute the results into the general formula (2.24). To compute the boundary term we use the holographic dictionary (2.8) and the result is \[-\mu\int\sqrt{g^{(0)}}\mathrm{T}\bar{\mathrm{T}}^{[0]}=\frac{\mu}{4096G_{N}^{ 2}\pi^{2}}\int\sqrt{g^{(0)}}e^{-2\phi}\left(R_{\phi}^{2}-16\mathcal{T}_{\phi} \bar{\mathcal{T}}_{\phi}\right), \tag{2.26}\] which in the flat limit reduces to \[-\frac{\mu c^{2}}{576\pi^{2}}\int\sqrt{g^{(0)}}\mathcal{T}_{\phi}\bar{ \mathcal{T}}_{\phi}. \tag{2.27}\] Examples ### Torus background: the thermal state in flat spacetime The thermal state in flat spacetime corresponds to choosing the following parameters \[\phi=0,\quad\mathcal{T}_{\phi}=\bar{\mathcal{T}}_{\phi}=2L_{0}, \tag{3.1}\] where \(L_{0}\) is some constant. The bulk geometry is the BTZ black hole whose metric is \[ds^{2}=\frac{d\rho^{2}}{4\rho^{2}}+\frac{(1+L_{0}\rho)^{2}dx^{2}+(-1+L_{0}\rho )^{2}d\tau^{2}}{\rho} \tag{3.2}\] and the asymptotic boundary manifold is a torus with the spatial and thermal period being \(w_{0},\beta_{0}=\pi/\sqrt{L_{0}}\), respectively. Using (2.25), we can get the on-shell action \[I_{E}^{[0]}=-\frac{c}{12\pi}\omega_{0}\beta_{0}(2L_{0})=-\frac{c\pi}{6}\frac{ \omega_{0}}{\beta_{0}}. \tag{3.3}\] When the deformed metric is flat so is the boundary metric and these two metrics are related by a coordinate transformation [8] which is reminiscent of the dynamical coordinate transformation interpretation of \(\mathrm{T}\overline{\mathrm{T}}\) deformation introduced in [22, 23]. According to the dictionary (1), this bulk solution (3.2) is dual to the deformed theory defined on the 2d torus with metric5 Footnote 5: Note that here \(\rho_{c}\) can be both positive and negative. \[ds^{2}=(L_{0}\rho_{c}+1)^{2}dx^{2}+(L_{0}\rho_{c}-1)^{2}d\tau^{2}\equiv dX^{2} +dY^{2}, \tag{3.4}\] whose spatial and thermal period \(w,\beta\) are related by \(w_{0},\beta_{0}\) via \[\beta_{0}=\frac{1}{2}\left(\beta+\sqrt{\beta^{2}+4\pi^{2}\rho_{c}}\right), \quad\omega_{0}=\frac{1}{2}\omega\left(\frac{\beta}{\sqrt{\beta^{2}+4\pi^{2} \rho_{c}}}+1\right). 
\tag{3.5}\] Therefore the bulk part of the deformed on-shell action is simply given by substituting (3.5) into (3.3) \[I_{\mathrm{bulk}}^{[\mu]} = -\frac{c\pi}{6}\frac{\omega_{0}}{\beta_{0}}=-\frac{c\pi}{6}\frac {\omega}{\sqrt{\beta^{2}+4\pi^{2}\rho_{c}}} \tag{3.6}\] \[= -\frac{c\pi}{6}\frac{\omega}{\beta}+\frac{c\pi^{3}\omega\rho_{c} }{3\beta^{3}}+\mathcal{O}(\rho_{c}^{2}), \tag{3.7}\] and the boundary part is \[I^{[\mu]}_{\rm bdy} = -\mu\int\sqrt{\gamma^{[0]}}\langle{\rm T}{\rm T}{\rm T}\ \rangle=-\mu\int\left(\frac{L_{0}}{8\pi G_{N}}\right)^{2}=-\frac{\mu\omega_{0} \beta_{0}}{(8\pi G)^{2}}\left(\frac{\pi^{2}}{\beta_{0}^{2}}\right)^{2} \tag{3.8}\] \[= -\frac{2c\rho_{c}\pi^{3}w}{3\sqrt{4\pi^{2}\rho_{c}+\beta^{2}}( \beta+\sqrt{4\pi^{2}\rho_{c}+\beta^{2}})^{2}}=-\frac{c\pi^{3}w\rho_{c}}{6\beta ^{3}}+{\cal O}(\rho_{c}^{2}).\] Adding these two terms together we get the final deformed on-shell action \[I^{[\mu]}_{\rm on\mbox{-shell}} = \frac{c\omega(\beta-\sqrt{\beta^{2}+4\pi^{2}\rho_{c}})}{12\pi\rho _{c}}=-\frac{c\pi\omega}{6\beta}+\frac{c\pi^{3}\omega\rho_{c}}{6\beta^{3}}+{ \cal O}(\rho_{c}^{2}) \tag{3.9}\] \[= I^{[0]}_{\rm on\mbox{-shell}}+\mu\int\sqrt{\gamma^{[0]}}\langle {\rm T}{\rm T}\ \rangle+{\cal O}(\rho_{c}^{2}), \tag{3.10}\] which matches the perturbative results of the field theory. To verify the proposal of the on-shell action. Let us consider the entanglement entropy of the whole spatial circle. Using the replica trick, the Renyi entropy can be computed as \[S_{n} = \frac{1}{1-n}\log\frac{Z_{n}}{Z_{1}^{n}}=\frac{1}{1-n}\log\frac{e ^{-I^{[\mu]}_{\rm on\mbox{-shell}}(n\beta)}}{e^{-nI^{[\mu]}_{\rm on\mbox{-shell }}(\beta)}}=\frac{c\omega\left(n\sqrt{4\pi^{2}\rho_{c}+\beta^{2}}-\sqrt{4\pi^{2 }\rho_{c}+n^{2}\beta^{2}}\right)}{12(n-1)\pi\rho_{c}} \tag{3.11}\] \[= \frac{c\pi\omega}{3\sqrt{4\pi^{2}\rho_{c}+\beta^{2}}}-\frac{(n-1 )c\pi\omega\beta^{2}}{6(4\pi^{2}\rho_{c}+\beta^{2})^{3/2}}+{\cal O}((n-1)^{2}).\] Therefore the entanglement entropy is \[S_{1}=\frac{c\pi\omega}{3\sqrt{4\pi^{2}\rho_{c}+\beta^{2}}}=\frac{c\pi w_{0}} {3\beta_{0}}=\frac{\gamma}{4G_{N}}, \tag{3.12}\] which agrees with the RT formula. ### Conical background: the primary state in flat spacetime Let us start from the vacuum Poincare AdS\({}_{3}\) metric \[ds^{2}=\frac{dwd\bar{w}+dz^{2}}{z^{2}}, \tag{3.13}\] and consider the conformal transformation \(w=y^{n},\bar{w}=\bar{y}^{n}\). Using the Banados map [26] we obtain a excited bulk geometry in the form (2.14) with \[\phi=0,\quad{\cal T}_{\phi}=\frac{n^{2}-1}{2y^{2}}\equiv\frac{2a^{2}}{y^{2}}, \quad\bar{\cal T}_{\phi}=\frac{2a^{2}}{\bar{y}^{2}}. \tag{3.14}\] In the radial coordinates the 3d metric is \[ds^{2}=\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}\left((1+\frac{a^{2}\rho}{r^{2}}) ^{2}dr^{2}+(r-\frac{a^{2}\rho}{r})^{2}d\theta^{2}\right), \tag{3.15}\] which degenerates at \(\rho_{H}=r^{2}/a^{2}\), so we only need to consider the spacetime below it. The boundary manifold has a conical singularity at \(r=0\) since \(\theta\sim\theta+2\pi/n\). Using (2.25) we can get the on-shell action \[I_{E}^{[0]}=-\frac{c}{12\pi}\int_{\epsilon_{0}}^{\Lambda_{0}}rdr \int_{0}^{\frac{2\pi}{n}}d\theta\,\frac{2a^{2}}{r^{2}}=-\frac{ca^{2}}{3n}\log \frac{\Lambda_{0}}{\epsilon_{0}}. 
\tag{3.16}\] The bulk solution (3.15) is dual to the deformed theory which is defined on the 2d disk with metric \[ds^{2}=(1+\frac{a^{2}\rho_{c}}{r^{2}})^{2}dr^{2}+(r-\frac{a^{2} \rho_{c}}{r})^{2}d\theta^{2}\equiv dR^{2}+R^{2}d\theta^{2}, \tag{3.17}\] where \[r=\frac{1}{2}\left(\sqrt{R^{2}+4a^{2}\rho_{c}}+R\right). \tag{3.18}\] Then the deformed on-shell action is given by \[I_{\text{on-shell}}^{[\mu]}=-\frac{ca^{2}}{3n}\log\frac{\Lambda _{0}}{\epsilon_{0}}-\mu\int\sqrt{\gamma^{[0]}}\langle\text{T}\overline{\text{ T}}\ \rangle\] \[=-\frac{ca^{2}}{3n}\log\frac{\Lambda+\sqrt{4a^{2}\rho_{c}+ \Lambda^{2}}}{\epsilon+\sqrt{4a^{2}\rho_{c}+\epsilon^{2}}}+\frac{2a^{4}c\rho_ {c}}{3n}\left(\frac{1}{\left(\Lambda+\sqrt{4a^{2}\rho_{c}+\Lambda^{2}}\right) ^{2}}-\frac{1}{\left(\epsilon+\sqrt{4a^{2}\rho_{c}+\epsilon^{2}}\right)^{2}} \right)\] \[=\frac{a^{2}c}{3n}\log\frac{\epsilon}{\Lambda}+\frac{a^{4}c\rho_ {c}}{6n}\left(\frac{1}{\epsilon^{2}}-\frac{1}{\Lambda^{2}}\right)+\mathcal{O} (\rho_{c}^{2})=I_{\text{on-shell}}^{[0]}+\mu\int\sqrt{\gamma^{[0]}}\langle \text{T}\overline{\text{T}}\ \rangle+\mathcal{O}(\rho_{c}^{2}) \tag{3.19}\] as desired. ### Non-flat boundary The general bulk solution (2.14) depends on three arbitrary functions \(\phi(y,\bar{y}),L(y)\) and \(\bar{L}(\bar{y})\). It is either dual to a CFT defined on the boundary with metric \(g_{\mu\nu}^{[0]}\) or dual to a T\(\overline{\text{T}}\) -deformed QFT defined on a 2d manifold with metric \[\gamma_{\alpha\beta}^{[\mu]}dx^{\alpha}dx^{\beta} = \frac{(8-R^{(0)}\rho_{c})^{2}}{64}e^{\phi}\left(dy+\frac{4\rho_{c }e^{-\phi}\overline{\mathcal{T}}_{\phi}}{8-R^{(0)}\rho_{c}}d\bar{y}\right) \left(d\bar{y}+\frac{4\rho_{c}e^{-\phi}\mathcal{T}_{\phi}}{8-R^{(0)}\rho_{c}} dy\right) \tag{3.20}\] \[\equiv e^{\sigma}dUd\bar{U}, \tag{3.21}\] where the two metrics are related by a coordinate transformation and a Weyl transformation [27] as \[\lambda(y,\bar{y})\left(dy+\frac{4\rho_{c}e^{-\phi}\bar{\mathcal{T}}_ {\phi}}{8-R^{(0)}\rho_{c}}d\bar{y}\right)=dU,\quad\bar{\lambda}(y,\bar{y}) \left(d\bar{y}+\frac{4\rho_{c}e^{-\phi}\mathcal{T}_{\phi}}{8-R^{(0)}\rho_{c}}dy \right)=d\bar{U}, \tag{3.22}\] \[e^{\sigma}=\frac{(8-R^{(0)}\rho_{c})^{2}}{64\lambda\bar{\lambda} }e^{\phi}. \tag{3.23}\] The two functions \(\lambda\) and \(\bar{\lambda}\) are integrating factors that are usually hard to determine but they do exist according to the theory of differential equations. In particular, they are not unique because the conformal gauge is not unique. If \(\lambda,\bar{\lambda}\) are proper integrating factors then \(F(U(y,\bar{y}))\lambda\) and \(\bar{F}(\bar{U}(y,\bar{y}))\bar{\lambda}\) are also good integrating factors. When the boundary manifold is flat _i.e._\(\phi=0\) we can choose \(\lambda=\bar{\lambda}=1\) such that \[dU=dy+\frac{1}{2}\bar{\mathcal{T}}_{\phi}(\bar{y})d\bar{y},\quad d \bar{U}=d\bar{y}+\frac{1}{2}\mathcal{T}_{\phi}(y)dy,\quad e^{\sigma}=\frac{1}{ 8}, \tag{3.24}\] which implies that metric \(\gamma_{\alpha\beta}\) is also flat. In general, if we choose an arbitrary \(\gamma_{\alpha\beta}\) it is very challenging to construct the bulk solution. Below we will consider some special cases where the metric \(\gamma_{\alpha\beta}\) is maximally symmetric. ### Sphere background in the vacuum state To fix \(\gamma_{\alpha\beta}^{[\mu]}\) to be a sphere, one should start from the general metric (3.20) and solve the Einstein equation \(R^{[\mu]}=-\Lambda,\,\Lambda<0\). 
However, for the special case \(L=\bar{L}=0\) which corresponds to the ground state, it turns out that if the boundary metric \(g_{\alpha\beta}^{[0]}\) is a sphere then \(\gamma_{\alpha\beta}^{[\mu]}\) is also a sphere but with a different radius. The boundary Einstein equation \(R^{(0)}=-\Lambda_{0}\) is much simpler to solve because it is just the famous Liouville equation \[4\partial_{y}\partial_{\bar{y}}\phi-e^{\phi}\Lambda_{0}=0, \tag{3.25}\] whose solution is \[\phi=-2\log(1+y\bar{y})+\log(\frac{-8}{\Lambda_{0}}),\quad\Lambda_{0}<0. \tag{3.26}\] The resulting 3d metric can be written as \[ds^{2}=\frac{d\rho^{2}}{4\rho^{2}}+\frac{(8+\Lambda_{0}\rho)^{2}}{(-8\Lambda_ {0})(1+\kappa^{2})^{2}\rho}(d\kappa^{2}+\kappa^{2}d\varphi^{2}),\quad y=\kappa e ^{\rm i\varphi}, \tag{3.27}\] which can be transformed into the AdS global coordinates \[ds^{2}=d\tilde{\rho}^{2}+\sinh^{2}\tilde{\rho}(d\theta^{2}+\sin^{2}\theta d \varphi^{2}) \tag{3.28}\] via the coordinate transformation \[\theta=2\arctan\kappa,\quad\sinh\tilde{\rho}=\frac{8+\Lambda_{0} \rho}{\sqrt{-32\Lambda_{0}\rho}}. \tag{3.29}\] Using the dictionary (1) one can easily find that deformed metric \[ds^{2}=-\frac{(8+\Lambda_{0}\rho_{c})^{2}}{8\Lambda_{0}(1+y\bar{ y})^{2}}dyd\bar{y}\equiv-\frac{8}{\Lambda}\frac{dyd\bar{y}}{(1+y\bar{y})^{2}}, \tag{3.30}\] indeed describes a sphere with a different radius given by \[\Lambda_{0}=-\frac{8(\rho_{c}\Lambda-4+2\sqrt{4-2\rho_{c}\Lambda })}{\rho_{c}^{2}\Lambda}. \tag{3.31}\] Ignoring the universal Weyl anomaly divergent term for a moment, the on-shell action (2.24) for this geometry is \[I^{[\mu]}_{\text{bulk}}=-\frac{c}{6}\left(1+\log\frac{8}{- \Lambda_{0}}\right)=\frac{c}{6}\left(1+\log\frac{\rho_{c}^{2}\Lambda}{\Lambda \rho_{c}-4+2\sqrt{4-2\Lambda\rho_{c}}}\right), \tag{3.32}\] \[I^{[\mu]}_{\text{bdy}}=\frac{\mu}{4096G_{N}^{2}\pi^{2}}\int \sqrt{g^{[0]}}e^{-2\phi}R_{\phi}^{2}=-\frac{c\Lambda_{0}\rho_{c}}{48},\] (3.33) \[I^{[\mu]}_{\text{on-shell}}=I^{[\mu]}_{\text{bulk}}+I^{[\mu]}_{ \text{bdy}}=-\frac{c}{6}\left(1+\log\frac{8}{-\Lambda}\right)+\frac{c\Lambda \rho_{c}}{48}+\mathcal{O}(\rho_{c}^{2}). \tag{3.34}\] Comparing with the results (C.8) of the field theory we see that they are different by a \(\mu\)-dependent constant 6 Footnote 6: Note that the relation between the Ricci scalar and the radius of the sphere is \(\Lambda\rightarrow-\frac{2}{r^{2}}\). \[I^{[\mu]}_{\text{on-shell}}-S^{\mu}_{QFT}=\frac{c}{6}\log \frac{24\pi}{c\mu}. \tag{3.35}\] This is because instead choosing the limit condition \(S^{\mu}_{QFT}(r=0)=0\), we have chosen a different scheme \[I^{[\mu]}_{\text{on-shell}}(\Lambda=-\infty)=\frac{c}{6}\log \frac{24\pi}{c\mu}. \tag{3.36}\] The decision to set \(S^{\mu}_{QFT}(r=0)=0\) in [13] is based on the assumption that the action should become zero when the sphere collapses to a point. From our perspective, it is not a favorable option as the action \(S^{\mu}_{QFT}\) lacks an undeformed limit. In our approach, there is an ambiguity to choose the UV cut-off when we define the regulated on-shell action and we have specially chosen the one which has an undeformed limit. By using the replica trick, one can show that shifting the on-shell action will shift the entanglement entropy as \[I_{\text{on-shell}}\to I_{\text{on-shell}}+\alpha,\quad S_{A} \to S_{A}-\alpha. \tag{3.37}\] Usually, this constant can be absorbed into the UV cut-off when the constant is finite. When \(\alpha\) is also infinite, the situation becomes very subtle. 
For example, if we also include the Weyl anomaly contribution, the total action should be \[I^{[\mu]}_{\text{on-shell,Weyl}}=I^{[\mu]}_{\text{on-shell}}+\frac{c}{6}\log\delta^{2}. \tag{3.38}\] Note that if the entanglement entropy of the theory with the action \(I^{[\mu]}_{\text{on-shell,Weyl}}\) has a universal UV divergence \[\frac{c}{3}\log\frac{1}{\delta} \tag{3.39}\] then the theories with the action \(I^{[\mu]}_{\text{on-shell}}\) and also \(S^{\mu}_{QFT}\) will not have it, because it is canceled by the universal Weyl anomaly. This is the reason why in [13] a UV finite entanglement entropy is obtained. Having figured out how the entanglement entropies for the different actions are related to each other, let us compute the deformed entanglement entropy. It turns out that the RT formula gives the correct result up to a constant. It is useful to compute the undeformed entanglement entropy first. Let us consider a single interval with two endpoints. Using the isometry of the sphere we can always choose them to be \((\rho,y,\bar{y})=(\delta^{2},0,0)\) and \((\rho,y,\bar{y})=(\delta^{2},\tan\frac{\theta_{0}}{2},\tan\frac{\theta_{0}}{2})\). Then using the RT formula we can find the entanglement entropy \[S_{A}=\frac{\gamma}{4G_{N}}=\frac{c}{6}\log\left(\frac{8\sin^{2}\frac{\theta_{0}}{2}}{-\Lambda_{0}\delta^{2}}\right). \tag{3.40}\] The geodesic length can be easily found by transforming the two points into the Poincare coordinates via (2.16). Then the deformed entanglement entropy should be \[S_{A}^{(\mu)} = \frac{c}{6}\log\left(\frac{8\sin^{2}\frac{\theta_{0}}{2}}{-\Lambda_{0}\delta^{2}}\right)=\frac{c}{6}\log\left(\frac{\Lambda\rho_{c}^{2}\sin^{2}\left(\frac{\theta_{0}}{2}\right)}{\delta^{2}\left(2\sqrt{4-2\Lambda\rho_{c}}+\Lambda\rho_{c}-4\right)}\right), \tag{3.41}\] \[= \frac{c}{6}\log\frac{8\sin^{2}\frac{\theta_{0}}{2}}{-\Lambda\delta^{2}}-\frac{c^{2}\Lambda\mu}{576\pi}+\mathcal{O}(\mu^{2}). \tag{3.42}\] For the special case \(\theta_{0}=\pi\), the result coincides with the one in [13] once we take into account the shift due to the Weyl anomaly and the shift between \(I^{[\mu]}_{\text{on-shell}}\) and \(S^{\mu}_{QFT}\). The other point we want to emphasize is that our result is valid for both signs of the deformation parameter \(\mu\).

### Poincare disk background in the vacuum state

The situation is very similar to the one with the sphere background. For negative Ricci curvature, the solution of the Liouville equation is \[\phi=-2\log(1-y\bar{y})+\log(\frac{8}{\Lambda_{0}}),\quad\Lambda_{0}>0 \tag{3.43}\] and the deformed metric is \[\gamma^{(\mu)}_{ij}dx^{i}dx^{j}=\frac{(8+\Lambda_{0}\rho_{c})^{2}}{8\Lambda_{0}(1-y\bar{y})^{2}}dyd\bar{y}\equiv\frac{8}{\Lambda}\frac{dyd\bar{y}}{(1-y\bar{y})^{2}}, \tag{3.44}\] with \[\Lambda_{0}=-\frac{8\left(2\sqrt{4-2\Lambda\rho_{c}}+\Lambda\rho_{c}-4\right)}{\Lambda\rho_{c}^{2}}. \tag{3.45}\] Since the AdS space is not compact, the on-shell action has an IR divergence. So let us focus on the holographic entanglement entropy.
For the interval with endpoints at \((y,\bar{y})=(0,0)\) and \((a,a)\), the entanglement entropy is \[S^{(\mu)}_{A} = \frac{c}{6}\log\frac{8a^{2}}{(\Lambda_{0}(1-a^{2})\delta^{2})}= \frac{c}{6}\log\left(\frac{a^{2}\Lambda\rho_{c}^{2}}{(a^{2}-1)\,\delta^{2} \left(2\sqrt{4-2\Lambda\rho_{c}}+\Lambda\rho_{c}-4\right)}\right) \tag{3.46}\] ## 4 Conclusion and Discussion In this paper, we want to emphasize that in the holographic dictionary (1) of the \(\rm T\overline{T}\) -deformed CFTs the gravity action should also include an additional boundary term. This appearance of this boundary term inherits from the fact that the leading order \(\rm T\overline{T}\) deformation is a double trace deformation. We confirm this observation in some explicit examples including the BTZ, conical spacetime and the vacuum state of a holographic CFT defined in a sphere background. This observation suggests that the RT formula may be modified to correctly compute the holographic entanglement entropy of the \(\rm T\overline{T}\) deformed theories. We will address that in future work. ## Acknowledgments I thank Huajia Wang for the valuable discussion and collaboration on related topics. I also want to thank many of the members of KITS for interesting related discussions. JT is supported by the National Youth Fund No.12105289 and funds from the UCAS program of special research associate. Convention and Notation We are interested in 2d Euclidean spacetime and we define the holomorphic and anti-holomorphic coordinates as \[z=x+\mathrm{i}y,\quad\bar{z}=x-\mathrm{i}y.\] (A.1) The partition function is defined via the path integral \[Z=\int\mathcal{D}[\phi(x,y)]e^{-S[\phi(x,y)]},\] (A.2) where \(\phi(x,y)\) denotes all the dynamical fields and \(S[\phi]\) is the action. Assuming that the background metric is \(ds^{2}=g_{\alpha\beta}dx^{\alpha}dx^{\beta}\) then the energy-momentum tensor is defined as \[T^{\alpha\beta}=-\frac{2}{\sqrt{g}}\frac{\delta S}{\delta g_{\alpha\beta}}.\] (A.3) In the coordinate of \((z,\bar{z})\), the energy-momentum tensor is \[T_{zz} =\frac{1}{4}\left(T_{11}-T_{22}-\mathrm{i}T_{12}-\mathrm{i}T_{21} \right),\] (A.4) \[T_{\bar{z}\bar{z}} =\frac{1}{4}\left(T_{11}-T_{22}+\mathrm{i}T_{12}+\mathrm{i}T_{21 }\right),\] (A.5) \[T_{z\bar{z}} =T_{\bar{z}z}=\frac{1}{4}\left(T_{11}+T_{22}\right)\equiv\Theta.\] (A.6) For later convenience, we also introduce the renormalized energy-momentum tensor \[T=-2\pi T_{zz},\quad\bar{T}=-2\pi T_{\bar{z}\bar{z}}.\] (A.7) The \(\mathrm{T}\overline{\mathrm{T}}\) composite operator is defined as \[\mathrm{T}\overline{\mathrm{T}}\ \equiv\frac{1}{8}\left(T_{\alpha\beta}T^{ \alpha\beta}-(T_{\alpha}^{\alpha})^{2}\right)=T_{zz}\bar{T}_{\bar{z}\bar{z}}- \Theta^{2}.\] (A.8) We will also assume \(\mathrm{T}\overline{\mathrm{T}}\) satisfies the factorization property \[\langle\mathrm{T}\overline{\mathrm{T}}\ \rangle=\frac{1}{8}\left(\langle T^{ \alpha\beta}\rangle\langle T_{\alpha\beta}\rangle-\langle T_{\alpha}^{\alpha} \rangle^{2}\right),\] (A.9) which is true for field theories living on infinite Euclidean planes and cylinders or in holographic CFTs. Then the deformed theory can also be defined through the flow equation [29, 9]: \[\langle T_{\alpha}^{\alpha}\rangle=\frac{c}{24\pi}R+\frac{\mu}{4}\left( \langle T^{\alpha\beta}\rangle\langle T_{\alpha\beta}\rangle-\langle T_{ \alpha}^{\alpha}\rangle^{2}\right)\] (A.10) together with the conservation equation \(\nabla_{\alpha}\langle T^{\alpha\beta}\rangle=0\). 
Sphere partition functions In this appendix we re-derive the sphere partition function in our convention following [13]. We consider the metric \(ds^{2}=e^{\sigma}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),\quad\sigma=2\log r\). Considering a small Weyl transformation \[\sigma\rightarrow\sigma+\delta\sigma,\quad e^{\sigma} \rightarrow(1+\delta\sigma)e^{\sigma},\] (C.1) \[g_{\alpha\beta}\rightarrow(1+\delta\sigma)g_{\alpha\beta}\] (C.2) the action changes as \[\delta S=-\frac{1}{2}\int\sqrt{g}\delta\sigma T_{\alpha}^{\alpha} d^{2}x\quad\rightarrow\quad\frac{\delta S}{\delta\sigma}=\frac{\delta S}{ \delta r}\frac{\delta r}{\delta\sigma}=-\frac{1}{2}\int\sqrt{g}T_{\alpha}^{ \alpha}d^{2}x\] (C.3) therefore \[\frac{d}{dr}\log Z=\frac{1}{r}\int d^{2}x\sqrt{g}T_{\alpha}^{ \alpha}.\] (C.4) More generally if we include the boundary then \[\delta\log Z=\frac{1}{2}\int_{M}d^{2}x\sqrt{g}T_{\alpha}^{\alpha} \delta\sigma+\frac{c}{24\pi}\int_{\partial M}K\delta\sigma dl\] (C.5) where \(K\) is the geodesic curvature of the boundary. For the sphere, there are no boundaries so we do not need to include the boundary term. Next, we want to determine \(T_{\alpha}^{\alpha}\) from the trace relation (A.10). The crucial observation is that by symmetry the stress tensor takes the form \(T_{\alpha\beta}=\alpha g_{\alpha\beta}\). Substituting into the trace relation gives \[\alpha=\frac{\sqrt{\frac{c\mu}{6\pi r^{2}}+4}-2}{\mu}.\] (C.6) thus \[\frac{d\log Z}{dr}=\frac{8\pi r\left(\sqrt{\frac{c\mu}{6\pi r^{2} }+4}-2\right)}{\mu}=-\frac{dS_{\rm QFT}^{\mu}}{dr}.\] (C.7) Integrating both sides and imposing the initial condition \(S_{\rm QFT}^{\mu}(r=0)=0\) gives \[S_{\rm QFT}^{\mu} = -\frac{i\pi c\mu+4\sqrt{6\pi}r^{2}\sqrt{\frac{c\mu}{r^{2}}+24\pi} +2c\mu\tanh^{-1}\left(\sqrt{\frac{c\mu}{24\pi r^{2}}+1}\right)-48\pi r^{2}}{6 \mu}\] (C.8) \[= -4\sqrt{\frac{2\pi c}{3\mu}}r+\frac{8\pi r^{2}}{\mu}+{\cal O}(r^ {3})\] (C.9) or \[S_{\rm QFT}^{\mu}=-\frac{c}{6}+\frac{c}{6}\log\frac{c\mu}{96\pi r ^{2}}-\frac{c^{2}\mu}{576\pi r^{2}}+{\cal O}(\mu^{2})\] (C.10) which does not have a \(\mu=0\) limit.
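As a side note, the symmetry reduction leading to (C.6) and the flow (C.7) can be checked with a short computer-algebra sketch. The code below is only an illustrative verification of the algebra under the maximally symmetric ansatz \(T_{\alpha\beta}=\alpha g_{\alpha\beta}\) (so that \(T^{\alpha\beta}T_{\alpha\beta}=2\alpha^{2}\) and \((T^{\alpha}_{\alpha})^{2}=4\alpha^{2}\)); the use of sympy and the variable names are assumptions of the example, not part of the derivation above.

```python
# Illustrative sympy check of (C.6) and the small-mu behaviour of the flow (C.7);
# conventions as in Appendices A and C (sphere of radius r, so R = 2/r^2).
import sympy as sp

c, mu, r, alpha = sp.symbols('c mu r alpha', positive=True)

# Trace relation (A.10) with T_ab = alpha*g_ab:  2*alpha = c*R/(24*pi) - (mu/2)*alpha**2.
trace_rel = sp.Eq(2*alpha, c*(2/r**2)/(24*sp.pi) - sp.Rational(1, 2)*mu*alpha**2)

# Keep the root that stays finite as mu -> 0 (the branch with an undeformed limit).
alpha_sol = next(s for s in sp.solve(trace_rel, alpha)
                 if sp.limit(s, mu, 0, '+').is_finite)
expected = (sp.sqrt(c*mu/(6*sp.pi*r**2) + 4) - 2)/mu   # (C.6)
print(sp.simplify(alpha_sol - expected))               # expected: 0

# Flow (C.4)/(C.7): d(log Z)/dr = (1/r) * (4*pi*r**2) * T^a_a = 8*pi*r*alpha.
dlogZ_dr = 8*sp.pi*r*alpha_sol
print(sp.series(dlogZ_dr, mu, 0, 2))
# expected: c/(3*r) - c**2*mu/(288*pi*r**3) + O(mu**2), i.e. minus d/dr of (C.10).
```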
2304.07050
Lossy Compression of Large-Scale Radio Interferometric Data
This work proposes to reduce visibility data volume using a baseline-dependent lossy compression technique that preserves smearing at the edges of the field-of-view. We exploit the relation of the rank of a matrix and the fact that a low-rank approximation can describe the raw visibility data as a sum of basic components where each basic component corresponds to a specific Fourier component of the sky distribution. As such, the entire visibility data is represented as a collection of data matrices from baselines, instead of a single tensor. The proposed methods are formulated as follows: provided a large dataset of the entire visibility data; the first algorithm, named $simple~SVD$ projects the data into a regular sampling space of rank$-r$ data matrices. In this space, the data for all the baselines has the same rank, which makes the compression factor equal across all baselines. The second algorithm, named $BDSVD$ projects the data into an irregular sampling space of rank$-r_{pq}$ data matrices. The subscript $pq$ indicates that the rank of the data matrix varies across baselines $pq$, which makes the compression factor baseline-dependent. MeerKAT and the European Very Long Baseline Interferometry Network are used as reference telescopes to evaluate and compare the performance of the proposed methods against traditional methods, such as traditional averaging and baseline-dependent averaging (BDA). For the same spatial resolution threshold, both $simple~SVD$ and $BDSVD$ show effective compression by two-orders of magnitude higher than traditional averaging and BDA. At the same space-saving rate, there is no decrease in spatial resolution and there is a reduction in the noise variance in the data which improves the S/N to over $1.5$ dB at the edges of the field-of-view.
M Atemkeng, S Perkins, E Seck, S Makhathini, O Smirnov, L Bester, B Hugo
2023-04-14T10:50:24Z
http://arxiv.org/abs/2304.07050v1
# Lossy Compression of Large-Scale Radio Interferometric Data ###### Abstract Radio telescopes produce vast amounts of data and the data volume is set to increase as upcoming telescopes (e.g. SKA, ngVLA) come online. The vast amounts of data are an important issue to deal with in the context of calibration, deep wide-field imaging, storage and archiving for sub-sequence processing. This work proposes to reduce visibility data volume using a baseline-dependent lossy compression technique that preserves smearing at the edges of the field-of-view. We exploit the relation of the rank of a matrix and the fact that a low-rank approximation can describe the raw visibility data as a sum of basic components where each basic component corresponds to a specific Fourier component of the sky distribution. As such, the entire visibility data is represented as a collection of data matrices from baselines, instead of a single tensor. This has the benefit of parallel computation and allows the investigation of baseline-dependent rank-\(r\) decomposition. The proposed methods are formulated as follows: provided a large dataset of the entire visibility data; the first algorithm, named _simple SVD_ projects the data into a regular sampling space of rank-\(r\) data matrices. In this space, the data for all the baselines has the same rank, which makes the compression factor equal across all baselines. The second algorithm, named _BDSVD_ projects the data into an irregular sampling space of rank-\(r_{pq}\) data matrices. The subscript \(pq\) indicates that the rank of the data matrix varies across baselines \(pq\), which makes the compression factor baseline-dependent. MeerKAT and the European Very Long Baseline Interferometry Network are used as reference telescopes to evaluate and compare the performance of the proposed methods against traditional methods, such as traditional averaging and baseline-dependent averaging (BDA). For the same spatial resolution threshold, both _simple SVD_ and _BDSVD_ show effective compression by two-orders of magnitude higher than traditional averaging and BDA. At the same space-saving rate, there is no decrease in spatial resolution and there is a reduction in the noise variance in the data which improves the S/N to over 1.5 dB at the edges of the field-of-view. The proposed compression methods offer superior compression but requires processing data at full data resolution, for reduced implementation complexity. keywords: Instrumentation: interferometers, Methods: data analysis, Methods: numerical, Techniques: interferometric ## 1 Introduction Radio interferometric arrays consist of an assembly of radio antennas that are correlated in pairs to produce complex data values, known as visibilities or sampled spatial frequencies. The data volume grows quadratically with the number of antennas, very large sky surveys, high spectral and temporal resolutions. Processing and storing this volume of visibility data has become a challenge, for example calibrating and translating the data to the image space via resampling and fast Fourier transform operations. The large volume of visibility data is an important problem to deal with in the context of calibration and deep wide-field imaging with current radio interferometric arrays such as MeerKAT (Jonas, 2009), ASKAP (Johnston et al., 2008), NenuFAR (Zarka et al., 2015), LOFAR (van Haarlem et al., 2013) and future radio interferometers, including the Square Kilometre Array (SKA)(Dewdney et al., 2009). 
To resolve the morphological structure of compact sources, the SKA will sample very high spatial frequencies with long baselines. This high spatial frequency sampling will take the SKA to an unprecedented visibility data volume era; this will require more computation and improved data compression strategies. Long-term data archiving of calibrated products is necessary, and will benefit from the implementation of new data compression algorithms. Data compression is critical to reduce the costs involved in storing, processing and archiving data. Many data compression strategies exist, however, choosing a relevant strategy depends on the science case and how the information related to the science case is encoded in the data. Therefore, a high data compression rate requires that we understand the information in the data and the science case. For a given dataset it is possible that only a few data points contain the majority of information relevant to the science, while other points contain noise or information irrelevant to the case (Bobin et al., 2008). In visibility data, for example, the noise from system electronics, signal from unwanted areas of the sky and spatial frequencies from redundant baselines can be removed from the data. There are mathematical tools that can break down this type of dataset into a new form that makes it easier to understand how the information in the dataset is represented based on some degree of importance. The properties of each of these mathematical tools for breaking down the dataset are different and their suitability depends on the science case. Additional factors must also be considered when choosing a compression strategy, to find the best tradeoff between the compression ratio, signal loss and science. In practice, the visibility data are integrated and averaged over finite time and frequency intervals which introduces decorrelation effects. To limit decorrelation, the finite time and frequency intervals are kept very small so that the phase term on long baselines is preserved. The decorrelation attenuates and changes the morphology of sources at the edges of the field-of-view (FoV). If the time and frequency intervals are scaled up to some limit, for a given FoV, then traditional averaging can be used to reduce the data volume. Deconvolution routines must correctly take into account the smearing effects to avoid limiting the dynamic range of the image; this is discussed in details by several authors (Cotton, 2009; Atemkeng et al., 2016; Tasse et al., 2018; Atemkeng et al., 2020). Since high-resolution sampling of the spatial frequency is only required on long baselines, baseline-dependent averaging (BDA) methods have recently gained popularity and can be used to compress the visibility data and suppress sources out of the FoV while maintaining spectral and temporal resolutions on the long baselines. While BDA can potentially offer compression capabilities on short baselines and maintain the high spectral and temporal resolution required on long baselines, it still requires further investigation. The Measurement Set v2.0 (MS)(Kemball and Wieringa, 2000) format does not support storing variable frequency bins as imposed by BDA. The Measurement Set v2.0 format can, in principle, store variable frequency bins required by BDA, thus this approach requires the creation of a spectral window for each decomposition of the frequency domain. Current calibration algorithms expect the data to be regularly sampled. 
However, to make use of BDA compression capabilities, calibration algorithms will need to adapt, especially for the different spectral and temporal resolutions along baselines. It is also important to choose appropriate solution intervals for calibration when choosing BDA parameters. Well-defined interpolation windows and weighting schemes are needed for imaging. This is not a problem for imaging algorithms as they place unstructured samples on a regular grid. Visibility data is frequently archived for later use; traditional averaging and/or BDA is applied before archiving. However, many criteria must be considered when choosing compression parameters, as high spectral and temporal resolution is imperative for certain science cases (e.g. VLBI science, spectral line studies) and for calibration routines that take into account the variation of the primary beam (PB) and the effect of the ionosphere, for example. Traditional averaging and/or BDA is accompanied by a decrease in the spectral and temporal resolution at least on shorter baselines for BDA and must therefore be applied with care. There is, therefore, a need to investigate data compression techniques that can archive visibility data with little to no decrease in spectral and temporal resolution. In this work, we use low-rank approximation for visibility data compression. A low-rank approximation minimises a loss function that measures the best-approximating data matrix with a reduced-rank relative to the original data matrix i.e. a low-rank approximation represents a higher-rank data matrix by a reduced-rank data matrix with little to no loss of information. A data matrix, \(\mathbf{V}\) of size \(M\times N\) has a rank\(-r\) approximation given by: \[\mathbf{V}\simeq\mathbf{AC}, \tag{1}\] where the data matrices \(\mathbf{A}\) and \(\mathbf{C}\) are of size \(M\times r\) and \(r\times N\), respectively. The number of entries in the rank\(-r\) approximation is \(r(M+N)\), which can now be stored in the place of \(\mathbf{V}\) to save memory and/or computation since \(r(M+N)\) is considerably smaller than \(MN\) if \(r\) is relatively small and \(M,N\) are relatively large. Thus, \(1\leq r<MN/(M+N)\). \(\mathcal{O}(r(M+N))\) disk space is now required to save the rank\(-r\) approximation of \(\mathbf{V}\) rather than \(\mathcal{O}(MN)\). As of now, in the big data regime, many fields applying image processing project a high-rank dataset to a lower-rank by _dimensionality reduction_. That is a process of feature (dimension) extraction that preserves the most relevant information (reduction) in the data. Dimensionality reduction primarily functions as a pre-processing step to reduce the need for high computing resources, and is classified into two different classes; linear and nonlinear. The most well-known low-rank approximation that belongs to the linear class of dimensionality reduction algorithms is the singular value decomposition (SVD). The SVD decomposes the data into separate sets of relevant features, noisy and redundant components. The properties of the SVD are fully explored in other fields. It is popular in signal processing due to its orthogonal matrix decomposition. Several authors use SVD to tackle different tasks with interferometric data; for example Offringa et al. (2010) discussed using SVD to separate strong radio frequency interference from weak astronomical signals. To speed up imaging and deconvolution, Kartik et al. (2017) used SVD to compress interpolated and gridded visibility data. 
Recent work proposes using holographic measurement of the PB and SVD to deal with spatial and spectral variation of the PB; e.g. Iheanetu et al. (2019) discussed the possibility of determining a sparse representation of the PB with a few features and Ansh-Narh et al. (2020) discussed mitigating noise in the PB by choosing the strongest components of the decomposition. The interest in exploiting the SVD to compress the visibility data is not accidental; high-sensitivity radio interferometric arrays are dominated by short baselines: SVD exploits redundancies in short baselines and can isolate noise in the data. A rank\(-r\) SVD decomposition of a data matrix of size \(M\times N\) requires \(\mathcal{O}(rMN)\) operations: \(\mathcal{O}(N^{3})\) in the case of a square data matrix (Gu and Eisenstat, 1996; Demmel et al., 2007; Shishkin et al., 2019). Computation of this magnitude is impractical in a big data regime. However, the SVD methods described in this paper should not be confused with a method that treats the entire visibility data as a single tensor and where there is an assumption of similarity between different baselines. We exploit the relation of the SVD to the rank of a matrix and the fact that the SVD describes the raw visibility data as a sum of components, where each component corresponds to a specific Fourier component of the sky distribution. As such, the entire visibility data is represented as a collection of matrices derived for each baseline, instead of a single tensor. This facilitates baseline-dependent rank\(-r\) decomposition that can be performed independently for each baseline, allowing for parallel computing. The proposed methods are formulated below. Given a dataset of visibility data: * The first algorithm projects the data into a regular sampling space of rank\(-r\) data matrices. In this space, the data for all the baselines has the same rank, which makes the compression factor equal across all baselines. We referred to this algorithm as _simple SVD_. The _simple SVD_ is shown to be effective in compressing the visibility data by two-orders of magnitude higher than traditional averaging with a reduction in the noise variance in the data. * The second algorithm projects the data into an irregular sampling space of rank-\(r_{pq}\) data matrices. The extra subscript \(pq\) indicates that the rank of the data matrix varies across baselines, which makes the compression factor baseline-dependent. We referred to this method as baseline-dependent SVD (_BDSVD_). It should be noted that _simple SVD_ and _BDSVD_ are used differently compared to traditional averaging and BDA. The compressed visibility data from traditional averaging and BDA are used directly as input to other tasks such as imaging or calibration, with a lower computation cost as opposed to the uncompressed data. This leads to a decrease in spectral and temporal resolution. The _simple SVD_ and _BDSVD_ are used to archive the visibility data and the data must be decompressed before subsequent use. This has the advantage of maintaining the spectral and temporal resolution of the original data. The use cases are summarised in Table 1. In this paper, we implement the _simple SVD_ and _BDSVD_ serially and provide detailed justification for parallel implementation. 
Whereafter we suggest a parallel algorithm that can reduce the computation from \(\mathcal{O}(N_{b}MN^{2})\) to \(\mathcal{O}(N_{b}MN^{2}/N_{p})\); where \(N_{b}\) and \(N_{p}\) are the number of baselines and the number of computing nodes, respectively. The implementation of this algorithm is not part of this work, we leave it for future work. The rest of the paper is organised as follows: Section 2 gives an overview of radio interferometric measurements and the theory behind SVD. Section 3 describes the mathematical formalism behind the algorithms. Section 4 analyses the effects of the proposed algorithms on the image and the noise. Simulation results are discussed in Section 5 and Section 6 draws conclusions and suggests future works. Our mathematical notations are summarised in Table 2. ## 2 Background In this section, we discuss the mathematical framework of the visibility data measurements provided by the radio interferometric measurement equation (RIME) formalism (Hamaker et al., 1996; Smirnov, 2011). The framework is used to understand, describe and measure the performance of _simple SVD_ and _BDSVD_. ### Full-sky RIME and traditional averaging Under the RIME formalism and for a single point source, Smirnov (2011) establishes the relationship between a measured visibility at time \(t\) and frequency \(\nu\) and the averaged visibility over some time and frequency integration interval \([t_{0},t_{1}]\times[v_{0},v_{1}]\). We extend this relationship for a full-sky RIME formalism: \[\mathcal{V}(\mathbf{u}_{pq}(t,\nu))=\sum_{\mathbf{I}}\big{(}\mathcal{G}_{pt\nu\nu} \mathcal{I}_{l}\mathcal{G}_{qlt\nu}^{\dagger}\big{)}\mathcal{R}_{pqt\nu}^{ \dagger}. \tag{2}\] The scalar term \(\mathcal{R}_{pqt\nu}^{\dagger}=\mathrm{e}^{-2t\pi(-I_{0})u_{pqt\nu}}\) describes the effects of the position of antennas \(p,q\) and the rotation of the baseline vector \(\mathbf{u}_{pq}(t,\nu)\equiv\mathbf{u}_{pqt\nu}\) which tracks a source located in the direction of the unit vector \(\mathbf{I}\). A compensating delay \(I_{0}\) is introduced by the correlator to enforce that \(\mathcal{R}_{pqt\nu}^{\dagger}\equiv 1\) at the phase centre of the observation: \[\mathbf{u}_{pqt\nu}=\frac{\nu}{c}\begin{bmatrix}u_{pqt}\\ v_{pqt}\\ w_{pqt}\end{bmatrix},\mathbf{I}=\begin{bmatrix}l\\ m\\ n\end{bmatrix},\mathbf{I}_{0}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}.\] In the above equations, \({}^{\dagger}\) is the complex transpose operator, \(c\) stands for the speed of light, \(\mathbf{I}_{l}\) is the sky brightness, \(\mathcal{G}_{pt\nu}\) and \(\mathcal{G}_{qlt\nu}^{\dagger}\) group the direction-dependent Jones matrices of antenna \(p\) and \(q\), respectively. Below is an approximate of the average of Eq. 2 over \([t_{0},t_{1}]\times[v_{0},v_{1}]\): \[V_{pq(\nu t)_{i}}\simeq\sum_{\mathbf{I}}\mathrm{sinc}\frac{\Delta\Phi}{2}\mathrm{ sinc}\frac{\Delta\psi}{2}\big{(}\mathcal{G}_{pt(\nu t)_{i}}\mathcal{I}_{l} \mathcal{G}_{q(\nu t)_{i}}\big{)}\mathcal{R}_{pq(\nu t)_{i}}^{\dagger}, \tag{3}\] where \((\nu t)_{i}=t_{i}v_{i}\) with \(t_{i}=(t_{1}+t_{0})/2\) and \(v_{i}=(v_{1}+v_{0})/2\). 
The phase differences, \(\Delta\Phi\) and \(\Delta\psi\), are defined as: \[\Delta\Phi=\Big(\mathrm{arg}\,\mathcal{G}_{plt_{1}v_{1}}+\mathrm{arg}\,\mathcal{G}_{qlt_{1}v_{1}}^{\dagger}+\mathrm{arg}\,\mathcal{R}_{pqt_{1}v_{1}}^{\dagger}\Big)-\Big(\mathrm{arg}\,\mathcal{G}_{plt_{0}v_{1}}+\mathrm{arg}\,\mathcal{G}_{qlt_{0}v_{1}}^{\dagger}+\mathrm{arg}\,\mathcal{R}_{pqt_{0}v_{1}}^{\dagger}\Big),\] \[\Delta\psi=\Big(\mathrm{arg}\,\mathcal{G}_{plt_{1}v_{1}}+\mathrm{arg}\,\mathcal{G}_{qlt_{1}v_{1}}^{\dagger}+\mathrm{arg}\,\mathcal{R}_{pqt_{1}v_{1}}^{\dagger}\Big)-\Big(\mathrm{arg}\,\mathcal{G}_{plt_{1}v_{0}}+\mathrm{arg}\,\mathcal{G}_{qlt_{1}v_{0}}^{\dagger}+\mathrm{arg}\,\mathcal{R}_{pqt_{1}v_{0}}^{\dagger}\Big).\] The average in Eq. 3 always results in a net loss of amplitude for an off-phase centre source; in practice the conditions \(t_{1}-t_{0}\to 0\) and \(v_{1}-v_{0}\to 0\) cannot be satisfied because the phase terms in \(\mathcal{R}_{pqt\nu}^{\dagger},\mathcal{G}_{pt\nu}\) and \(\mathcal{G}_{qlt\nu}^{\dagger}\) vary over time and frequency. This loss of amplitude is known as time and frequency decorrelation (or smearing) when it is caused only by the scalar \(\mathcal{R}_{pqt\nu}^{\dagger}\). The general term _decoherence_ is used when the amplitude loss is caused by the cumulative phase variation of \(\mathcal{R}_{pqt\nu}^{\dagger},\mathcal{G}_{pt\nu},\mathcal{G}_{qlt\nu}^{\dagger}\), since \(\mathcal{G}_{pt\nu}\) and \(\mathcal{G}_{qlt\nu}^{\dagger}\) also hold complex phases which vary in time and frequency. Eq. 3 shows that for a full-sky RIME, the decoherence can be measured separately as the sum of the contributions of the individual sources. When the decoherence introduced by all the complex terms is considered, the average of each individual source in the sum of Eq. 3 is approximated in terms of the phase differences \(\Delta\Phi\) and \(\Delta\psi\). In the decoherence formulation discussed above, we have neglected the case of decoherence caused by \(\mathbf{I}_{l}\), because this work focuses on the decoherence of point sources rather than extended sources, for which the phase term in \(\mathbf{I}_{l}\) varies considerably over time and frequency.

### BDA and implementation

To aggressively compress visibility data and mitigate the amplitude loss caused by traditional averaging, several authors have discussed the potential of using BDA (Cotton, 2009; Atemkeng et al., 2018; Wijnholds et al., 2018). As shown in Eq. 3, the product of the three complex terms \(\mathcal{R}_{pqt\nu}^{\dagger},\mathcal{G}_{pt\nu}\) and \(\mathcal{G}_{qlt\nu}^{\dagger}\) is attenuated by _sinc_ functions. The degree of attenuation at the edges of the FoV is determined by the width of these _sinc_ functions, which depends on the spatial frequencies that each baseline samples over \([t_{0},t_{1}]\times[v_{0},v_{1}]\). On long spacings, traditional averaging with a wide \([t_{0},t_{1}]\times[v_{0},v_{1}]\) will result in a narrow _sinc_ and a significant drop in source amplitude. In order to keep the amplitude loss equal across all baselines for a given FoV, the widths of the _sinc_ functions must remain constant, so that the time-frequency interval over which the data is averaged varies from baseline to baseline. This method is particularly beneficial for dense-core interferometric arrays, where data is aggressively averaged on shorter baselines. A small time-frequency interval is required to satisfy the width limit of the _sinc_ functions on longer baselines, resulting in less compression.
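To make the baseline-dependent choice of averaging interval concrete, the following sketch picks, for each baseline length, the largest integration time whose _sinc_ attenuation at the edge of the FoV stays above a decorrelation tolerance. It is only a schematic illustration: the worst-case phase-rate model (proportional to baseline length, observing frequency, Earth rotation rate and FoV radius) and the numerical values are simplifying assumptions, not the exact expressions of Eq. 3 or the BDA scheme implemented in this work.

```python
# Schematic illustration of baseline-dependent averaging intervals (assumed,
# simplified phase-rate model; not the exact expressions of Eq. 3).
import numpy as np

OMEGA_E = 7.292e-5   # Earth rotation rate [rad/s]
C = 299792458.0      # speed of light [m/s]

def max_averaging_time(baseline_m, freq_hz, fov_radius_rad, tolerance=0.98):
    """Largest integration time [s] such that sinc(delta_phi/2) >= tolerance."""
    # Assumed worst-case phase drift rate at the edge of the FoV [rad/s].
    phase_rate = 2 * np.pi * OMEGA_E * baseline_m * (freq_hz / C) * fov_radius_rad
    dt = np.linspace(0.5, 600.0, 4096)              # candidate intervals [s]
    atten = np.sinc(phase_rate * dt / (2 * np.pi))  # np.sinc(x) = sin(pi*x)/(pi*x)
    ok = atten >= tolerance
    return dt[ok][-1] if ok.any() else dt[0]

# Short baselines tolerate far longer averaging than long ones (MeerKAT-like numbers).
for b in (100.0, 1000.0, 8000.0):
    print(f"{b:7.0f} m -> {max_averaging_time(b, 1.4e9, np.radians(1.0)):6.1f} s")
```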
It is advantageous for a BDA implementation to integrate with existing software, formats and specifications, both to benefit from compression and to enable testing with real observational data. The contemporary specification for radio astronomy data is the MS v2, based on the CASA Table Data System (CTDS) format. We have, therefore, developed a BDA implementation targeting this specification and format. MS rows are grouped by baseline, sorted by time and aggregated into bins whose \(\mathrm{sinc}(\Delta\Phi/2)\) does not exceed an acceptable decorrelation tolerance. Each bin's \(\mathrm{sinc}(\Delta\Phi/2)\) is then inverted, firstly to obtain \(\mathrm{sinc}(\Delta\psi/2)\) and secondly the bin frequency integration interval \([\nu_{0},\nu_{1}]\). In practice, each bin can have a different frequency interval and this channelisation does not conveniently fit into the CTDS format, which has a fixed number of channels per spectral window. Therefore, to fit BDA data into this format, we discretise the channelisation by subdividing the original spectral window by the integral factors of its channels to produce new spectral windows representing each discretisation. Then, each BDA bin is output to a single row in the output MS and each row is associated with a different spectral window, thereby trading a small factor of compression for compatibility with the MS.

### Low-rank approximation: SVD

The SVD breaks a given complex data matrix \(\mathbf{V}\), having \(M\) rows and \(N\) columns, down into three matrices:

\[\mathbf{V}=\mathbf{A}\mathbf{\Lambda}\mathbf{C}^{\dagger}, \tag{4}\]

where \(\mathbf{\Lambda}\) is a diagonal matrix of size \(M\times N\), and \(\mathbf{A}\) of size \(M\times M\) and \(\mathbf{C}\) of size \(N\times N\) are unitary matrices. These matrices are defined as:

\[\mathbf{\Lambda}=\begin{bmatrix}\mathbf{\Lambda}_{r}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix},\ \ \mathbf{A}=\begin{bmatrix}\mathbf{A}_{r},\mathbf{a}_{r+1}, \cdots,\mathbf{a}_{M}\end{bmatrix},\ \ \mathbf{C}=\begin{bmatrix}\mathbf{C}_{r}\\ \mathbf{c}_{r+1}\\ \vdots\\ \mathbf{c}_{N}\end{bmatrix}. \tag{5}\]

The decomposition does not require \(\mathbf{V}\) to be a square matrix. The matrix \(\mathbf{\Lambda}_{r}=\mathrm{diag}(\eta_{1},\eta_{2},\cdots,\eta_{r})\) of size \(r\times r\) is diagonal with \(r=\mathrm{min}(M,N)\), and the \(\eta_{k}\) are the singular values of \(\mathbf{V}\). As discussed in Stewart (1998), \(\eta_{k}=\sqrt{\lambda_{k}}\), where the \(\lambda_{k}\) are the eigenvalues of \(\mathbf{V}^{\dagger}\mathbf{V}\). We note that the entries in the diagonal of \(\mathbf{\Lambda}_{r}\) are non-zero and appear in decreasing order. Here, \(\mathbf{A}_{r}=[\mathbf{a}_{1},\mathbf{a}_{2},\cdots,\mathbf{a}_{r}]\) and \(\mathbf{C}_{r}=[\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{r}]^{\mathrm{T}}\), where \({}^{\mathrm{T}}\) is the transpose operator. The vectors \(\mathbf{a}_{k}\) of size \(M\) and \(\mathbf{c}_{k}\) of size \(N\) are the left and right singular vectors of \(\mathbf{V}\), respectively; \(\mathbf{a}_{k}\) and \(\mathbf{c}_{k}\) are eigenvectors of \(\mathbf{V}\mathbf{V}^{\dagger}\) and \(\mathbf{V}^{\dagger}\mathbf{V}\), respectively. Eq. 4 remains valid if rewritten as:

\[\mathbf{V}=\mathbf{A}_{r}\mathbf{\Lambda}_{r}\mathbf{C}_{r}^{\dagger} \tag{6}\]
\[=\sum_{k=1}^{r}\eta_{k}\mathbf{a}_{k}\mathbf{c}_{k}^{\dagger}. \tag{7}\]
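As a quick numerical check of Eqs. 4–7 (on a small synthetic complex matrix; note that `numpy.linalg.svd` returns the last factor already in the form written here as \(\mathbf{C}_{r}^{\dagger}\)):

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 6, 4
V = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # stand-in complex data matrix

# Economy-size SVD: A_r (M x r), eta (r,), and the rows of the last factor are c_k^dagger.
A_r, eta, C_r_dag = np.linalg.svd(V, full_matrices=False)

# Eq. 6: V = A_r Lambda_r C_r^dagger.
print(np.allclose(A_r @ np.diag(eta) @ C_r_dag, V))       # True

# Eq. 7: V as a sum of rank-1 components eta_k a_k c_k^dagger.
V_sum = sum(eta[k] * np.outer(A_r[:, k], C_r_dag[k, :]) for k in range(len(eta)))
print(np.allclose(V_sum, V))                               # True
print(eta)                                                 # singular values, in decreasing order
```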
Each \(\eta_{k}\) quantifies how strongly the corresponding component \(\eta_{k}\mathbf{a}_{k}\mathbf{c}_{k}^{\dagger}\) contributes to the relevant features in Eq. 7, i.e. to the reconstruction of \(\mathbf{V}\). The larger \(\eta_{k}\) is, the more \(\eta_{k}\mathbf{a}_{k}\mathbf{c}_{k}^{\dagger}\) effectively contributes to the reconstruction of \(\mathbf{V}\). Note that \(\mathbf{a}_{k}\) and \(\mathbf{c}_{k}^{\dagger}\) determine the geometry of the features contained in \(\eta_{k}\mathbf{a}_{k}\mathbf{c}_{k}^{\dagger}\).

## 3 Proposed lossy compression methods

The low-rank approximation methods described in this section are applied to the raw visibility data (continuous data), and to each baseline separately. We note that the methods can also be applied to gridded visibility data. However, the gridded data lie on a regular grid where the data of all baselines have been interpolated together, making it difficult to gauge an acceptable threshold for the non-zero singular values. Also, since each baseline observes a different portion of the sky, the signal distortion and attenuation are baseline-dependent and the noise variance differs per visibility sample; these effects cannot be taken into account in gridded visibility data. It is insufficient to store only the images (and therefore the gridded visibility data) because calibration problems typically dominate and the data has, out of necessity, to be recalibrated most of the time to create science-ready products for specific continuum, transient and line subdomains. This cannot be done well once all the baselines have been convolved onto grid spacings. Following the above recommendations, we formally express the visibility data (for a given baseline) in a compact and robust matrix formalism.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Methods** & **Compression** & **Preserved spectral and temporal resolution within the FoV?** & **Speedup processing?** \\ \hline \hline **Traditional averaging** & yes & no & yes \\ **BDA** & yes & no & yes \\ _Simple SVD_ & yes & yes & no \\ _BDSVD_ & yes & yes & no \\ \hline \hline \end{tabular} \end{table} Table 1: The use cases of traditional averaging and BDA vs. _simple SVD_ and _BDSVD_.
\begin{table} \begin{tabular}{l l} \hline \hline **Notation** & **Description** \\ \hline \hline \(\mathbf{V}\) & The data matrix for \(N_{b}\) baselines before compression \\ \(\mathbf{V}_{n}\) & The data matrix for \(N_{b}\) baselines after compression \\ \(\mathcal{V}\) & The unsampled visibility data \\ \(V_{pqt\nu}\) & The sampled visibility data for baseline \(pq\) at \(t\), \(\nu\) \\ \(\mathbf{u}_{pq}\) & The baseline \(pq\) vector \\ \(\mathcal{I}_{\mathbf{l}}\) & The sky brightness in the direction \(\mathbf{l}\) \\ \(\mathbf{V}_{pq}\) & The rank\(-r\) data matrix for baseline \(pq\) \\ \(\mathbf{V}_{pq,n}\) & The rank\(-n\) approximation of \(\mathbf{V}_{pq}\) \\ \(\eta_{pqk}\) & The \(k^{th}\) singular value for baseline \(pq\) \\ \(CF\) & The compression factor \\ \(N_{b}\) & The number of baselines \\ \(M\), \(N\) & The sizes of the data matrices \\ \(\epsilon\) & The maximum threshold error \\ \(\varepsilon\) & The minimum percentage of signal to preserve \\ \(\mathbf{I}^{\mathrm{d}}_{pq}\) & The dirty image for baseline \(pq\) \\ \(\widetilde{\mathbf{I}}^{\mathrm{d}}_{pq}\) & The compressed dirty image for baseline \(pq\) \\ \(\mathbf{I}^{\mathrm{d}}\) & The dirty image matrix \\ \(\widetilde{\mathbf{I}}^{\mathrm{d}}\) & The compressed dirty image matrix \\ \(\mathbf{I}_{\mathrm{loss}}^{\mathrm{d}}\) & The loss image matrix \\ \(\mathcal{F}\) & The Fourier transform operator \\ \(S/N\) & The signal-to-noise ratio \\ \hline \hline \end{tabular} \end{table} Table 2: Main mathematical notations: lowercase bold letters are vectors and uppercase bold letters are matrices. Calligraphic capital letters are used to designate functional forms. Everything else is a constant.

Assume a non-polarized sky; a single channel–timeslot visibility measured by a baseline \(pq\) is then the complex Stokes value:

\[V_{pqt\nu}=\delta_{pqt\nu}(\mathcal{V}+\mathcal{E}), \tag{8}\]

where \(\delta_{pqt\nu}=\delta(\mathbf{u}-\mathbf{u}_{pqt\nu})\) is a delta function shifted to the \(uv\)-point being sampled, \(\mathcal{V}\) is defined in Eq. 2 and \(\mathcal{E}\) is the contaminating random noise with zero mean and r.m.s. \(\Sigma\). Assuming an observation of \(M\) timeslots and \(N\) channels, we can package the visibility data of a single baseline \(pq\) into a single data matrix \(\mathbf{V}_{pq}\) of size \(M\times N\):

\[\mathbf{V}_{pq}=\begin{bmatrix}V_{pqt_{1}\nu_{1}}&\cdots&V_{pqt_{1}\nu_{N}}\\ \vdots&\ddots&\vdots\\ V_{pqt_{M}\nu_{1}}&\cdots&V_{pqt_{M}\nu_{N}}\end{bmatrix}. \tag{9}\]

The formalism can be extended to data structures with orthogonal bases capable of fully describing the sky. We discuss the compression of \(\mathbf{V}_{pq}\) and then, in Section 4, we provide a detailed analysis of the effect of compression on \(\Sigma\) and on the sky image. Note that it is possible to partition \(\mathbf{V}_{pq}\) into sub-matrices and independently find the low-rank approximation of each sub-matrix. The latter should be considered in cases where computation is a bottleneck (i.e. \(MN\rightarrow\infty\)) and where approximating local low-rank matrices is an advantage (Lee et al., 2016).

### Method 1: _Simple SVD_

Finding a very small low-rank representation of \(\mathbf{V}_{pq}\) without information loss is challenging because the entries in \(\mathbf{V}_{pq}\) are strongly correlated. An uncorrelated \(\mathbf{V}_{pq}\) is rarely possible in a real observation due to the system electronics and the fact that each visibility data point comes from a pairwise correlation between antenna voltages, etc.
This implies that there exists no low-rank approximation \(\mathbf{V}_{pq,n}\) of rank\(-n\) that can perfectly reconstruct \(\mathbf{V}_{pq}\) of rank\(-r\):

\[\mathbf{V}_{pq,n}\simeq\mathbf{V}_{pq},\ 1\leq n<r. \tag{10}\]
\[\mathbf{V}_{pq,n}=\mathbf{A}_{pq,n}\mathbf{\Lambda}_{pq,n}\mathbf{C}_{pq,n}^{\dagger} \tag{11}\]
\[=\sum_{k=1}^{n}\eta_{pqk}\mathbf{a}_{pqk}\mathbf{c}_{pqk}^{\dagger}. \tag{12}\]

For simplicity, Eq. 11 can also be written as a Kronecker product, \(\otimes\), after vectorisation:

\[\text{vec}(\mathbf{V}_{pq,n})=(\mathbf{C}_{pq,n}^{\dagger}\otimes\mathbf{A}_{pq,n})\mathbf{\eta}_{pq}, \tag{13}\]

where \(\mathbf{\eta}_{pq}=[\eta_{pq1},\eta_{pq2},\cdots,\eta_{pqn}]^{\text{T}}\). This _simple SVD_ only allows an equal compression factor across all the baselines, as \(n\) is fixed. The compression factor is the ratio between the number of data points of the uncompressed data and the number of data points of the compressed data. The entries of \(\mathbf{A}_{pq,n}\) and \(\mathbf{C}_{pq,n}^{\dagger}\) are complex numbers, while the entries of \(\mathbf{\Lambda}_{pq,n}\) are real numbers. In terms of computer storage requirements, if a complex entry counts as one entry, a real entry should count as 0.5. Therefore, if the size of \(\mathbf{V}_{pq}\) is \(M\times N\), then one can show that the number of elements in the sub-matrices needed to compute \(\mathbf{V}_{pq,n}\) is \(n(M+N+0.5)\), which leads to an overall compression factor \(CF\) of:

\[CF=\frac{MN}{n(M+N+0.5)}. \tag{14}\]

In this setting, one has to choose \(CF\) carefully to determine the number of singular values to retain on all the baselines:

\[n=\Big{\lceil}\frac{MN}{CF(M+N+0.5)}\Big{\rceil}, \tag{15}\]

where \(\lceil\cdot\rceil\) is the ceiling operator. The compression loss is computed from the tensor norm:

\[||\mathbf{V}-\mathbf{V}_{n}||\leq\epsilon||\mathbf{V}||, \tag{16}\]

where \(\epsilon\) is the maximum threshold error provided to control the divergence between the visibility data tensors \(\mathbf{V}\) and \(\mathbf{V}_{n}\), representing the data for the \(N_{b}\) baselines before and after compression, respectively. In this work, we define the tensor norm as the sum of the Frobenius norms, \(||\cdot||_{\text{F}}\), per baseline:

\[||\mathbf{V}-\mathbf{V}_{n}||=\sum_{pq}||\mathbf{V}_{pq}-\mathbf{V}_{pq,n}||_{\text{F}}. \tag{17}\]

Eq. 17 can be computed from the sum of the Euclidean norms of the singular values not retained at each baseline:

\[||\mathbf{V}-\mathbf{V}_{n}||=\sum_{pq}\sqrt{\sum_{k=n+1}^{r}\eta_{pqk}^{2}}. \tag{18}\]

Note that in Eq. 18, \(k\) runs from \(n+1\) to \(r\), i.e. \((r-n)\) singular values are not retained in the compression. \(||\mathbf{V}||\) is defined as:

\[||\mathbf{V}||=\sum_{pq}||\mathbf{V}_{pq}||_{\text{F}} \tag{19}\]
\[=\sum_{pq}\sqrt{\sum_{k=1}^{r}\eta_{pqk}^{2}}. \tag{20}\]

Instead of using a threshold error, a minimum percentage of the signal to preserve, \(\varepsilon\), can be specified:

\[||\mathbf{V}_{n}||\geq\varepsilon||\mathbf{V}||, \tag{21}\]
\[||\mathbf{V}_{n}||=\sum_{pq}||\mathbf{V}_{pq,n}||_{\text{F}} \tag{22}\]
\[=\sum_{pq}\sqrt{\sum_{k=1}^{n}\eta_{pqk}^{2}}. \tag{23}\]

If the constraint \(\epsilon\) or \(\varepsilon\) is given, then it is computationally cheap to obtain the corresponding value of \(n\) that satisfies the constraint, as shown in Algorithm 1.
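The bookkeeping of Eqs. 14–18 can be sketched as follows (a toy example on a synthetic \(100\times 100\) block standing in for one baseline's reshaped time chunk; the target compression factor and matrix values are arbitrary):

```python
import numpy as np

M, N = 100, 100                         # toy per-baseline block (e.g. a reshaped time chunk)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, M)[:, None]
f = np.linspace(0.0, 1.0, N)[None, :]
# Stand-in visibilities: a smooth (nearly rank-1) signal plus weak noise.
V = np.exp(2j * np.pi * (3.0 * t + 0.5 * f)) + 0.05 * (rng.standard_normal((M, N))
                                                       + 1j * rng.standard_normal((M, N)))

target_CF = 25.0
# Eq. 15: number of components to retain for the requested compression factor.
n = int(np.ceil(M * N / (target_CF * (M + N + 0.5))))

A, s, Ch = np.linalg.svd(V, full_matrices=False)     # V = A @ diag(s) @ Ch
V_n = (A[:, :n] * s[:n]) @ Ch[:n, :]                  # rank-n approximation (Eq. 12)

achieved_CF = M * N / (n * (M + N + 0.5))             # Eq. 14
rel_err = np.sqrt(np.sum(s[n:] ** 2)) / np.sqrt(np.sum(s ** 2))   # Eqs. 18-20, one baseline
print(f"n = {n}, achieved CF = {achieved_CF:.1f}, relative error = {rel_err:.3e}")
```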
Note that these norms could be obtained directly by computing the Euclidean norms of \(\text{vec}(\mathbf{V}_{pq,n})\), \(\text{vec}(\mathbf{V})\) and \(\text{vec}(\mathbf{V}_{n})\); the vectorised versions of \(\mathbf{V}_{pq,n}\), \(\mathbf{V}\) and \(\mathbf{V}_{n}\), respectively.

### Source position in the sky and baseline effects on the distribution of singular values

As discussed in Section 2.1, the baselines observe different portions of the sky due to the variation of the phase terms in \(\mathcal{R}_{pqt\nu}^{\dagger}\), \(\mathcal{G}_{p\mathbf{l}t\nu}\) and \(\mathcal{G}_{q\mathbf{l}t\nu}^{\dagger}\), as well as the different Nyquist sampling due to the baseline geometry and distribution. With this information, we noted that decorrelation is baseline-dependent. Given this finding, the following questions arise: _Is the distribution of the singular values equal across all baselines?_ If not, _how do the strength and decay of the singular values differ at each baseline? Can the variation in strength and decay of the singular values be used to aggressively compress the data, and what limits the compression?_

To answer the above questions, Figure 1 shows the first 20 singular values of three baselines; the longest baseline (left-column panels), a medium-length baseline (middle-column panels) and the shortest baseline (right-column panels) for a 1 Jy source at the phase centre (top-row panels), 5 deg (middle-row panels) and 10 deg (bottom-row panels) away from the phase centre. The singular values are obtained by simulating the MeerKAT telescope at 1.4 GHz. The simulation corresponds to 10 000 timeslots of 1 s each, with a total bandwidth of 0.8 MHz divided into 10 channels, each of width 80 kHz. This figure shows some important behaviours of the distribution of the singular values across baselines and therefore deserves a detailed explanation. Regardless of the position of the source, the first component of each baseline captures most of the features in the original data. In addition, the strength and decay of the singular values differ at each baseline and source position, with the singular values of shorter baselines decaying rapidly compared to those of longer baselines, where the decay is slow. At the phase centre of the observation, the singular values decay fastest; the decay becomes slower as the source moves away from the phase centre. In other words:

* For a source at the phase centre, the first component of all the baselines captures the entire features in the original data; in other words, \(\eta_{pq1}\) is very large and \(\forall k>1,\eta_{pqk}\sim 0\). This result aligns with what is known; for all baselines, decorrelation is negligible at the phase centre of the observation.
* The spread of the singular values is baseline-dependent; the singular values are spread over several components on the longer baselines, while on shorter baselines only a few of the first components capture features. For example, if \(pq\) and \(\alpha\beta\) are two baselines with lengths \(||\mathbf{u}_{pq}||_{2}<||\mathbf{u}_{\alpha\beta}||_{2}\), then \(\exists i\) such that \(\forall k>i\): \[\eta_{pqi}\gg\eta_{\alpha\beta i}\text{ and }\eta_{\alpha\beta k}\gg\eta_{pqk},\] (24) where \(||.||_{2}\) is the Euclidean norm. This result aligns with the fact that longer baselines observe severe amplitude loss compared to short baselines.
* Far-field sources experience severe amplitude loss compared to sources near the phase centre.

The above results show that if we were to consider the same number of \(n\) components to retain (i.e.
\(r-n\) components to discard) across all the baselines, as described in Section 3.1, this will have a direct impact on the per-baseline compression errors, \(||\mathbf{V}_{pq}-\mathbf{V}_{pq,n}||_{\text{F}}\), which will be small on short baselines compared to long baselines. Figure 2 shows these compression errors for the simulation in Figure 1, where the first eight components of each baseline are retained. In this figure, the compression errors are plotted against baseline length; the left, middle and right panels show the compression errors when the source is at the phase centre, 5 deg and 10 deg away from the phase centre, respectively. The following is observed:

* At the phase centre, \(||\mathbf{V}_{pq}-\mathbf{V}_{pq,8}||_{\text{F}}\sim 0\) for all baselines \(pq\).
* On longer baselines, \(||\mathbf{V}_{pq}-\mathbf{V}_{pq,8}||_{\text{F}}\) increases rapidly for far-field sources.
* For any two baselines of mismatched length, say \(pq\) and \(\alpha\beta\) with \(||\mathbf{u}_{pq}||_{2}<||\mathbf{u}_{\alpha\beta}||_{2}\), we have: \[||\mathbf{V}_{pq}-\mathbf{V}_{pq,8}||_{\text{F}}<||\mathbf{V}_{\alpha\beta}-\mathbf{V}_{\alpha\beta,8}||_{\text{F}}.\] (25)

We note that the singular values experience a slow decay for out-of-phase-centre sources and on long baselines. Since the strength of a singular value \(\eta_{pqk}\) indicates the degree of features that the component \(\eta_{pqk}\mathbf{a}_{pqk}\mathbf{c}_{pqk}^{\dagger}\) contributes to the original data for baseline \(pq\), the components with smaller singular values can be discarded, depending on how the singular values decay for the given baseline. By doing so, we observe the following:

* A significant number of components can be discarded on shorter baselines with little or no loss of source amplitude. This leads to aggressive data compression with negligible effect on image fidelity.
* While attempting to preserve the source amplitude and image fidelity, only a few components can be discarded on longer baselines. This makes the compression baseline-dependent.

However, for a given FoV, a more sophisticated compression approach is to carefully choose \(n\) on each of the baselines so that the errors in Eq. 25 are equal on all the baselines. This method is described in Section 3.3.

### Method 2: Baseline-dependent SVD (_BDSVD_)

The _BDSVD_ method finds the baseline-dependent number of components to retain on each baseline so that the compression errors do not vary between baselines; for example, for any two baselines \(pq\) and \(\alpha\beta\) with \(||\mathbf{u}_{pq}||_{2}<||\mathbf{u}_{\alpha\beta}||_{2}\), find the numbers of components \(n_{pq}\) and \(n_{\alpha\beta}\), respectively, such that:

\[||\mathbf{V}_{pq}-\mathbf{V}_{pq,n_{pq}}||_{\text{F}}\equiv||\mathbf{V}_{\alpha\beta}-\mathbf{V}_{\alpha\beta,n_{\alpha\beta}}||_{\text{F}}, \tag{26}\]

where

\[\mathbf{V}_{pq,n_{pq}}=\sum_{k=1}^{n_{pq}}\eta_{pqk}\mathbf{a}_{pqk}\mathbf{c}_{pqk}^{\dagger}. \tag{27}\]

It should be noted that the maximum threshold error \(\epsilon\) (or the minimum percentage of the signal to preserve, \(\varepsilon\)) is now enforced on each individual baseline, as opposed to Eq. 16 (or Eq. 21):

\[||\mathbf{V}_{pq}-\mathbf{V}_{pq,n_{pq}}||_{\text{F}}\leq\epsilon||\mathbf{V}_{pq}||_{\text{F}} \tag{28}\]
\[||\mathbf{V}_{pq,n_{pq}}||_{\text{F}}\geq\varepsilon||\mathbf{V}_{pq}||_{\text{F}}. \tag{29}\]
Intuitively, since \(||\mathbf{u}_{pq}||_{2}<||\mathbf{u}_{\alpha\beta}||_{2}\), this means that \(n_{pq}<n_{\alpha\beta}\), while the cumulative strength of the singular values of a compact source is equal on all baselines:

\[\sum_{k=1}^{n_{pq}}\eta_{pqk}^{2}=\sum_{k=1}^{n_{\alpha\beta}}\eta_{\alpha\beta k}^{2}. \tag{30}\]

Figure 1: The first 20 singular values of three baselines; the longest baseline (left-column panels), a medium-length baseline (middle-column panels) and the shortest baseline (right-column panels) for a 1 Jy source at the phase centre (top-row panels), 5 deg (middle-row panels) and 10 deg (bottom-row panels) away from the phase centre. The singular values are obtained from simulating the MeerKAT telescope at 1.4 GHz. The data is sampled at 1 s and 80 kHz during 166 min 40 s with 0.8 MHz bandwidth.

Figure 2: Compression errors from the simulation in Figure 1, where the first eight components of each of the baselines are retained. At the phase centre, the compression error tends to zero for all baselines. On long baselines, the compression error increases rapidly for far-field sources. The compression error is small on short baselines compared to long baselines.

With this information, Eq. 31 simplifies to Eq. 32:

\[||\mathbf{V}_{n}||=\sum_{pq}\sqrt{\sum_{k=1}^{n_{pq}}\eta_{pqk}^{2}} \tag{31}\]
\[=N_{b}\sqrt{\sum_{k=1}^{n_{pq}}\eta_{pqk}^{2}}. \tag{32}\]

In the above, the number of components to retain is baseline-dependent, thus the compression factor also becomes baseline-dependent:

\[CF_{pq}=\frac{MN}{n_{pq}(M+N+0.5)}, \tag{33}\]

which is larger for shorter baselines compared to longer baselines, for example \(CF_{pq}>CF_{\alpha\beta}\) since \(n_{pq}<n_{\alpha\beta}\). As opposed to Eq. 14, the overall \(CF\) of _BDSVD_ follows:

\[CF=\frac{N_{b}MN}{(M+N+0.5)\sum_{pq}n_{pq}}. \tag{34}\]

Algorithm 2 describes an iterative process to find the baseline-dependent number of components, \(n_{pq}\), given \(\epsilon\) or \(\varepsilon\). For a compression factor \(CF\), the space-saving \(SS\) is measured as:

\[SS=(1-CF^{-1})\times 100\%. \tag{35}\]

### Computation complexity and parallelisation

Low-rank approximation has computational drawbacks. It is expensive to compute the SVD of a very large matrix. For visibility data compression as discussed in this paper, in some cases (e.g. archiving the data) we can overlook this computational drawback since the data only has to be compressed once. In other cases, for example big-data radio interferometers such as the SKA, the amount of visibility data on each of the baselines is very large, and even a once-off compression can become a challenge since finding the exact \(CF_{pq}\) involves computing the singular values at each baseline sequentially, which is an expensive computational task. For example, if the visibility data for each baseline has size \(M\times N\) with \(M\geq N\), then for \(N_{b}\) baselines the sequential full-rank _simple SVD_ scales as:

\[s_{\text{cost}}\sim\mathcal{O}\left(N_{b}MN^{2}\right). \tag{36}\]

_BDSVD_ scales closely with _simple SVD_ for equal \(CF\). However, the good news is that visibility data has a natural basis consisting of row data. This information can be used to speed up the compression via a parallel algorithm. The row visibility data for a given baseline can be shared across multiple compute nodes; for example, for each baseline, we can subdivide the row data into chunks \(\mathbf{V}_{pq}^{[i]}\) and compute the SVD from all the chunks in parallel, as sketched below.
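A rough Python sketch of this chunked computation is given below (the chunk notation is formalised in the next paragraph). It accumulates the Gram matrix \(\mathbf{V}_{pq}^{\dagger}\mathbf{V}_{pq}\) from per-chunk contributions, obtains \(\mathbf{C}_{pq}\) and the singular values from its eigendecomposition, and recovers \(\mathbf{A}_{pq}\) chunk by chunk; in a real deployment the per-chunk loops would be distributed over compute nodes, which is not shown here, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, n_chunks = 120, 10, 4
V = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # stand-in data, one baseline
chunks = np.array_split(V, n_chunks, axis=0)                         # row (time) chunks V^[i]

# Accumulate the Gram matrix from per-chunk contributions (these products could run in parallel).
G = np.zeros((N, N), dtype=complex)
for Vi in chunks:
    G += Vi.conj().T @ Vi                                            # V^[i]† V^[i]

# Eigendecomposition of V†V gives C and the squared singular values.
eigval, C = np.linalg.eigh(G)
order = np.argsort(eigval)[::-1]                                     # decreasing order
eigval, C = eigval[order], C[:, order]
s = np.sqrt(np.clip(eigval, 0.0, None))                              # singular values

# Recover the left singular vectors chunk by chunk: A^[i] = V^[i] C diag(1/s).
A = np.vstack([Vi @ C / s for Vi in chunks])

# Sanity check against the direct reconstruction V = A diag(s) C†.
print(np.allclose((A * s) @ C.conj().T, V))                          # True
```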
The superscript \({}^{[i]}\) indicates the \(i^{th}\) chunk of the row visibility data. We briefly describe this distributed compression process in Algorithm 3; however, further investigation is needed to assess its computational efficiency in practice and to determine at what visibility data size the algorithm should be enforced. Algorithm 3 shows that we can compute the SVD for each of the baselines in parallel (line 1), assuming \(N_{p}\) parallel nodes. Also (as shown in lines 2–5), for each baseline, finding \(\mathbf{A}_{pq}\), \(\mathbf{\Lambda}_{pq}\) and \(\mathbf{C}_{pq}\) involves computing all the \(\mathbf{V}_{pq}^{[i]\dagger}\mathbf{V}_{pq}^{[i]}\) in parallel; \(\mathbf{V}_{pq}^{\dagger}\mathbf{V}_{pq}\) is then obtained by summing all the \(\mathbf{V}_{pq}^{[i]\dagger}\mathbf{V}_{pq}^{[i]}\). The latter suggests that the complexity of Algorithm 3 scales as:

\[p_{\text{cost}}\sim\mathcal{O}\left(N_{b}MN^{2}/N_{p}\right). \tag{37}\]

Ideally, we would hope to demonstrate strong scaling over Eq. 36, but this remains outside the scope of this work.

## 4 Effects on the image and noise: analytical quantification

The SVD is not perfectly orthogonal in practice; both noise and signal are present in each of the components of the decomposition. A trade-off between the amount of signal and noise to remove is necessary. _Simple SVD_ and _BDSVD_ also result in a small loss of signal and retain a small amount of noise. Our goal now is to provide the mathematical models to quantify the signal lost and the noise removed, as a form of contribution, at all baselines.

### Effect on the image

As the Fourier transform \(\mathcal{F}\) is linear, the compressed dirty image \(\widetilde{\mathbf{I}}_{pq}^{\text{d}}\) of a single baseline is the sum:

\[\widetilde{\mathbf{I}}_{pq}^{\text{d}}=\mathcal{F}\mathbf{V}_{pq,n_{pq}} \tag{38}\]
\[=\sum_{k=1}^{n_{pq}}\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}, \tag{39}\]

where \(\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}\) is the \(k^{th}\) linearly independent component of the sky image seen by the baseline \(pq\) and \(\mathcal{F}\) is a unitary linear operator. In relation to Parseval's theorem, we have:

\[\mathbf{I}_{pqk}^{\text{d}}=\mathbf{a}_{pqk}^{\prime}\mathbf{c}_{pqk}^{\prime\dagger}, \tag{40}\]

where \(\mathbf{a}_{pqk}^{\prime}\) and \(\mathbf{c}_{pqk}^{\prime}\) are vectors. The singular value \(\eta_{pqk}\) indicates how strongly \(\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}\) contributes to the quality and fidelity of \(\widetilde{\mathbf{I}}_{pq}^{\text{d}}\). A closer look at Eq. 39 shows that the singular values in both the visibility and image domains are equal, thanks to the linear and unitary properties of the Fourier transform. This means that the choice of the domain in which the data is compressed does not matter; the number of components to be retained in the visibility domain would eventually be the same in the image domain. Alternatively, the dirty image is derived by summing Eq. 39 across all the baselines:

\[\widetilde{\mathbf{I}}^{\text{d}}=\sum_{pq}\widetilde{\mathbf{I}}_{pq}^{\text{d}} \tag{41}\]
\[=\sum_{pq}\Bigg{(}\sum_{k=1}^{n_{pq}}\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}\Bigg{)}. \tag{42}\]

If \(\mathbf{I}^{\text{d}}=\sum_{pq}\mathcal{F}\mathbf{V}_{pq}\) is the Fourier transform of the uncompressed data (i.e.
the uncompressed image of the sky), then to quantify the net loss in signal per pixel, \(\mathbf{I}_{\text{loss}}^{\text{d}}\), the following difference is adopted as a standard fidelity metric:

\[\mathbf{I}_{\text{loss}}^{\text{d}}=\mathbf{I}^{\text{d}}-\widetilde{\mathbf{I}}^{\text{d}} \tag{43}\]
\[=\sum_{pq}\Bigg{(}\sum_{k=1}^{r}\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}-\sum_{k=1}^{n_{pq}}\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}\Bigg{)}. \tag{44}\]

For \(n_{pq}<r\), we have

\[\mathbf{I}_{\text{loss}}^{\text{d}}=\sum_{pq}\sum_{k=n_{pq}+1}^{r}\eta_{pqk}\mathbf{I}_{pqk}^{\text{d}}, \tag{45}\]

where the entries of \(\mathbf{I}_{\text{loss}}^{\text{d}}\) are different from 0.

```
Data: V; epsilon or varepsilon
Result: the list of the numbers of singular values to retain at each baseline {n_pq}, for all pq
if epsilon is given then
    forall baselines pq do
        n_pq <- 0; V_pq,n_pq <- 0
        do
            n_pq <- n_pq + 1
            V_pq,n_pq <- V_pq,n_pq-1 + eta_pq,n_pq a_pq,n_pq c_pq,n_pq^dagger
        while ||V_pq - V_pq,n_pq||_F / ||V_pq||_F > epsilon and n_pq <= r
    end
end
// the case where varepsilon is given proceeds analogously, accumulating components
// until ||V_pq,n_pq||_F >= varepsilon ||V_pq||_F (Eq. 29)
```
**Algorithm 2:** Finding \(\{n_{pq}\},\forall pq\) using _BDSVD_. Here, all the \(\mathbf{V}_{pq}\) are taken from \(\mathbf{V}\).

```
Data: V; N_p
Result: the SVD of V decomposed per baseline: {A_pq, Lambda_pq, C_pq}, for all pq
1  forall baselines pq, in parallel do
2      forall chunks i, in parallel do
3          V_pq^dagger V_pq += V_pq^[i]dagger V_pq^[i]
4      end
5      C_pq Lambda_pq^2 C_pq^dagger = V_pq^dagger V_pq      // eigendecomposition
6      Lambda_pq = sqrt(Lambda_pq^2)
7      forall chunks i, in parallel do
8          A_pq^[i] = V_pq^[i] C_pq Lambda_pq^-1
9      end
10     A_pq = {A_pq^[i]}, for all i
11 end
```
**Algorithm 3:** Finding \(\{\mathbf{A}_{pq},\mathbf{\Lambda}_{pq},\mathbf{C}_{pq}\},\forall pq\) using parallel computing. Here, all the \(\mathbf{V}_{pq}\) are taken from \(\mathbf{V}\), and \(N_{p}\) is the number of compute nodes.

### Noise filtering and S/N

Three important questions arise surrounding noise filtering when _simple SVD_ and _BDSVD_ are used: i) how does the noise behave for _simple SVD_, where the same number of components is retained on all the baselines? ii) how do _BDSVD_ and its varying compression factor affect the noise at each baseline, and how does the filtered noise differ from that of _simple SVD_ and traditional averaging? and iii) on what type of baseline is the noise most heavily filtered when using _simple SVD_ and _BDSVD_? In this section, we address these questions through well-posed conditioning and discussion. The entries of the compact visibility data matrix \(\mathbf{V}_{pq}\) come from sampling the sum of \(\mathcal{V}\) and the noise \(\mathcal{E}\), as shown in Eq. 8.
Thus, each \(\eta_{pqk}\) reflects the strength of the signal contribution from a component which is sampled from \(\mathcal{V}+\mathcal{E}\). A small \(\eta_{pqk}\) corresponds to components of \(\mathbf{V}_{pq}\) that are heavily corrupted by noise, which means that retaining only the components with the larger \(\eta_{pqk}\) should be equivalent to removing noise while keeping the useful signal. This means that with _simple SVD_ and _BDSVD_, the noise in the compressed data is reduced for \(n<r\) and \(n_{pq}<r\), respectively. Below, we provide details on the noise filtering capability of _simple SVD_ and _BDSVD_ compared to traditional averaging at the same \(CF\). For the same compression factor, the analytical visibility noise penalty estimate of _simple SVD_ (or _BDSVD_) is the relative decrease in noise over traditional averaging:

\[\Xi_{X}=\frac{\Sigma_{X}}{\Sigma_{avg}}, \tag{46}\]

where \(\Sigma_{X}\) and \(\Sigma_{avg}\) are the compressed noise using _simple SVD_ (or _BDSVD_) and traditional averaging, respectively. The goal is to show that \(\Xi_{X}<1\), which means \(\Sigma_{X}<\Sigma_{avg}\). Assuming that for all baselines and samples the noise term has constant r.m.s. \(\Sigma\), when denoising a signal by averaging \(n_{avg}\) samples, the reduction in noise is well understood to be \(\Sigma_{avg}=\Sigma/\sqrt{n_{avg}}\) if the noise is not correlated between samples, which confirms that \(\Sigma_{avg}<\Sigma\). As discussed, the SVD removes some noise from the data, therefore \(\Sigma_{X}<\Sigma\). However, at the same compression factor it is not trivial to see analytically that \(\Sigma_{X}<\Sigma_{avg}\) (we refer the reader to Appendix A for a detailed mathematical discussion). Empirical measurements are used in Section 5 to show that at the same compression factor \(\Sigma_{X}<\Sigma_{avg}\) and therefore \(\Xi_{X}<1\). In the image domain, the noise penalty estimate of the centre pixel is given by

\[\Xi_{X}^{W}=\frac{\sum_{pqt_{i}\nu_{j}}W_{pqt_{i}\nu_{j}}^{2}\Xi_{X}^{2}}{(\sum_{pqt_{i}\nu_{j}}W_{pqt_{i}\nu_{j}})^{2}}, \tag{47}\]

where \(W_{pqt_{i}\nu_{j}}\) is the imaging weight per visibility. As discussed above, components with smaller \(\eta_{pqk}\) are strongly noisy and are candidates for removal when noise is of concern. In addition, components with larger \(\eta_{pqk}\) contain most of the signal to retain. _Simple SVD_ and _BDSVD_ only retain components with larger \(\eta_{pqk}\) for a given threshold, and components that do not meet this threshold are rejected. _BDSVD_ differs from _simple SVD_ in that the number of components to retain, \(n_{pq}\), is baseline-dependent. As mentioned in Section 3.2, the first \(\eta_{pqk}\) are large for shorter baselines and decrease faster than on longer baselines, where the \(\eta_{pqk}\) decrease slowly (Figure 1 clearly shows this behaviour of \(\eta_{pqk}\)). This means that if the same number of components is retained on all the baselines, as in _simple SVD_, then some of the components that contain signal will be discarded on the longer baselines, hence distorting the compressed data. On the other hand, on the shorter baselines, several noisy components are retained in addition to all the components with a strong signal. _BDSVD_ takes advantage of this drawback and selects a baseline-dependent number of components to retain, such that only components with strong signal strength are retained on each baseline; a minimal numerical sketch of this retention rule follows.
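The sketch below is in the spirit of Algorithm 2, but operates directly on two toy singular-value spectra whose decay rates are arbitrary and only mimic the qualitative behaviour of Figure 1 (fast decay on a short baseline, slow decay on a long one); it picks the smallest \(n_{pq}\) meeting the per-baseline threshold of Eq. 28 and reports the resulting \(CF_{pq}\) of Eq. 33:

```python
import numpy as np

def n_components(singular_values, eps):
    """Smallest n whose discarded tail keeps ||V - V_n||_F / ||V||_F <= eps (Eq. 28)."""
    s2 = singular_values ** 2
    total = np.sqrt(s2.sum())
    for n in range(1, len(singular_values) + 1):
        if np.sqrt(s2[n:].sum()) / total <= eps:
            return n
    return len(singular_values)

M, N, eps = 100, 100, 0.01
k = np.arange(1, min(M, N) + 1)
# Toy spectra (illustration only): short baseline decays fast, long baseline decays slowly.
s_short = np.exp(-(k - 1) / 1.0)
s_long = np.exp(-(k - 1) / 8.0)

for name, s in [("short", s_short), ("long", s_long)]:
    n_pq = n_components(s, eps)
    CF_pq = M * N / (n_pq * (M + N + 0.5))          # Eq. 33
    print(f"{name} baseline: n_pq = {n_pq}, CF_pq = {CF_pq:.1f}")
```

With these arbitrary spectra the short baseline keeps only a handful of components (large \(CF_{pq}\)) while the long baseline must keep many more (small \(CF_{pq}\)), which is exactly the baseline-dependence exploited by _BDSVD_.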
Such baseline-dependent retention heavily filters out noise and maintains signal fidelity compared to _simple SVD_ at the same \(CF\). With this in mind, at the same compression factor we can write:

\[\Sigma_{\text{svd}}>\Sigma_{\text{bdsvd}}\implies\Xi_{\text{svd}}^{W}>\Xi_{\text{bdsvd}}^{W}, \tag{48}\]

where \(\Sigma_{\text{svd}}\) and \(\Sigma_{\text{bdsvd}}\) (respectively \(\Xi_{\text{svd}}^{W}\) and \(\Xi_{\text{bdsvd}}^{W}\)) are the noise (respectively the noise penalty) using _simple SVD_ and _BDSVD_, respectively. Analytically, Eq. 48 clearly shows that _BDSVD_ reduces the noise compared to _simple SVD_. Empirical measurements are used in Section 5 to confirm the analytical result in Eq. 48. The metric we use to measure the signal-to-noise ratio, \(S/N\), in decibels in each pixel of the compressed image is:

\[S/N=10\log_{10}\frac{\widetilde{\mathcal{I}}_{\mathbf{l}}}{\sigma_{\text{pix}}+c_{\text{noise}}}, \tag{49}\]

where \(\widetilde{\mathcal{I}}_{\mathbf{l}}\) is the compressed version of \(\mathcal{I}_{\mathbf{l}}\); the sky without any electronic corruption and without the effects that can disrupt the signal on its path towards the instrument (see Eq. 2). In this formulation, \(c_{\text{noise}}\) represents the signal coming from sources that are outside the FoV and \(\sigma_{\text{pix}}\) is the per-pixel noise in the dirty image:

\[\sigma_{\text{pix}}^{2}=\frac{\sum_{pqt_{i}\nu_{j}}W_{pqt_{i}\nu_{j}}^{2}\Sigma_{X}^{2}}{(\sum_{pqt_{i}\nu_{j}}W_{pqt_{i}\nu_{j}})^{2}}. \tag{50}\]

## 5 Simulations

The MeerKAT telescope and the European Very Long Baseline Interferometry Network (EVN) are used as reference telescopes in this section to evaluate and compare the performance of each method: traditional averaging, BDA, _simple SVD_ and _BDSVD_. Four different metrics are evaluated: i) source amplitude loss is measured against baseline length, and the baseline-dependent compression factor against baseline length and east-west baseline length; ii) amplitude loss is measured relative to the source's position in the sky; iii) to measure the spectral and temporal resolution, the Point Spread Function (PSF) shape is measured relative to the source's position in the sky; and iv) the S/N is also measured relative to the source's position in the sky. Note that for a rank\(-n\) SVD decomposition of two-dimensional visibility data \(\mathbf{V}_{pq,n}\), as defined in Eq. 11, \(M\) and \(N\) are the numbers of time steps and channels, respectively. In the space where \(\mathbf{V}_{pq,n}\) is defined, the row and column data are the time and frequency observations, respectively. When \(\mathbf{V}_{pq,n}\) is projected into the rank\(-n\) SVD space, the time and frequency observations are mixed, which makes it difficult to completely determine the direction in which the rank\(-n\) compression is performed. To compare the proposed methods with traditional averaging and BDA, we need to evaluate the above metrics in a precise compression direction, such as in time or in frequency. We adopt the following strategy, for example, to evaluate the metrics in time: \(\mathbf{V}_{pq,n}\) is rescaled from a data matrix of size \(M\times N\) to size \(M_{1}\times M_{2}\times N\), where the rank\(-n\) SVD is applied only in the time direction, to chunks of size \(M_{1}\times M_{2}\) with \(M=M_{1}\times M_{2}\). The notation \(CF=CF_{t}\times CF_{\nu}\) adopted in this section means that the data is compressed by a factor of \(CF_{t}\) in time and \(CF_{\nu}\) in frequency.
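The reshaping strategy just described can be sketched as follows (synthetic data; the chunk sizes follow the \(100\times 100\times 10\) scaling used in Section 5.1, and \(n=2\) is chosen only to illustrate \(CF\simeq 25\times 1\)):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 10_000, 10                       # timeslots x channels for one baseline
M1, M2, n = 100, 100, 2                 # fold the time axis into M1 blocks of M2 samples; keep n components
V = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # stand-in visibilities

V_cube = V.reshape(M1, M2, N)           # (time block, time within block, channel)
V_compressed = np.empty_like(V_cube)
for chan in range(N):                   # rank-n truncation applied in the time direction only
    A, s, Ch = np.linalg.svd(V_cube[:, :, chan], full_matrices=False)
    V_compressed[:, :, chan] = (A[:, :n] * s[:n]) @ Ch[:n, :]

CF_t = M1 * M2 / (n * (M1 + M2 + 0.5))  # Eq. 14 per time block; CF_nu = 1
print(f"CF ~ {CF_t:.1f} x 1")
```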
The discussion in this section does not include compression in the frequency direction (\(CF_{\nu}=1\)); however, equivalent performance is observed in the frequency direction if \(\mathbf{V}_{pq,n}\) is rescaled to \(M\times N_{1}\times N_{2}\) and the rank\(-n\) SVD is applied only to the \(N=N_{1}\times N_{2}\) entries of \(\mathbf{V}_{pq,n}\).

### MeerKAT telescope

For the MeerKAT telescope at 1.4 GHz, a point source is simulated during 166 min 40 s, which gives 10 000 time steps of 1 s each. The simulation has a total bandwidth of 0.8 MHz divided into 10 channels, each 80 kHz wide. At 1.4 GHz, and with a time step of 1 s and a channel width of 80 kHz, the MeerKAT telescope observes a full FoV of radius \(\sim 2.25\) deg (out to the second null of the PB) without significant time and frequency smearing effects. This high-resolution dataset is rescaled to \(100\times 100\times 10\), and _simple SVD_ and _BDSVD_ are applied separately to each \(100\times 100\) time block per frequency channel. The compression factors we adopt are \(CF=25\times 1\), \(CF=35\times 1\) and \(CF=50\times 1\), which translate to space-savings of 96.0%, 97.14% and 98.0% in the time direction, respectively. With this high-resolution dataset, \(CF=50\times 1\) is the maximum time compression factor that can be achieved with the proposed methods. This is because \(CF_{t}=50\) corresponds to a rank-1 SVD; in other words, only one component is kept across all baselines and the compression factor as described in Eq. 33 becomes \(100\times 100/(100+100+0.5)\sim 50\). The strategies adopted for traditional averaging and BDA are:

* Two low-resolution MSs with bin sizes \(25s\times 80kHz\) and \(50s\times 80kHz\) are created to receive the resampled visibilities for traditional averaging with \(CF=25\times 1\) and \(CF=50\times 1\), respectively.
* Using the method described in Section 2.2, a third MS is created to receive the resampled visibilities for BDA with \(CF=25\times 1\) and \(CF=50\times 1\).
* A fourth MS, a copy of the high-resolution MS, is created to receive the resampled visibilities for _simple SVD_ and _BDSVD_. No backup policy is required to save the decompressed visibility data for _simple SVD_ and _BDSVD_.

The adopted compression factors are \(CF=25\times 1\) and \(CF=50\times 1\) for _simple SVD_, while for _BDSVD_, \(CF=25\times 1\) and \(CF=35\times 1\) are adopted. Since \(CF=50\times 1\) is the maximum compression factor, _BDSVD_ with \(CF=50\times 1\) would be equivalent to _simple SVD_ with \(CF=50\times 1\); we therefore do not run _BDSVD_ with \(CF=50\times 1\).

#### 5.1.1 Amplitude vs. east-west baseline lengths

This first test aims to quantify the decorrelation of a single point source of 1 Jy amplitude placed at 2.25 deg, the second null of the MeerKAT PB at 1.4 GHz. The results are shown in Figure 3. For \(\varepsilon=99\%\), Figure 3 (top panel) shows the baseline-dependent compression factor \(CF_{pq}\) on a logarithmic scale against increasing east-west baseline length. For the rank\(-n\) SVD-related methods, \(CF_{pq}\) is computed from Eq. 33 after calculating all the \(n_{pq}\) using Algorithm 2, while for BDA, \(CF_{pq}\) is computed following the discussion in Atemkeng et al. (2018). This is an important result of this study, so it deserves a detailed explanation. Although we understand the \(CF_{pq}\) of BDA as discussed in Atemkeng et al. (2018), the \(CF_{pq}\) of _BDSVD_ shows a different pattern and strength for east-west projected baselines.
It can be observed that for BDA the \(CF_{pq}\) are strictly decreasing for increasing east-west projected baseline length, whereas for _BDSVD_, east-west projections of equal length can have different \(CF_{pq}\), with some \(CF_{pq}\) significantly larger than others. However, this behaviour still shows that more data is compressed on the smaller east-west baseline lengths compared to the longer ones, although the pattern and strength are not similar to those of BDA. Figure 3 (middle panel) shows the \(CF_{pq}\) as a function of increasing baseline length. We observe that baselines of equal length can have different \(CF_{pq}\) when BDA or _BDSVD_ are in effect, with the \(CF_{pq}\) decreasing more regularly for _BDSVD_ and appearing randomly distributed for BDA; this confirms that decorrelation is not a function of baseline length but rather of east-west baseline length. At the same overall compression factor \(CF\), BDA and _BDSVD_ see strictly different \(CF_{pq}\). Figure 3 (bottom panel) shows the amplitude of the 1 Jy source as a function of increasing east-west projection length. It is clear from this result that _BDSVD_ outperforms BDA. The source amplitude is attenuated with BDA compared to _BDSVD_, which retains more than \(\varepsilon=99\%\) of the source amplitude. Additionally, we observe with _BDSVD_ that the amplitude of the decorrelated source is not constant for some equal east-west baseline lengths. Indeed, as observed in Figure 3 (top panel), \(CF_{pq}\) varies for certain equal east-west baseline lengths, which can result in different degrees of decorrelation. Similar behaviour can be observed with BDA, where decorrelation grows faster for east-west baseline lengths belonging to the same averaging range. For example, all east-west baselines with a length between 2.2 km and 3.5 km fall within the same averaging range, where \(\sim\) 2 bins are averaged together. Since the lengths of the east-west baselines in the same averaging range differ, the degrees of decorrelation also differ.

#### 5.1.2 Amplitude vs. source position

This section examines the amplitude decorrelation with respect to the source position in the sky. The noise penalty is also measured and compared for each of the compression methods. To measure the thermal noise, the high-resolution dataset described above is used to populate an empty sky with 1 Jy thermal noise, and the different compression methods are applied. Figure 4 shows the results of the time compression. There are important points to note about these results. A compression factor of \(CF=25\times 1\) is needed to retain at least \(\varepsilon=99\%\) of the source amplitude. In this compression regime, _simple SVD_ and _BDSVD_ keep the source amplitude almost flat, with a negligible attenuation of 1% that starts around a radius of \(\sim\) 4 deg, while the 1% attenuation starts around radii of \(\sim\) 1.32 deg and \(\sim\) 0.8 deg for BDA and traditional averaging, respectively. A compression factor of \(CF=50\times 1\) provides a FoV of radius up to \(\sim\) 2.9 deg for \(\varepsilon=95\%\), whereas at this compression factor, traditional averaging and BDA can only provide FoVs of radius up to \(\sim\) 1.32 deg and \(\sim\) 2 deg, respectively.
It can also be noted that traditional averaging only provides a FoV of radius \(\sim\) 1.32 deg for \(CF=25\times 1\), whereas to obtain the same FoV of radius \(\sim\) 1.32 deg using _simple SVD_, we can compress the data by two orders of magnitude more than traditional averaging. Traditional averaging or BDA cannot compress the data by a factor of \(CF=25\times 1\) and achieve a \(\sim\) 2.25 deg radius with 1% attenuation. However, at this 1% attenuation rate, _BDSVD_ achieves a \(\sim\) 2.25 deg radius for \(CF=35\times 1\). _BDSVD_ draws its potential from the advantages of both BDA and _simple SVD_. The values of \(\Xi\sim 1\) for BDA; this result has also been observed in Atemkeng et al. (2018), and at the same compression factor \(CF\), the reduction in noise of both BDA and traditional averaging varies with \(\sqrt{CF}\). Values of \(\Xi<1\) are found for _simple SVD_, which demonstrates the common use of the SVD to denoise signals. The values of \(\Xi\) for _BDSVD_ are lower than those of _simple SVD_, which confirms the theoretical result discussed in Section 4.2; the noise performance of _BDSVD_ exceeds that of _simple SVD_ at the same compression factor.

#### 5.1.3 PSF distortion vs. source position

As discussed in Section 1, a negative aspect of averaging the visibility data is the distortion of the PSF, for which the longer baselines are the major contributors. The amplitude of the PSF at a given position in the sky provides a measure of the signal loss, while its width (say at the FWHM) describes how widely the source is spread out at that position in the sky. In this section, we evaluate the PSF distortion by measuring the width of the PSF at the FWHM when each of the discussed compression methods is applied in time. The simulated high-resolution dataset of \(1s\times 80kHz\) bins discussed in Section 5.1 is reused, and the three compression factors \(CF=25\times 1\), \(CF=35\times 1\) and \(CF=50\times 1\) are compared for all the different methods. Results are shown in Figures 5 and 6. Figure 5 depicts the PSF of a source at 2.25 deg when the high-resolution dataset is compressed with \(CF=25\times 1\) using _BDSVD_ (top left), _simple SVD_ (top right), BDA (middle left) and traditional averaging (middle right). Figure 5 also displays cuts through the normalised PSFs in the vertical (bottom left) and horizontal (bottom right) directions. It is clear from this visual inspection that the width of the PSF is different for all the compression methods, with traditional averaging having a wider PSF compared to the other methods. Although BDA is a potential compression method that limits amplitude loss, the resulting PSF shape differs completely from that of traditional averaging and the SVD-related methods.

Figure 3: The top panel shows the baseline-dependent compression factor \(CF_{pq}\) on a logarithmic scale against increasing east-west baseline length, while the middle panel shows \(CF_{pq}\) on a logarithmic scale as a function of baseline length. The bottom panel shows the amplitude of the 1 Jy point source against increasing east-west baseline length. BDA sees strictly different \(CF_{pq}\) than _BDSVD_, while _BDSVD_ outperforms BDA in preserving the 1 Jy source amplitude.

In order to quantify the width of the PSFs, Figure 6 displays the average of the radial and tangential PSF resolutions measured at the FWHM against distance from the phase centre of the observation.
_BDSVD_ shows excellent capabilities, maintaining the PSF without any distortion up to a radius greater than 3 deg with \(CF=25\times 1\) and \(CF=35\times 1\). A similar result is observed for _simple SVD_ with \(CF=25\times 1\). However, for _simple SVD_ the PSF distortion starts at a radius of \(\sim 1.2\deg\) for \(CF=50\times 1\). When traditional averaging or BDA is in action, the PSF quickly begins to show distortion, starting at around \(\sim 1.2\deg\) for each of the compression factors. These results confirm that _BDSVD_ and _simple SVD_ can maintain the PSF without any distortion up to beyond a radius of 2.25 deg, the maximum radius that the MeerKAT telescope at 1.4 GHz is capable of achieving.

#### 5.1.4 Relative S/N

The S/N, as discussed theoretically in Section 4.2, is shown in Figure 7 as a function of source position in the sky. The simulation in Section 5.1.2 is used to measure the source amplitude when the compression methods are applied with the different compression factors. Two further simulations are included in this section to measure the thermal noise and the noise from sources outside the FoV. The high-resolution dataset described above is used to populate i) an empty sky with 1 Jy thermal noise and ii) a sky with a 10 Jy source at 20 deg away from the phase centre, and the different compression methods are applied separately to each simulation. The far-field contamination is measured from images of size \(2^{10}\times 2^{10}\). Note that, apart from the signal from the far-field source, this image should be empty. _BDSVD_ and _simple SVD_ benefit from a strong improvement in S/N, of about \(\sim 1.5\) dB at 2.5 deg, compared to traditional averaging. However, BDA does not improve the S/N compared to traditional averaging; this is understood since the thermal noise reduction scales in the same way for BDA and traditional averaging, while BDA does not suppress sources that are outside the FoV any more than traditional averaging does.

### EVN

We investigate the application of _simple SVD_ and _BDSVD_ in VLBI to keep decorrelation down to a certain level while significantly compressing the data. We simulate a 15 min observation with the full EVN (i.e. Badary, Effelsberg, Hartebeesthoek, Jodrell Bank, Medicina, Noto, Onsala, Shanghai, Svetloe, Torun, Westerbork, Zelenchukskaya) at 1.6 GHz. With a total bandwidth of 50 kHz channelised into 10 channels, each of width 5 kHz, the 15 min observation is sampled every 0.01 s, which gives 90 000 time steps. To apply _simple SVD_ and _BDSVD_, the 90 000 time steps are rescaled to \(300\times 300\), which allows us to investigate compression factors of \(CF=10\times 1\), \(CF=95\times 1\) and \(CF=150\times 1\). A single point source with 1 Jy amplitude is simulated, and the amplitude of the source is measured and compared to that obtained with traditional averaging and BDA. The results are shown in Figure 8. It is observed that for a smearing factor of 1% and \(CF=10\times 1\), traditional averaging and BDA would result in FoVs of 6 arcmin and 18 arcmin, respectively, whereas with \(CF=95\times 1\), _BDSVD_ can image a FoV of more than 30 arcmin with a superior noise reduction capability. Although these results give a good indication of what to expect in the VLBI regime, testing the different compression methods on real data remains imperative.

Figure 4: MeerKAT telescope observing at 1.4 GHz during 166 min 40 s with a total bandwidth of 0.8 MHz, demonstrating the degree of amplitude loss of a 1 Jy source at various sky positions and the associated noise penalty: smearing against source distance from the phase centre, for traditional averaging and BDA with \(CF=25\times 1\) and \(CF=50\times 1\), and for _simple SVD_ and _BDSVD_ with \(CF=25\times 1\), \(CF=35\times 1\) and \(CF=50\times 1\). The space-saving \(SS\) and the noise penalty \(\Xi\) are given relative to the traditional averaging bins.
## 6 Conclusions

One of the major contributors to the large data volumes is the long baselines of an array, as they influence the degree of data sampling required to avoid decorrelation of the astrophysical signal. With BDA, the sampling rate is baseline-dependent, as short baselines can be sampled far more coarsely, leading to a high compression rate. BDA is an established technique for compressing radio interferometric data. However, this technique results in irregularly sampled data which, while technically supported by the MS format via variable frequency bins spread over many spectral windows, requires the data to be restructured in ways that reduce performance when processing. This work shows an approach that uses a low-rank matrix approximation to achieve greater compression rates while significantly minimising smearing compared to BDA or traditional averaging.

Figure 5: MeerKAT telescope observing at 1.4 GHz during 166 min 40 s with a total bandwidth of 0.8 MHz. The PSF of a source at 2.25 deg is shown when the high-resolution dataset is compressed for \(CF=25\times 1\) with _BDSVD_ (top left), _simple SVD_ (top right), BDA (middle left) and traditional averaging (middle right), together with the vertical (bottom left) and horizontal (bottom right) directions of the normalised PSFs.

What is even more exciting is that, when the low-rank approximation and the baseline-dependent formalism are combined, _BDSVD_ effectively eliminates smearing within the FoV for the MeerKAT telescope and the EVN. However, this method has three caveats. Firstly, SVD is computationally expensive and may be impractical for large datasets when performed on every baseline independently. Fortunately, we show that the computations can be distributed across multiple compute nodes, as discussed in Algorithm 3. Secondly, _simple SVD_ and _BDSVD_ do not affect current calibration algorithms or software. The possibility of calibrating visibility data remains since the compressed data is decompressed before it is calibrated. This implies that the temporal and frequency resolutions are recovered and, consequently, the solution intervals required for calibration are not modified. We also note that the capability of current imaging software is maintained. A very important result to note is that the linear transformation between the compressed data, the compressed noise and the source is maintained, thus showing that it is possible to image the sky in the compressed domain if \(\mathbf{I}^{\mathrm{d}}_{pqk}\) is well understood. This opens a potential research direction that could see the data imaged directly in the SVD space. Thirdly, the implementation discussed in Section 2.2 describes a method which compresses BDA data to an MS-compatible format both on disk and in memory. By contrast, the SVD compression schemes described here produce singular values that must be expanded "on-the-fly" to full resolution in memory. There is, therefore, an interesting contrast between the two approaches, as the first decreases both the amount of data and the computation (FLOPs) performed by algorithms on that data.
This is achieved at the cost of implementation complexity, especially with regard to calibration techniques, which must reason about solution intervals that lie across multiple data points in the sparse domain in which BDA data lies. _Simple SVD_ and _BDSVD_ offer superior compression to BDA and traditional averaging, but require processing the data at full resolution, in exchange for reduced implementation complexity. Decompressing data "on-the-fly" is already offered in the PyData ecosystem by packages such as BLOSC (Haenel, 2014), which compresses data into chunks and decompresses them exactly into the L1 cache of a CPU. This reduces the memory intensity of an algorithm, which is beneficial since modern CPUs are starved for data by their memory system (Alted, 2010) and radio interferometric data is very large. In such a regime, it may be possible to transmit the singular values into the L1 cache of a CPU and expand the data to full resolution for consumption by algorithms. Additional possible future work would investigate an online compression-decompression strategy using the full archiving potential of the SVD-related methods.

For real data that is contaminated with Radio Frequency Interference (RFI), the RFI power is distributed across all visibilities when the SVD is decompressed. There are two ways to avoid this situation: either the RFI is removed from the data before applying the SVD, or the SVD itself is used to remove the RFI during compression. The latter requires intense investigation as future work, since removing the RFI is equivalent to removing the higher singular-value components. Removing higher singular-value components is problematic because it is unclear where weak radio signals will begin to be suppressed in the process. Real data usually has a few entries flagged for some reason. Interesting future work would be to investigate how these flagged entries affect the singular values. We expect that the situation will fall somewhere between the two scenarios below:

* If the flagged real data entries belong to the same neighbourhood and are assigned the same value, this increases the similarity in the neighbourhood and hence the SVD will result in a lower rank (consequently, the compression factor will be larger) compared to no flagging; and
* If the flagged real data entries are not from the same neighbourhood (e.g. randomly flagged data), the rank of the SVD will be higher (consequently, the compression factor will be smaller) compared to no flagging.

Figure 6: MeerKAT telescope observing at 1.4 GHz during 166 min 40 s with a total bandwidth of 0.8 MHz. The averages of the radial and tangential PSF resolutions measured at the FWHM are shown at various sky positions when traditional averaging and BDA with \(CF=25\times 1\) and \(CF=50\times 1\), and _simple SVD_ and _BDSVD_ with \(CF=25\times 1\), \(CF=35\times 1\) and \(CF=50\times 1\), are applied. The PSF is not distorted at the edges of the FoV when the SVD-related methods are applied.

To address this problem, a potential solution would be to perform in-painting on the model data where flags are present and then use this in-painted data to fill in the flagged entries in the uncompressed data prior to applying the SVD.

## Acknowledgements

This work is based upon research supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation.
The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. MA acknowledges support from Rhodes University.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2304.04559
Event-based Camera Tracker by $\nabla$t NeRF
When a camera travels across a 3D world, only a fraction of pixel value changes; an event-based camera observes the change as sparse events. How can we utilize sparse events for efficient recovery of the camera pose? We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the temporal gradient of the scene, we augment NeRF's camera pose as a time function. When the input pose to the NeRF coincides with the actual pose, the output of the temporal gradient of NeRF equals the observed intensity changes on the event's points. Using this principle, we propose an event-based camera pose tracking framework called TeGRA which realizes the pose update by using the sparse event's observation. To the best of our knowledge, this is the first camera pose estimation algorithm using the scene's implicit representation and the sparse intensity change from events.
Mana Masuda, Yusuke Sekikawa, Hideo Saito
2023-04-07T16:03:21Z
http://arxiv.org/abs/2304.04559v1
# Event-based Camera Tracker by \(\nabla_{t}\)NeRF

###### Abstract

When a camera travels across a 3D world, only a fraction of pixel value changes; an event-based camera observes the change as sparse events. How can we utilize sparse events for efficient recovery of the camera pose? We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the temporal gradient of the scene, we augment NeRF's camera pose as a time function. When the input pose to the NeRF coincides with the actual pose, the output of the temporal gradient of NeRF equals the observed intensity changes on the event's points. Using this principle, we propose an event-based camera pose tracking framework called TeGRA which realizes the pose update by using the sparse event's observation. To the best of our knowledge, this is the first camera pose estimation algorithm using the scene's implicit representation and the sparse intensity change from events.

## 1 Introduction

Camera localization/tracking is one of the fundamental functionalities of computer vision. It is used in many applications such as automotive, augmented reality, and robotics. Event-based cameras detect sparse intensity changes with extremely high temporal resolution (\(>\)\(10,000\) fps). This unique feature makes them suitable sensors for tracking fast-moving scenes, and many researchers have been exploring several approaches to utilize such high-speed observation. Recently, [10, 21] showed that the pose & motion can be recovered by minimizing the error between the estimated intensity change and integrated events (Fig.2). Thanks to the low-latency nature of events, their method works well even under rapid camera motion. However, they need a dense operation to differentiate the error w.r.t pose & motion; it cannot take advantage of the _sparsity_ of events. Therefore, the computational cost increases linearly with the processing rate, making it difficult to run the algorithm in real-time on devices with limited computational resources. NeRF [37] is an implicit light-field representation using a neural network, which enables a unified perception of the 3D world that is difficult for existing explicit MAPs (e.g., CAD models). Many NeRF extensions have been proposed [54] to model complex 3D scenes; we believe these advancements make the NeRF representation a novel candidate for representing a 3D MAP for camera localization/tracking. Utilization of NeRF as a 3D MAP for camera pose localization has already been explored [30, 62]. Their basic idea for realizing camera pose estimation is minimizing the error between the estimated intensity frame from NeRF and the observed intensity frame, w.r.t the input camera pose (Fig.3). However, these methods cannot estimate the camera pose by using sparse intensity changes. These existing studies motivate one question: how can we utilize sparse events to recover the camera pose without converting them into dense frames? We show the following: when the coordinates of events (where the event camera detects intensity changes) and the current camera pose estimate at the event's time \(S_{t}\) are input to NeRF, it outputs the intensities at those points and times. By viewing the input camera pose to NeRF as a function of time, we found that the temporal gradient of the intensity w.r.t the event's timestamp is an estimate of the intensity change at that pixel.
Pose & motion are obtained by minimizing \(|\)estimated intensity changes \(-\) observed intensity changes (events)\(|\) (Theorem 1, Fig.1). Based on this principle, we propose an event-based camera pose tracking framework based on the minimization of \(\nabla_{t}\)NeRF\((S_{t})\), called TeGRA. Unlike the conventional dense (frame-based) approach, which operates in intensity-space, TeGRA works sparsely (event-based) in gradient-space; it updates the pose & motion by evaluating the error only at the pixels where events have been observed. Thanks to this sparse mechanism, the number of pixels to be evaluated is 99.8% lower than in the conventional dense algorithm on our created event dataset. To the best of our knowledge, this is the first approach to realizing camera pose estimation using the implicit representation of the 3D scene and the sparse observation of intensity changes. We provide a theoretical proof of TeGRA. Furthermore, we created a photo-realistic event dataset for 6DoF camera pose tracking with ground-truth poses, called EvTrack (EVent-based TRACKing dataset). Using EvTrack, we experimentally validate the concept.

## 2 Related Work

### Camera Pose Tracking from Intensity Frame

There is a long history in the field of camera pose tracking. Many algorithms are based on frame-based observation, namely video sequences. Methods based on KLT [5, 34] are among the most popular visual tracking algorithms. KLT computes the camera pose by aligning the observed intensity frame with the known scene's intensity map using an image gradient. Recently, deep neural network (DNN) based feature extractors have been utilized to exploit richer features [13] than the raw pixel values. These KLT-based methods work well in many scenarios. However, they easily collapse in a fast-moving scene, which induces a significant difference between the observed frame and the estimated frame. This difference makes the gradient-based algorithms become trapped in a local minimum. Another line of research utilizes DNNs to directly regress the pose between pairs of images [7, 8, 26, 28, 55]. They have an advantage in computational efficiency (over the iterative gradient-based algorithms) because they can update the pose by a single forward pass of the network. They may also suffer from performance degeneration when there is a significant difference between the pairs of images due to fast camera motion. Whether gradient-based or regression-based, the problems due to fast camera motion might be mitigated by using an expensive high-speed camera. However, processing frames at a higher rate is infeasible due to the increased computational complexity.

### Camera Pose Tracking from Events

An event-based camera [43, 44, 49, 53, 14] is a bio-inspired vision sensor that mimics biological retinas. It differs from a frame-based camera in its H/W design; it reports per-pixel intensity changes as asynchronous event streams. Thanks to this unique operation principle, event-based cameras have significant advantages over conventional frame-based cameras, e.g., low latency, high dynamic range (HDR), and blur-free observation. The most important feature of the event-based camera for camera pose tracking would be its high temporal resolution. Their temporal resolution is equivalent to \(>\!10,000\) fps, making them suitable sensors for robust tracking in fast-moving scenes. **Use of Event frames for Camera Pose Estimation.** There have been many attempts to utilize the distinct features of the event-based camera for tracking [1, 2, 10, 12, 19, 21, 22, 47, 21].
Recently, KLT has been extended to event data to realize robust camera pose tracking in high-speed and HDR scenarios [21, 10, 22]. These methods update camera pose and motion by minimizing the error in the integrated event frame between estimation and observation (Fig.2). The estimation is computed from the current estimate of the pose & motion using a pre-built 3D intensity map. By utilizing the low-latency nature of events, their methods work well even under rapid camera motion. However, computing the derivatives of the error w.r.t pose requires computationally intensive dense rendering of the 3D map. The dense computation needs to be repeated until convergence for each frame. Therefore, running the algorithm in real-time on devices with limited computational resources is difficult. Our goal in this study is to realize an efficient sparse algorithm for camera pose tracking using the implicit representation of a 3D scene.

### Implicit Scene Representations

Representing data such as images, videos, and shapes using implicit neural representations has gained much attention, e.g., for data compression [15, 35], novel-view synthesis [36, 37, 38], 3D-shape modeling [3, 11, 40, 50], and image registration [30, 62], to name a few. NeRF [37] utilizes implicit neural representations to represent the 4D light-field. Given a set of images paired with camera poses, NeRF learns the intensities of each pixel for a given camera pose. Each pixel's intensity is computed by integrating the RGB color and density along the corresponding ray using the volumetric rendering technique. Due to its flexibility, NeRF extensions are exploding beyond the original application of novel-view synthesis; e.g., 3D-shape reconstruction [39, 56], lighting disentanglement [59, 64, 9, 51], image editing [61, 25, 33, 25], object separation [4, 60], semantic label propagation [65], modeling time-varying objects [41, 45, 29, 27, 29, 32], and depth estimation [58, 24]. **Use of NeRF for Camera Pose Estimation** These novel functionalities realized by NeRF, such as modeling lighting and moving objects, would enable a unified perception of the 3D world that is difficult for existing explicit MAPs (e.g., CAD models). Furthermore, recent progress in NeRF extensions has enabled modeling of large environments, such as a large house [17] and an entire city [54]. We believe these advancements make the NeRF representation a novel candidate for representing a 3D MAP for camera localization/tracking. iNeRF [62] uses the inverse of NeRF to estimate the camera pose. BARF [30], NeRF- [57], iMAP [52] and NICE-SLAM [66] realized simultaneous camera pose estimation and 3D scene reconstruction. Their basic idea for realizing camera pose estimation is minimizing the error between the estimated intensity frame from NeRF and the observed intensity frame, w.r.t the input camera pose (Fig.3). These methods cannot estimate the camera pose by using sparse intensity changes. We aim to derive a camera pose tracking algorithm using sparse observations of intensity changes (i.e., events) to realize efficient camera pose tracking.

Figure 2: Tracking using event stream and explicit scene

Figure 3: Tracking using intensity frame and implicit scene

Figure 4: Tracking using event stream and implicit scene (TeGRA)
## 3 Method

### Preliminaries

**Problem Statement** Our goal in this study is to develop an efficient camera pose tracking algorithm TeGRA using sparse observation of an intensity-change event (IC-event) stream \(\mathbf{e}_{t}\) as follows: \[\mathrm{TeGRA}:(\mathbf{e}_{t},S_{t}^{\mathrm{ini}},\dot{S}_{t}^{\mathrm{ini}})\mapsto(S_{t}^{\mathrm{opt}},\dot{S}_{t}^{\mathrm{opt}}), \tag{1}\] where \((S_{t}^{\mathrm{ini}},\dot{S}_{t}^{\mathrm{ini}})\) and \((S_{t}^{\mathrm{opt}},\dot{S}_{t}^{\mathrm{opt}})\) are the initial and optimized pose & motion at time \(t\), respectively. TeGRA utilizes a differentiable implicit representation of the static 3D world. **NeRF** This study assumes that the 3D scene is implicitly represented by NeRF [37]. NeRF, \(G_{\mathcal{M}}\), was initially proposed for novel-view synthesis; it represents the scene using a neural network that is differentiable w.r.t its input pose \(S\). The parameters of the NeRF, \(\mathcal{M}\), are pre-trained before tracking. NeRF \(G_{\mathcal{M}}\) takes an image coordinate \((x,y)\) and the 6DoF pose \(S\in\mathfrak{se}(3)\) of the camera as inputs and renders the RGB intensity \(\mathbf{c}\) at that coordinate; \[G_{\mathcal{M}}(x,y,S)=\mathbf{c}. \tag{2}\] **IC-Event** The IC-event stream \(\mathbf{e}_{t}\) is a set of events observed in the time interval \([t,t+\tau]\) as follows: \[\mathbf{e}_{t}=[e^{1},...,e^{i},...,e^{M}];\;\;e^{i}=[e^{i}_{x},e^{i}_{y},e^{i}_{u},e^{i}_{r}]^{\mathrm{T}}, \tag{3}\] where \((e^{i}_{x},e^{i}_{y})\in\mathbb{R}^{2}\) are the image coordinates where the intensity change has been detected, \(e^{i}_{u}\in\mathbb{R}\) is the timestamp of the change, and \(e^{i}_{r}\in\mathbb{R}\) is the intensity change value. The intensity change value \(e_{r}(x,y)\) at \((x,y)\) within a time interval \(\Delta t\) is defined using the _true_ intensity \(L(x,y,t)\) as follows: \[e_{r}(x,y)=\frac{L(x,y,t+\Delta t)-L(x,y,t)}{\Delta t}. \tag{4}\] Events are detected where the intensity change \(e_{r}(x,y)\) exceeds a predefined threshold \(\delta\). For ease of discussion, we define \(\bar{e}^{i}_{u}\), the relative timestamp w.r.t time \(t\); \(\bar{e}^{i}_{u}:=e^{i}_{u}-t\). That is, \(\bar{e}^{i}_{u}\) changes depending on the time \(t\) we consider. Some event-based cameras, such as Celex-V [14], directly detect the intensity change \(e_{r}\); alternatively, the IC-event can be obtained from high-speed video data. We leave the evaluation using the binary event for future work1. Footnote 1: At this time, we consider there are two possible approaches to make TeGRA compatible with the binary event: 1) modify the loss function of (5) to adapt to the binary event (Sec.5.2); 2) convert the binary event to an IC-event using the timestamp (supplement-D). In this work, we use the IC-event for simplicity and leave the exploration for future work.

### The answer of \(\nabla_{t}\)NeRF\((S_{t})=0\) is the true pose & motion

By viewing the input camera pose to NeRF as a function of time, we found that the camera pose & motion \((S,\dot{S})\) can be recovered by minimizing the error between the temporal gradient of the 3D scene represented as NeRF \(G_{\mathcal{M}}\) and the IC-event stream \(\mathbf{e}\) (Fig.5: visual explanation).
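Before stating this formally, here is a minimal PyTorch-style sketch of one such update, where the temporal gradient of the rendered intensity is obtained with automatic differentiation. It is illustrative only: the wrapper `nerf(x, y, pose)` and all variable names are our assumptions, not the authors' implementation.

```python
import torch

def tegra_step(nerf, events, S, S_dot, lr=5e-5):
    # events: (M, 4) rows of (e_x, e_y, relative timestamp e_u, intensity change e_r)
    # S, S_dot: current pose & motion estimates, leaf tensors with requires_grad=True
    ex, ey, eu, er = events.unbind(dim=1)
    eu = eu.clone().requires_grad_(True)
    pose = S + eu[:, None] * S_dot                  # pose as a function of time
    intensity = nerf(ex, ey, pose)                  # G_M(e_x, e_y, S_t + e_u * S_dot)
    # temporal gradient of the rendered intensity w.r.t. each event's timestamp
    dI_du = torch.autograd.grad(intensity.sum(), eu, create_graph=True)[0]
    loss = ((dI_du - er) ** 2).sum()                # match observed intensity changes
    gS, gSdot = torch.autograd.grad(loss, (S, S_dot))
    with torch.no_grad():                           # gradient-descent pose update
        S -= lr * gS
        S_dot -= lr * gSdot
    return loss.detach()
```

Iterating this step over randomly sampled event pixels is what realizes the sparse pose update formalized next.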
**Theorem 1**.: _The minimizer of \(|\nabla_{t}\mathrm{NeRF}(S_{t})-\mathrm{event}|\) is the true pose \(\&\) motion (\(S_{t}^{\mathrm{gt}},\dot{S}_{t}^{\mathrm{gt}}\)):_ \[S_{t}^{\mathrm{gt}},\dot{S}_{t}^{\mathrm{gt}}=\underset{S,\dot{S}}{\mathrm{argmin}}\underbrace{\sum_{i}\left\|\frac{\partial G_{\mathcal{M}}\left(e^{i}_{x},e^{i}_{y},S_{t}+\bar{e}^{i}_{u}\dot{S}_{t}\right)}{\partial\bar{e}^{i}_{u}}-e^{i}_{r}\right\|_{2}}_{:=\mathcal{L}}. \tag{5}\]

Proof.: To prove Theorem 1, we show that the \(i\)-th term of Eq.(5) equals zero when we have the true pose & motion \((S_{t}^{\mathrm{gt}},\dot{S}_{t}^{\mathrm{gt}})\). Now, consider the temporal gradient of the NeRF of Eq.(2) for the \(i\)-th event at time \(e^{i}_{u}\): \[\begin{split}&\frac{\partial G(e^{i}_{x},e^{i}_{y},S_{t}+\bar{e}^{i}_{u}\dot{S}_{t})}{\partial\bar{e}^{i}_{u}}\\ &=\lim_{\Delta u\to 0}\frac{G(e^{i}_{x},e^{i}_{y},S_{t}+(\bar{e}^{i}_{u}+\Delta u)\dot{S}_{t})-G(e^{i}_{x},e^{i}_{y},S_{t}+\bar{e}^{i}_{u}\dot{S}_{t})}{\Delta u}.\end{split} \tag{6}\] From the definition of NeRF in Eq.(2), \(L(x,y,t)\) equals the NeRF's output when the estimated pose & motion is _true_: \[L(x,y,t+\bar{e}^{i}_{u})=G(x,y,S_{t}^{\mathrm{gt}}+\bar{e}^{i}_{u}\dot{S}_{t}^{\mathrm{gt}}). \tag{7}\] Plugging this relation into Eq.(6), \[\begin{split}&\frac{\partial G(e^{i}_{x},e^{i}_{y},S_{t}^{\mathrm{gt}}+\bar{e}^{i}_{u}\dot{S}_{t}^{\mathrm{gt}})}{\partial\bar{e}^{i}_{u}}\\ &=\lim_{\Delta u\to 0}\frac{L(e^{i}_{x},e^{i}_{y},t+\bar{e}^{i}_{u}+\Delta u)-L(e^{i}_{x},e^{i}_{y},t+\bar{e}^{i}_{u})}{\Delta u}\\ &=\lim_{\Delta u\to 0}\frac{L(e^{i}_{x},e^{i}_{y},e^{i}_{u}+\Delta u)-L(e^{i}_{x},e^{i}_{y},e^{i}_{u})}{\Delta u}.\end{split} \tag{8}\] This equals the definition of \(e^{i}_{r}\) in Eq.(4) when \(\Delta t\) is sufficiently small.

### Sparse Tracking Algorithm: TeGRA

Using Theorem 1, we propose an event-based camera pose-tracking algorithm called TeGRA. The algorithm minimizes \(\mathcal{L}\) in Eq.(5) using gradient descent (Fig. 4). The input to TeGRA is the IC-event stream \(\mathbf{e}_{t}\) at time \(t\) and the initial estimate of pose & motion (\(S_{t}^{ini},\dot{S}_{t}^{ini}\)). To obtain the derivative to update pose & motion, we differentiate the loss \(\mathcal{L}\) w.r.t \((S_{t},\dot{S}_{t})\) through the temporal gradient of \(G_{\mathcal{M}}\) at each event's timestamp \(e_{u}^{i}\). Then, pose & motion are updated using the sum of all events' contributions. See listing 1 for pseudo-PyTorch code.

## 4 Experiments

To demonstrate the effectiveness of the proposed TeGRA, we created a 6DoF camera pose tracking dataset for event data, called EvTrack, with ground-truth camera poses. We used the BOP challenge 2020 scenes [23] because they are photo-realistic and represent the indoor localization scenario well. BOP uses BlenderProc [16] to render realistic images using ray tracing. We first show the tracking result of TeGRA as a proof of concept (POC) (Sec.4.3). Next, we show an extensive quantitative comparison with a dense algorithm (Fig.3) using intensities, in terms of pose estimation accuracy (Sec.4.4).

### Event-based camera pose tracking dataset (EvTrack)

The dataset consists of five scenes, _mix_, _hb_, _lm_, _tyl_ and _ycbb_; the last four scenes correspond to the BOP data split, and the _mix_ scene includes all four to simulate an ordinary indoor situation.
We used the _mix_ scene for the proof of concept in the tracking scenario and use the other four for the quantitative evaluation in terms of pose estimation accuracy. We generated 500 images for NeRF training and three camera trajectories (Fig.6, right) for each of the five scenes, simulating a drone hovering around a room. The IC-event stream is generated by using pairs of consecutive images (1,000 in total), \(\{L(t),L(t+\nu)\}\) (\(L(t)\in\mathbb{R}^{H\times W}\)), by subtracting them. The size of the images \((H,W)\) is \((480\times 640)\). The threshold \(\delta\) for triggering an event is set to \(0.05\) (intensities are normalized to \([0,1]\)) for all scenes. In our experiment, we converted the RGB IC-events to grayscale IC-events since most event cameras detect grayscale intensity changes. Each event stream is generated using five consecutive frames, assuming the motion is approximately linear within the interval.

### Implementation

We use PyTorch [42] to train the NeRF model and run TeGRA for tracking\({}^{2}\). **Training** We follow the network settings from the original NeRF with minor modifications; we use the softplus activation for the volume density \(\sigma\), as recommended in BARF [30], for improved stability. **Tracking** We add an RGB-to-gray layer (Sec.3.3) to use grayscale IC-events. We randomly select 750 event pixels from the observed events and update the pose & motion n_itr (1,000) times; this amounts to 0.2% of the pixels used by the dense algorithm. The learning rate \(\eta\) of \(S,\dot{S}\) is set to \(5\times 10^{-5}\), exponentially decaying to \(5\times 10^{-6}\) toward n_itr. The temporal gradient of intensity w.r.t the event timestamp \(e_{u}\) is computed using PyTorch's autograd.grad.

Figure 5: Pose & motion estimation (visual explanation of Theorem 1)

### Proof of Concept (Tracking)

For the POC of the proposed idea, we applied TeGRA for tracking. We used _seq0_ from the _mix_ scene. We use the estimated pose & motion from the previous timestep to initialize \((S_{t}^{\text{ini}},\dot{S}_{t}^{\text{ini}})\) in the next timestep, as discussed in Sec.3.3. The results are shown in Fig.6. We confirmed that the pose is successfully tracked without drifting3. Finally, the pose estimation error was \((0.13^{\circ},0.0003)\). The average number of events per pose update was 0.2% of the entire pixel count. Footnote 3: See supplement- and the frame-based dense tracking algorithm with which we compare the accuracy (supplement-C).

### Quantitative Benchmarks (Pose Estimation)

To quantitatively evaluate the performance of TeGRA, we compare its pose estimation accuracy with the dense algorithm using the difference in intensity (Fig.3). In this experiment, we randomly initialized the pose and motion for each stream. The results are shown in Tab.1. Both achieved comparable accuracy in maintaining camera-pose tracking, while ours used only 2.4% of the pixels.

## 5 Conclusion

How can we utilize sparse events for recovering the camera pose? Answer: the camera pose is recovered by minimizing the error between the temporal gradient of the scene represented as a NeRF and sparse events. Our tracking algorithm, TeGRA, can update the pose using sparse event points. This mechanism is a significant advantage over the existing image-space algorithms, which require dense computation. We demonstrate TeGRA in a tracking scenario with unseen background clutter. We believe the proposed idea opens the door to realizing event-based camera pose tracking using an implicit 3D-scene representation.
This study focuses on demonstrating the algorithm in a naive setup; therefore, we leave large areas for future work, from both an experimental and an algorithmic perspective.

### Application to Real-World Data

One of our ultimate goals is to utilize the proposed method in practical autonomous driving scenarios. More specifically, as future work, we plan to apply TeGRA to Block-NeRF [54], which scaled up NeRF to a city-scale automotive environment. TeGRA could incorporate the mip-NeRF [6] rendering algorithm, which is key to realizing the large-scale modeling in Block-NeRF. The principle we present (Theorem 1) is compatible with a variety of NeRF variants as long as the 3D scene is represented in the form of Eq.(2), where \(G_{\mathcal{M}}\) is differentiable w.r.t the pose \(S\).

Figure 6: Camera pose tracking results from the _mix_ scene (Sec.4.3); ground-truth (cyan) and optimized trajectory (magenta) (visualized every 7 timesteps).

### Asynchronous Update using Binary-Event

In this study, we use IC-events instead of binary events, and a synchronous event stream instead of an asynchronous one. We chose this experimental setup mainly due to the implementational difficulties in generating event streams. It requires engineering effort to generate asynchronous binary-event streams, such as modifying event-camera simulators like ESIM [46]. To make the algorithm compatible with the binary polarity \(e_{p}\), the loss term in Eq.(5) needs to be modified slightly: \[\mathcal{L}_{bin}=\sum_{i}\left\|\mathrm{SoftSgn}\left(\frac{\partial G_{\mathcal{M}}\left(e_{x}^{i},e_{y}^{i},S_{t}+\bar{e}_{u}^{i}\dot{S}_{t}\right)}{\partial\bar{e}_{u}^{i}}\right)-e_{p}^{i}\right\|_{2}, \tag{9}\] where \(\mathrm{SoftSgn}\) is a soft version of the sign function, which maps a continuous intensity change into a (soft) polarity. We leave this exploration as future work.

### Speed Up

Thanks to the sparse mechanism of TeGRA, the number of pixels to be evaluated for computing the pose update is significantly lower than the entire pixel count (Sec.4.3). Speeding up NeRF for real-time rendering is an active research topic [63, 31, 20, 34]. For example, FastNeRF [20] utilizes separate networks for a position-dependent MLP and a direction-dependent MLP to speed up rendering. As discussed above, the proposed mechanism is compatible with other NeRF variants. We expect that combining our sparse mechanism with these approaches is a vital topic for realizing real-time tracking on mobile devices. We will incorporate the method and then evaluate the FLOPS and wall-clock time.

### Extension to SLAM

It is an exciting research direction to extend our algorithm to simultaneous localization and mapping (SLAM). NeRF is now emerging as an entirely new framework for SLAM [57, 52, 66, 30]. iMAP [52] is a pioneering work utilizing NeRF for realizing real-time SLAM. We expect that incorporating TeGRA will significantly speed up NeRF-based SLAM.
2307.05317
Automatic Generation of Semantic Parts for Face Image Synthesis
Semantic image synthesis (SIS) refers to the problem of generating realistic imagery given a semantic segmentation mask that defines the spatial layout of object classes. Most of the approaches in the literature, other than the quality of the generated images, put effort in finding solutions to increase the generation diversity in terms of style i.e. texture. However, they all neglect a different feature, which is the possibility of manipulating the layout provided by the mask. Currently, the only way to do so is manually by means of graphical users interfaces. In this paper, we describe a network architecture to address the problem of automatically manipulating or generating the shape of object classes in semantic segmentation masks, with specific focus on human faces. Our proposed model allows embedding the mask class-wise into a latent space where each class embedding can be independently edited. Then, a bi-directional LSTM block and a convolutional decoder output a new, locally manipulated mask. We report quantitative and qualitative results on the CelebMask-HQ dataset, which show our model can both faithfully reconstruct and modify a segmentation mask at the class level. Also, we show our model can be put before a SIS generator, opening the way to a fully automatic generation control of both shape and texture. Code available at https://github.com/TFonta/Semantic-VAE.
Tomaso Fontanini, Claudio Ferrari, Massimo Bertozzi, Andrea Prati
2023-07-11T15:01:42Z
http://arxiv.org/abs/2307.05317v1
# Automatic Generation of Semantic Parts for Face Image Synthesis

###### Abstract

Semantic image synthesis (SIS) refers to the problem of generating realistic imagery given a semantic segmentation mask that defines the spatial layout of object classes. Most of the approaches in the literature, other than the quality of the generated images, put effort in finding solutions to increase the generation diversity in terms of style _i.e._ texture. However, they all neglect a different feature, which is the possibility of manipulating the layout provided by the mask. Currently, the only way to do so is manually by means of graphical user interfaces. In this paper, we describe a network architecture to address the problem of automatically manipulating or generating the shape of object classes in semantic segmentation masks, with specific focus on human faces. Our proposed model allows embedding the mask class-wise into a latent space where each class embedding can be independently edited. Then, a bi-directional LSTM block and a convolutional decoder output a new, locally manipulated mask. We report quantitative and qualitative results on the CelebMask-HQ dataset, which show our model can both faithfully reconstruct and modify a segmentation mask at the class level. Also, we show our model can be put before a SIS generator, opening the way to a fully automatic generation control of both shape and texture. Code available at [https://github.com/TFonta/Semantic-VAE](https://github.com/TFonta/Semantic-VAE).

Keywords: Image Synthesis, Variational Autoencoder, Face Editing.

## 1 Introduction

The task of Semantic Image Synthesis (SIS) consists in generating a photo-realistic image given a semantic segmentation mask that defines the shape of objects. The mask is usually an image in which the pixel values define a specific semantic class (like eyes, skin, hair, _etc._ in the case of a human face). This allows for accurately defining the spatial layout and shape of the generated images, while maintaining a high degree of freedom in terms of textures and colors. Indeed, these can be randomly generated [16] or obtained by extracting a specific style from a reference image [7, 9]. A nice feature of SIS methods is that the semantic mask can be manipulated to alter the shape of objects in the generated samples. However, currently this is done manually, using custom painting software that allows the user to modify the shape of one or more mask parts. Attempts at automatic manipulation of face part shapes have been made, though with different techniques, such as using a 3D deformable model of the face [3]. Whereas manual alteration of the semantic masks is fun, it turns out to be impractical when the objective is to modify the shape of a large number of images. In an attempt to overcome this limitation, in this paper we explore the problem of the automatic generation and manipulation of classes in segmentation masks, and propose a method that allows generating and editing the shape of any number of parts. The proposed model can be used to produce a large variety of novel semantic masks that can then be used in conjunction with any SIS model to generate previously unseen photo-realistic RGB images.
This is achieved by designing an architecture composed of an encoder that embeds each of the semantic mask parts separately, a recurrent module composed of a series of bi-directional LSTMs [11] that learns the relationships between the shapes of different mask parts and, finally, a decoder that maps the latent representation back into a realistic semantic mask. The model is trained as a Variational Autoencoder (VAE), thus combining a reconstruction loss with a KL divergence in order to induce a specific distribution in the latent space. This enables the generation, interpolation or perturbation of semantic classes; these specific features, to the best of our knowledge, are still unexplored in the literature. Overall, the main contributions of this paper are the following:

* we explore the novel problem of automatic generation and editing of local semantic classes in segmentation masks, independently from the others;
* we propose a novel architecture combining a VAE and a recurrent module that learns spatial relationships among semantic classes by treating them as elements of a sequence, under the observation that the shape of each part has an influence on the surrounding ones. More in detail, each part embedding is subsequently fed into the LSTM block so as to account for shape dependencies, and then employed by the decoder to generate the final mask. The proposed architecture can finally be used in combination with any SIS architecture to boost the shape diversity of the generated samples;
* we quantitatively and qualitatively validate our proposal in the task of face parts editing, and report an extensive analysis of the advantages, limitations and challenges.

## 2 Related Works

Given that no prior works addressed the problem presented in this paper, in the following we summarize some recent literature works on semantic image synthesis and variational autoencoders. **Semantic Image Synthesis.** Semantic Image Synthesis approaches can be divided into two main categories: diversity-driven and quality-driven. Both of them take inspiration from and improve upon the seminal work of Park _et al._, named SPADE [9], where semantic image synthesis is achieved by means of custom, spatially-adaptive normalization layers. Methods in the former category focus on the task of generating samples whose shape is conditioned on semantic masks, while the style is generated randomly in order to achieve a high degree of multi-modality. Some examples of these approaches are [10, 16]. The trend here points towards increasing the granularity of the generated texture; for example, in CLADE [13] styles are generated at the class level, while INADE [12] is able to generate instance-specific styles by sampling from a class-wise estimated distribution. On the other side, quality-driven methods try to extract a specific style from a target image and to apply it over the generated results, in an attempt to maintain both the shape defined by the mask and the texture defined by a reference image. An example of a paper falling in this category is MaskGAN [7], in which a style mapping between an input mask and a target image is achieved using instance normalization. Also in this case, efforts are put into finding solutions to increase the precision and granularity of the style control. To this aim, Zhu _et al._ developed SEAN [18], a method that is able to extract the style class-wise from each of the different semantic parts of an image and map it locally over the corresponding area of the input mask.
Another work following the same trend is SC-GAN [16]. Overall, it turns out clearly that none of the recent literature works deal with the problem of locally manipulating the face shape by acting on segmentation masks. **Variational Autoencoders.** Autoencoders, introduced in [8], were proposed as a way to achieve a compressed latent representation of a set of data, but they lack generation capabilities. On the contrary, Variational Autoencoders (VAEs) [6] describe data generation through a probabilistic distribution. Indeed, once trained using a combination of a reconstruction loss and the Kullback-Leibler divergence, they can generate new data by simply sampling a random latent distribution and feeding it to the decoder. There exist several variations of the VAE, such as Info-VAE [17], \(\beta\)-VAE [5] and many more [1, 14].

## 3 Network Architecture

The main objective that guided the design of the model architecture is that of performing automatic manipulation and generation of semantic masks, independently for each class. A semantic segmentation mask can be represented as a \(C\)-channel image, where each channel is a binary image containing the shape of a specific object class, _i.e._\(M\in[0,1]^{C\times H\times W}\). So, each pixel belongs to a unique class, _i.e._ it has value 1 only in a single channel, and each class shape is complementary to all the others, _i.e._ there is no intersection between the semantic classes. The challenge behind manipulating or generating a specific semantic class in a segmentation mask is that its shape, and the shape of all its surrounding classes, need to be adapted so that the above properties are maintained. At the same time, the spatial arrangement of each class has also to be realistic, since it is a scenario-dependent property. In the case of facial features, the spatial relations of the different face parts need to be preserved; for example, the nose should be roughly centered between the eyes.

### Architecture

To account for the above challenges, we designed our proposed architecture (Fig. 1) to have 4 main components: (1) an MLP \(\mathcal{M}\) to independently encode the mask channels into a latent representation \(m_{e}\). This allows us to operate on the mask channels directly in the compressed space; (2) an LSTM-Feed Forward block \(\mathcal{L}\) composed of three bi-directional LSTM layers to process the encoded mask channels \(m_{e}^{j}\) and account for possible misalignments resulting from manipulating a semantic class, and a feed-forward block \(\mathcal{F}\) to further re-arrange the processed mask encodings; (3) finally, a convolutional decoder \(\mathcal{D}\) to reconstruct the complete semantic mask \(M_{o}\). **MLP Encoder.** The encoder \(\mathcal{M}\) is a simple MLP made up of three linear layers, each followed by a ReLU activation function. Each mask channel is first flattened so that the input mask has size \(M\in\mathbb{R}^{C\times H^{2}}\), where \(H=W=256\) is the spatial size of the mask; each linear layer of the encoder has a hidden size of 256, so that \(m_{e}=[m_{e}^{1},\cdots,m_{e}^{C}]=\mathcal{M}(M)\in\mathbb{R}^{C\times 256}\). **Bi-directional LSTM Block.** The bi-directional LSTM [11] block was designed to process the encoded mask channels \(m_{e}\) one after another, as if they were frames of a temporal sequence. The goal is that of correcting possible inconsistencies resulting from manipulating or generating a class embedding \(m_{e}^{c}\), based on the information of the other classes.
Intuitively, if we change the shape of a facial part in the mask, _e.g._ the nose, the surrounding parts need to be adjusted so that the combined result looks realistic and artifact-free.

Figure 1: Proposed architecture: the segmentation mask \(M\in[0,1]^{C\times 256\times 256}\) is processed so that each channel (class) \(c\in C\) is flattened and passes through an MLP encoder to obtain class-wise embeddings \(m_{e}^{c}\) for each semantic class. The embeddings \(m_{e}=[m_{e}^{1},\cdots,m_{e}^{C}]\) then pass through a set of three bi-directional LSTM layers followed by a feed-forward block that learn relationships across the classes. The processed embeddings are finally reshaped to form a set of feature maps \(m_{d}\in\mathbb{R}^{C\times 16\times 16}\), and then fed to a convolutional decoder which outputs a new mask \(M_{o}\in[0,1]^{C\times 256\times 256}\). The model is trained with (1) a pixel-wise weighted cross-entropy loss (\(\mathcal{L}_{wCE}\)), and (2) a \(KL\)-divergence loss \(\mathcal{L}_{KL}\) applied to the embeddings \(m_{e}\) so as to push them towards following a \(\mathcal{N}(0,1)\) distribution, enabling their generation from noise or manipulation.

One problem arising from using a recurrent module is that of choosing the order in which the channels are processed. Temporal sequences have a unique ordering implicitly defined by the time flow, whereas in our scenario there is no clear or unique way of choosing the order in which to process the face parts, since they are simply parts of a spatial layout. This motivated us to opt for the bi-directional variant of the LSTM; indeed, the latter processes the sequence in both directions (first to last, and last to first), so that each class embedding is influenced by all other classes, not only by the previously processed ones. Each class embedding \(m_{e}^{c}\) is thus processed, and provided as hidden state both for the subsequent \(m_{e}^{c+1}\) and previous \(m_{e}^{c-1}\) classes. In addition, differently from the standard use of LSTMs, where only the last processed embedding keeps flowing through the network, we also store the embeddings at the intermediate steps \(m_{e}^{c}\). In doing so, once all the \(C\) embeddings have been processed, we end up with the same number of \(C\) embeddings, one for each class. Finally, following the same principle of [15], a feed-forward block composed of two linear layers equipped with a GeLU [4] activation function is stacked after the LSTMs so as to make the embeddings better fit the input of the decoder. **Decoder.** Finally, the convolutional decoder \(\mathcal{D}\) is responsible for learning to reconstruct the segmentation mask from the \(C\) embeddings resulting from the previous steps. In particular, the \(C\) embeddings are reshaped into a set of \(C\) feature maps \(m_{d}\in\mathbb{R}^{C\times 16\times 16}\). These are processed by 4 residual blocks, equipped with SiLU [2] activation function and group normalization. The decoder outputs the reconstructed segmentation mask \(M_{o}\in\mathbb{R}^{C\times 256\times 256}\).

### Loss Functions

The model is trained to self-reconstruct the input segmentation mask, without any other specific strategy to guide the manipulation process. The output mask is generated by minimizing a pixel-wise class prediction error, using a cross-entropy loss. In particular, we used a weighted variant of the standard cross entropy \(\mathcal{L}_{CE}\).
More in detail, we observed that the problem resembles a highly imbalanced classification problem; indeed, smaller parts such as, for face masks, the eyes or the nose, are significantly under-represented in the data, _i.e._ they occupy a smaller number of pixels with respect to larger parts such as skin or hair, and ultimately weigh less in the overall loss computation. So, the weights are set considering this imbalance: smaller weights will be assigned to bigger parts, and bigger weights will be assigned to smaller parts. We calculate the weights \(\mathbf{w}=[w_{0},\cdots,w_{C}]\) based on the overall training set statistics, in the following way: \[w_{c}=1-\frac{1}{NHW}\sum_{i=1}^{N}\sum_{j=1}^{H}\sum_{k=1}^{W}x_{c,i,j,k}\quad\forall\ c\in C \tag{1}\] where \(N\) is the number of samples in the training set, and \(H\) and \(W\) are the height and width of the semantic mask, respectively. Given that each of the mask channels can contain only values of one or zero, this equation provides a series of \(C\) weights that rank each of the semantic parts by their average size. The equation of the final weighted cross entropy \(\mathcal{L}_{wCE}\) therefore becomes: \[\mathcal{L}_{wCE}=-\sum_{x}\mathbf{w}(y(x))\,y(x)\log(\hat{y}(x)) \tag{2}\] where \(y(x)\), \(\hat{y}(x)\), and \(\mathbf{w}(y(x))\) are the ground-truth class labels, the predicted labels, and the weight for the ground-truth class at pixel \(x\), respectively. In addition to the weighted cross entropy, a KL loss \(\mathcal{L}_{KL}\) is used to push the latent codes of each of the parts to have zero mean and unit variance and allow the generation process, where a random latent code is sampled from \(\mathcal{N}(0,1)\). Ultimately, the full loss utilized to train the model is: \[\mathcal{L}=\mathcal{L}_{wCE}+\lambda\mathcal{L}_{KL} \tag{3}\] where \(\lambda\) is the KL weight and is set to 0.0005 in all the experiments.

## 4 Experimental Results

In this section, we report the results of an experimental validation. We show both quantitative and qualitative results, in terms of reconstruction accuracy and different generation or manipulation tasks. In fact, despite our goal being that of performing editing of semantic masks at the class level, we also need to make sure the reconstruction process does not degrade the segmentation accuracy of the input masks and in turn compromise the subsequent image synthesis. As dataset to train and test our model, we used CelebAMask-HQ [7], which is composed of 30K high-resolution face images (1024\(\times\)1024) along with the corresponding segmentation masks. Out of the 30K samples, 28K were used for training and 2K for testing.

### Reconstruction, Generation and Perturbation

Given that no prior works addressed this particular problem, before analyzing the ability of the model to manipulate the mask parts, we compare our solution with some baseline architectural designs in terms of reconstruction accuracy, in a sort of extended ablation study. Reconstruction results are reported in Table 1 in terms of pixel-wise classification accuracy (Acc) and Mean Intersection over Union (mIoU).
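As a brief implementation aside, a minimal PyTorch sketch of the class weighting of Eq. (1) and the weighted cross-entropy of Eq. (2) could look as follows. It is illustrative only: the function and variable names, and the use of torch.nn.functional.cross_entropy, are our assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def class_weights(masks):
    # Eq. (1): masks is (N, C, H, W) one-hot; larger classes get smaller weights.
    return 1.0 - masks.float().mean(dim=(0, 2, 3))      # shape (C,)

def weighted_ce(logits, target_mask, w):
    # Eq. (2): pixel-wise cross-entropy weighted by the ground-truth class.
    target = target_mask.argmax(dim=1)                   # (B, H, W) class indices
    return F.cross_entropy(logits, target, weight=w)

# Hypothetical usage, with lambda = 0.0005 as in Eq. (3):
# w = class_weights(train_masks)        # computed once over the training set
# loss = weighted_ce(model(M), M, w) + 0.0005 * kl_loss
```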
In particular, the following configurations were explored: a simple encoder-decoder trained with the standard cross-entropy (row 1), the model with 1 or 3 standard LSTMs trained with standard cross entropy (rows 2 and 3), our final model with 3 standard LSTMs trained with the weighted cross entropy (row 4), our final model with 3 bidirectional LSTMs trained with regular cross entropy (row 5), and the final architecture (bottom row). Quantitatively, we observe a generally high reconstruction accuracy in all the cases. The simplest architecture (w/o LSTM) achieves the highest accuracy but a lower mIoU. A visual inspection of the results suggests that the additional processing due to the LSTM block induces a slight smoothing of high-frequency details such as the hair contour. This is caused by the compression of each semantic part in the encoding phase, and also by the bi-directional LSTM block pass, which makes it more difficult for the decoder to exactly reproduce the corresponding input. This hypothesis is supported by the results obtained with either 1 or 3 LSTM layers; indeed, the two measures decrease when stacking more LSTM layers. On the other hand, though, we will show (Fig. 4) that removing such layers severely compromises the manipulation ability. Nevertheless, when comparing configurations including 3 LSTM layers, our final architecture scores the highest accuracy. In particular, it obtains the highest mIoU, which indicates the overall shape and spatial arrangement of parts is best preserved. This is also supported by the results in Fig. 2, which shows per-class mIoU results of the different configurations. Indeed, even though the configuration w/o LSTM tends to perform better with bigger parts like skin or hair, our final architecture manages to push the quality of the smaller parts up thanks to the combination of bidirectional LSTMs and the weighted cross entropy loss, resulting in an overall better mIoU.

\begin{table} \begin{tabular}{c|c|c} **Method** & mIoU \(\uparrow\) & Acc \(\uparrow\) \\ \hline \hline w/o LSTM block & 68.49 & **94.24** \\ \hline 1 LSTM w/o Bidir. w/o weighted CE & 68.15 & 93.85 \\ \hline 3 LSTMs w/o Bidir. w/o weighted CE & 67.35 & 90.91 \\ \hline 3 LSTMs w/o Bidir & 68.34 & 90.94 \\ \hline 3 LSTMs w/o weighted CE & 69.56 & 92.12 \\ \hline **Ours** & **70.31** & 92.39 \\ \end{tabular} \end{table} Table 1: Reconstruction results comparing our solution with different baselines.

Figure 2: mIoU results per class.

In Fig. 3 some results for reconstruction, generation and perturbation of different parts in the semantic masks are presented. More in detail, we refer to _generation_ when a novel latent code drawn from the normal distribution \(\hat{m}_{e}^{j}\sim\mathcal{N}(0,1)\) is substituted for its encoded counterpart \(m_{e}^{j}\) and passed to the bi-directional LSTM block in order to generate a particular part \(c\). On the other side, we refer to _perturbation_ when a random noise vector drawn from the normal distribution \(z\sim\mathcal{N}(0,1)\) is added to an existing latent code, _i.e._\(\hat{m}_{e}^{j}=m_{e}^{j}+z\). Indeed, in the latter case the shape of the generated parts is usually more similar to the original input, while in the former the generated shape can be (and usually is) completely different. Regarding reconstruction, we can see how the proposed method manages to maintain the overall shape of the semantic mask parts, supporting the results in Table 1. Nevertheless, as discussed above, a certain degree of smoothing in the results can be noted.
This represents a minor limitation of the current proposal. On the other side, the results when generating parts from scratch, or when perturbing an existing latent code, are impressive. Our method is not only able to generate realistic parts independently from one another but, thanks to the recurrent part of the model, is also able to adapt the shape of the parts surrounding the one that is being generated in order to produce a realistic final result. This can be particularly appreciated, for example, when perturbing the nose latent code in Fig. 3 in the third row: indeed, the nose is made longer by the perturbation and, as a consequence, the mouth is deformed accordingly.

Figure 3: Results for reconstruction, generation and perturbation of different mask parts.

Finally, in Fig. 4 we show some qualitative results to prove that the final architecture is indeed better in the generative task, which is the main purpose of this paper. Starting from the top, it is clear how, when generating hair, the proposed model is much more capable of producing realistic results without generating undesired classes (like the pink part in the model without LSTM). Then, in the second row, it is shown how our model is much better at rearranging all the semantic parts in order to create a realistic mask with a newly generated part. Finally, in the last row, we can see how the mouth part is generated correctly by almost every configuration but, at the same time, our model is able to generate much more varied and diverse results.

### Interpolation

In Fig. 5 interpolation results are presented. Interpolation is done by choosing a part \(c\) from a source and a target mask and merging the corresponding latent vectors using an interpolation factor \(\alpha\). More in detail, the interpolation equation is the following: \[m_{c}^{int}=\alpha\cdot m_{c}^{t}+(1-\alpha)\cdot m_{c}^{s} \tag{4}\] where \(m_{c}^{t}\) and \(m_{c}^{s}\) are the latent codes of the part \(c\) of the target and source images, respectively. In addition, \(\alpha=0\) is equivalent to reconstructing the source image, while \(\alpha=1\) represents a sort of "face part swapping", that is, a specific face part is swapped from a target face to a source one. Indeed, it is evident how the KL loss, which pushes the latent codes to have almost zero mean and unit variance, allows every mask part to be easily interpolated. In particular, while increasing the interpolation factor \(\alpha\), the shape changes continuously. The only previous method that we are aware of capable of performing a similar task is MaskGAN [7]; however, MaskGAN can only perform global mask interpolations, and cannot independently manipulate individual parts.

Figure 4: Qualitative results of different ablation experiments.

### Semantic Image Synthesis with Shape Control

In this section we qualitatively show results for the main purpose of our model, that is, equipping SIS generators with a module to enable automatic shape control. In Fig. 6, several masks with automatically generated parts are fed to a state-of-the-art SIS model in order to produce new and diverse face images. We chose to use the SEAN [9] generator to this aim because SEAN can very precisely control the image generation thanks to its semantic region-adaptive normalization layers. Previous to our proposal, the editing of masks could only be done manually. Results in Fig.
6 clearly show that, provided a generator that is accurate enough to handle local shape changes, the shape of the generated faces can be automatically edited by means of our solution. This paves the way to a very efficient way of employing SIS models, for example, for data augmentation, which can be very helpful for tasks like re-identification, classification or detection.

## 5 Conclusion

In this paper, we introduced the problem of automatic manipulation of semantic segmentation masks, and presented a preliminary novel architecture to achieve this goal, with a specific application to face part editing. The proposed system is able to generate or manipulate any semantic part by simply feeding random noise to the LSTM block in place of the latent representation of the corresponding part. We show the efficacy of our architecture through a series of quantitative and qualitative evaluations. Even if we observed a tendency to smooth the shapes of the generated results, our method is still able to generate realistic semantic parts, and can be readily used in combination with potentially any SIS model so as to generate a virtually infinite number of RGB results. Finally, we believe there is still large room for improvement. For example, extending the proposal to different scenarios with less constrained object layouts or more classes would represent a valuable feature for a SIS model. Also, currently, the shape manipulation is not controlled, meaning that it is not yet possible to generate parts with a specific shape or attributes, _e.g._ a long nose or curly hair. All the above are features that we plan to investigate in future works.

Figure 5: Interpolation results of different parts taken from a source and target mask (first and last columns, respectively). Values of the interpolation factor \(\alpha\) go from 0 to 1, where 0 means no interpolation.

## 6 Acknowledgments

This work was supported by PRIN 2020 "LEGO.AI: LEarning the Geometry of knOwledge in AI systems", grant no. 2020TA3K9N funded by the Italian MIUR.
2303.00798
Collisions of localized patterns in a nonvariational Swift-Hohenberg equation
The cubic-quintic Swift-Hohenberg equation (SH35) has been proposed as an order parameter description of several convective systems with reflection symmetry in the layer midplane, including binary fluid convection. We use numerical continuation, together with extensive direct numerical simulations, to study SH35 with an additional nonvariational quadratic term to model the effects of breaking the midplane reflection symmetry. The nonvariational structure of the model leads to the propagation of asymmetric spatially localized structures (LSs). An asymptotic prediction for the drift velocity of such structures is validated numerically. Next, we present an extensive study of possible collision scenarios between identical and nonidentical traveling structures, varying a temperature-like control parameter. The final state may be a simple bound state of the initial LSs or longer or shorter than the sum of the two initial states as a result of nonlinear interactions. The Maxwell point of the variational system is shown to have no bearing on which of these scenarios is realized. Instead, we argue that the stability properties of bound states are key. While individual LSs lie on a modified snakes-and-ladders structure in the nonvariational SH35, the multi-pulse bound states resulting from collisions lie on isolas in parameter space. In the gradient SH35, such isolas are always of figure-eight shape, but in the present non-gradient case they are generically more complex, some of which terminate in T-point bifurcations. A reduced model consisting of two coupled ordinary differential equations is proposed to describe the linear interactions between the tails of the LSs in which the model parameters are deduced using gradient descent optimization. For collisions leading to the formation of simple bound states, the reduced model reproduces the trajectories of LSs with high quantitative accuracy.
Mathi Raja, Adrian van Kan, Benjamin Foster, Edgar Knobloch
2023-03-01T19:55:24Z
http://arxiv.org/abs/2303.00798v1
# Collisions of localized patterns in a nonvariational Swift-Hohenberg equation ###### Abstract The cubic-quintic Swift-Hohenberg equation (SH35) has been proposed as an order parameter description of several convective systems with reflection symmetry in the layer midplane, including binary fluid convection. We use numerical continuation, together with extensive direct numerical simulations (DNSs), to study SH35 with an additional nonvariational quadratic term to model the effects of breaking the midplane reflection symmetry. The nonvariational structure of the model leads to the propagation of asymmetric spatially localized structures (LSs). An asymptotic prediction for the drift velocity of such structures, derived in the limit of weak symmetry breaking, is validated numerically. Next, we present an extensive study of possible collision scenarios between identical and nonidentical traveling structures, varying a temperature-like control parameter. These collisions are inelastic and result in stationary or traveling structures. Depending on system parameters and the types of structures colliding, the final state may be a simple bound state of the initial LSs, but it can also be longer or shorter than the sum of the two initial states as a result of nonlinear interactions. The Maxwell point of the variational system, where the free energy of the global pattern state equals that of the trivial state, is shown to have no bearing on which of these scenarios is realized. Instead, we argue that the stability properties of bound states are key. While individual LSs lie on a modified snakes-and-ladders structure in the nonvariational SH35, the multi-pulse bound states resulting from collisions lie on isolas in parameter space, disconnected from the trivial solution. In the gradient SH35, such isolas are always of figure-eight shape, but in the present non-gradient case they are generically more complex, although the figure-eight shape is preserved in a small subset of cases. Some of these complex isolas are shown to terminate in T-point bifurcations. A reduced model is proposed to describe the interactions between the tails of the LSs. The model consists of two coupled ordinary differential equations (ODEs) capturing the oscillatory structure of SH35 at the linear level. It contains three parameters: two interaction amplitudes and a phase, whose values are deduced from high-resolution DNSs using gradient descent optimization. For collisions leading to the formation of simple bound states, the reduced model reproduces the trajectories of LSs with high quantitative accuracy. When nonlinear interactions lead to the creation or deletion of wavelengths the model performs less well. Finally, we propose an effective signature of a given interaction in terms of net attraction or repulsion relative to free propagation. It is found that interactions can be attractive or repulsive in the net, irrespective of whether the two closest interacting extrema are of the same or opposite signs. Our findings highlight the rich temporal dynamics described by this bistable nonvariational SH35, and show that the interactions in this system can be quantitatively captured, to a significant extent, by a highly reduced ODE model. ## I Introduction Spatially localized structures (LSs) are observed in a wide variety of physical systems, from solitary water waves [1] to neurons [2], fluid convection [3; 4], shear flows [5; 6] and reaction-diffusion systems [7; 8], to name only a few. 
Generically, these systems are subject to dissipation and require forcing to maintain the structure, see [9] for a review of spatial localization in such systems. A simple model of pattern formation in forced dissipative systems is provided by the bistable Swift-Hohenberg equation, originally suggested in the context of pattern formation in Rayleigh-Benard convection [10; 11]. This equation supports well-known localized solutions that are organized in a _snakes-and-ladders_ bifurcation structure [12; 13; 14]. When the Swift-Hohenberg equation has gradient structure, solutions with nontrivial time dependence are precluded. However, nongradient generalizations of the Swift-Hohenberg equation arise frequently in applications [15] and these permit both time dependence, see e.g. [16], and persistent propagation, e.g. [17]. In this paper, we consider a specific instance of such models, namely the one-dimensional cubic-quintic Swift-Hohenberg equation with broken reflection symmetry, \[\partial_{t}u=ru-(1+\partial_{x}^{2})^{2}u+b_{3}u^{3}-u^{5}+\epsilon(\partial _{x}u)^{2}. \tag{1}\] Here the parameter \(\epsilon\) controls both the nongradient structure of the equation (the equation has gradient structure when \(\epsilon=0\)) and the breaking of the symmetry \(u\to-u\) (the equation is invariant under \(u\to-u\) when \(\epsilon=0\)). Since the equation is also symmetric under spatial reflections, \(x\to-x\), both effects are required for spontaneous propagation of LSs in this system. Equation (1) was suggested in [18] as a model of binary fluid convection with broken midplane reflection symmetry and its properties are indeed in qualitative agreement with direct numerical simulations of the Navier-Stokes equations describing this system [19]. The present work extends significantly the collision studies undertaken in [18] and clarifies a number of key issues. In the following, we refer to Eq. (1) as SH35. When \(\epsilon=0\), the problem reduces to variational form such that, on a domain of any size \(L\), there exists a free energy functional \[\mathcal{F}[u(x)]=\int_{0}^{L}\left(-\frac{1}{2}ru^{2}+\frac{1}{2}[(1+\partial_{x }^{2})u]^{2}-\frac{b_{3}}{4}u^{4}+\frac{u^{6}}{6}\right)dx, \tag{2}\] with the property that \(\partial_{t}u=-\frac{\delta\mathcal{F}}{\delta u}\). Thus \(\mathcal{F}\) decreases along trajectories towards local minima as \(t\rightarrow\infty\), and these represent stable steady states of the system. The free energy of spatially periodic patterns passes through zero at a \(r=r_{M}\), known as the _Maxwell point_. In the vicinity of the Maxwell point one can create a variety of localized structures involving both the pattern state and the trivial state \(u=0\) at little or no cost in energy. These are of two types, localized even solutions, denoted here as \(L_{0}\) (\(L_{\pi}\)) if their maximum (minimum) is located at their center, and localized odd solutions, denoted as \(L_{\frac{\pi}{2}}\) (\(L_{\frac{3\pi}{2}}\)) if they have a negative (positive) slope at the center. When \(0<\epsilon\ll 1\), the symmetry \(u\rightarrow-u\) as well as the variational structure is broken but similar solutions continue to exist, albeit with modified properties, as described in [18]. The symmetric solution branches \(L_{0},L_{\pi}\) remain symmetric and stationary. Odd solution profiles cease to be odd and hence propagate. In this paper, we first study the propagation of isolated structures and then go on to investigate in detail the collisions that can result. 
In contrast to the collisions familiar from studies of integrable partial differential equations on the real line, here the collisions are inelastic and can lead to annihilation and sticking as well as scattering. One question of interest concerns the role, if any, played by the Maxwell point of the variational system in such collisions when \(\epsilon\) is small: for example, is the collision process accompanied by nucleation or annihilation of new wavelengths according to the free energy minimization principle valid at \(\epsilon=0\) or, if this principle is not followed, what other mechanism determines collision outcomes? The remainder of this paper is structured as follows. In section II, we describe the general bifurcation structure of Eq. (1) obtained from numerical continuation. Next, in section III we present an asymptotic computation of the drift speed of asymmetric LSs in the limit of weak symmetry breaking \(0<\epsilon\ll 1\), whose accuracy is confirmed by comparison with direct numerical simulations (DNSs) and numerical continuation. In section IV, we present the results of extensive DNSs of all possible collision scenarios in this system, and describe the dependence of the collision outcome on the control parameter \(r\), with the stability of multi-pulse bound states playing a key role. We also show that the bound-state solutions arising from collisions are in many cases a part of non-trivial isolas, whose structure depends on the symmetry breaking parameter \(\epsilon\). In section V, we present a reduced model of the interactions between colliding patterns, which is based on the linear structure of SH35 and accurately captures the trajectories of LSs so long as no significant nonlinear interactions creating or destroying wavelengths occur. The paper concludes in section VI with a discussion of our results. All our results are obtained for the choice \(b_{3}=2\) as used in [18], employing periodic boundary conditions on a domain of size \(L=40\pi\).

Figure 1: Bifurcation diagram for Eq. (1) with \(\epsilon=0\). The patterned state branch is shown in blue with a sample solution profile in red (lower left inset). The snaking branches of symmetric (\(L_{0},L_{\pi}\)) and antisymmetric (\(L_{\pi/2},L_{3\pi/2}\)) LSs are shown in black (sample profiles in yellow and green, respectively). For clarity, only three of the interconnecting rung states are shown (detail in upper right inset), cf. [18]. Larger norm indicates longer LSs. Stable solutions are found on branches with positive slope.

## II Bifurcation analysis In this section we provide an overview of the solution structure of Eq. (1), first in the case \(\epsilon=0\), and then in the case \(\epsilon>0\), obtained from numerical continuation using AUTO [20] and pde2path [21]. The results are presented in terms of the \(L_{2}\) norm of the solutions \(u(x)\) given by \[\|u\|_{2}\equiv\sqrt{\frac{1}{L}\int_{0}^{L}u^{2}(x)dx} \tag{3}\] and provide important background information for subsequent sections. ### The variational case: \(\epsilon=0\) We first consider the variational problem with \(\epsilon=0\) and employ numerical continuation to construct the bifurcation diagram shown in Fig. 1, cf. [13]. A trivial flat state \(u=0\) exists at all \(r\), and undergoes a subcritical bifurcation to a periodic pattern at \(r=0\) (red profile in lower left inset in Fig. 1). 
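Before turning to the details of this structure, we note that the two diagnostics used throughout the paper, the norm (3) and, in the variational case, the free energy (2), are easily evaluated for a profile sampled on a uniform periodic grid. The following Python sketch is illustrative only (it is not the production code used for the results reported here); applying \((1+\partial_{x}^{2})\) spectrally is simply one convenient choice for periodic boundary conditions.

```python
import numpy as np

def norm_and_energy(u, L, r, b3=2.0):
    """L2 norm, Eq. (3), and free energy, Eq. (2), for a profile u(x)
    sampled on a uniform periodic grid of length L (variational case)."""
    n = u.size
    dx = L / n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    op_u = np.real(np.fft.ifft((1.0 - k**2) * np.fft.fft(u)))  # (1 + d_xx) u
    norm = np.sqrt(np.mean(u**2))                               # Eq. (3)
    density = -0.5 * r * u**2 + 0.5 * op_u**2 - 0.25 * b3 * u**4 + u**6 / 6.0
    return norm, np.sum(density) * dx                           # Eq. (2)
```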
Four snaking branches bifurcate from the periodic pattern branch in secondary bifurcations, corresponding to the symmetric LSs \(L_{0}\), \(L_{\pi}\) and the antisymmetric LSs \(L_{\pi/2}\), \(L_{3\pi/2}\) mentioned in the introduction. The branches overlap in pairs owing to the symmetry \(u\rightarrow-u\), and both sets display characteristic snaking behavior in the vicinity of the Maxwell point \(r_{M}\)[9; 13]. In addition, rung states connect the symmetric and antisymmetric snaking branches, arising in pitchfork bifurcations close to every saddle-node bifurcation on these branches [18]. Each rung actually corresponds to four branches of unstable asymmetric localized solutions, related by the symmetries \(x\rightarrow-x\) and \(u\rightarrow-u\)[14]. Examples of symmetric and antisymmetric solution profiles are shown in the lower left inset in Fig. 1, together with the periodic pattern state. When the LSs fill the available domain the snaking branches reconnect to the periodic pattern. Figure 2 shows the free energy of the periodic pattern from Fig. 1 versus \(r\). The stable part of the periodic pattern state (thick blue line) has a free energy which decreases monotonically with \(r\) and changes sign at the Maxwell point \(r_{M}\approx-0.675\). ### The nonvariational case: \(\epsilon>0\) Here we describe how the bifurcation structure changes in the nonvariational case, specifically for \(\epsilon=0.03\) (Fig. 3). As in the case \(\epsilon=0\), there is a trivial branch \(u=0\), a periodic pattern branch emerging subcritically from it, and two snaking branches. However, the two snaking branches in Fig. 3 now correspond to \(L_{0}\) and \(L_{\pi}\), since these states are no longer related by symmetry, and consequently snake in phase before reconnecting to the periodic state. Moreover, the \(L_{\pi/2}\), \(L_{3\pi/2}\) states and the rung states reconnect, forming a sequence of 'Z'-shaped branches consisting of asymmetric solutions. As a consequence of the nonvariational structure of the problem when \(\epsilon>0\), these asymmetric solutions drift and hence may collide. The structure of several Z branch solutions at different \(r\) values is shown in the inset in Fig. 3. It is important to observe that the Z branch states may be stable or unstable. Stable solutions are present on the "diagonal" part of each Z, i.e. in the range \(-0.70\lesssim r\lesssim-0.63\) (except for the lowest branch, which is stable only between \(-0.65\lesssim r\lesssim-0.62\)). The corresponding drift velocity \(c\) depends on \(r\), as shown in Fig. 4 for several of the states depicted in Fig. 3. In Fig. 4, the stable part of any given Z branch corresponds to the segment between \(-0.70\lesssim r\lesssim-0.63\), where the drift speed is largest. We emphasize that longer asymmetric LSs drift more slowly than short ones. Figure 2: Free energy \(F[u]\), defined in Eq. (2), of the periodic pattern state versus \(r\). Thick (thin) lines correspond to stable (unstable) solutions. The dotted vertical red line indicates the Maxwell point \(r_{M}\approx-0.675\), where the free energy changes sign. ## III Drift velocity of isolated structures ### Asymptotic theory To compute the drift velocity of asymmetric LSs using perturbation theory in the limit of weak symmetry breaking, \(0<\epsilon\ll 1\), we introduce the fast and slow time variables \[\tau=\epsilon^{\frac{1}{2}}t,\qquad T=\epsilon t \tag{4}\] and denote their spatial phase by \(\theta(T)\). 
Following the calculation presented in [17], we posit the expansion \[u(x,t)= U_{0}[x-\theta(T)]+\epsilon^{\frac{1}{2}}u_{1}[x-\theta(T),\tau] \tag{5}\] \[+\epsilon u_{2}[x-\theta(T),\tau]+o(\epsilon),\] where \(U_{0}\) is a known moving pattern whose propagation velocity we seek to determine. At leading order, Eq. (1) implies \[rU_{0}-(1+\partial_{x}^{2})^{2}U_{0}+b_{3}U_{0}^{3}-U_{0}^{5}=0, \tag{6}\] while at \(O(\epsilon^{\frac{1}{2}})\) we obtain \[\mathcal{L}u_{1}=ru_{1}-(1+\partial_{x}^{2})^{2}u_{1}+3b_{3}U_{0}^{2}u_{1}-5U_{0}^{4}u_{1}=0, \tag{7}\] where the operator \(\mathcal{L}\equiv r-(1+\partial_{x}^{2})^{2}+3b_{3}U_{0}^{2}-5U_{0}^{4}\) is self-adjoint. Differentiating Eq. (6) with respect to \(x\), one finds that \(\mathcal{L}U_{0}^{\prime}=0\), i.e. if \(U_{0}\) solves (6), then \(U_{0}^{\prime}\) solves (7). In [17], the authors considered \(r\) to be asymptotically close to the edge of the snaking interval, with a focus on the dynamics of depinning. This required taking into account two additional solutions which lie in the null space of \(\mathcal{L}\): one symmetric and one asymmetric mode. Here, we instead consider asymmetric states which are located within the snakes-and-ladders structure, away from the saddle-nodes of the snaking branches. Consequently \(U_{0}^{\prime}\) is the only function in the null space of \(\mathcal{L}^{\dagger}=\mathcal{L}\). Since this translation mode is already included in the Ansatz (5) we can set \(u_{1}\equiv 0\). Finally, at \(O(\epsilon)\), we obtain \[-U_{0}^{\prime}\theta_{T}=\mathcal{L}u_{2}+\epsilon(U_{0}^{\prime})^{2}. \tag{8}\] Multiplying by \(U_{0}^{\prime}\) and integrating over the domain, using the fact that \(\mathcal{L}\) is self-adjoint and that \(\mathcal{L}U_{0}^{\prime}=0\), we obtain \[-\theta_{T}\int(U_{0}^{\prime})^{2}dx=\int U_{0}^{\prime}\mathcal{L}u_{2}+\epsilon(U_{0}^{\prime})^{3}dx=\int\epsilon(U_{0}^{\prime})^{3}dx. \tag{9}\] Rearranging, we arrive at the drift velocity, \[c\equiv\theta_{T}=-\epsilon\frac{\int_{0}^{L}\left(U_{0}^{\prime}(x)\right)^{3}dx}{\int_{0}^{L}\left(U_{0}^{\prime}(x)\right)^{2}dx}, \tag{10}\] a special case of a more general result [22]. Equation (10) is invariant under \((\epsilon,U_{0})\rightarrow(-\epsilon,-U_{0})\), a symmetry that is inherited from Eq. (1). For symmetric profiles \(U_{0}(x)\), the numerator vanishes; symmetric states therefore remain at rest even when \(\epsilon>0\). However, for the asymmetric profiles shown in Fig. 3 the velocity \(c\) is nonzero. This is due to the nonsinusoidal nature of these profiles and in particular the contribution from the fronts at either end of the LS profile.

Figure 3: Bifurcation diagram for Eq. (1) with \(\epsilon=0.03\). Stable Z branch states with at least two wavelengths exist in the range \(-0.70\lesssim r\lesssim-0.63\). Inset: sample solution profiles at color-coded locations in ascending order, cf. [18]. Larger norm indicates longer LSs. Stable solutions are found on branches with positive slope.

Figure 4: Drift velocity \(c\) versus \(r\) for various Z branch states at \(\epsilon=0.03\); colors correspond to the branches in Fig. 3. The curves shown correspond to the Z branches for which profiles are displayed in Fig. 3.

We define the number \(n\) of _significant extrema_ in any given LS as the number of extrema whose amplitude is larger than \(1/e\) times the maximum amplitude in the LS, and the number of _significant wavelengths_ as \(1/2\) the number of significant extrema. 
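For illustration, the following Python sketch evaluates the prediction (10) and the counting convention just introduced for a profile \(U_{0}\) sampled on a uniform periodic grid, e.g. a solution exported from numerical continuation. The spectral derivative and the peak detection via scipy.signal.find_peaks are implementation choices made for this sketch only.

```python
import numpy as np
from scipy.signal import find_peaks

def drift_velocity(U0, L, eps):
    """Asymptotic drift velocity, Eq. (10), for U0 on a uniform periodic grid."""
    k = 2.0 * np.pi * np.fft.fftfreq(U0.size, d=L / U0.size)
    dU = np.real(np.fft.ifft(1j * k * np.fft.fft(U0)))  # U0'(x), spectral derivative
    return -eps * np.sum(dU**3) / np.sum(dU**2)          # grid spacing cancels in the ratio

def significant_extrema(U0):
    """Number of extrema whose amplitude exceeds 1/e of the maximum amplitude."""
    threshold = np.max(np.abs(U0)) / np.e
    extrema = np.concatenate([find_peaks(U0)[0], find_peaks(-U0)[0]])
    return int(np.sum(np.abs(U0[extrema]) > threshold))
```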
The adjective _significant_ will be dropped in the following when there is no ambiguity. For solutions with many significant wavelengths, \(n\gg 1\), the numerator converges to a constant, while the denominator grows approximately linearly with \(n\). This implies that \(c\propto 1/n\) at large \(n\), which is quantitatively consistent with the results from numerical continuation, as shown in Fig. 5: longer structures move more slowly, as already highlighted in the discussion of Fig. 4.

Figure 5: Log plot showing \(c(r=-0.67)\) (blue) vs. number of wavelengths \(n\) overlaid with \(1/n\) (black). A satisfactory agreement is observed.

### Numerical verification To determine the range of validity of the prediction in Eq. (10), we perform direct numerical simulations (DNSs) of Eq. (1) for selected values of the parameter \(\epsilon\). We also perform numerical continuation in \(\epsilon\). The results reported below extend those in [22] where the asymptotic result was compared with numerical continuation at only one value of \(\epsilon\), \(|\epsilon|=0.01\). For this purpose, we consider a one-dimensional periodic domain of the same length \(L=40\pi\) as in the numerical continuation and solve Eq. (1) using a semi-implicit pseudo-spectral numerical scheme for spatial derivatives [23] and a fourth-order Runge-Kutta time-stepping scheme. All DNS results presented in this paper were obtained on a finely resolved uniform spatial grid of 8192 grid points (corresponding to approximately 100 grid points per pattern wavelength), except when specified otherwise. We use a time step of \(dt=0.01\). Larger values of \(dt\) led to incorrect drift speeds. To verify Eq. (10), we consider a state on the second lowest \(Z\) branch at \(\epsilon=0.13\), \(r=-0.66\), shown in the inset in Fig. 6. The state is asymmetric: a small change in peak/trough amplitudes with increasing \(\epsilon\) can be discerned, a consequence of symmetry breaking when \(\epsilon>0\). Figure 6 shows the drift velocity \(c\) as a function of \(\epsilon\), with excellent quantitative agreement between DNSs, numerical continuation and theory for \(\epsilon\lesssim 0.1\). For \(\epsilon\gtrsim 0.1\), the numerically obtained drift velocities deviate from the asymptotics, with the asymptotic prediction overestimating the numerical value by less than 10%. However, the DNSs and numerical continuation continue to show excellent agreement. At \(\epsilon\approx 0.13\) the state under consideration ceases to exist, since beyond this point the bifurcation structure of the system is qualitatively altered (cf. [18]).

Figure 6: Comparison of the theoretically predicted drift velocity \(c\) with the corresponding DNS results for stable states on the second lowest Z branch. Blue solid line indicates the theoretical prediction from Eq. (10), with the integrals evaluated numerically. The green dash-dotted line shows the velocity obtained from numerical continuation. Red circles indicate the DNS results. The DNS and numerical continuation are in excellent agreement, including the deviation from asymptotic behavior at larger \(\epsilon\). The asymptotic theory is in close quantitative agreement with both DNS and numerical continuation at small \(\epsilon\). The Z branch states cease to exist above \(\epsilon=0.13\). Inset shows the Z branch solution profiles at \(\epsilon=0\) (blue, antisymmetric) and \(\epsilon=0.13\) (red, asymmetric) at \(r=-0.66\). 
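As an illustration of the time stepping used in the DNSs, a minimal Fourier pseudo-spectral integrator for Eq. (1) is sketched below. It is a simplified stand-in for the production code: the linear operator is treated implicitly with a first-order IMEX step and the nonlinear and symmetry-breaking terms explicitly, rather than with the semi-implicit fourth-order Runge-Kutta scheme used for the results reported here, and the initial condition is an arbitrary localized seed.

```python
import numpy as np

def step_sh35(u, dt, r, eps, b3=2.0, L=40 * np.pi):
    """One first-order IMEX step for Eq. (1): linear terms implicit in Fourier
    space, nonlinear and symmetry-breaking terms explicit."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    lin = r - (1.0 - k**2)**2                              # symbol of r - (1 + d_xx)^2
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    nonlin = b3 * u**3 - u**5 + eps * ux**2
    u_hat = (np.fft.fft(u) + dt * np.fft.fft(nonlin)) / (1.0 - dt * lin)
    return np.real(np.fft.ifft(u_hat))

# Minimal usage example: coarse 2048-point grid, arbitrary localized seed
n, L, dt = 2048, 40 * np.pi, 0.01
x = np.linspace(0.0, L, n, endpoint=False)
u = 0.5 * np.cos(x) * np.exp(-(x - L / 2)**2 / 50.0)
for _ in range(50000):                                     # evolve to t = 500
    u = step_sh35(u, dt, r=-0.68, eps=0.03)
```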
Additional runs with the smaller time step \(dt=0.001\) were also performed, and gave the same results as those for \(dt=0.01\), indicating that the DNSs are well resolved in time. We also repeated the DNSs on a coarser grid with 2048 grid points and obtained drift velocities \(c\) that are indistinguishable from those shown in Fig. 6, indicating that the simulations are also well resolved in space. ## IV Collisions of localized structures As shown in Fig. 3, multiple stable LSs can coexist in the domain, and these may undergo collisions. In this section we present the results of extensive DNSs of such collisions between different types of LSs performed using the numerical solver described in the previous section. For all results described below, we take \(\epsilon=0.03\), unless stated otherwise. ### Overview of DNS results Here we describe the rich collision phenomenology that is observed when different types of LSs collide at various values of \(r\), as illustrated in Fig. 7. We distinguish the following four collision scenarios: * Scenario A: collision between two asymmetric states which differ in the number of significant wavelengths, and travel in the same direction, with the shorter, faster LS catching up to the longer, slower LS, as predicted by Eq. (10). The two colliding extrema are of opposite sign. We focus specifically on collisions between two LSs of length two and three wavelengths. Four different possible outcomes are observed: deletion of one extremum, formation of a drifting bound state without a change in the number of extrema, and the creation of one or four new extrema. Note that the bound states at \(r=-0.68\) and \(r=-0.645\) shown in Fig. 7 differ in their separation. The cases \(r=-0.7\) and \(r=-0.635\) in fact involve two _separate_ consecutive collisions due to the periodicity of the domain, of which only the first is shown in Fig. 7. In the former case, the second collision (not shown) results in a drifting bound state, while in the latter case (shown in Fig. 8), the asymmetric LS is rendered symmetric and stationary owing to the nucleation of an additional extremum, resulting in a larger but stationary bound state. * Scenario B: a drifting asymmetric state collides with a stationary symmetric state with a maximum at its center. The two colliding extrema are of opposite sign. We focus on a two-wavelength asymmetric LS and a symmetric LS with three positive and two negative large-amplitude extrema. Four different possible collision outcomes are observed: deletion of one extremum, formation of a drifting bound state without change in the number of extrema, and the creation of one or four new extrema. * Scenario C: same as scenario B but with the symmetric LS flipped by \(u\rightarrow-u\) so that the two colliding extrema are of the same sign. We focus on a two-wavelength asymmetric LS and a symmetric structure with three negative and two positive large-amplitude extrema. Five different possible collision outcomes are observed: deletion of one extremum, formation of a drifting bound state without change in the number of extrema, and the creation of one, three or five new extrema. * Scenario D: head-on collision between two asymmetric states. The two colliding extrema are of the same sign. We focus on a pair of identical two-wavelength patterns (collisions between asymmetric LSs of distinct sizes were also considered, and gave qualitatively similar results). 
Four different possible collision outcomes were observed: deletion of two extrema and the creation of three or five new extrema. No pure bound states were observed in this case. Figure 7 shows a summary of the space-time plots depicting the different possible collision outcomes. All collisions were simulated using DNS on a uniform grid of 2048 points to facilitate longer simulation times. Some cases were also repeated with 8192 grid points and no change in the collision behavior was observed. Three qualitatively distinct collision outcomes are observed across all collision scenarios: deletion of extrema, formation of a bound state, and the creation of new extrema. Some of these states travel while others are symmetric and hence stationary. One notices that scenarios A and B are quite similar to one another in terms of the outcome realized at a given \(r\) (except near \(r=-0.645\) see also Fig. 12). Scenarios C and D are similar in the same sense. Since the closest extrema in the collisions in scenarios A and B are both of opposite sign but are of the same (negative) sign in scenarios C and D, one might surmise that the relative sign of interacting extrema is important in determining the qualitative collision outcome, cf. [18]. Our detailed results broadly support this suggestion but indicate that this sign is not the sole factor determining the collision outcome since differences between scenarios persist. Figure 7 reveals that the speed \(c\) of the traveling state plays a significant role in determining the collision outcome. This speed is controlled by the value of the parameter \(r\) since \(r\) controls the degree of asymmetry of each traveling state, but it also depends on the length of the structure, cf. Fig. 4. For example, in scenario A at \(r=-0.7\) a narrow LS catches up to a wider LS, with the resulting interaction stopping the former and leaving the latter unaffected. Because of periodic boundary conditions the wider, slower moving LS collides with the stationary state in a subsequent collision, creating a drifting bound state (not shown, but see Fig. 8 for a similar multiple collision at \(r=-0.635\)). Such drifting bound states are not observed in any of the other scenarios at this value of \(r\) where a narrower and so faster LS interacts with a broader LS at rest. Figure 9 shows an extreme case of scenario B, where a single-wavelength asymmetric LS collides with a stationary symmetric structure and is annihilated. This is because this asymmetric LS is located on the lowest Z branch in Fig. 3 and so is minimal in the sense that no stable LS shorter than one significant wavelength exists. Hence, the deletion of an extremum implies annihilation of this LS. Figure 7: Space-time plots of collision scenarios A-D for five different values of \(r\) indicated by gray dashed vertical lines in Fig. 12, shown with time along the vertical axis and space along the horizontal axis. The different collision scenarios involve different time scales, therefore distinct time axes are specified for each case. A rich zoology of different collision outcomes is found: one or two extrema may be deleted (for \(r\) near \(-0.7\)), or bound states can form without the creation or deletion of any extrema (in all scenarios except scenario D), or one, three, four or five extrema may be added in the collision, depending on the value of \(r\) and on the scenario. Some of the resulting states travel while others are symmetric and hence stationary. 
### Change in \(L_{2}\) norm before and after collision To quantify the change in pattern size during a collision, we measure the \(L_{2}\) norm defined in Eq. (3). Figure 10 shows a sample time series of \(\|u\|_{2}\) from a chasing collision (scenario A) at \(r=-0.65\), where the collision first produces a metastable bound state, but three new extrema are generated at late times by nonlinear interactions. Figure 10 suggests the simple metric \[\Delta\|u\|_{2}=\|u_{\rm final}\|_{2}-\|u_{\rm initial}\|_{2}, \tag{11}\] where \(u_{\rm initial}\) is the state before the collision, i.e., when the distance between the two structures is still large, and \(u_{\rm final}\) is the resulting state a long time after the collision has occurred. Importantly, the collision dynamics are independent of the initial distance between the colliding LSs, since there is no inertia in the system. This is illustrated in Fig. 11, where the initial distance in scenario B (symmetric-asymmetric collision) at \(r=-0.65\) is varied by fractions of the wavelength associated with the spatial eigenvalue, \(\lambda=2\pi/\beta\), with \(\beta\) defined in Eq. (12). Figure 11 shows that the collision dynamics are self-similar under shifts in the initial distance: simply accounting for the time required to propagate over the additional distance leads to data collapse. This has also been verified explicitly for other collisions, including from scenario D. We deduce from these observations that the only variables affecting collision outcomes are indeed the chosen pair of colliding structures, and the control parameter \(r\) (at fixed \(\epsilon\)). The phenomenology illustrated in Fig. 10 is reminiscent of collision dynamics described in terms of _scatters_[24; 25]: unstable stationary or time-periodic patterns which direct the evolution in state space during the collision process along their stable and unstable manifolds. While these ideas were proposed in the context of models other than SH35, they may be applicable for some of the collision events observed here. Figure 12 shows \(\Delta\|u\|_{2}\) as a function of \(r\) for all four scenarios. A value of \(\Delta\|u\|_{2}\approx 0\) corresponds to the formation of a bound state without a change in the overall pattern size, a positive value indicates the creation of one or more extrema, while a negative value indicates that one or more extrema were deleted. Figure 8: The \(L_{2}\) norm versus time for a collision between two chasing asymmetric structures from scenario A (\(r=-0.635\)) consisting of two stages. In the first stage, the smaller asymmetric LS is rendered symmetric and hence stationary by the addition of an extremum. In the second stage, the larger asymmetric LS suffers the same fate resulting in a stationary bound state. Figure 10: The \(L_{2}\) norm versus time for a collision of two chasing asymmetric LSs (scenario A) at \(r=-0.65\). The collision of the two chasing structures occurs at \(t\approx 20~{}000\), leading to the creation of a traveling metastable bound state, whose structure changes slowly over time, as visible from the slope in \(\|u\|_{2}\). At \(t\approx 40~{}000\), three additional extrema are nucleated by nonlinear interactions, behavior resulting in the jump in \(\|u\|_{2}\). Figure 9: Annihilation of a single-wavelength asymmetric structure in collision with a symmetric LS at \(r=-0.65\). 
This is a special case of scenario B with an asymmetric structure consisting of only two extrema, located at the leftmost edge of the range of stability of the lowest Z branch in Fig. 3. Note the coarser colorbar compared to Fig. 7. The range of \(r\) shown in Fig. 12 corresponds to the interval with stable propagating solutions when \(\epsilon=0.03\). Vertical gray dashed lines indicate the cases depicted in Fig. 7. Near the leftmost edge of the existence interval, at \(r=-0.7\), one (A-C) or two extrema (D) are deleted in the collision. Near the rightmost edge, at \(r=-0.635\), new extrema are created: either one (A, B) or five extrema (C, D). Away from these boundary cases, the scenarios differ more substantially. At \(r=-0.68\), one observes either the formation of bound states (A, B), or the creation of three additional extrema (C, D). At \(r=-0.66\), new extrema are added in the collision in all cases: either four additional extrema (A, B) or three (C, D). At \(r=-0.645\) and \(r=-0.65\) (not shown), scenarios A and C revert to \(\Delta\|u\|_{2}\approx 0\), i.e. the formation of a bound state, while new extrema continue to be added in scenarios B (four extrema) and D (three extrema). As already mentioned in the discussion of Fig. 7, scenarios A and B are similar to one another, as are C and D, in the sense that for most values of \(r\), they display the same number of extrema added or deleted. However, Fig. 12 reveals deviations between these scenarios in the interval \(-0.65\lesssim r\lesssim-0.64\). To ensure that these are not numerical artefacts, we repeated the runs in scenarios A and B in this range of \(r\) at higher spatial resolution, using 8192 instead of 2048 grid points, and continued the simulation until very late times (\(t\approx 150~{}000\)). We also repeated the runs with significantly smaller time steps, \(dt=0.005\) and \(dt=0.001\), with the same collision outcome, confirming that the nonmonotonic dependence of \(\Delta\|u\|_{2}\) on \(r\) is a robust result that we discuss further below. The black dash-dotted vertical line in Fig. 12 indicates the location of the Maxwell point \(r=r_{M}\) for the variational case \(\epsilon=0\). The figure shows that the values of \(r\) where \(\Delta\|u\|_{2}\) changes differ between scenarios and that, in addition, these locations lie far from \(r=r_{M}\). We conclude that the Maxwell point of the variational case has little, if any, relevance in determining the collision outcome in the nonvariational case, even in the weak symmetry-breaking case considered here. The overall trend of increasing \(\Delta\|u\|_{2}\) with increasing \(r\) can be viewed as a reflection, in the nonvariational regime, of similar behavior in the variational case: when \(\epsilon=0\), the free energy \(\mathcal{F}\) of the stable pattern state decreases with increasing \(r\), as shown in Fig. 2, and the periodic pattern becomes increasingly energetically favored. Apart from some exceptions at \(r\geq-0.65\) (to be discussed further below), the increasing trend in the number of extrema added in a collision is compatible with this intuition. Finally, even for a fixed number of extrema added or lost, Fig. 12 reveals a slow decrease in \(\Delta\|u\|_{2}\) with increasing \(r\). This is associated with small changes in the profile of the extrema as \(r\) changes. 
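For completeness, the following sketch shows how the diagnostic (11), together with the extremum bookkeeping underlying Fig. 12, can be computed from two DNS snapshots. It reuses the significant_extrema helper sketched in Sec. III, and the tolerance used to declare a pure bound state is an arbitrary cut-off chosen for illustration.

```python
import numpy as np

def collision_outcome(u_initial, u_final, tol=1e-3):
    """Norm change, Eq. (11), and change in the number of significant extrema
    between pre- and post-collision profiles on the same uniform grid."""
    norm = lambda u: np.sqrt(np.mean(u**2))       # Eq. (3)
    delta = norm(u_final) - norm(u_initial)       # Eq. (11)
    gained = significant_extrema(u_final) - significant_extrema(u_initial)
    if gained == 0 and abs(delta) < tol:          # tol is an arbitrary cut-off
        return delta, "bound state"
    return delta, f"{gained:+d} extrema"
```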
### Bound states: isolas and stability Figure-eight isola structures extending over the entire width of the snaking region have been previously described in the quadratic-cubic Swift-Hohenberg equation without nonvariational terms, arising from multi-pulse Figure 11: The \(L_{2}\) norm versus time for symmetric-asymmetric collisions from scenario B (at \(r=-0.65\)) for initial conditions shifted by different fractions \(\Delta x\) of the wavelength \(\lambda=2\pi/\beta\), with \(\beta\) the imaginary part of the spatial eigenvalue defined in Eq. (12). Inset shows the same data with time shifted by \(\Delta t=\Delta x/c\), where \(c\) is the drift speed of the asymmetric LS: all curves collapse exactly, indicating that the collision dynamics are independent of initial separation. Figure 12: Change in the \(L_{2}\) norm before and after a collision, \(\Delta\|u\|_{2}\) defined in Eq. (11), for different collision scenarios, shown as a function of \(r\). Labels indicate the number of extrema gained or lost in the collision. Vertical dashed gray lines correspond to the values of \(r\) shown in Fig. 7. The vertical black dash-dotted line shows the location of the Maxwell point \(r_{M}\) for \(\epsilon=0\) (all remaining results shown are for \(\epsilon=0.03\)). solutions consisting of two LSs bound together in close proximity [26; 27]. Here, we find that for bound states formed from collisions at \(\epsilon>0\), the properties of the isolas to which these states belong depend strongly on the collision scenario. Figure 13 shows two isolas at \(\epsilon=0.03\) (thin lines) obtained from a continuation in \(r\), starting from two bound states generated by a collision in scenarios A and B at \(r=-0.68\) and \(r=-0.69\), respectively (points highlighted in Fig. 13), together with the results of continuing these isolas in \(\epsilon\) to the variational case \(\epsilon=0\) (thick lines). The figure shows that in the former case (red curves) the isola retains its shape as it transforms into the corresponding isola of the variational problem. In the latter and more typical case, the isola at \(\epsilon=0.03\) takes the form of a spaghetti-like tangle (light blue), requiring substantial simplification prior to reaching the corresponding isola of the variational problem (thick blue). Figure 14 shows several additional isolas of traveling bound states at \(\epsilon=0.03\) obtained by colliding longer initial LSs and superposed on the full bifurcation diagram (colors match between Figs. 13 and 14). All figure-eight isolas seen in Fig. 14 are from scenario A, while tangled isolas are from scenario B. It is important to note that the rightmost edge of the figure-eight isola in scenario A shown in Fig. 13 (thin red line) is located near \(r\approx-0.66\), and not at \(r\approx-0.63\), the right boundary of the snaking region for \(\epsilon=0.03\). This is a consequence of the fact that two-pulse states with a small separation between the LSs lie on isolas that are significantly smaller than those for bound states with a larger separation. This is so not only when \(\epsilon=0\)[26] but also when \(\epsilon=0.03\). Figure 15 shows an example of a narrow figure-eight isola at \(\epsilon=0\) obtained from a bound state at \(\epsilon=0.03\) and \(r=-0.68\) together with a wider figure-eight isola obtained by continuing a bound state, also obtained from a collision in scenario A with \(\epsilon=0.03\), but at \(r=-0.645\). Despite their similarity (see profiles in Fig. 
15, top panel) these profiles are part of separate isolas: the solution profiles of the narrow isola feature an interaction region between the two LSs comprising the bound state which is one wavelength shorter than that of the larger isola solution profiles, a property that is preserved upon numerical continuation. As the separation between the two LSs decreases and their interaction becomes stronger, the figure-eight isolas shrink, with the rightmost edge moving farther and farther to the left. This decrease in size plays a key role in determining collision outcomes: the value \(r\approx-0.66\) (Fig. 13, thin red line) coincides with the parameter value in Fig. 12 where new extrema are first created in scenario A. This observation suggests the following way of understanding the observed phenomenology: when two LSs approach one another in a collision, there are two possibilities. Either (a) a stable multipulse bound state exists at the value of \(r\) in question for the given pair of LSs, or (b) no such state exists or at least it is not stable. In case (a), the collision will lead to the formation of a stable bound state. In case (b), on the other hand, no such state is available, and the system creates or deletes extrema to reach another stable solution. With this hypothesis, the nonmonotonic behavior in the number of extrema created summarized in Fig. 12 can be explained as follows: at parameter values \(-0.65\lesssim r\lesssim-0.64\), there is an island of stability where stable bound states exist. When no stable bound state exists, a strongly nonlinear interaction must occur, resulting in LSs with a different number of extrema; this number increases with increasing \(r\), as discussed earlier.

Figure 13: Traveling bound states obtained from numerical continuation of collision results in scenarios A and B at \(\epsilon=0.03\). Thin pink line: traveling state obtained from a chasing collision (scenario A) at \(r=-0.68\). Thin light blue line: similar traveling state obtained from a symmetric-asymmetric collision (scenario B) at \(r=-0.69\). Starting points for continuation in \(r\) are marked with circles. Continuation in \(\epsilon\) to the variational case \(\epsilon=0\) yields isolas of stationary two-pulse states. Thick red line: scenario A at \(\epsilon=0\). Thick dark blue line: scenario B at \(\epsilon=0\).

Figure 14: Isola structures (colored) corresponding to the traveling two-pulse bound states generated in different collision scenarios superposed on the bifurcation diagram from Fig. 3 (gray). All results are for \(\epsilon=0.03\).

To test the hypothesis that the existence and stability of a bound state is the determining factor in setting the collision outcome, we perform stability experiments. Specifically, to verify that the bound state either does not exist or is unstable in scenarios A, B with \(-0.66\lesssim r\lesssim-0.65\) (where new extrema form), we performed DNS at these values of \(r\), initializing with the post-collision final states from \(r=-0.68\), \(r=-0.67\), \(r=-0.65\) and \(r=-0.645\). In each of these cases we observed the eventual generation of additional extrema. Only one exception was observed, at \(r=-0.66\) in scenario B, when a bound state from \(r=-0.68\) was used as the initial condition and found to remain a pure bound state indefinitely, without any change in the number of extrema. 
Thus collisions may trigger the creation of new extrema despite the existence of a stable bound state, provided the bound state has only a small basin of attraction. We repeated the above study for scenario C: stable bound states formed from collisions at \(r=-0.645\) and \(r=-0.65\) were used as initial conditions at \(r=-0.635\), \(r=-0.64\), \(r=-0.655\), \(r=-0.66\). In all cases but one, the same number of extrema was spontaneously created as observed in the collision experiments. The exception, where a bound state remains indefinitely stable, was again found near the transition from a bound state to the creation of new extrema in the collision, namely at \(r=-0.64\). In summary, these stability experiments largely confirm the proposed hypothesis for explaining Fig. 12: collisions yield bound states when these exist as stable states, and otherwise lead to a change in the number of extrema (typically creation of new extrema). The only deviations from this paradigm are observed when stable bound states exist but have a small basin of attraction. Then the finite perturbation provided by a collision can trigger a change in the number of extrema. The isolas corresponding to multi-pulse bound states at \(\epsilon>0\) can take other forms as well. Figures 16 and 17 show two such cases, in which a complex tangled branch terminates at either end in bifurcation points located at the center of a _spiral_ structure. These figures correspond to the continuation of bound states resulting from scenario A collisions at \(r=-0.645\) and \(r=-0.7\), respectively. As mentioned in Sec. IV.1, the latter is in fact the result of two _separate_ collisions. The first collision occurs between two moving LSs and leaves one stationary and symmetric LS and one moving LS as shown in Fig. 7. The second collision (not shown) occurs after this, when the moving LS collides with the stationary LS from the other side due to periodic boundary conditions, forming a traveling bound state. The bifurcations identified in Figs. 16 and 17 are of codimension two and occur as the separation between the two LSs in the bound state grows without limit, while the structures independently change their amplitude and degree of asymmetry such that their velocities remain the same. As the branch spirals towards the center, the separation between the constituent LSs grows. These bifurcations are highly reminiscent of T-point bifurcations, also known as Bykov points [28; 29], which have been described as corresponding to an equilibrium-to-equilibrium heteroclinic cycle, with two branches of primary homoclinic orbits bifurcating from it. Bifurcations of this type have been observed in a variety of systems, including the Lorenz63 system [30], a model of calcium pulses in pancreatic cells [31], and in type-I excitable media [32]. In contrast, here the two constituent LSs individually approximate a homoclinic orbit to the trivial state \(u=0\) and at the T-point the state of the system corresponds to a double homoclinic cycle. Of course, in a finite domain we cannot reach this T-point, and our numerical continuation results therefore only produce a periodic structure that approximates this double homoclinic cycle. To the best of our knowledge, a T-point bifurcation from such a double homoclinic cycle described here has not been observed previously. Note that such T-points cannot occur for stationary LSs. Figure 15: The \(L_{2}\) norm vs. 
control parameter \(r\) for two isolas at \(\epsilon=0\), obtained by numerical continuation from traveling bound states in scenario A with \(\epsilon=0.03\) resulting from collisions at \(r=-0.645\) (large isola) and \(r=-0.68\) (small isola), respectively. Color-matched solution profiles are shown in the top panel, where the red solutions (right), corresponding to the small isola, feature a separation between the two single-pulse LSs shorter by one wavelength as compared with the blue profiles (left), which correspond to the large isola.

Figure 16: Scenario A bound state isola continued from \(r=-0.645\) and ending in T-points at either end. Solution profiles and parts of the spiral branch are shown at increasing levels of magnification in the upper panels.

Figure 17: Scenario A bound state isola continued from \(r=-0.7\) and ending in T-points at either end. Solution profiles and parts of the spiral branch are shown at increasing levels of magnification in the upper panels.

## V Reduced model of interacting localized structures An important concept in the analysis of LSs is the notion of _spatial dynamics_ that applies equally well in the comoving frame. In the simplest case one linearizes the equation of motion (1) about the trivial state, and considers solutions of the form \(u\propto e^{\lambda x}\), cf. [9; 12]. The roots \(\lambda\) of the resulting characteristic polynomial, known as _spatial eigenvalues_, govern the behavior of the system in the vicinity of the trivial solution \(u\equiv 0\). They are given by \[\lambda=\pm\sqrt{\pm\sqrt{r}-1}. \tag{12}\] In the case of interest, where \(r<0\) so that LSs exist, \(\lambda\) is complex: \(\lambda=\pm\alpha\pm i\beta\) with \(\alpha,\beta>0\). The (positive) real part of \(\lambda\) describes the exponential growth of \(u\) away from \(u=0\) and towards LS on a \(1/\alpha\) spatial scale, while the nonzero imaginary part of \(\lambda\) implies that this growth is not monotonic, but oscillatory, with wavelength \(2\pi/\beta\). For stationary LS the profile decays towards \(u=0\) in the same manner. Interactions between LSs will be mediated by these exponentially growing/decaying oscillatory tails, unless the proximate (i.e. most strongly interacting) large-amplitude extrema of the two respective structures come close enough for nonlinear effects to become relevant. Such interactions via oscillating tails are found in a wide range of problems and have been studied extensively [33; 34; 35; 36; 37]. Here, in a spirit similar to these works, in particular [33], we propose a simple reduced model to quantitatively describe these interactions. Consider two LSs, with the closest order one amplitude extrema of either structure located at \(x_{i}\), \(i=1,2\). Each LS is characterized by a drift velocity \(c_{i}\), \(i=1,2\), known from the analysis in section III. Note that \(c_{i}\) is zero for symmetric LS, and nonzero for asymmetric LS. The proposed reduced model equations, reminiscent of overdamped particle dynamics, read \[\frac{dx_{1}}{dt} =c_{1}+g_{1}\cos\left(\beta|x_{1}-x_{2}|-\phi\right)e^{-\alpha|x_{1}-x_{2}|}\] \[\frac{dx_{2}}{dt} =c_{2}+g_{2}\cos\left(\beta|x_{1}-x_{2}|-\phi\right)e^{-\alpha|x_{1}-x_{2}|}. \tag{13}\] Equations (13) are integrated using a fourth-order Runge-Kutta method with initial conditions corresponding to DNS data for any given collision. The model involves three unknown parameters describing the interaction: two amplitudes \(g_{1},g_{2}\) and a phase \(\phi\). 
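The following Python sketch illustrates how Eqs. (12) and (13) can be integrated. The classical fourth-order Runge-Kutta stepper corresponds to the integration method stated above; extracting \(\alpha\) and \(\beta\) from one branch of Eq. (12), as well as the particular time step, are choices made for this sketch, and the interaction parameters must be supplied by the fit described in appendix A.

```python
import numpy as np

def spatial_decay(r):
    """alpha and beta from one root of Eq. (12), lambda = +-alpha +- i*beta."""
    lam = np.sqrt(np.sqrt(complex(r)) - 1.0)
    return abs(lam.real), abs(lam.imag)

def integrate_model(x0, c, g, phi, r, dt=1.0, nsteps=50000):
    """Fourth-order Runge-Kutta integration of the reduced model, Eq. (13).
    x0, c and g are length-2 arrays for the two proximate extrema."""
    alpha, beta = spatial_decay(r)
    c = np.asarray(c, dtype=float)
    g = np.asarray(g, dtype=float)

    def rhs(x):
        d = abs(x[0] - x[1])
        return c + g * np.cos(beta * d - phi) * np.exp(-alpha * d)

    xs = [np.asarray(x0, dtype=float)]
    for _ in range(nsteps):
        x = xs[-1]
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        xs.append(x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(xs)
```

For instance, the scenario A fit quoted in the caption of Fig. 18 corresponds to \(c=(0.0020,\,0.00128276)\), \(g=(0.53489982,\,-0.30631508)\) and \(\phi=-1.08804942\) at \(r=-0.68\).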
The values of these parameters will depend on \(r\) and on the colliding structures in question. Below, we compare the predictions of the reduced model given by Eq. (13) with high-resolution DNS results using a gradient descent optimization to determine \(g_{1}\), \(g_{2}\) and \(\phi\), as described in appendix A. To conveniently assess the quantitative agreement between the particle-like trajectories, we consider the deviation of the relative distance between proximate extrema from free propagation. For \(x_{2}>x_{1}\) (without loss of generality), we define \[\chi(t)\equiv x_{2}(t)-x_{1}(t)-(c_{2}-c_{1})t-[x_{2}(0)-x_{1}(0)]. \tag{14}\] ### Comparison between reduced model and DNS #### v.1.1 Chasing: scenario A Figure 18 shows a comparison between the DNS results (contour plot) and the model predictions (dashed yellow lines) for a chasing collision of two asymmetric extrema (scenario A). There is good agreement between the extrema trajectories in both cases, including at a quantitative level, as can be seen from Fig. 19. Figure 20 shows the model parameters obtained by the gradient descent method described in appendix A at different values of \(r\) for the chasing collision shown above. In all cases shown here, a pure bound state is formed in the collision, without the addition or deletion of any extrema. The parameter values are seen to depend only weakly on \(r\), reflecting small changes in spatial structure as \(r\) is varied for a given type of LS. #### v.1.2 Symmetric-asymmetric collision: scenario B Figure 21 shows an overlay of the reduced model trajectories on top of the DNS results, similar to Fig. 18, but for a collision between a symmetric LS with a maximum at its center and a two-wavelength asymmetric LS (scenario B). There is again good agreement between the trajectories of the closest extrema in both cases, including at a quantitative level, as one can see in terms of the quantity \(\chi\) from Fig. 22. #### v.1.3 Flipped symmetric-asymmetric collision: scenario C Figure 23 shows an overlay of the reduced model trajectories on top of the DNS results, similar to Fig. 18 for a collision from scenario C. There is good agreement between the trajectories of the proximate extrema in both cases, including at a quantitative level, as one can see in terms of the quantity \(\chi\) from Fig. 24. #### v.1.4 Head-on collision: scenario D Figure 25 shows an overlay of the reduced model trajectories on top of the DNS results for a collision from scenario D. While the trajectories in Fig. 25 look qualitatively consistent, the quantitative perspective provided by Fig. 26 reveals that the reduced model with optimal parameters clearly deviates from the DNS results in this case. The gradient descent method was applied with \(t_{f}=500\) (see appendix A) to find the set of parameters used in Fig. 26, but no better agreement was found for other choices of \(t_{f}\), which were also tested. The deviation between the reduced model trajectories Figure 19: Residual distance \(\chi\) between proximate extrema (defined in Eq. (13)) versus time from the DNS and the reduced model in scenario A, corresponding to Fig. 18. A good quantitative agreement is observed between DNS and model. The interaction is effectively repulsive, cf. section V.2. Figure 18: Overlay of the reduced model trajectories on top of DNS data for scenario A. 
Colored contour plot shows the DNS results at \(\epsilon=0.03\), \(r=-0.68\) in scenario \(A\) (collision between two asymmetric structures that are two and three wavelengths long, respectively). Yellow dashes indicate trajectories predicted by the reduced model (13) with parameters \(g_{1}=0.53489982\), \(g_{2}=-0.30631508\) and \(\phi=-1.08804942\), as well as \(c_{1}=0.0020\) and \(c_{2}=0.00128276\). See also Fig. 19. and the DNS result is in agreement with expectations. Since the reduced model is exclusively built on the linear structure of SH35, while the creation of new extrema is a highly nonlinear process, the reduced model is not expected to capture the dynamics correctly once nonlinear interactions become dominant. At earlier times, the values of \(\chi\) produced by the reduced model remain close to the DNS data, and only deviate near the time of extremum creation. This highlights the limitations of the otherwise very successful reduced model analysed here. ### Sign of the interaction: attractive or repulsive? It is interesting to note that in all the cases described above, when the interacting extrema were of opposite signs (scenarios A, B), we found \(g_{1}>0\) and \(g_{2}<0\) (for \(-\pi<\phi<0\)). Conversely, when the two interacting extrema were of the same sign (scenarios C, D), we systematically found \(g_{1}<0\) and \(g_{2}>0\) (for \(-\pi<\phi<0\)). However, it is nontrivial to deduce what this implies for the sign of the effective interaction, due to its oscillatory nature. We can define an effective sign of the interaction as follows. First we introduce the _equilibrium distance_ \(d_{eq}\) as the value of \(|x_{2}-x_{1}|\) in the bound state after the collision has occurred. From Eq. (13), we see that equilibrium implies \[\cos\left(\beta d+\phi\right)\exp(-\alpha d)=\frac{c_{2}-c_{1}}{g_{1}-g_{2}}. \tag{15}\] In general, Eq. (15) has multiple solutions, as illustrated in Fig. 27. One may surmise that these multiple solutions are related to the overlapping isolas shown in Fig. 15, which differ in their equilibrium distance by one wavelength. When \(c_{2}=c_{1}\), Eq. (15) has infinitely many solutions, similar to the family of bound two-pulse states described in [26] that differ only in their equilibrium separation. By contrast, for \(c_{2}-c_{1}\neq 0\), the number of Figure 21: Colored contour plot of the DNS of Eq. (1) at \(\epsilon=0.03\), \(r=-0.69\) in scenario \(B\) (collision between symmetric and asymmetric LSs, with extrema of opposite sign facing each other). Yellow dashes indicate the trajectories predicted by the reduced model (13) with parameters \(g_{1}=0.3539801\), \(g_{2}=-0.51798143\), and \(\phi=-1.038100556\), as well as \(c_{1}=0\) and \(c_{2}=-0.0019615\). See also Fig. 22. Figure 20: Parameters of the reduced model obtained for scenario A and different values of the control parameter \(r\). Red diamonds: \(g_{1}\), blue circles: \(g_{2}\), green triangles \(\phi\). The parameters depend only weakly on \(r\), reflecting minor changes in profile with \(r\). All cases shown correspond to collisions generating pure bound states, except \(r=-0.65\), where a metastable bound state forms (Fig. 10), leading to the appearance of additional extrema at late times. In that special case \(r=-0.65\), the fit was performed with a cut-off time \(t_{f}\) (cf. appendix A) after the initial collision but before the generation of the additional extrema. 
In the range \(-0.67\lesssim r\lesssim-0.65\), and at \(r\gtrsim-0.64\), the scenario A collisions do not result in bound states, but instead generate new peaks (cf. Fig. 12). Figure 22: Residual distance \(\chi\) between proximate extrema (defined in Eq. (13)) versus time from the DNS and the reduced model for scenario B, corresponding to Fig. 21. The zoom shows the equilibrium separation. The interaction is effectively attractive, cf. section V.2. solutions is finite. In a generic collision at \(\epsilon>0\), since the colliding LSs approach from a large distance, we expect that the physical value of \(d_{eq}\) is given by the largest positive value of \(d\) that solves Eq. (15). We assume that \(d_{eq}\) is known for a given collision of two LSs starting out at a distance \(|x_{2}-x_{1}|=d_{0}\), with \(c_{1}>c_{2}\) (which applies to all examples shown above). In the absence of interactions, the free propagation of the two LSs will reduce the distance from \(d_{0}\) to \(d_{eq}\) in a time \[t_{free}=\frac{d_{0}-d_{eq}}{c_{1}-c_{2}}. \tag{16}\] In the presence of interactions, this time will change to \(t_{eq}\), a time that may be larger or smaller than \(t_{free}\) due to the oscillatory nature of the interaction. We propose the following terminology. If the equilibrium distance in the collision is reached _earlier_ than for free propagation, i.e. if \(t_{eq}<t_{free}\), then we say that the interaction is _effectively attractive_. By contrast, if the equilibrium distance is reached _later_, i.e. \(t_{eq}>t_{free}\), then we say that the interaction is _effectively repulsive_. We note that the effective sign of the interaction can be determined graphically from the sign of \(\chi(t_{eq})\), since Figure 23: Colored contour plot of the DNS of Eq. (1) at \(\epsilon=0.03\), \(r=-0.645\) in scenario \(C\) (collision between symmetric and asymmetric LSs, with extrema of opposite sign facing each other). Yellow dashes indicate the trajectories predicted by the reduced model (13) with parameters \(g_{1}=-0.43756\), \(g_{2}=0.55907\), and \(\phi=-1.013089\), as well as \(c_{1}=0\) and \(c_{2}=-0.001895\). See also Fig. 24. Figure 26: Residual distance \(\chi\) between proximate extrema (defined in Eq. (13)) versus time from the DNS and the reduced model, corresponding to Fig. 25. Inset shows zoom on early times. The vertical dashed line indicates the time \(t\approx 5500\) when new extrema appear. The best-fit model prediction deviates from the DNS slightly before this, as a consequence of nonlinear effects not captured by the model. Figure 24: Residual distance \(\chi\) between proximate extrema (defined in Eq. (13)) versus time from the DNS and the reduced model for scenario C, corresponding to Fig. 23. The interaction is effectively attractive, cf. section V.2. Figure 25: Colored contour plot of DNS of Eq. (1) at \(\epsilon=0.03\), \(r=-0.65\) in scenario \(D\) (head-on collision of two identical structures). Yellow dashes indicate trajectories predicted by the reduced model (13), with parameters \(g_{1}=-g_{2}=-0.551\), \(\phi=-1.90324\), as well as \(c_{1}=-c_{2}=0.0019291\). See also Fig. 26. Eq. (14) implies \[t_{eq}=t_{free}+\frac{\chi(t_{eq})}{|c_{1}-c_{2}|}. \tag{17}\] The time \(t=t_{eq}\), where the distance \(|x_{2}-x_{1}|=d_{eq}\) is highlighted by an arrow in Figs. 19, 22, 24 and 26; since \(x_{2}-x_{1}\) is constant at \(t>t_{eq}\), a linear increase in \(\chi=const+(c_{1}-c_{2})t\) sets in. The sign of \(\chi(t_{eq})\), i.e. 
the onset of this linear increase, determines the effective sign: if \(\chi(t_{eq})>0\), then the interaction is effectively repulsive but if \(\chi(t_{eq})<0\), then the interaction is effectively attractive. We highlight that even though the collisions from scenarios A and B shown in Figs. 18 and 21 both feature interactions between extrema of opposite signs, the relative signs of their interaction differ: the interaction is effectively repulsive in scenario A (Fig. 19), while it is effectively attractive in scenario B (Fig. 22). This indicates that while the relative sign of interacting extrema does appear to be important, it is not the only factor determining collision outcomes. This is likely related to the observed deviation between scenarios A, B and C, D, respectively, in the parameter range \(-0.65\lesssim r\lesssim-0.64\). ## VI Conclusions In this paper, we have provided an in-depth analysis of LSs in the nonvariational SH35, Eq. (1), using numerical continuation, DNSs, asymptotics and reduced-order modeling to shed new light on the propagation and interaction of LSs in a canonical driven dissipative system. These interactions are highly inelastic, in contrast to systems described by integrable partial differential equations, and lead to both stationary and drifting structures. Moreover, the interactions can be attracting or repelling depending on the nature of the interacting LSs and the parameters. The asymptotic theory predicts a linear dependence of the drift speed of LSs on \(\epsilon\), with excellent quantitative agreement with DNSs and numerical continuation at \(\epsilon\lesssim 0.1\), but significant deviation for \(\epsilon\gtrsim 0.1\). The collisions resulting from this drift are shown to display rich phenomenology: different numbers of extrema can be added or deleted in a collision, depending on the types of structures interacting and the value of \(r\). Alternatively, a pure bound state can be formed, which preserves the number of extrema of the initial structures. We have found that the stability properties of these bound states play a key role in determining whether a collision changes the number of extrema or not: if the bound state exists stably, and has a sufficiently large basin of attraction, then the perturbation resulting from a collision does not change the number of extrema. However, if this is not the case, then extrema will be added or deleted in the collision. When extrema are deleted, this can lead to smaller LSs, or to annihilation in the case of the minimal, single-wavelength asymmetric structure. When extrema are created, metastable states may arise from collisions, which undergo a further nonlinear interaction at late times. This phenomenology is reminiscent of what has been attributed to a _scatter_[24; 25]: unstable stationary or time-periodic patterns which direct orbits during the collision process in the infinite-dimensional state space along their stable and unstable manifolds. While these ideas were proposed in the context of models other than SH35, they may be applicable to some of the collision events observed here. Bound states arising from collisions were shown to lie on isolas which are embedded within the snakes and ladders bifurcation structure. In contrast with the variational case \(\epsilon=0\), the isolas at \(\epsilon>0\) are typically not of figure-eight form, but rather form a complex, spaghetti-like tangled set. 
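In practice, the largest root of Eq. (15) and the effective sign of the interaction can be extracted as in the following sketch, which brackets sign changes of the residual of Eq. (15) on a grid and compares \(t_{eq}\) with the free-propagation time (16). The scan range and root refinement are implementation choices, and the phase convention follows Eq. (15) as written.

```python
import numpy as np
from scipy.optimize import brentq

def equilibrium_distance(c1, c2, g1, g2, phi, alpha, beta, d_max=100.0):
    """Largest root d of Eq. (15) in (0, d_max], or None if no root is found."""
    f = lambda d: np.cos(beta * d + phi) * np.exp(-alpha * d) - (c2 - c1) / (g1 - g2)
    grid = np.linspace(1e-3, d_max, 20000)
    vals = f(grid)
    roots = [brentq(f, grid[i], grid[i + 1])
             for i in range(grid.size - 1) if vals[i] * vals[i + 1] < 0]
    return max(roots) if roots else None

def effective_sign(t_eq, d0, d_eq, c1, c2):
    """'attractive' if d_eq is reached earlier than under free propagation,
    Eq. (16); 'repulsive' otherwise, cf. Eq. (17)."""
    t_free = (d0 - d_eq) / (c1 - c2)
    return "attractive" if t_eq < t_free else "repulsive"
```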
Only for the specific subclass of collisions between chasing asymmetric LSs, are the resulting multi-pulse bound states found to lie on isolas whose figure-eight form is preserved at \(\epsilon>0\). In addition, we have also described a novel example of an isola at \(\epsilon>0\) which is not closed, but features T-point bifurcations at either end, each of which involves the gradual separation of a bound state into two separate asymmetric LSs, each separately corresponding to a homoclinic connection to \(u=0\) in the comoving frame. States of this type require drift and so can only be found in nongradient systems of the type studied here. Finally, a reduced model was also proposed, consisting of two coupled ordinary differential equations, describing the interaction of LSs via their oscillatory exponential tails. The reduced model was shown to reproduce a wide range of collisions, with quantitative accuracy, provided no large-amplitude extrema are created or destroyed in the collision. If the number of large-amplitude extrema is altered in the collision, the model still describes the trajectories of the interacting proximate extrema up until Figure 27: Illustration of how the equilibrium distance \(d_{eq}\) is determined from Eq. (15). shortly before the time at which this occurs. After this time, the model fails to describe the DNS results since it does not include the nonlinear structure of Eq. (1). The fact that the trajectories of LSs can be accurately reproduced by the reduced model for a sizable fraction of all observed cases is indicative of the fact that the collisions between localized patterns are to a large extent particle-like, with a nontrivial interaction potential corresponding to the linear structure of Eq. (1). This model led to considerable insight into the nature of the interactions between LSs in the SH35 model and allowed a relatively simple determination of the conditions under which the interaction is effectively attracting or repelling. Collisions of convectons in binary fluid convection, such as those described in [19], feature dynamics beyond the scope of the order parameter description provided by SH35, in particular due to the possibility of complex temporal behavior. Nonetheless, it remains to be understood to what extent the SH35 collision phenomenology described here can be observed between convectons or similar structures in other continuum systems. Specifically, [19] do not observe bound states, while these play an important role in the selection of collision outcomes described here. It remains an open question whether such bound states exist between convectons, given that binary fluid convectons interact nonlocally via a large-scale solute field generated by the pumping mechanism described in [4]. A simple possibility for obtaining more complex temporal behavior in SH35 would be the inclusion of higher-order time-derivatives, e.g. a second-order term that allows for inertia and wave-like solutions. In this case, the interactions described here, which rely exclusively on the exponential tails of LSs, would be enriched due to the possibility of wave radiation. The study of this modified SH35 with higher-order time derivatives is left for a future study. ###### Acknowledgements. We acknowledge support from the National Science Foundation under grants DMS-1908891 and DMS-2009563. We also thank Nicolas Verschueren van Rees for fruitful discussions and for providing us with a high performance C++ solver which we adapted for our DNS studies. 
## Appendix A Reduced model parameter estimation by gradient descent In this appendix, we describe the details of the gradient descent method used to systematically determine the reduced model parameters \(g_{1}\), \(g_{2}\) and \(\phi\) corresponding to SH35 solutions for a given \(r\) and a given collision scenario, based on high-resolution DNS data. Given \(u(x,t)\) from DNS on a uniform grid with 8192 points, we track the position of the two extrema that are located closest to one another in the collision, as a function of time \(t\) (note that a high spatial resolution is required for well-resolved tracking). We denote the positions of the closest extrema by \(x_{1}^{DNS}(t)\), \(x_{2}^{DNS}(t)\), corresponding to \(x_{1}(t)\), \(x_{2}(t)\) in Eq. (13). To determine which set of reduced model parameters \(g_{1}\), \(g_{2}\), \(\phi\) most accurately represents the DNS results, we define a cost function \[C_{g_{1},g_{2},\phi}[x_{1},x_{2}]\equiv\sqrt{\frac{1}{t_{f}}\int_{0}^{t_{f}} \sum_{i=1}^{2}\left(x_{i}(t)-x_{i}^{DNS}(t)\right)^{2}dt}, \tag{10}\] where at \(t=0\) the colliding LSs are separated by many wavelengths, and \(t_{f}\) can be chosen to be long after the collision, if no large-amplitude extrema are created or destroyed (e.g. Fig. 18), or instead may be chosen just before nonlinear effects set in (e.g. Fig. 26). The global minimum of the functional \(C_{g_{1},g_{2},\phi}[x_{1},x_{2}]\) corresponds to the best fit between reduced model and DNS. Figure 28: Illustration of the robustness of the gradient descent method: starting from various different initial guesses for the model parameters the method converges to a unique set of parameters \(g_{1}\), \(g_{2}\), \(\phi\). Example shown corresponds to scenario \(B\) at \(r=-0.69\). To perform the gradient descent optimization, we start with an initial guess for \(g_{1}\), \(g_{2}\), \(\phi\), and compute the gradient \(\nabla C=(\partial C/\partial g_{1},\partial C/\partial g_{2},\partial C/\partial\phi)\). Then, in each step, we update parameters by the rule \[(g_{1},g_{2},\phi)\rightarrow(g_{1},g_{2},\phi)-\lambda\nabla C, \tag{10}\] with a parameter \(\lambda\) measuring the step size in parameter space. Figure 28 shows sample trajectories in parameter space (top panel: projection onto the \(g_{1},\phi\) plane, bottom panel: projection onto the \(g_{2},\phi\) plane) obtained by this method for a collision from scenario \(B\) (asymmetric LS colliding with a stationary symmetric LS) at \(r=-0.69\) (collision results in pure bound state). For a wide range of initial conditions, the method converges to a well-defined set of parameters. The resulting model trajectories accurately reproduce those observed in the DNS, as can be seen in Figs. 18 and 19. The significance of Fig. 28 is that it indicates that the method is robust with respect to the precise initial guess for the model parameters \(g_{1},g_{2},\phi\). However, several complicating factors need to be taken into account to obtain the correct optimal model trajectories with this procedure.
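Setting those complications aside, the basic loop is simple. The following Python sketch illustrates the finite-difference gradient descent on the cost functional defined above; it assumes that an integrator of the reduced model (13) is available as a callable returning the two extremum trajectories for a given parameter set. The `toy_model` used to make the script executable is only a stand-in for that integrator, and the learning rate, iteration count, and synthetic tracks are illustrative choices rather than the values used in our fits.

```python
import numpy as np

def cost(params, t, x_dns, model_trajectories):
    """Root-mean-square misfit between model and DNS extremum tracks."""
    x_model = model_trajectories(params, t)              # shape (2, len(t))
    return np.sqrt(np.trapz(np.sum((x_model - x_dns) ** 2, axis=0), t) / t[-1])

def fit_parameters(params0, t, x_dns, model_trajectories,
                   rate=1e-3, h=1e-6, n_steps=5000):
    """Plain gradient descent with central finite-difference gradients."""
    p = np.asarray(params0, dtype=float)
    for _ in range(n_steps):
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = h
            grad[i] = (cost(p + dp, t, x_dns, model_trajectories)
                       - cost(p - dp, t, x_dns, model_trajectories)) / (2.0 * h)
        p -= rate * grad                                  # descent update of the text
    return p

def toy_model(params, t):
    """Stand-in for the reduced model (13); NOT the actual interaction model."""
    g1, g2, phi = params
    x1 = -5.0 + g1 * t + 0.5 * np.sin(phi)
    x2 = 5.0 + g2 * t
    return np.vstack([x1, x2])

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 200)
    x_dns = toy_model((0.02, -0.015, -1.0), t)            # synthetic "DNS" tracks
    best = fit_parameters((0.0, 0.0, 0.0), t, x_dns, toy_model)
    print("fitted (g1, g2, phi):", best)
```

Replacing `toy_model` by the actual integrator of Eq. (13) and `x_dns` by the tracked extremum positions from the DNS reproduces the workflow described above.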
2306.13884
Impact of Multiple Phase Transitions in Dense QCD on Compact Stars
This review covers several recent developments in the physics of dense QCD with an emphasis on the impact of multiple phase transitions on astrophysical manifestations of compact stars. It is conjectured that pair-correlated quark matter in $\beta$-equilibrium is within the same universality class as spin-imbalanced cold atoms and the isospin asymmetrical nucleonic matter. This then implies the emergence of phases with broken space symmetries and tri-critical (Lifshitz) points. We construct an equation of state (EoS) that extends the two-phase EoS of dense quark matter within the constant speed of sound parameterization by adding a conformal fluid with a speed of sound $c_{\rm conf.}=1/\sqrt{3}$ at densities $\ge 10~n_{\rm sat}$, where $n_{\rm sat}$ is the saturation density. With this input, we construct static, spherically symmetrical compact hybrid stars in the mass--radius diagram, recover such features as the twins and triplets, and show that the transition to conformal fluid leads to the spiraling-in of the tracks in this diagram. Stars on the spirals are classically unstable with respect to the radial oscillations but can be stabilized if the conversion timescale between quark and nucleonic phases at their interface is larger than the oscillation period. Finally, we review the impact of a transition from high-temperature gapped to low-temperature gapless two-flavor phase on the thermal evolution of hybrid stars.
Armen Sedrakian
2023-06-24T07:21:10Z
http://arxiv.org/abs/2306.13884v2
# Impact of Multiple Phase Transitions in Dense QCD on Compact Stars ###### Abstract This review covers several recent developments in the physics of dense QCD with an emphasis on the impact of multiple phase transitions on astrophysical manifestations of compact stars. To motivate the multi-phase modeling of dense QCD and delineate the perspectives, we start with a discussion of the structure of its phase diagram and the arrangement of possible color-superconducting and other phases. It is conjectured that pair-correlated quark matter in \(\beta\)-equilibrium is within the same universality class as spin-imbalanced cold atoms and the isospin asymmetrical nucleonic matter. This then implies the emergence of phases with broken space symmetries and tricritical (Lifshitz) points. The beyond-mean-field structure of the quark propagator and its non-trivial implications are discussed in the cases of two- and three-flavor quark matter within the Eliashberg theory, which takes into account the frequency dependence (retardation) of the gap function. We then construct an equation of state (EoS) that extends the two-phase EoS of dense quark matter within the constant speed of sound parameterization by adding a conformal fluid with a speed of sound \(c_{\rm conf.}=1/\sqrt{3}\) at densities \(\geq 10\ n_{\rm sat}\), where \(n_{\rm sat}\) is the saturation density. With this input, we construct static, spherically symmetrical compact hybrid stars in the mass-radius diagram, recover such features as the twins and triplets, and show that the transition to conformal fluid leads to the spiraling-in of the tracks in this diagram. Stars on the spirals are classically unstable with respect to the radial oscillations but can be stabilized if the conversion timescale between quark and nucleonic phases at their interface is larger than the oscillation period. Finally, we review the impact of a transition from high-temperature gapped to low-temperature gapless two-flavor phase on the thermal evolution of hybrid stars. QCD matter; phase diagram; compact stars Vol. 0, No. 
Although such configurations are classically unstable with respect to radial oscillations beyond the maximum of the mass as a function of the central density, they can be stabilized if the conversion between nucleonic and quark phases is slow compared to the characteristic period of radial oscillations [17; 18; 19; 20]. To motivate the modeling, Section 2 provides a brief overview of the phase diagram of dense QCD matter as we understand it from the studies of the thermodynamics of various high-density phases, such as the color-superconducting phases [6; 7; 8; 9; 10] or quarkyonic phases [21; 22; 23; 24; 25].
Utilizing the knowledge gained from the studies of imbalanced cold atoms [26] and isospin asymmetrical nuclear matter [27; 28], the possible phase structure of pair-correlated quark matter is conjectured based on the universal features of imbalanced pair-correlated fermionic systems. Section 3 discusses the computations of Green's functions in the two- and three-flavor phases [29; 30] and potential new effects arising beyond the adiabatic (frequency-independent) approximation of the gap. In Section 4, the constant speed-of-sound parameterization of the EoS of quark matter phases [31; 32] is used to explore the mass-radius (\(M\)-\(R\)) diagram of compact stars with deconfinement and multiple phase transitions. For two-phase transitions, one from nucleonic to two-flavor quark matter and another from two-flavor to three-flavor phase of quark matter, we recover the _fourth family of compact stars,_ which is separated from the third family by the instability region [15]. Here we show that a high-density phase of conformal fluid at densities \(\geq 10~{}n_{\rm sat}\), where \(n_{\rm sat}=0.16~{}{\rm fm}^{-3}\) is the saturation density, modifies the classically unstable tracks in the \(M\)-\(R\) diagram compared to the case when such transition does not occur [33]. Such modification is phenomenologically important because of the possible stabilization mechanism of radial oscillation modes of hybrid stars [17; 18; 19; 20], which is discussed in Section 6. In Section 5, we discuss the cooling of compact stars with quark cores. We then simulate the thermal evolution of these stars on a time scale on the order of a million years with a focus on the impact of the phase transition from the gapped to the gapless phase of 2SC matter in the core of the star. Our conclusions are given in Section 7.
## 2 A Brief Review of the Phase Diagram of Dense QCD
Matter in compact stars covers the large number density (\(n\geq n_{\rm sat}\)), large isospin, and relatively low-temperature (\(0\leq T\leq 100\) MeV) portion of the phase diagram of strongly interacting matter. The extremely low-temperature (\(T\leq 0.1\) MeV) regime is relevant for mature compact stars, whereas the higher temperature domain is relevant for supernovas and binary neutron star mergers. The complexity of the phase diagram arises due to the multiple order parameters describing (interrelated) phenomena, which include the deconfinement phase transition (with the Polyakov loop as the order parameter of the center symmetry), the chiral phase transition (with the chiral condensate as the order parameter), and the color-superconducting phases (with the anomalous correlator as the order parameter). Depending on the non-zero value of one or several order parameters, distinct phases may arise: an extensively studied case is color-superconducting phases with various pairing patterns [6; 7; 8; 9; 10]. A more recent suggestion is a confined but chirally symmetric quarkyonic phase at compact star densities [21; 23; 24]. A crude sketch of the phase diagram of strongly interacting matter is shown in Figure 1, along with the regions that are covered by current and future facilities (RHIC, NICA, and FAIR). The low-density and low-temperature region of the phase diagram contains nuclei embedded into a sea of charge-neutralizing electrons and neutrons at higher densities. As the density increases, a first-order phase transition to bulk nuclear matter occurs at around \(0.5~{}n_{\rm sat}\).
A further increase in density can lead to the deconfinement of nucleons to form quark matter for \(n\geq(2-3)\times n_{\rm sat}\). The transition from nuclear matter to deconfined quark matter could be of the first or second order, or a crossover [6; 7; 8; 9; 10]. The first-order phase transition leaves a marked imprint on the macroscopic properties of compact stars because the EoS contains a density jump, which may give rise to new stable branches of compact stars (i.e., their third family) separated from nucleonic stars by a region of instability [34; 35; 36; 37]. A smooth crossover without changes in the values of the order parameter or the wave function of the three-quark states would produce a less dramatic change in the slope of the EoS, best visualized in terms of the speed of sound [10; 22; 38]. As mentioned above, two sequential first-order phase transitions can lead to the appearance of a new branch of compact stars (the fourth family) separated from the third family by an instability region [15; 33; 39; 40], assuming the classical stability criterion \(dM/d\rho_{c}>0\) is valid. In the case of a slow phase transition between the nuclear and quark matter phases, the two families are not separated by an instability region; i.e., they form a continuous branch where the regions with \(dM/d\rho_{c}<0\) are stabilized [20] (see Section 6). Deconfined quark matter at low temperature and high density is expected to have characteristic features of degenerate Fermi systems, which are familiar from condensed matter physics. Therefore, emergent phenomena such as (color) superconductivity are expected in channels where the gluon exchange is attractive [6; 7; 8]. Various color-superconducting phases may arise, depending on the number of flavors and colors involved in the pairing, the ratio \(\Delta/\delta\mu\), where \(\Delta\) is the gap in the quasiparticle spectrum and \(\delta\mu=(\mu_{d}-\mu_{u})/2\) is the difference in the chemical potentials of down (\(d\)) and up (\(u\)) quarks, and the mass of the strange quark \(m_{s}\) in the three-flavor quark matter. The two-flavor candidate phases classified according to these parameters are as follows: 1. The "2SC" phase (where the abbreviation refers to two-superconducting-colors) [6] \[\Delta_{2\mathrm{SC}}\propto\langle\psi^{T}(x)C\gamma_{5}\tau_{2}\lambda_{2}\psi(x)\rangle\neq 0,\quad 0\leq\delta\mu<\Delta/\sqrt{2},\] (1) where \(C=i\gamma^{2}\gamma^{0}\) is the charge conjugation operator, \(\tau_{2}\) is the second component of the Pauli matrix acting in the \(\mathrm{SU}(2)_{f}\) flavor space, and \(\lambda_{2}\) is an antisymmetric Gell-Mann matrix acting in the \(\mathrm{SU}(3)_{c}\) color space. The properties of the 2SC phase resemble those of the ordinary BCS theory, including vanishing resistivity and vanishing heat capacity, because the quarks near the Fermi surface remain gapped. 2. Phases with broken space symmetries, which are associated with a finite momentum of the condensate [6; 8] (hereafter FF phase) or a deformation of the Fermi surface [41] (hereafter DFS phase): \[\Delta_{2\mathrm{SC}}\neq 0,\quad\delta\mu>\Delta/\sqrt{2},\quad\vec{P}\neq 0\quad\mathrm{(FF)}\] (2) \[\Delta_{2\mathrm{SC}}\neq 0,\quad\delta\mu>\Delta/\sqrt{2},\quad\delta\epsilon\neq 0\quad\mathrm{(DFS)}\] (3) where \(\vec{P}\) is the center-of-mass momentum of a Cooper pair and \(\delta\epsilon\) quantifies the quadrupole deformation of the Fermi surfaces of \(u\) and \(d\) quarks.
Figure 1: Sketch of the phase diagram of strongly interacting matter in the temperature and baryonic density plain. Compact stars cover the low-temperature and high-density regimes of this phase diagram. The parameter ranges covered by the FAIR, NICA, and RHIC facilities are also indicated. 3. Mixed phase(s) [42] \[\Delta_{2SC}\propto\langle\psi^{\rm T}(x)C\gamma_{5}\tau_{2}\lambda_{2}\psi(x) \rangle\neq 0,\quad\delta\mu=0,\quad 0\leq x_{s}\leq 1,\] (4) which corresponds to a mixture between a perfectly symmetrical "2SC" superconductor and a normal system accommodating the excess number of \(d\) quarks. Here, \(x_{s}\) is the filling factor defined as the ratio of the superconducting and total volumes. (We assume that there is an excess of \(d\) over \(u\) quarks, as is expected in quark matter in compact stars under \(\beta\)-equilibrium.) The color-flavor-locked (CFL) phase [43] is expected to be the ground state of three-flavor quark matter at asymptotically large densities where the strange quark is massless, Fermi surfaces of quarks coincide, and, therefore, the pairing among quarks occurs in a particularly symmetrical manner. At densities relevant for neutron stars, the perfect CFL phase is unlikely to be realized; rather, some of its variants have \(m_{s}\neq 0\) and/or \(\delta\mu\neq 0\)--chemical potential shifts between various flavors of quarks [44]. Therefore, the phases listed above can be replicated with an allowance of additional non-zero \(us\) and \(ds\) pairings \[\Delta_{ud}\neq 0,\quad\Delta_{sd}\neq 0,\quad\Delta_{su}\neq 0,\quad(m_{s} \neq 0;\,\delta\mu\neq 0).\] (5) A complete phase diagram of quark matter that includes most, if not all, of the phases mentioned above, is not available to date. However, various imbalanced superfluids, such as cold atoms, isospin asymmetrical nuclear matter, and flavor-imbalanced quark matter show a high degree of universality. Thus, possible structures of the phase diagram of quark matter can be conjectured by extrapolating from the detailed studies of the phase diagrams of cold atomic gases [26] and isospin asymmetrical nuclear matter [27]. These are, clearly, speculative and need to be confirmed using explicit computations of relevant quark phases. Figure 2 shows two schematic phase diagrams of color-superconducting matter in the density-temperature plane. For sufficiently large temperatures, the unpaired normal phase is the preferred state of matter, ignoring any other correlation beyond the pairing. The phases with broken symmetries, the FF and the DFS phases, are preferable in temperature-density strips at low temperatures and high densities. At lower temperatures, the PS phase is the preferred one. At higher temperatures, the spatially symmetric 2SC phase dominates. It is seen that the phase diagram contains two tri-critical points, i.e., the points where three different phases coexist. The critical point, which has the FF state at the intersection, is a Lifshitz point as, per construction, it is a meeting point of the modulated (FF), ordered (PS/2SC), and disordered (unpaired) states. Of course, this is the case if the transition temperature to the CFL phase is below the tri-critical temperature; otherwise, the unpaired state should be replaced by a variant of the CFL phase. Note that depending on the parameters of the model, two or one tricritical points may be located on the unpairing line or the line of transition to the CFL phase, as illustrated in Figure 2, left and right panels, respectively. 
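Returning to the two-flavor classification above, the assignment of a pairing state by the ratio of the gap to the chemical-potential mismatch can be stated compactly in code. The sketch below only encodes the threshold logic of Eqs. (1)-(3) (the mixed phase of Eq. (4) and the three-flavor generalization of Eq. (5) are omitted); the temperature dependence of the gap is an assumed BCS-like interpolation used purely for illustration, not a solution of the gap equations.

```python
import numpy as np

def two_flavor_phase(delta, dmu):
    """Qualitative two-flavor phase assignment following Eqs. (1)-(3).

    delta : pairing gap of the flavor-symmetric 2SC state
    dmu   : chemical-potential mismatch (mu_d - mu_u) / 2, same units as delta
    """
    if delta <= 0.0:
        return "unpaired"
    if dmu < delta / np.sqrt(2.0):
        return "2SC"                                   # fully gapped, Eq. (1)
    return "FF or DFS (broken space symmetry)"         # window of Eqs. (2)-(3)

def gap_of_T(T, delta0=1.0, Tc=0.57):
    """Assumed BCS-like interpolation Delta(T); illustrative only."""
    return delta0 * np.sqrt(max(0.0, 1.0 - (T / Tc) ** 2))

if __name__ == "__main__":
    for T in (0.05, 0.30, 0.50, 0.60):                 # in units of delta0
        for dmu in (0.2, 0.6, 0.9):
            print(f"T = {T:4.2f}, dmu = {dmu:3.1f} -> "
                  f"{two_flavor_phase(gap_of_T(T), dmu)}")
```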
The model can be tuned to produce a four-critical point if both points coincide. We also note that the low-density limit corresponds to the strong coupling regime where the pairs are tightly bound, whereas the high-density limit corresponds to the weak-coupling regime. Therefore, one can anticipate signatures of BCS-BEC crossover. These can be seen by examining several characteristic quantities, for example, the ratio of the coherence length to the interparticle distance \(\xi/d\), where \(\xi/d\gg 1\) corresponds to the BCS and \(\xi/d\ll 1\) corresponds to the BEC limit, or the ratio of the gap to the (average) chemical potential \(\Delta/\mu\), where \(\Delta/\mu\ll 1\) corresponds to the BCS and \(\Delta/\mu\gg 1\) corresponds to the BEC limit. For discussions of BCS=-BEC crossover in dense quark matter, see Refs. [45; 46; 47; 48]. This phenomenon shows a high degree of universality as well; see for example, the studies of nuclear matter [49; 50; 51] and cold atoms [26; 52]. ## 3 Structure of Green Functions in Two-Flavor Quark Matter Order-by-order computations of the magnitude of the gaps in the superconducting phases can be carried out in the weak-coupling (extreme high-density) regime, where the one-gluon exchange is the dominant interaction. Approximate Eliashberg-type equations for the flavor-symmetric 2SC phase were solved within one-gluon exchange approximation in Refs. [53; 54], showing that the pairing gap scales with the coupling \(g\) as \(\Delta\sim\mu g^{-5}\exp(-1/g)\). Such a scaling also applies to the high-density CFL phase, where the perturbative approach is more reliable than at densities relevant to the 2SC phase. More recently, Eliashberg-type equations were solved for two-flavor [29] and three-flavor superconductors [30]. The first study used the quark-meson coupling model, keeping only the frequency dependence of the gap, whereas the second study kept frequency and momentum dependences but ignored the imaginary part of the pairing gap. These theories not only improve the description of quark matter but also lead to phenomenologically important implications, such as the presence of electrons in the CFL phase [30], which are not allowed when the gap is constant [55]. We now briefly outline these approaches following Ref. [29]. The inverse Nambu-Gorkov quark propagator is given by \[S^{-1}(q)=\left(\begin{array}{cc}q+\mu\gamma_{0}-m&\bar{\Delta}\\ \Delta&(q-\mu\gamma_{0}+m)^{T}\end{array}\right), \tag{6}\] where \(q\) is the four-momentum and \(\Delta\) is the gap with \(\bar{\Delta}\equiv\gamma_{0}\Delta^{\dagger}\gamma_{0}\). Equation (6) is written for the case of equal number densities of up and down quarks with a common chemical potential \(\mu\) and mass \(m\). The bare quark-meson vertices \(\Gamma^{i}_{\pi}(q)\) and \(\Gamma_{\sigma}(q)\) are given by \[\Gamma^{i}_{\pi}(q)=\left(\begin{array}{cc}\frac{\pi^{i}}{2}\gamma_{5}&0\\ 0&-(\frac{\pi^{i}}{2}\gamma_{5})^{T}\end{array}\right),\quad\Gamma_{\sigma}(q )=\left(\begin{array}{cc}\mathbb{I}&0\\ 0&-\mathbb{I}\end{array}\right), \tag{7}\] where pions couple to quarks using a pseudo-scalar coupling, whereas \(\sigma\)s couple via a scalar coupling, with \(\mathbb{I}\) being a unit matrix in the Dirac and isospin spaces. Their propagators are given by \[D_{\pi}(q)=\frac{1}{q_{0}^{2}-\mathbf{q}^{2}-m_{\pi}^{2}},\qquad D_{\sigma}(q)= \frac{1}{q_{0}^{2}-\mathbf{q}^{2}-m_{\sigma}^{2}}, \tag{8}\] where \(m_{\pi/\sigma}\) values are the meson masses. 
The equation for the gap in the Fock approximation is given via Figure 2: Sketch of the phase diagram of strongly interacting matter in the temperature and baryonic density plain, including (collectively indicated) modulated FF-phase and deformed Fermi surface DFS phase. The tri-critical points are shown with dots; the Lifshitz point is adjacent to the FF, unpaired/CFL phases, and homogenous (PS) phases. In the left panel, it is located on the unpairing or CFL-transition (solid line). The dashed lines correspond to the phase-separation lines among various phases. Signatures of BCS–BEC crossover/transition may emerge when moving from high to low densities. \[\Delta(k) = ig_{\pi}^{2}\int\frac{d^{4}q}{(2\pi)^{4}}\left(-\frac{\tau^{i}}{2} \gamma_{5}\right)^{T}S_{21}(q)\frac{\tau^{i}}{2}\gamma_{5}\delta_{ij}D_{\pi}(q-k) \tag{9}\] \[+ ig_{\sigma}^{2}\int\frac{d^{4}q}{(2\pi)^{4}}(-1)^{T}S_{21}(q) \mathbb{I}D_{\sigma}(q-k),\] where \(g_{\pi}\) and \(g_{\sigma}\) are the coupling constants. Adopting the color-flavor structure of the gap function corresponding to a 2SC superconductor, one then finds \[\Delta_{ij}^{ab}(k)=(\lambda_{2})^{ab}(\tau_{2})_{ij}C\gamma_{5}[\Delta_{+}(k) \Lambda^{+}(k)+\Delta_{-}(k)\Lambda^{-}(k)], \tag{10}\] where \(a,b\dots\) refer to the color space, \(i,j,\dots\) refer to the flavor space, and the projectors onto the positive and negative states are defined in the standard fashion as \(\Lambda^{\pm}(k)=(E_{k}^{\pm}+\mathbf{\alpha}\cdot\mathbf{k}+m\gamma_{0})/2E_{k}^{\pm}\), where \(E_{k}^{\pm}=\pm\sqrt{k^{2}+m^{2}}\) and \(\mathbf{\alpha}=\gamma_{0}\mathbf{\gamma}\). The coupled Equations (6)-(10) must be solved for the gap function, which is a function of three-momentum and the frequency. In the low-temperature limit, the relevant momenta are close to the Fermi momentum and the dependence on the magnitude of the three-momentum can be eliminated by fixing it at the Fermi momentum. The gap Equation (9) then depends only on the energy, which reflects the fact that the pairing interaction is not instantaneous--a common feature of the Fock self-energies in ordinary many-body perturbation theory. The solutions for the positive energy projection of the gap function are shown in Figure 3 as a function of frequency. The structure of the real and imaginary components of the gap function shows a maximum around frequencies at which the meson spectral functions are peaked. Thus, it is important to include the retardation effect when the color superconductor is probed at such frequencies. In the low-frequency limit, it is sufficient to use the BCS approximation where the interaction is instantaneous so that the imaginary part vanishes \(\mathrm{Im}\,\Delta(\omega)=0\) and the real part is a constant \(\mathrm{Re}\,\Delta=\Delta(\omega=0)\). Ref. [30] considered the full momentum and energy dependence of the gap in the Fock approximation within the Yukawa model but neglected the imaginary part of the anomalous self-energy. Their work shows that the retardation implies a CFL phase that is not a perfect insulator, as charge neutrality requires some electrons to be present in matter. This is not the case in the treatment based on the BCS model [55]. Thus, the Figure 3: Dependence of the real (solid) and imaginary (dashed) components of the positive energy projection of the gap function on frequency for two different values of the coupling shown by blue and red lines [29]. 
The BCS theory predicts a constant on-shell value \(\mathrm{Re}\,\Delta=\mathrm{Const.}\) and a vanishing \(\mathrm{Im}\,\Delta(\omega)\). phenomenology of the CFL phase is modified: its specific heat, thermal conductivity, and magnetic response will change due to the contribution of electrons. This example, in which the simple BCS ansatz for the gap is replaced by a more complete gap function, demonstrates some unexpected features of color superconductors, which may be important for their transport and dynamic response to various probes. ## 4 Equation of State and Mass-Radius diagram We have seen in the previous section that the phase diagram of quark matter may have a complicated structure. At minimum, there are two robust phases of color superconducting matter: the low-density two-flavor 2SC phase and the high-density three-flavor CFL phase. See, for example, Ref. [56] for a Nambu-Jona-Lasinio study and compact stars with two phase transitions in this model. However, additional phases are very likely because it is energetically favorable to break the rotational and translational symmetries due to the stress on the paired state induced by the finite mass of the strange quark and \(\beta\)-equilibrium, which induces disparity in the chemical potentials of \(u\) and \(d\) quarks. In addition, or alternatively, quarkyonic phases may interfere. For the specific computation below, we adopt a covariant density functional EoS of nuclear matter in the nucleonic phase [39; 40; 57]. This EoS, in the absence of the phase transition to quark matter, produces nucleonic compact stars with a maximum \(m\equiv M/M_{\odot}\simeq 2.6\), where \(M\) is the gravitational mass of the star and \(M_{\odot}\) is the solar mass. Allowing for phase transition to quark matter, we consider a straightforward extension of the constant speed of sound EoS of Ref. [15] that allows for a conformal phase of quark matter at high densities with a constant speed of sound, i.e., \[p(\epsilon)=\left\{\begin{array}{ll}p_{1},&\epsilon_{1}<\epsilon<\epsilon_{1 }+\Delta\epsilon_{1}\\ p_{1}+s_{1}\big{[}\epsilon-(e_{1}+\Delta\epsilon_{1})\big{]},&\epsilon_{1}+ \Delta\epsilon_{1}<\epsilon<\epsilon_{2}\\ p_{2},&\epsilon_{2}<\epsilon<\epsilon_{2}+\Delta\epsilon_{2}\\ p_{2}+s_{2}\big{[}\epsilon-(e_{2}+\Delta\epsilon_{2})\big{]},&\epsilon_{2}+ \Delta\epsilon_{2}<\epsilon<\epsilon_{3}\\ p_{3},&\epsilon_{3}<\epsilon<\epsilon_{3}+\Delta\epsilon_{3}\\ p_{3}+s_{3}\big{[}\epsilon-(e_{3}+\Delta\epsilon_{3})\big{]},&\epsilon>\epsilon _{3}\end{array}\right. \tag{11}\] where the three pairs of the pressure and energy density \(p_{1,2,3}\) and \(e_{1,2,3}\) correspond, respectively, to the transition from hadronic to quark matter, from a low-density (2SC) quark phase to a high-density (CFL) quark phase, and from the high-density quark phase to the conformal fluid. The squared sound speeds in the quark phases are denoted by \(s_{1}\), \(s_{2}\), and \(s_{3}=c_{\rm conf.}^{2}\). Note that we assume that the 2SC and CFL quark phases are separated by a jump at the phase boundary, as it follows from the study of Ref. [56]. At high densities, the CFL phase reaches the "conformal limit" where the interactions are dominated by the underlying conformal symmetry of QCD. In this limit, the speed of sound squared is \(s_{3}=1/3\) (in units of speed of light), whereas the effects of the pairing gap of the CFL phase can be neglected in a first approximation. 
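The piecewise form (11) translates directly into code. The sketch below keeps all transition points, jumps, and sound speeds as arguments (the specific numerical values adopted in this work are listed in the following paragraphs) and fixes the plateau pressures \(p_{2}\) and \(p_{3}\) by continuity; units must of course be chosen consistently, e.g., both \(p\) and \(\epsilon\) in erg cm\(^{-3}\).

```python
def pressure(eps, p1, e1, de1, s1, e2, de2, s2, e3, de3, s3=1.0 / 3.0):
    """Piecewise EoS of Eq. (11): three first-order plateaus and three
    constant-speed-of-sound branches.  The plateau pressures p2 and p3
    follow from continuity at eps = e2 and eps = e3."""
    p2 = p1 + s1 * (e2 - (e1 + de1))         # pressure at the 2SC -> CFL jump
    p3 = p2 + s2 * (e3 - (e2 + de2))         # pressure at the CFL -> conformal jump
    if eps < e1:
        raise ValueError("below the deconfinement point; use the nuclear EoS")
    if eps <= e1 + de1:
        return p1                            # first jump (nuclear -> 2SC)
    if eps <= e2:
        return p1 + s1 * (eps - (e1 + de1))  # 2SC branch, squared sound speed s1
    if eps <= e2 + de2:
        return p2                            # second jump (2SC -> CFL)
    if eps <= e3:
        return p2 + s2 * (eps - (e2 + de2))  # CFL branch, squared sound speed s2
    if eps <= e3 + de3:
        return p3                            # small jump to the conformal fluid
    return p3 + s3 * (eps - (e3 + de3))      # conformal branch, c_s^2 = 1/3
```

Inverting this relation numerically yields the \(\epsilon(p)\) needed by the TOV integration sketched further below.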
Note that we allow for a small jump between proper CFL and conformal zero-gap fluid, but its effect on the observables is marginal, i.e., a smooth interpolation would not change the results. According to Equation (11), the modeling of the EoS of quark phases involves the following parameters: * The three (energy) densities at which the sequential transitions between the nucleonic phase, 2SC phase, CFL phase, and conformal fluid take place. * The magnitudes of the jumps in the energy density at the points of the transition from nuclear to the 2SC phase, \(\Delta\epsilon_{1}\), from the 2SC to the CFL phase, \(\Delta\epsilon_{2}\), and from the CFL to the conformal fluid phase \(\Delta\epsilon_{3}\). * The speeds of sound in the 2SC and CFL phases \(s_{1}\) and \(s_{2}\). The speed of sound of the conformal fluid is held fixed at \(s_{3}=1/3\). Note that for any phase, \(s\leq 1\) by causality. Our model EoS is constructed using the following parameters. The transition pressure and energy density from nuclear and quark matter are \(p_{1}=1.7\times 10^{35}\) dyn/cm\({}^{2}\) and \(\epsilon_{1}=8.4\times 10^{14}\) g cm\({}^{-3}\), respectively. The magnitude of the first jump \(\Delta\epsilon_{1}=0.6\)\(\epsilon_{1}\). The upper range of the energy density of the 2SC phase is determined as \(\epsilon_{1}^{\rm max}=\delta_{2\rm SC}(\epsilon_{1}+\Delta\epsilon_{1})\), where \(\delta_{2\rm SC}\) is a dimensionless parameter measuring the width of the 2SC phase. The magnitude of the second jump is parametrized in terms of the ratio parameter \(r=\Delta\epsilon_{2}/\Delta\epsilon_{1}\). The extent of the CFL phase is determined by limiting its energy density range to \(\epsilon_{2}^{\rm max}=\delta_{CFL}(\epsilon_{2}+\Delta\epsilon_{2})\). The transition to the conformal fluid is assumed to be of the first order with a small (compared to other scales) energy-density jump equal to \(0.1r\). The transition to the conformal fluid phase occurs at densities \(\epsilon_{3}=2.25\)-\(2.57\times 10^{15}\) g cm\({}^{-3}\), i.e., by about a factor of 10 larger than the saturation density. The speeds of sound squared are fixed as \[s_{1}=0.7,\qquad s_{2}=1,\qquad s_{3}=\frac{1}{3}. \tag{12}\] The values of \(s_{1}\) and \(s_{2}\) are chosen to obtain triplet configurations with large enough masses of hybrid stars. The magnitudes of jumps between the nuclear, 2SC, and CFL phases were chosen suitably to produce twin and triplet configurations [15]. Figure 4 shows a collection of EoS constructed based on Equation (11), which shares the same low-density nuclear EoS. In this collection, we vary the parameter \(r\) (as indicated in the plot) for fixed values \(\delta_{2SC}=\delta_{CFL}=0.27\). The corresponding \(M\)-\(R\) relations for static, spherically symmetrical stars obtained from solutions to Tolman-Oppenheimer-Volkoff equations are shown in Figure 5. For the chosen magnitude of the first jump \(\Delta\epsilon_{1}\), the \(M\)-\(R\) curves show the phenomenon of twins--two stars of the same mass but different radii. The radii of twins differ by about 1 km. The more compact configuration is a hybrid star, i.e., a star with a quark core and nuclear envelope, whereas the less compact counterpart is a purely nucleonic star. The second phase transition may or may not result in a classically stable sequence depending on the value of the parameter \(r\) parameterizing the magnitude of the second jump. 
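The \(M\)-\(R\) relations discussed here and below follow from integrating the TOV equations with such an EoS. A minimal integrator in geometrized units is sketched next before we return to the role of the second jump; the polytropic \(\epsilon(p)\) used in the example is an arbitrary stand-in, not the hybrid EoS of Figure 4.

```python
import numpy as np

MSUN_KM = 1.4766                             # solar mass in km for G = c = 1

def tov_mass_radius(p_c, eps_of_p, dr=1.0e-3, r_start=1.0e-4, r_max=30.0):
    """Integrate the TOV equations outward from the center (G = c = 1, r in km,
    p and eps in km^-2).  eps_of_p is the inverse EoS; returns (R in km, M in M_sun)."""
    def rhs(r, y):
        p, m = y
        eps = eps_of_p(max(p, 0.0))
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        return np.array([dpdr, dmdr])

    r = r_start
    y = np.array([p_c, 4.0 / 3.0 * np.pi * r_start**3 * eps_of_p(p_c)])
    while y[0] > 1.0e-10 * p_c and r < r_max:          # stop near the surface
        k1 = rhs(r, y)
        k2 = rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y = y + (dr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += dr
    return r, y[1] / MSUN_KM

if __name__ == "__main__":
    K = 200.0                                          # p = K * eps**2, illustrative only
    eps_of_p = lambda p: np.sqrt(max(p, 0.0) / K)
    for p_c in (5.0e-5, 1.0e-4, 2.0e-4):               # central pressures in km^-2
        R, M = tov_mass_radius(p_c, eps_of_p)
        print(f"p_c = {p_c:.1e} km^-2 -> R = {R:5.2f} km, M = {M:4.2f} M_sun")
```

Scanning the central pressure and recording the resulting \((R,M)\) pairs produces curves of the type shown in Figure 5.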
For small jumps \(r=0.1\) and \(0.23\), new stable branches arise, which are continuously connecting to the stable 2SC branch (\(r=0.1\)) or are separated by a region where the stars are unstable (\(r=0.23\)). It is seen that, in this case, triplets of stars with different radii but the same Figure 4: The pressure vs. energy density (EoS) for nucleonic matter (long-dash-dotted curve) and a series of EoSs that contain two sequential phase transitions via Maxwell construction manifest in the jumps of the energy density. The models differ by the magnitude of the second jump measured in terms of the ratio \(r=\Delta\epsilon_{2}/\Delta\epsilon_{1}\). masses appear. The densest stars contain, in addition to the 2SC phase, a layer of the CFL phase, whereby the central density on the stable branch can exceed the onset density of the conformal fluid. This implies that the densest member of a triplet will contain in its center conformal fluid with \(c_{\rm conf.}=1/\sqrt{3}\). For each \(M\)-\(R\) curve in Figure 5, the star with a central density at which the conformal fluid first appears is shown by a dot (this density is fixed at 10 \(n_{\rm sat}\)). The stable branch of conformal fluid containing stars is followed by a classically unusable branch with \(dM/dp_{c}<0\). For asymptotically large central densities, the masses and radii increase again. The family of the EoSs that differ only in the value of the parameter \(r\) cross at a "special point". This type of crossing has been observed for twin star configurations with a variation in a particular parameter of the EoS [58]; however, the EoS excluded two sequential phase transitions. The behavior of \(M\)-\(R\) curves at very high central densities differs from the ones that were found in Ref. [33], where a branch of ultracompact twin stars with masses of the order of 1 \(M_{\odot}\) and radii in the range of 6-7 km were found for a single phase transition from the nuclear matter to the quark phase. Thus, we conclude that the high-density asymptotics of the EoS modifies the behavior of the \(M\)-\(R\) curves if the conformal limit is achieved at densities of the order of 10 \(n_{\rm sat}\). The observation above may have phenomenological implications for the following reason. The stability of stellar configurations is commonly determined by the requirement that the star's mass must increase with increasing central density (or central pressure), i.e., \(\partial M/\partial\rho_{c}>0\). An alternative and physically more transparent method is to compute the radial modes of oscillation of a star and determine the stable configurations from the requirement that their frequencies are real. Ref. [17] showed that the classical stability conditions fail if the conversion rate is slow, i.e., if its characteristic timescale is longer than the period of oscillations. In that case, the fundamental modes are stable even when \(\partial M/\partial\rho_{c}<0\); i.e., stars with central densities larger than the one corresponding to the maximum-mass star (which lie to the left from the maximum on the \(M\)-\(R\) diagram in Figure 5) will be stable. This observation also applies to configurations with two-phase transitions, as shown in Refs. [19; 20]. Furthermore, Ref. [20] shows Figure 5: The \(M\)–\(R\) relations corresponding to the EoS shown in Figure 4 for several ratios of the second jump. The right panel enhances the high-mass range to demonstrate the emergence of the triplets and the fourth family of compact stars. 
Note that the different MR curves cross each other at the special point located in the low-mass and low-radius region, in analogy to the single-phase transition case; see Ref. [58]. The blue circles indicate the stars in which the central density corresponds to 10 \(n_{\rm sat}\) at which the conformal fluid sets in. classically unstable stars contribute to the count of same-mass stars, which leads to the appearance of higher-order multiplets such as quadruplets, quintuplets, and sextuplets. We will return to the stability of hybrid stars in Section 6. ## 5 Cooling of Compact Stars with Quark Matter Cores The cooling of compact stars may provide indirect information about quark phases in hybrid stars. The properties of phases of dense quark matter affect both neutrino emission and the specific heat content that determine the cooling rate of a compact object in general; see Refs. [59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75]. Non-superconducting relativistic quark matter cools predominantly via the direct Urca processes involving \(d\), \(u\), and \(s\) quarks [76] \[\begin{split}& d\to u+e+\bar{\nu}_{e},\\ & u+e\to d+\nu_{e},\\ & s\to u+e+\bar{\nu}_{e},\\ & u+e\to s+\nu_{e},\\ \end{split} \tag{13}\] where \(\nu_{e}\) and \(\bar{\nu}_{e}\) are the electron neutrino and antineutrinos. The neutrino emissivity through the direct Urca process for non-strange quarks is given via [77] \[\epsilon_{\beta}=8.8\times 10^{26}\alpha_{s}\bigg{(}\frac{n}{n_{\rm sat}} \bigg{)}Y_{e}^{1/3}T_{9}^{6}\quad\rm ergs\;cm^{-3}\;\;s^{-1}, \tag{14}\] where \(n\) is the baryon density, \(Y_{e}\) is the electron fraction, \(T_{9}\) is the temperature in units of \(10^{9}\) K, and \(\alpha_{s}\) is the running strong coupling constant. The emissivity given by Equation (14) implies that the stars containing unpaired quark matter would cool quickly via this direct Urca process. The cooling would be slower if the quark spectrum contains a gap. In the case of the phenomenologically relevant 2SC phase, two alternatives are possible, depending on whether the Fermi surfaces of quarks are a full gap or they contain zero-gap segments (nodes). The latter feature arises in the case of pairing between fermions on different Fermi surfaces, as discussed in Section 2. Ref. [78] studied a generic case where the quark spectrum is gapped if the parameter \(\zeta=\Delta_{0}/\delta\mu\) associated with the new scale \(\delta\mu=(\mu_{d}-\mu_{u})/2\), where \(\mu_{u,d}\) are the chemical potentials of light quarks and \(\Delta_{0}\) is the gap for \(\delta\mu=0\). The suppression of emissivity by pairing is qualitatively different in the cases in which \(\zeta>1\) and \(\zeta<1\). The novelty arises in the second case, where Fermi surfaces have nodes and particles can be excited around these nodes without any energy cost (which is not the case for gapped Fermi surfaces). Note that in the case of the FF phase, the shift in the chemical potential is replaced by a more general function--the anti-symmetric in the flavor part of the single particle spectrum of up and down quarks. This new physics can be captured by adopting a generic parameterization of the suppression factor of the quark Urca process with pairing suggested in Ref. [78]. 
The neutrino emissivity of the 2SC phase \(\epsilon_{2SC}^{rg}\) can be related to the Urca rate in the normal phase (14) as \[\epsilon_{2SC}^{rg}(\zeta;T\leq T_{c})=2f(\zeta)\epsilon_{\beta},\quad f(\zeta )=\frac{1}{\exp\Bigl{[}(\zeta-1)\frac{\delta\mu}{T}-1\Bigr{]}+1}, \tag{15}\] where the parameters \(\zeta\) and \(\delta\mu\) were introduced above, \(T\) is the temperature, and \(T_{c}\) is the critical temperature of the phase transition from normal to the 2SC phase. Furthermore, the parameter \(\zeta(T)\) is temperature-dependent and we adopt the parametrization \[\zeta(T)=\zeta_{i}-\Delta\zeta g(T), \tag{16}\] where \(\zeta_{i}\) is the initial value, \(\Delta\zeta\) is the constant change in this function, and the function \(g(T)\) describes the transition from the initial value \(\zeta_{i}\) to the asymptotic final value \(\zeta_{f}=\zeta_{i}-\Delta\zeta\). The transition is conveniently modeled by the following function \[g(T)=\frac{1}{\exp\left(\frac{T-T^{*}}{w}\right)+1}, \tag{17}\] which allows one to control the temperature of transition by adjusting the parameter \(T^{*}\) and the smoothness of the transition via the width parameter \(w\). An additional issue to address is the role of the blue quarks that do not participate in the 2SC pairing. Blue quarks may pair among themselves due to the attractive component of the strong force as in the ordinary BCS case (as both members of the Cooper pair are on the same Fermi surface). Then, the emissivity of blue quarks in the superfluid state is given by \[\epsilon^{b}_{\text{BCS}}(T\leq T_{cb})\simeq\epsilon^{b}_{\beta}(T>T_{cb})\exp \left(-\frac{\Delta_{b}}{T}\right), \tag{18}\] where \(\Delta_{b}\) is the gap in the blue quark spectrum, \(T_{cb}\) is the corresponding critical temperature, and \(\epsilon^{b}_{\beta}\) is the neutrino emissivity of blue quarks in the normal state. As discussed in Section 4, the densest members of the triplets contain cores of CFL matter that is fully gapped. In this case, the excitations are the Goldstone modes of the CFL phase. Their emissivity, as well as the specific heat, is rather small compared to other phases due to their very small number density [79]. In the following discussion, we will ignore the role of the CFL phase in the cooling of hybrid stars. In the conformal fluid phase, we expect three-flavor pairing gap \(\Delta\sim\mu g^{-5}\exp(-1/g)\), \(g=\sqrt{4\pi a_{s}}\), with a spin-flavor structure of the CFL phase. Let us turn to the cooling simulations of hybrid stars with a gapless 2SC superconductor. The cooling tracks are shown in Figure 6, and the input physics beyond the emissivities is discussed elsewhere [61; 62; 67; 68]. The key parameter regulating the behavior of the cooling curves in Figure 6 is the temperature \(T^{*}\), which controls the transition from the gapped to ungapped 2SC phase. Similar results were obtained in the context of rapid cooling of the compact star in Cassiopeia (Cas) A remnant in Ref. [61; 62; 68]. The model has a second parameter, the gap for blue-colored quarks \(\Delta_{b}\), which prohibits rapid cooling via the Urca process involving only blue quarks. The third parameter \(w\) in Equation (17) accounts for the finite time scale of the phase transition--see Refs. [62; 68]--but it is important only for the fine-tuning of the cooling curves close to the age of the Cas A. 
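Equations (14)-(17) can be combined into a few lines of code that return the red-green quark emissivity as a function of temperature. The parameter values in the example below (baryon density, electron fraction, \(\delta\mu\), \(\zeta_{i}\), \(\Delta\zeta\), \(T^{*}\), and \(w\)) are illustrative placeholders rather than the values used in the simulations of Figure 6.

```python
import numpy as np

def eps_urca(T9, n_over_nsat=4.0, Ye=0.01, alpha_s=1.0):
    """Direct Urca emissivity of unpaired quark matter, Eq. (14), in erg cm^-3 s^-1."""
    return 8.8e26 * alpha_s * n_over_nsat * Ye ** (1.0 / 3.0) * T9 ** 6

def g_of_T(T, T_star, w):
    """Smoothed gapped -> gapless transition function, Eq. (17)."""
    return 1.0 / (np.exp((T - T_star) / w) + 1.0)

def zeta_of_T(T, zeta_i, dzeta, T_star, w):
    """Temperature-dependent ratio zeta = Delta_0 / delta_mu, Eq. (16)."""
    return zeta_i - dzeta * g_of_T(T, T_star, w)

def eps_2sc(T, dmu, zeta_i, dzeta, T_star, w, **urca_kwargs):
    """Red-green emissivity of the 2SC phase, Eq. (15).  T, dmu, T_star, w in MeV."""
    z = zeta_of_T(T, zeta_i, dzeta, T_star, w)
    f = 1.0 / (np.exp((z - 1.0) * dmu / T - 1.0) + 1.0)   # suppression factor f(zeta)
    T9 = T * 11.605                                       # 1 MeV = 1.1605e10 K
    return 2.0 * f * eps_urca(T9, **urca_kwargs)

if __name__ == "__main__":
    pars = dict(dmu=50.0, zeta_i=1.1, dzeta=0.2, T_star=0.03, w=0.005)  # MeV
    for T in (0.005, 0.02, 0.05, 0.10):                   # MeV
        print(f"T = {T*1e3:5.1f} keV : eps_2SC ~ {eps_2sc(T, **pars):.3e} erg cm^-3 s^-1")
```

With these (arbitrary) numbers the emissivity switches from strongly suppressed above \(T^{*}\) to essentially the full Urca rate below it, which is the behavior that drives the cooling curves discussed next.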
The various cooling tracks shown in Figure 6 correspond to various values of \(T^{*}\) for fixed values of \(w\) and \(\Delta_{b}\) and stellar configuration of mass 1.93 \(M_{\odot}\). It is seen that if \(T^{*}\) is small, then the quark core does not influence the cooling, because during the entire evolution \(T>T^{*}\); therefore the neutrino emission is suppressed by the fully gapped Fermi surfaces of red-green quarks. For large \(T^{*}\), early transition to the gapless phase occurs, and the star cools fast via the direct Urca process. Note that the value of \(T^{*}\) can be fine-tuned to reproduce not only the current temperature of Cas A but also the fast decline claimed to be observed during the last decade or so; see Ref. [80] and references therein. From the brief discussion above, one may conclude that the phase transitions within the cold QCD phase diagram may induce interesting and phenomenologically relevant changes in the cooling behavior of compact stars. Although we will not discuss in any depth the dependence of cooling tracks on the stellar mass, it should be pointed out that the onset of new phases in the interiors of compact stars, for example, hyperonization, meson condensation, and phase transition to quark matter, lead to mass hierarchy in the cooling curves [81; 82; 83; 84]. Typically, one finds that heavier stars that have central densities beyond the threshold for the onset of the new phase cool faster than the light stars containing only nucleonic degrees of freedom. This is also the case for models of stars studied here. For example, stars with masses \(M\sim 1.1--1.6\,M_{\odot}\) remain warm over longer time scales and are thus hotter than their heavy analogs, which develop large quark cores. ## 6 Stability Criteria for Hybrid Stars The oscillation modes of a compact star are important probes of their internal structure, as has been shown in the case of \(g\) modes, which are sensitive to the size of the density jumps at a first-order phase transition between hadronic and quark matter [85; 86; 87]. They are expected to leave an imprint on the emitted gravitational wave signal during the binary inspiral of a neutron star, as well as in the post-merger phase [88; 89; 90; 91]. As discussed briefly in Section 4, high-central density stars on the descending branch of \(M\)-\(R\) diagram can have phenomenological implications if they are stabilized by some mechanism, which we discuss in this section. The main mode of instability for non-rotating, spherically symmetrical fluid stars in general relativity is the instability against the radial \(f\)-mode of oscillations [92]. If the \(f\)-mode frequency \(\omega_{f}^{2}>0\), the stellar configuration is dynamically stable, and it is unstable if \(\omega_{f}^{2}<0\). The location of this instability point on the \(M\)-\(R\) diagram agrees well with the turning point of the mass-central-density (\(M-\rho_{c}\)) curve. The stars on the ascending branch are stable, whereas those on the descending branch are unusable. The maximum mass is the point of marginal stability. Numerical simulations found some violations of this criterion [93; 94], but quantitative deviations are insignificant. However, recent work found that the agreement between these criteria is strongly violated for stars with first-order phase transition, as we review below. Early work on stellar oscillations with phase transitions inside the star was carried out in the Newtonian theory assuming uniform phases [95; 96]. 
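Before turning to the role of the conversion rate at such an interface, it is useful to state the classical turning-point criterion quoted above in operational form: along a TOV sequence ordered by central density, the classically stable models are simply those with \(dM/d\rho_{c}>0\). A minimal sketch, with a schematic mass curve standing in for an actual sequence:

```python
import numpy as np

def classically_stable(rho_c, M):
    """Classical stability flag dM/d(rho_c) > 0 along a TOV sequence.
    rho_c, M : arrays ordered by increasing central density."""
    return np.gradient(np.asarray(M, float), np.asarray(rho_c, float)) > 0.0

if __name__ == "__main__":
    rho_c = np.linspace(0.5, 3.0, 26)                  # arbitrary units
    M = 2.0 - (rho_c - 1.8) ** 2                       # schematic: maximum at rho_c = 1.8
    flags = classically_stable(rho_c, M)
    print("last classically stable central density:", rho_c[flags][-1])
```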
Two possibilities arise depending on the interplay between the scales in the problem: (a) when the conversion rate from one phase to another is fast, the interface between phases oscillates as a whole when perturbed; (b) if, however, conversion is slow, then the interface is fixed over the period of characteristic oscillations. The second case is interesting because, as shown in Ref. [17], the sign of \(\omega_{f}^{2}\) does not change at the maximum mass \(M_{\rm max}\) but stays positive over a segment where \(\partial M/\partial\rho_{c}<0\). This implies that the classically unstable branch becomes Figure 6: Cooling tracks of compact stars with quark cores in the surface-temperature–age diagram. The masses of the stars are the same, \(M=1.93\ M_{\odot}\), and the different curves correspond to different values of the parameter \(T^{*}\) in units of keV, except the dotted line, which corresponds to \(1\ M_{\odot}\) mass nucleonic compact star without a quark core. The observational points with error bars are shown by green circles; the arrows show the upper limits on surface temperatures of known objects. stable against \(f\)-mode oscillations. Several subsequent studies confirmed this feature in the case of single- [18] and two-phase transitions [19; 20]. The case of two-phase transition was extended in several directions in Ref. [20] by focusing on EoS, which supported classical twin and triplet star configurations, as discussed in Section 2. It was shown that in the case of slow conversion, higher-order multiplet stars arise, since now the stars on the \(\partial M/\partial\rho_{c}<0\) segments of the mass-central-density curve are located on the stable branch. Also, the properties of the reaction mode of a compact star [96], which arises in case (a) with one or more rapid phase transitions, were studied. The fundamental modes of hybrid stars are obtained from the set of equations [97; 98] \[\frac{\mathrm{d}\xi}{\mathrm{d}r} = \left(\frac{\mathrm{d}\nu}{\mathrm{d}r}-\frac{3}{r}\right)\xi- \frac{\Delta P}{r\Gamma P}, \tag{19}\] \[\frac{\mathrm{d}\Delta P}{\mathrm{d}r} = \left[\mathrm{e}^{2\lambda}\left(\omega^{2}\mathrm{e}^{-2\nu}-8 \pi P\right)+\frac{\mathrm{d}\nu}{\mathrm{d}r}\left(\frac{4}{r}+\frac{\mathrm{ d}\nu}{\mathrm{d}r}\right)\right](\rho+P)r\xi\] (20) \[- \left[\frac{\mathrm{d}\nu}{\mathrm{d}r}+4\pi(\rho+P)r\mathrm{e} ^{2\lambda}\right]\Delta P,\] where \(\xi=\xi_{\mathrm{dim}}/r\), with \(\xi_{\mathrm{dim}}\) being the Lagrangian displacement, \(r\) the radial coordinate, \(\Delta P\) the Lagrangian perturbation of pressure, \(\rho\) the mass-energy-density, \(\omega\) the angular frequency, \(\Gamma\) the adiabatic index, and \(\mathrm{e}^{2\nu}\) and \(\mathrm{e}^{2\lambda}\) the metric coefficients entering the Tomann-Oppenheimer-Volkoff equations. In a first approximation, the adiabatic index for a chemically equilibrated relativistic fluid can be taken as that of the matter in \(\beta\)-equilibrium \(\Gamma=[(\rho+P)/P](dP/d\rho)\). The set of Equations (19) and (20) can be solved provided the boundary conditions are known. These are specified by assuming that the displacement field is divergence-free at the center and that the Lagrangian variation of the pressure vanishes at the surface of the star: \[\Delta P(r=0)=-3\Gamma P\xi(r=0),\qquad\Delta P(r=R)=0. \tag{21}\] The \(\omega^{2}\) values obtained in this manner are usually labeled according to the number of radial nodes in \(\xi\) and the \(f\) mode corresponds to the nodeless mode. 
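Operationally, Eqs. (19)-(21) define a shooting problem in \(\omega^{2}\): for a trial value one integrates from the center with the condition (21) imposed at \(r\to 0\) and adjusts \(\omega^{2}\) until \(\Delta P\) vanishes at the surface. The sketch below shows only this outer loop; the background profiles \(P\), \(\rho\), \(\Gamma\), \(\nu\), and \(\lambda\) are assumed to be available as callables from a previously computed TOV solution (the attribute names of the `bg` object are placeholders), and the integration is stopped just inside the surface where \(P\) is still finite.

```python
import numpy as np
from scipy.integrate import solve_ivp

def surface_residual(omega2, bg, r0, R):
    """Integrate Eqs. (19)-(20) for a trial omega^2; return DeltaP just inside r = R."""
    def rhs(r, y):
        xi, dP = y
        P, rho, Gamma = bg.P(r), bg.rho(r), bg.Gamma(r)
        dnu, e2lam, e2nu = bg.dnu_dr(r), bg.e2lam(r), bg.e2nu(r)
        dxi = (dnu - 3.0 / r) * xi - dP / (r * Gamma * P)
        ddP = ((e2lam * (omega2 / e2nu - 8.0 * np.pi * P)
                + dnu * (4.0 / r + dnu)) * (rho + P) * r * xi
               - (dnu + 4.0 * np.pi * (rho + P) * r * e2lam) * dP)
        return [dxi, ddP]

    xi0 = 1.0                                            # arbitrary overall normalization
    y0 = [xi0, -3.0 * bg.Gamma(r0) * bg.P(r0) * xi0]     # center condition, Eq. (21)
    sol = solve_ivp(rhs, (r0, R), y0, rtol=1e-8, atol=1e-12)
    return sol.y[1, -1]

def f_mode_omega2(bg, r0, R, bracket=(-2e-2, 2e-2), n=200):
    """Scan omega^2 (geometrized units, km^-2 if r is in km) for a sign change of
    DeltaP(R) and refine by bisection; a negative root signals an unstable f mode."""
    grid = np.linspace(bracket[0], bracket[1], n)
    res = [surface_residual(w2, bg, r0, R) for w2 in grid]
    for lo, hi, flo, fhi in zip(grid[:-1], grid[1:], res[:-1], res[1:]):
        if flo * fhi < 0.0:
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                fm = surface_residual(mid, bg, r0, R)
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            return 0.5 * (lo + hi)
    return None                                          # no root inside the bracket
```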
In the case of multiple phase transitions in the QCD phase diagram, one needs junction conditions that relate the values of Lagrangian perturbations on both sides of the interface between phases. Such junction conditions already appear in the work of Ref. [96] in the Newtonian cases, whereas the the general relativistic case is treated in Ref. [99]. For the _slow conversion rate_ one has the junction condition \[[\Delta P]_{-}^{+}=0,\qquad[\xi]_{-}^{+}=0; \tag{22}\] for _rapid conversion rate_, one has \[[\Delta P]_{-}^{+}=0,\qquad\left[\xi-\frac{\Delta P}{r}\left(\frac{\mathrm{d} P}{\mathrm{d}r}\right)^{-1}\right]_{-}^{+}=0, \tag{23}\] where \(+/-\) refer to the high- and low-density sides of the transition, respectively. At present, it is not possible to state with confidence which limit is realized in quark matter, as the conversion rate varies significantly over the parameter space; see Ref. [100] for a discussion and earlier references. Ref. [20] considered modified junction conditions that smoothly interpolate between the two limiting cases. Phenomenologically, the most interesting implication of the modified stability criteria is the existence of new stable configurations beyond those that are classically stable. In particular, in the case where twins and triplets exist according to classical criteria of stability, additional configurations will arise when conversion between phases at the interface is slow. These can form quadruplets (the maximum number in the case of twins) and quintuplets and sextuplets in the case of triplets. A particular case that allows for classical triplet stars is illustrated in Figure 7, adapted from Ref. [20]. The fundamental mode frequency \(\omega_{f}\) is shown as a function of the central pressure of the stars in two cases when both interfaces (i.e., nucleonic to 2SC and 2SC-CFL) feature rapid or slow conversion. (The case of rapid-slow and slow-rapid conversions are intermediate cases, and we omitted them.) To recover the classical case, one needs to assume that the conversion at each interface is rapid: in this case, the instability region is characterized by the vanishing of the real part of \(\omega_{f}\), as seen in Figure 7. In the case of slow conversions at both interfaces, one finds a continuous positive solution across the values of central densities of the stellar sequences, thus indicating that the stars are always stable, even on the descending branch of the mass-central-pressure curve. To summarize the recent findings regarding the stability of the hybrid stars, we have seen that their stability against the fundamental oscillation modes strongly depends on the junction conditions at the interfaces between the phases. These are determined by the rate of conversion between phases at the phase boundary. In the case of slow phase transitions (i.e., when the conversion time scale is larger than the characteristic period of the oscillations), the usual stability criteria are modified and new stable segments appear that were previously unstable. Alternative variants of junction conditions that are intermediate between slow and rapid conversion were also considered, but the resulting radial modes do not differ significantly from the slow conversion case, with corresponding implications for the stability of the stars [20]. ## 7 Conclusions The investigation of dense QCD through the astrophysics of compact stars is an actively pursued subject. 
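In the shooting integration sketched above, the difference between slow and rapid conversion enters only through the matching of \((\xi,\Delta P)\) across each interface, Eqs. (22) and (23). A small helper of the following kind (the background pressure gradients on the two sides of the interface are assumed known from the TOV solution) is sufficient:

```python
def match_across_interface(xi_minus, dP_minus, r_i,
                           dPdr_minus, dPdr_plus, slow=True):
    """Junction conditions at a sharp interface located at r = r_i.

    '-' / '+' label the low- and high-density sides; dPdr_minus and dPdr_plus
    are the background pressure gradients on each side (they differ because
    the energy density jumps).  Returns (xi_plus, dP_plus).
    """
    dP_plus = dP_minus                                   # [Delta P] = 0 in both cases
    if slow:                                             # Eq. (22): xi also continuous
        xi_plus = xi_minus
    else:                                                # Eq. (23): xi jumps
        xi_plus = (xi_minus
                   - dP_minus / (r_i * dPdr_minus)
                   + dP_plus / (r_i * dPdr_plus))
    return xi_plus, dP_plus
```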
This is due to the substantial observational progress, which includes measurements of the masses and radii of pulsars and gravitational wave signals from mergers of two neutron stars and neutron-star-black-hole binaries. A more thorough comprehension of the thermodynamics of dense QCD, weak interactions, and the dynamics of phase transitions would greatly enhance our ability to model astrophysical phenomena relevant to current observational programs. This work gave an overview of the phase diagram of cold and dense QCD appropriate for compact stars. We stressed that the universality of the phase diagram of imbalanced fermionic superfluids, such as cold atomic gases and nuclear matter, provides a valuable guide to the possible arrangement of the color-superconducting phases in neutron stars, the presence of tri-critical points, and BCS-BEC crossovers. The universality allows one to conjecture the possible structures of the phase diagram in the density-temperature plane, including phases such as the Fulde-Ferrell phase, the deformed Fermi surface phase, and phase separation. As a novel contribution, the previously proposed parametrization of the EoS of dense quark matter with sequential phase transitions was extended to include a conformal fluid at large densities (\(n\geq 10\)\(n_{\text{sat}}\)) with the speed of sound \(c_{\text{conf.}}=1/\sqrt{3}\). The part of the \(M\)-\(R\) diagram that contains twins and triplets remains intact because the transition to the conformal fluid occurs at larger central densities than those achieved in these objects. Nevertheless, for large central densities, we find behavior that is qualitatively different from earlier studies of this regime: the \(M\)-\(R\) curves spiral in; i.e., after reaching a minimum, they turn to the right (larger radius region), thus avoiding the region of ultra-compact stars. Therefore, if the conformal limit is reached for densities much larger than those considered here, the ultra-compact region with radii 6-7 km can be populated [33]. In the opposite case of the early onset of the conformal limit (as discussed in Section 4), the radii will remain large, but small-mass regions can be populated if the stability criteria are modified by the slow conversion at the interface(s) between the phases. Another interesting new observation is that the change in the magnitude of the jump from the 2SC to the CFL phase induces a special point on the \(M\)-\(R\) diagram at which all the curves meet, in analogy to the case of a single-phase transition; see Ref. [58]. Studying this asymptotically large central density regime is phenomenologically relevant if the conversion between the various quark and nuclear phases is slow compared to the characteristic timescale of oscillations, as discussed in Section 6.
In this case, the stars on the descending branch of the mass-central-density curve (and its counterpart on the \(M\)-\(R\) diagram) may be stable [17; 18; 19; 20], contrary to the classical requirement \(dM/d\rho_{c}>0\) for the branch to be stable, which in turn leads to higher-order multiplet (beyond triplets) stars on the \(M\)-\(R\) diagram. This research was funded by Deutsche Forschungsgemeinschaft Grant No. SE 1836/5-2 and the Polish NCN Grant No. 2020/37/B/ST9/01937. The data presented in this study are available on request from the author. The author is grateful to M. Alford, J.-J. Li, and P. B. Rau for collaboration on modeling compact stars with quark cores, and to the referees for helpful comments. The author declares no conflict of interest.
2305.14463
ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability for cross-domain and cross-lingual analyses. This paper introduces ReadMe++, a multilingual multi-domain dataset with human annotations of 9757 sentences in Arabic, English, French, Hindi, and Russian, collected from 112 different data sources. This benchmark will encourage research on developing robust multilingual readability assessment methods. Using ReadMe++, we benchmark multilingual and monolingual language models in the supervised, unsupervised, and few-shot prompting settings. The domain and language diversity in ReadMe++ enable us to test more effective few-shot prompting, and identify shortcomings in state-of-the-art unsupervised methods. Our experiments also reveal exciting results of superior domain generalization and enhanced cross-lingual transfer capabilities by models trained on ReadMe++. We will make our data publicly available and release a python package tool for multilingual sentence readability prediction using our trained models at: https://github.com/tareknaous/readme
Tarek Naous, Michael J. Ryan, Anton Lavrouk, Mohit Chandra, Wei Xu
2023-05-23T18:37:30Z
http://arxiv.org/abs/2305.14463v3
# Towards Massively Multi-domain Multilingual ###### Abstract We present _ReadMe++_, a massively multi-domain multilingual dataset for automatic readability assessment. Prior work on readability assessment has been mostly restricted to the English language and one or two text domains. Additionally, the readability levels of sentences used in many previous datasets are assumed at the document level rather than the sentence level, which raises doubt about the quality of previous evaluations. We address those gaps in the literature by providing an annotated dataset of 6,330 sentences in Arabic, English, and Hindi collected from 64 different domains of text. Unlike previous datasets, ReadMe++ offers more domain and language diversity and is manually annotated at a sentence level using the Common European Framework of Reference for Languages (CEFR) and through a Rank-and-Rate annotation framework that reduces subjectivity in annotation. Our experiments demonstrate that models fine-tuned using ReadMe++ achieve strong cross-lingual transfer capabilities and generalization to unseen domains. ReadMe++ will be made publicly available to the research community. ## 1 Introduction Automatic readability assessment is the task of determining the cognitive load needed by an individual to understand a piece of text (Vajjala, 2021). Assessing the readability of a sentence is useful for many applications, including controlling the complexity of machine-translated text (Agrawal and Carpuat, 2019), ranking search engine results according to their readability level (Fourney et al., 2018), or developing tools such as _Grammarly_ that assist writers in enhancing the quality of their text. Enabling such technologies for all the languages of the world requires readability prediction methods that can generalize across different language families and text genres. Despite the active research on readability assessment, the existing literature in this field has been predominantly focused on the English language, making it difficult to assess how proposed methods perform for different languages. Further, prior work suffers from two important evaluation problems. First, it has often been assumed that all sentences from a particular document have the same level of readability (Martinc et al., 2021; Lee and Vajjala, 2022), such as the grade levels in the Newsela (Xu et al., 2015) dataset, one of the widely used datasets in prior work. We argue that this assumption is inaccurate, since one particular document can contain sentences with varying levels of readability (Arase et al., 2022). It is important to have sentence-level human annotations. Second, previously used evaluation datasets belong only to one particular domain. However, readability assessment is a task that spans all textual domains. We refer to domain as a collection of texts characterized by consistent features such as topic, style, genre, or linguistic register (Ramponi and Plank, 2020). This aligns with studies on domain adaptation, which examine the impact of distribution shifts on model performance, and where language models have been shown to struggle when handling data that belong to a different domain from that of their pre-training corpus (Plank, 2016; Farahani et al., 2021; Arora et al., 2021).

Figure 1: English sentences from ReadMe++ belonging to various domains and readability levels on a 6-point scale (1: easiest, 6: hardest). Human labels are compared to fine-tuned BERT and XLM-R predictions.
The lack of a multi-domain multilingual corpus with high-quality annotations has prevented the development of readability prediction methods that can generalize to different languages and unseen domains. To address all these issues, we present _ReadMe++_, a massively multi-domain, multilingual corpus of manually annotated sentences for readability assessment. Our corpus contains 6,330 sentences collected from 64 distinct domains of text in 3 languages (Arabic, English, and Hindi) that belong to different scripts. While annotating sentences for readability can be a subjective procedure, we introduce a Rank-and-Rate approach for annotation using the Common European Framework of Reference for Languages (CEFR) readability levels1 (6-point scale). Our annotation framework reduces subjectivity and provides reliable annotations (SS3). Examples from our corpus are shown in Figure 1. We experiment with a variety of monolingual and multilingual language models. In the supervised setting, we find a consistent trend in English and Arabic of smaller models outperforming larger ones (SS4). Our results also reveal a big discrepancy in performance between monolingual and multilingual models when used for unsupervised prediction (SS5). We also demonstrate how language models fine-tuned using ReadMe++ achieve strong generalization to a large number of unseen domains of text compared with models trained on previous datasets, highlighting the usefulness of the massive domain diversity. We also show how models trained with our corpus can perform better zero-shot cross lingual transfer when evaluated on 4 non-English languages (SS6). Footnote 1: [https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions](https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions) ## 2 Related Work Datasets for Readability Assessment.Many of the existing datasets that have been used in readability assessment research are mainly collected from sources that provide parallel or non-parallel text with various levels of writing (Vajjala and Lucic, 2018; Xia et al., 2016; Xu et al., 2015; Vajjala and Meurers, 2012; Azpiazu and Pera, 2019; Martinez et al., 2021; Khallaf and Sharoff, 2021). Sentences are automatically assigned readability scores based on the writing level of the document to which they belong (_document-level automatic annotation_). This assumes that all sentences within one article have the same readability level, which is not an entirely correct assumption. For instance, sentences that appear in a 5th grade school book need not be of the exact same level of readability. Additionally, some corpora such as Newsela (Xu et al., 2015) have been rewritten for simplification with the sentence length as a metric to guide the humans performing the simplification. This can cause misleading correlations for metrics that are largely based on sentence length such as many of the traditional feature-based metrics and the neural approach of Martinez et al. (2021). 
Another line of work manually annotated sentences on their level of complexity (_sentence-level manual annotation_) using various scales (0-100, \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{**Dataset**} & \#**Languages** & \#**Scripts** & \#**Domains** & \multicolumn{2}{c}{**Annotation**} \\ & & & & & **Document-level** & **Sentence-level** \\ \hline WieeBit Corpus (Vajjala and Meurers, 2012) & 1 (en) & 1 (Latin) & 1 & ✓ & \\ Newsela (Xu et al., 2015) & 1 (en) & 1 (Latin) & 1 & ✓ & \\ Cambridge (Xia et al., 2016) & 1 (en) & 1 (Latin) & 1 & ✓ & \\ MTDE (De Clero and House, 2016) & 2 (en, \(\mathrm{nl}\)) & 1 (Latin) & 4 & ✓ & ✓ \\ OneStopEnglish (Vajala and Lucic, 2018) & 1 (en) & 1 (Latin) & 1 & ✓ & \\ CompDS (Brunato et al., 2018) & 2 (en, \(\mathrm{it}\)) & 1 (Latin) & 1 & & ✓ \\ Stajner et al., 2017) & 1 (en) & 1 (Latin) & 2 & & ✓ \\ VikiviA (Apriza and Pera, 2019) & 6 (en, \(\mathrm{fr}\), \(\mathrm{it}\), \(\mathrm{es}\), \(\mathrm{eu}\), \(\mathrm{ea}\)) & 1 (Latin) & 1 & ✓ & \\ TextComplexityDE (Naderi et al., 2019) & 1 (de) & 1 (Latin) & 1 & & ✓ \\ Slovenian SB (Martinc et al., 2021) & 1 (sl) & 1 (Latin) & 1 & ✓ & \\ Rao et al., 2021) & 1 (rh) & 1 (Chinese Idoograms) & 1 & ✓ & \\ ALC Corpus (Khalaf and Sharoff, 2021) & 1 (ar) & 1 (Arabic) & 1 & ✓ & \\ Gloss Corpus (Khalaf and Sharoff, 2021) & 1 (ar) & 1 (Arabic) & 1 & ✓ & \\ CEFR-SP (Arase et al., 2022) & 1 (en) & 1 (Latin) & 3 & & ✓ \\ \hline **ReadMe++ (Ours)** & 3 (ar, \(\mathrm{en}\), \(\mathrm{hi}\)) & 3 (Arabic, Latin, Brahmic) & 64 & & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of datasets commonly used in the literature for evaluating readability models. Most previous datasets are annotated on a corpus-level and cover one domain and languages from the latin script only. ReadMe++ provides more domain and typological diversity while being manually annotated at the sentence level. 5-point, 7-point, etc.) (De Clercq and Hoste, 2016; Stajner et al., 2017; Naderi et al., 2019; Brunato et al., 2018). Individual rating of sentences with no descriptions that relate ratings to language abilities results in subjective annotations. The recent work of Arase et al. (2022) addressed this subjectivity problem by using the CEFR levels as a scale for annotation, a standard that describes the language ability of a learner. The introduced CEFR-SP dataset was annotated by English teaching professionals. However, their work only covers 17k English sentences collected from 3 domains: Wikipedia, Newsela, and the Sentence Corpus of Remedial English (SCoRE) (Chujo et al., 2015). Instead of scale, we focus on domain and language diversity. Unlike previous datasets that mostly cover the Latin script and one or two domains, ReadMe++ covers 64 different domains in 3 different scripts and is manually annotated by native speakers according to the CEFR levels using our rank-and-rate annotation approach that mitigates subjectivity in labeling for readability. Table 1 summarizes the differences between ReadMe++ and existing datasets. Multilingual Readability Assessment.Many prior efforts have used neural language models in a supervised manner for readability assessment. Supervised approaches include fine-tuning (Blaneck et al., 2022; Mesgar and Strube, 2018; Sun et al., 2020; Chakraborty et al., 2021; Liao et al., 2021) and combining language model embeddings with linguistic features (Imperial, 2021; Uto et al., 2020; Imperial et al., 2022; Le et al., 2018). 
Most previous works focused on monolingual readability assessment, while fewer studies have been done on the multilingual side. Lee and Vajjala (2022) performed cross-lingual experiments from English to French and Spanish. They only experimented with two target languages from the same Latin script as the pivot language. Rao et al. (2021) performed cross-lingual experiments to transfer from English to Chinese. Azpiazu and Pera (2019) proposed a multi-attentive recurrent neural network approach and experimented on six languages from the Latin script. Although promising, supervised approaches require training data that is often unavailable in non-English languages. Recently, Martinc et al. (2021) proposed the first neural unsupervised approach that combines language model statistics with sentence length as a lexical feature, which was evaluated on English and Slovenian corpora using monolingual language models. The majority of those previous studies on multilingual readability assessment have been evaluating on datasets annotated on a document-level. ReadMe++ provides higher-quality multilingual data that is manually annotated by native speakers and covers a diverse set of scripts, making it a better benchmark to study multilingual readability assessment. ## 3 ReadMe++ Corpus ReadMe++ contains sentences manually annotated for readability in 3 languages (Arabic, English, and \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{**Parent Domain (Abrv)**} & \multirow{2}{*}{**\#Sub-domains**} & \multicolumn{3}{c}{**Sub-domain Example (source)**} \\ \cline{3-5} & & & **ar** & **en** & **hi** \\ \hline Captions (Capt) & 4 & **Images** (Elundi et al., 2020) & **Wildens** (Wang et al., 2019) & **Mwies** (Lison and Tiedemann, 2016) \\ Dialogue (Dia) & 3 & **Opezdemian** (Yuos et al., 2020) & **Negotiation** (He et al., 2018) & **Task-oriented** (Mahyya et al., 2021) \\ Dictionaries (Eduj) & 1 & **Dictionaries** (timalamycin) & **Dictionaries** (dictionary) & **X** \\ Entertainment (Ent) & 1 & **Jokes** (alurnal.com) & **Jokes** (Weller and Sepp, 2019) & **Jokes** (123hiniapikes.com) \\ Framework (Fain) & 1 & **X** & (Milda et al., 2014) & **Jokes** (Weller and Sepp, 2019) & **Jokes** (123hiniapikes.com) \\ Forums (For) & 3 & **QA Weibites** (hi.quora.com) & **StackOverflow** (Thaksum et al., 2020) & **Reddit** (reddit.com) \\ Geides (Goi) & 4 & **Oalline Tutorials** (ar.wikihbow.com) & **Code Documentation** (anthworks.com) & **Cooking Recipes** (meralamondi.in) \\ Legal (Lep) & 3 & **UN Parliament** (Zemski et al., 2016) & **Contittitors** (continuous-conconter.org) & **Statistical Rainburg** (Kapor et al., 2022) \\ Letters (Lep) & 1 & \(X\) & **L Letters** (ofoutline.com) & \(X\) \\ Literature (Lep) & 4 & **Novels** (hindani.org/books/) & **History** (gutlenburg.org) & **Biographies** (Public Domain Books) \\ Medical Text (Med) & 1 & \(X\) & **Clinical Reports** (Unmér et al., 2011) & \(X\) \\ News Agticles (New) & 5 & **Sports** (Alfome and Gawich, 2022) & **Economy** (Misra, 2022) & \(X\) \\ Poetry (Pose) & 1 & **Pectry** (distantown.net) & **Petry** (postpost/foundin.org) & **Petry** (histolingual.inkart.com) \\ Policies (Pol) & 3 & **Olympic Rules** (speciality.org) & **Contracts** (lonweb.com) & **Code of Conduct** (lonza.com) \\ Research (Reis) & 6 & **Polition** (jcopology.subaghdad.edu.ij) & **Science \& Engineering** (arxiv.org) & **Economics** (journal.jarms.org) \\ Social Media (Soc) & 1 & **Twitter** (Zheng et al., 2022) & **Twitter** (Zheng et al., 2022) & **Twitter** 
(Zheng et al., 2022) \\ Specific (SpeSpe) & 2 & **Public Speech** (the.gour/translation) & **Public Speech** (whithe Hindi). Sentences belong to 64 different textual domains that we identify and collect data from. We categorize domains as sub-domains under a parent domain that describes a general theme (policies, speech, etc.) as shown in Table 2. ### Data Collection The collection process varies per domain but can be categorized into four approaches: **(1)** automatically scraping content from a website _(e.g; Wikipedia)_, **(2)** extracting text from sources in PDF format _(e.g; contract templates, reports, etc.)_, **(3)** sampling text from existing data sources _(e.g; dialogue, user reviews, etc.)_, or **(4)** manually collecting sentences _(e.g; dictionary examples, etc.)_. Full collection details for each domain and language are provided in Appendix A. For each domain, we collected all the available text from one or more particular sources. We then sampled 50 paragraphs for each domain. For domains collected from highly unstructured sources like PDFs, the sampling rate was increased to 100 since it is highly likely that samples will contain text that is not useful for annotation (e.g; headers, titles, references, etc.). Finally, from each paragraph, we sample one sentence that will be used for readability annotation. For quality control, we perform manual post-sampling quality check to filter out any low-quality sentences and sentences that contain toxic or offensive language. Context.In addition to the sampled sentences, we collect up to three preceding sentences as context if available. Many of the sampled sentences could be placed in the body of a paragraph. Some may require context to be fully understood. By providing optional context, we ensure annotators will not mark a sentence as confusing and not easily readable simply because they don't know the context in which it appears. Such cases have not been considered in previous work. For example, Arase et al. (2022) avoid this problem by collecting only the first sentence in a paragraph. Corpus Splitting.To ensure all domains are covered in each data split, we randomly split each sub-domain into 80% for training, 10% for validation, and 10% for testing using a random seed of 42. The statistics of each split are shown in Table 3. ### Annotating Sentences with Readability CEFR Levels.The Common European Framework of Reference for Languages (CEFR) levels determines the language ability of a person on a 6-point scale (A1, A2, B1, B2, C1, C2) where A is for basic, B for independent, and C for proficient. Each level of the scale is defined by descriptions of what form of text the person can understand. This makes the CEFR scale a good candidate for readability annotation, where a level is selected for a sentence if it can be understood by readers at this level. For example, a sentence is labeled as B2 if it requires a reader at the B2 level to be understood. Rank-and-Rate.Rating each sentence individually on a scale of readability comes with the drawback of annotators eventually not differentiating between different sentences. This results in most samples being labeled within one or two levels, limiting their usefulness for statistical analyses (McCarty and Shrum, 2000). We propose an alternative _rank-and-rate_ approach for readability annotation which mitigates the issues of individual sentence rating by providing comparative context. 
We randomly group sentences into batches of 5 and ask annotators to first rank sentences of a batch from most to least readable. Annotators are then asked to rate each sentence on a 6-point CEFR scale. By comparing and contrasting sentences within a batch, annotators can better differentiate between the readability of different sentences and produce less-subjective ratings. Details of our annotation interface are shown in Appendix D. We recruited two native Arabic, two native English and two native Hindi speakers for annotation. Prior to the annotation process, training sessions were conducted to familiarize the annotators with the CEFR levels and the annotation framework. Correlation levels between annotators were high, reaching 0.738 for Arabic and 0.816 for English, and 0.651 for Hindi, which confirms the quality of the labeling and effectiveness of the rank-and-rate approach for assessing readability levels. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline \multirow{2}{*}{**Lang**} & \multirow{2}{*}{**Split**} & \multicolumn{6}{c}{**Readability Class**} \\ \cline{3-9} & & \(1_{(41)}\) & \(2_{(42)}\) & \(3_{(B1)}\) & \(4_{(B2)}\) & \(5_{(C1)}\) & \(6_{(C2)}\) & Total \\ \hline \multirow{3}{*}{**ar**} & \#train & 67 & 198 & 414 & 434 & 284 & 146 & 1543 \\ & \#val & 6 & 26 & 44 & 63 & 38 & 19 & 196 \\ & \#test & 8 & 28 & 56 & 68 & 28 & 18 & 206 \\ \hline \multirow{3}{*}{**en**} & \#train & 146 & 540 & 487 & 723 & 313 & 65 & 2274 \\ & \#val & 14 & 69 & 63 & 92 & 40 & 9 & 287 \\ & \#test & 23 & 66 & 78 & 92 & 35 & 6 & 300 \\ \hline \multirow{3}{*}{**hi**} & \#train & 212 & 239 & 229 & 203 & 172 & 150 & 1205 \\ & \#val & 27 & 22 & 39 & 33 & 20 & 11 & 152 \\ \cline{1-1} & \#test & 33 & 34 & 25 & 32 & 30 & 13 & 167 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of sentences per readability level for each data split of ReadMe++. ## 4 Supervised Methods We treat the task as a classification problem and fine-tune multiple discriminative language models. We experiment with models of varying sizes to study how this influence performance on readability assessment. ### Models and Implementation We use **mBERT**Devlin et al. (2019) and **XLM-RoBERTa**Conneau et al. (2020) multilingual models. We also compare to monolingual models by fine-tuning the English **BERT**Devlin et al. (2019) and the **AraBERT**Antoun et al. (2020) and **ArBERT**Abdul-Mageed et al. (2021) models for Arabic. For Hindi, we fine-tune **MuRIL**Khanuja et al. (2021), a model pre-trained on 12 different Indian languages. Model details are summarized in Table 4. In all our experiments, we fine-tune for 10 epochs using the cross-entropy loss and the Adam optimizer with a learning rate of \(1e^{-6}\). We selected the checkpoints with the best validation loss. ### Supervised Results Figure 2 shows the results of the fine-tuned models. To get a better sense of how close the model predictions are to the true labels, we also report the Pearson Correlation (\(\rho\)) between the predictions and the ground-truth labels. An interesting observation seen in both F1 and \(\rho\) for Arabic and English is that smaller-sized models achieve better performance in both monolingual and multilingual cases, going against the commonly observed phenomenon in most NLP tasks where performance increases with model scale. This supports the hypothesis that high-performing readability assessment models need not be models that have obtained the most knowledge about a language Martinez et al. (2021). 
Instead, models that haven't reached that level of language mastery may be better at assessing where a sentence lies in the readability spectrum. However, the opposite trend is observed in Hindi, where performance seems to improve with bigger models. We find that providing context during fine-tuning can help improve performance of the large models in Arabic and English and the small models in Hindi (see Appendix C.1).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**\#Params**} & \multicolumn{4}{c}{**Pre-training Domains**} \\ \cline{3-6} & & Wiki & News & Books & CC \\ \hline Multilingual LMs & & & & & \\ \hline mBERT & 177M & ✓ & & & \\ XLMR\({}_{base}\) & 278M & & & & ✓ \\ XLMR\({}_{large}\) & 559M & & & & ✓ \\ \hline Monolingual Arabic LMs & & & & & \\ AraBERT\({}_{base}\) & 135M & ✓ & ✓ & & \\ AraBERT\({}_{large}\) & 369M & ✓ & ✓ & & ✓ \\ ArBERT & 163M & ✓ & ✓ & ✓ & ✓ \\ \hline Monolingual English LMs & & & & & \\ BERT\({}_{base}\) & 110M & ✓ & & ✓ & \\ BERT\({}_{large}\) & 350M & ✓ & & ✓ & \\ \hline Indian LMs & & & & & \\ MuRIL\({}_{base}\) & 237M & ✓ & & & ✓ \\ MuRIL\({}_{large}\) & 506M & ✓ & & & ✓ \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of language models used in experiments. **CC** stands for Common Crawl.

Figure 2: Test set macro F1 scores (top) and Pearson Correlation (\(\rho\)) (bottom) achieved by various fine-tuned multilingual, monolingual, and Indian models. Smaller models outperform larger ones in Arabic and English, while larger models outperform smaller ones in Hindi.

## 5 Unsupervised Methods Unsupervised methods for readability prediction are an attractive approach as they do not need any training data. We experiment with methods that leverage pre-trained language model distributions and metrics based on traditional text features. ### Language Model-based Metrics **Ranked Sentence Readability Score (RSRS).** Proposed by Martinc et al. (2021), RSRS combines neural language model statistics with the average sentence length as a lexical feature. It computes a weighted sum of the individual word losses using the language model distribution as follows: \[\mathrm{RSRS}=\frac{\sum_{i=1}^{S}{[\sqrt{i}]^{\alpha}\cdot\mathrm{WNLL}(i)}}{S} \tag{1}\] where \(S\) is the sentence length and \(i\) is the rank of the word after sorting each word's Word Negative Log Loss (WNLL) in ascending order. Words with higher losses are assigned higher weights, increasing the total score and reflecting less readability. \(\alpha\) is equal to 2 when a word is an Out-Of-Vocabulary (OOV) token and 1 otherwise, since RSRS assumes that OOV tokens represent rare words that negatively influence readability and thus are assigned higher weights by eliminating the square root. The WNLL is computed as follows: \[\mathrm{WNLL}=-(y_{t}\log{y_{p}}+(1-y_{t})\log(1-y_{p})) \tag{2}\] where \(y_{p}\) is the distribution predicted by the language model, and \(y_{t}\) is the empirical distribution in which the word appearing in the sequence holds a value of 1 while all other words have a value of 0. ### Traditional Feature-based Metrics We use the Average Sentence Length **(ASL)**, the Automated Readability Index **(ARI)** Smith and Senter (1967), and the Flesch-Kincaid Grade Level **(FKGL)** Kincaid and Robert Jr (1975). We also use the Open Source Metric for Measuring Arabic Narratives **(OSMAN)** El-Haj and Rayson (2016), which is a modification of traditional readability formulas tailored for Arabic. Additional details are provided in Appendix B.
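To make Equation (1) concrete, the following is a minimal sketch of an RSRS-style score; it is not the released implementation of Martinc et al. (2021). It uses a causal language model from the HuggingFace transformers library to obtain per-token negative log-likelihoods and, as simplifications, treats sub-word tokens as the "words" of Equation (1) and omits the OOV weight \(\alpha=2\); the model choice (gpt2) is likewise only an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative model choice; the paper also defines RSRS for masked LMs such as
# BERT, where WNLL is computed by masking each word in turn.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def rsrs_score(sentence: str) -> float:
    """Rank-weighted mean of per-token losses (cf. Eq. (1) with alpha = 1)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    # Negative log-likelihood of each token given its left context.
    targets = enc["input_ids"][0, 1:]
    log_probs = torch.log_softmax(logits[0, :-1, :], dim=-1)
    losses = -log_probs[torch.arange(targets.numel()), targets]
    # Sort losses in ascending order and weight the i-th ranked loss by sqrt(i).
    sorted_losses, _ = torch.sort(losses)
    ranks = torch.arange(1, sorted_losses.numel() + 1, dtype=torch.float32)
    return float((torch.sqrt(ranks) * sorted_losses).sum() / sorted_losses.numel())

print(rsrs_score("The cat sat on the mat."))
print(rsrs_score("Epistemological ramifications notwithstanding, the treatise remains opaque."))
```

Higher scores indicate less readable sentences. A word-level variant would aggregate sub-word losses per whitespace-separated word and apply the \(\alpha=2\) weight to out-of-vocabulary items, as in Equation (1).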
### Unsupervised Results Figure 3 shows the test-set results achieved by the unsupervised metrics. We report the Pearson Correlation between the metric scores and ground-truth labels. Overall, we can observe that language model-based RSRS scores outperform feature-based metrics in all languages, highlighting the usefulness of leveraging language model-based statistics for unsupervised readability prediction. Different from the supervised setting, multilingual models achieved much higher correlations than monolingual models for Arabic. We can also notice the better performance of multilingual models for Hindi than models trained on Indian languages. To compare the performance of unsupervised and supervised methods, we also compute a macro F1 score for unsupervised metrics by performing a brute-force search for optimal thresholds for each metric that maximize the F1 score of the validation set. Results comparing fine-tuned models and RSRS are shown in Figure 4. There exists a big gap in performance between unsupervised and supervised methods, with fine-tuned models outperforming unsupervised metrics. While promising, better unsupervised methods are needed to bridge the gap with fine-tuned models, which could be very useful for very low-resource languages.

Figure 3: Test set Pearson Correlation (\(\rho\)) achieved by feature-based unsupervised metrics and RSRS (Martinc et al., 2021) via different language models. RSRS outperforms feature-based metrics across all languages. Arabic monolingual and Indian models perform worse than multilingual models in the unsupervised setting.

Figure 4: Comparison of test set macro F1 scores and Pearson Correlation (\(\rho\)) via supervised fine-tuning and unsupervised RSRS prediction (Martinc et al., 2021). Fine-tuned models clearly outperform RSRS.

## 6 Analyses Models trained using ReadMe++ achieve better domain generalization. We test the ability of models to generalize to unseen domains of text. We create new train/val/test splits from ReadMe++ by randomly removing an increasing number of parent domains from the dataset and all their associated sub-domains. We then use the sentences from the removed domains as the test set and use the rest of the dataset for training and validation.
For direct comparison, we randomly sample the same amount of train/val sentences in each experiment from the CEFR-SP Wiki-Auto dataset (Arase et al., \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**ReadMe++**} & \multicolumn{2}{c}{**CEFR-SP**} & \multicolumn{2}{c}{**CompDS**} \\ \cline{2-7} & **F1** & \(\rho\) & **F1** & \(\rho\) & **F1** & \(\rho\) \\ \hline \multicolumn{7}{l}{**en \(\rightarrow\) ar**} \\ mBERT & **19.4** & **0.502** & 14.68 & 0.407 & 1.94 & 0.131 \\ XLM-R\({}_{base}\) & **30.08** & **0.641** & 10.92 & 0.05 & 4.22 & 0.260 \\ XLM-R\({}_{large}\) & **32.19** & **0.582** & 8.26 & -0.002 & 5.2 & 0.327 \\ \hline \multicolumn{7}{l}{**en \(\rightarrow\) hi**} \\ mBERT & **14.38** & **0.492** & 8.87 & 0.386 & 6.38 & 0.165 \\ XLM-R\({}_{base}\) & **16.5** & **0.65** & 9.73 & 0.134 & 9.85 & 0.391 \\ XLM-R\({}_{large}\) & **24.15** & **0.709** & 14.18 & 0.232 & 9.46 & 0.364 \\ \hline \multicolumn{7}{l}{**en \(\rightarrow\) hi**} \\ mBERT & **12.79** & **0.270** & 7.91 & 0.248 & 10.37 & 0.119 \\ XLM-R\({}_{base}\) & **14.38** & **0.295** & 9.66 & 0.029 & 12.0 & 0.137 \\ XLM-R\({}_{large}\) & **14.68** & **0.239** & 9.88 & -0.043 & 10.06 & 0.099 \\ \hline \multicolumn{7}{l}{**en \(\rightarrow\) de**} \\ mBERT & **15.98** & **0.672** & 12.51 & 0.595 & 6.88 & 0.347 \\ XLM-R\({}_{base}\) & **27.13** & **0.702** & 14.02 & 0.196 & 8.68 & 0.529 \\ XLM-R\({}_{large}\) & **22.19** & **0.701** & 10.0 & -0.092 & 11.84 & 0.408 \\ \hline \hline \end{tabular} \end{table} Table 6: Zero-shot cross lingual transfer results. Models fine-tuned using ReadMe++ significantly outperform models fine-tuned with CEFR-SP (Arase et al., 2022) or CompDS (Brunato et al., 2018) in cross-lingual transfer from English (en) to Arabic (ar), Hindi (hi), Italian (it), and German (de). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{2}{c}{**\#Unseen Domains**} & \multicolumn{2}{c}{**\#train/val**} & \multicolumn{2}{c}{**\#test**} & \multicolumn{2}{c}{**ReadMe++**} & \multicolumn{2}{c}{**CEFR-SP**} \\ \cline{3-8} \multicolumn{2}{c}{} & & & & **F1** & **Corr** & **F1** & **Corr** \\ \hline \multirow{4}{*}{**en**} & 2 (15): Wik, Res & 2766/324 & 410 & **29.29** & **0.516** & 27.42 & 0.505 \\ & 4 (7): Let, Ent, Soc, Gui & 2596/306 & 598 & **32.4** & **0.516** & 11.64 & 0.387 \\ & 6 (18): Res, Fin, Sta, Ent, Dia, New & 2288/267 & 945 & **31.4** & **0.731** & 23.23 & 0.450 \\ & 8 (25): Pol, Cap, Sta, Res, Rev, Leg, Soc, Poe & 1968/231 & 1301 & **37.59** & **0.784** & 19.44 & 0.659 \\ \hline \hline \multicolumn{8}{c}{**\#Unseen Domains**} & \multicolumn{2}{c}{**\#train/val**} & \multicolumn{2}{c}{**\#test**} & \multicolumn{2}{c}{**ReadMe++**} & \multicolumn{2}{c}{**ALC Corpus**} \\ \cline{3-8} \multicolumn{2}{c}{} & & & & **F1** & **Corr** & **F1** & **Corr** \\ \hline \multirow{4}{*}{**ar**} & 2 (2): Tex, Soc & 2146/250 & 268 & **46.93** & **0.793** & 4.49 & -0.295 \\ & 4 (7): Poe, Gui, Ent, Dia & 1942/227 & 495 & **23.4** & **0.572** & 13.23 & -0.239 \\ & 6 (23): For, New, Spe, Cap, Wik, Res & 1710/199 & 755 & **43.84** & **0.691** & 2.21 & 0.144 \\ & 8 (23): Ent, For, Leg, Spe, Wik, Dia, Poe, Res & 1476/173 & 1015 & **38.5** & **0.648** & 8.52 & 0.113 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance on unseen parent domains in English and Arabic. Number between paranthesis corresponds to the total number of sub-domains unseen. 
Models fine-tuned using ReadMe++ achieve better domain generalization and significantly outperform models fine-tuned with CEFR-SP (Arase et al., 2022) for English or the ALC Corpus (Khallaf and Sharoff, 2021) for Arabic. Unseen Domains: **Wikipedia**, **Research**, **Finance**, **Guides**, **Statements**, **Social** Media, **Legal**, **Entertainment**, **Forums**, **News**, **Speech**, **Dialogue**, **Captions**, **Textbooks**, **Policies**, **Poetry**. 2022), since it has a sufficient amount of samples to perform this experiment, and use it to fine-tune mBERT models. We then evaluate those models on the unseen-domains test set from ReadMe++. Results are shown in Table 5. It can be clearly seen that models fine-tuned using the train/val splits of ReadMe++ achieve good generalization to unseen domains and significantly outperform the models trained using CEFR-SP. This demonstrates the notable advantage of the data diversity that ReadMe++ provides in producing more generalizable models. We perform the same experiments for Arabic by comparing to the ALC Corpus (Khallaf and Sharoff, 2021), which is labeled on a 5-level CEFR scale (A1, A2, B1, B2, C). We convert the labels in ReadMe++ to the same scale as the ALC Corpus by combining _C1_ and _C2_ into \(C\) and then perform 5-way classification. The results are shown in Table 5, where we can observe results similar to what is attained in English. The performance gap between models trained using ReadMe++ and the ALC Corpus is more significant than that for CEFR-SP, which shows the importance of having human sentence-level annotations instead of automatic document-level annotation. Models trained using ReadMe++ perform better zero-shot cross-lingual transfer. We perform zero-shot cross-lingual transfer from English to Arabic, Hindi, Italian, and German by fine-tuning multilingual models using the English subset of ReadMe++. For comparison, we also fine-tune these models on the same amount of training and validation sentences that we randomly sample from CEFR-SP Wiki-Auto (Arase et al., 2022) and the full English CompDS (Brunato et al., 2018). We evaluate on the Arabic and Hindi test sets from ReadMe++ as well as Italian CompDS (Brunato et al., 2018) and German TextComplexityDE (Naderi et al., 2019). Since CompDS and TextComplexityDE rate on scales from 1-7 instead of 1-6, we included level 7 into CEFR rating C2. Both datasets had only a few level-7 sentences. Results are shown in Table 6. Models fine-tuned using ReadMe++ achieve better cross-lingual transfer capabilities than models fine-tuned using CEFR-SP or CompDS across all tested languages. In several cases, training on ReadMe++ leads to a 50% increase in F1 score and double the correlation value over other datasets. Unsupervised models struggle with transliterations. We study the effect of transliterated words in Arabic and Hindi on the language-model-based unsupervised scores. RSRS assumes that all words unseen by the model's tokenizer are rare, difficult words that should be assigned higher weights. With the constant emergence of new words that get transliterated from other languages, the language model losses of those words would also be high. For example, these could be names of new figures in politics, emerging diseases, or even historical names that the language model never saw during pre-training. We hypothesize that this design choice in RSRS degrades performance since many of those transliterated words do not add to the difficulty level of the sentence and could be highly familiar to readers.
To test this hypothesis, we asked Arabic and Hindi annotators to indicate if a sentence contains transliterated words when performing rank-and-rate annotation. This resulted in 320 sentences with transliterations in Arabic (16.45% of Arabic data) and 561 sentences in Hindi (36.81% of Hindi data). We penalize the RSRS scores of those sentences as follows: \[\mathrm{RSRS}:=\mathrm{RSRS}-\frac{\lambda*\mathrm{RSRS}}{S} \tag{3}\] where \(\lambda\) is a penalty factor and \(S\) is the length of the sentence. The objective is to analyze whether decreasing those scores results in higher correlation with human ratings, since we assume transliterations cause RSRS scores to be unreasonably high. Figure 5: Effect of increasing the penalty factor \(\lambda\) on the Pearson Correlation \(\rho\) between RSRS scores and human ratings for Arabic and Hindi sentences that contains transliterations. The plot shows a clear improvement in correlation as \(\lambda\) increases, which is more significant for monolingual models than multilingual ones. The results are show in Figure 5 for 0.1 increments of \(\lambda\) using several language models. The trends in the plots clearly corroborate with our hypothesis; the correlation increasing as the penalty becomes higher up to a certain level. The improvement is more significant to monolingual models, reaching up to 6-7%, compared with that of multilingual models that reaches up to 1-3%. Multilingual models appear to be more robust to the spurious correlation caused by transliterations, yet it degrades performance for monolingual models, which provides insight to the performance gap observed in Section 5. These observations indicate that careful consideration for transliterations should be given in the design of future unsupervised methods. ## 7 Conclusion We presented _ReadMe++_, a massively multi-domain multilingual dataset for readability assessment. ReadMe++ provides 6,330 sentences in Arabic, English, and Hindi that are collected from 64 different domains of text and annotated by humans on a sentence-level according to the CEFR scale. We showed that models trained using ReadMe++ achieved strong generalization to unseen domains of text and performed well in zero-shot cross-lingual transfer. We believe that ReadMe++ will not only be valuable to encourage more research on multilingual multi-domain readability assessment, but its diversity and domain labels will be a useful resource to the community for studies on domain generalization. ### Limitations Readability assessment is a general task which can be further specialized for a target audience such as children Lennon and Burdick (2004), second language learners Xia et al. (2016), and adults with intellectual disabilities Feng et al. (2009). In this work, we focus on measuring readability in a general sense for a broad audience of readers. Hence, our data was labeled from the perspective of individuals with college-level education. Future avenues of research may include extending the corpus to add the additional dimension of reader perspective. Furthermore, while we include three diverse languages, the corpus may be further extended to include additional languages. Russian is a strong candidate language since it has been empirically found to be a useful pivot language for cross-lingual transfer Turc et al. (2021). Another important addition could be very low-resource languages to experiment with limited-data scenarios. 
### Ethical Statement We are committed to upholding ethical standards in the construction and dissemination of the ReadMe++ corpus. To ensure the integrity of our data collection process, we have made our best effort to obtain data from sources that are available in the public domain, released under Creative Commons (CC) or similar licenses, or can be used freely for personal and non-commercial purposes according to the resource's Terms and Conditions of Use. These sources include user-generated content on public domain books, publicly available documents/reports, and publicly available datasets. We use a small number of randomly sampled sentences for academic research purposes, specifically for labeling sentence readability. We have included a full list of licenses and terms of use for each source in Appendix E. We would like to note that a couple corpora require access permission from the original authors (i2b2/VA Uzuner et al. (2011), and Hindi Product Reviews Akhtar et al. (2016)). Therefore, sentences and annotations from these sources will not be shared with the community unless access permission has been obtained from the original authors. When collecting sentences from the social media and forums domains, we have **manually excluded** any sampled sentences that contain, offensive/hateful speech, stereotypes, or private user information. All annotators were student employees paid at the standard student employee rate of $18 per hour for their time. Every annotator was informed that their annotations were being used in the creation of a dataset for readability assessment. Our manual filtering of toxic or harmful content ensured that annotators were working with inoffensive data. ### Acknowledgments The authors would like to thank Nour Allah El Senary, Govind Ramesh, and Ryan Punamiya for their help in data annotation. This research is supported in part by the NSF awards IIS-2144493 and IIS-2112633, ODNI and IARPA via the HIATUS program (contract 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
2308.15959
Bäcklund transformations as integrable discretization. The geometric approach
We present interpretation of known results in the theory of discrete asymptotic and discrete conjugate nets from the "discretization by B\"{a}cklund transformations" point of view. We collect both classical formulas of XIXth century differential geometry of surfaces and their transformations, and more recent results from geometric theory of integrable discrete equations. We first present transformations of hyperbolic surfaces within the context of the Moutard equation and Weingarten congruences. The permutability property of the transformations provides a way to construct integrable discrete analogs of the asymptotic nets for such surfaces. Then after presenting the theory of conjugate nets and their transformations we apply the principle that B\"{a}cklund transformations provide integrable discretization to obtain known results on the discrete conjugate nets. The same approach gives, via the Ribaucour transformations, discrete integrable analogs of orthogonal conjugate nets.
Adam Doliwa
2023-08-30T11:24:04Z
http://arxiv.org/abs/2308.15959v3
# Backlund transformations as integrable discretization. ###### Abstract. We present interpretation of known results in the theory of discrete asymptotic and discrete conjugate nets from the _discretization by Backlund transformations_ point of view. We collect both classical formulas of XIXth century differential geometry of surfaces and their transformations, and more recent results from geometric theory of integrable discrete equations. We first present transformations of hyperbolic surfaces within the context of the Moutard equation and Weingarten congruences. The permutability property of the transformations provides a way to construct integrable discrete analogs of the asymptotic nets for such surfaces. Then after presenting the theory of conjugate nets and their transformations we apply the principle that Backlund transformations provide integrable discretization to obtain known results on the discrete conjugate nets. The same approach gives, via the Ribaucour transformations, discrete integrable analogs of orthogonal conjugate nets. Key words and phrases:integrable discretization, Darboux-Backlund transformations, discrete asymptotic nets, Moutard transformation, conjugate nets, fundamental transformation, lattice of planar quadrilaterals, Ribaucour transformation, circular lattice ## 1. Introduction Given integrable system of differential equations one is often interested in finding the corresponding (in the sense of small lattice step size limit) discrete system while preserving the integrability properties. It turns out that usually the simple/naive replacement of derivatives by difference operators spoils the integrability. The discretization has to be made on the level where the integrability features are visible and transparent. Such methods are, for example (i) discrete version of the linear problem [1], (ii) the Hirota method via a bilinear form [52], (iii) extensions of the Zakharov-Shabat dressing method [71], (iv) direct linearization using linear integral equations [86]. Another technique, which is the subject of the present work, is based on Backlund transformations, which are discrete symmetries of integrable equations. It is the fundamental observation made by Decio Levi, which we present quoting abstracts of two of his papers: _It is shown that any Backlund transformation of a nonlinear differential equation integrable by the multichannel Schrodinger eigenvalue problem can be written in the form \(V_{x}=U^{\prime}V-VU\). This allows us to interpret the Backlund transformation formally as a nonlinear differential difference equation for which we can immediately construct the soliton solutions. [68]_ _In this paper, one shows that the best known nonlinear differential difference equations associated with the discrete Schrodinger spectral problem and also with the discrete Zakharov-Shabat spectral problem can be interpreted as Backlund transformations for some continuous nonlinear evolution equations. [69]_ Backlund transformations arose in connection with the construction, by XIXth century geometers [6, 5], of pseudospherical surfaces and corresponding solutions of the sine-Gordon equation. It was shown by Bianchi [7] that such transformations can be iterated leading to an algebraic superposition formula. 
More recently, Wahlquist and Estabrook [110] demonstrated that also the Korteweg-de Vries equation, which is a paradigmatic example of integrable partial differential equation [48], admits invariance under a Backlund-type transformation and possesses an associated permutability theorem. They have used for that purpose the transformation introduced by Darboux [18] in the context of Sturm-Liouville problems. The subject is generally known in the soliton theory as Backlund or Darboux transformations [63, 97, 81, 98, 50, 46], but in the geometric theory of transformations of surfaces exhibiting permutability property, also other names are relevant [24, 96, 56]. The classical results of the old differential geometry of surfaces and their transformations are summarized in [20, 19, 8, 45, 109, 65, 47]. The most general transformations of conjugate nets and their permutability were introduced and studied by Jonas [56, 45]. The Darboux equations of multidimensional conjugate nets [19] have been rediscovered by Zakharov and Manakov [112, 113] as the most general systems solvable by the non-local \(\bar{\partial}\)-dressing method. The discrete analogue of conjugate nets on a surface was introduced first on the geometric level [100, 102], and connected to integrability theory in [25]. The integrable discrete version of the corresponding Darboux system was given in [14]. Geometric studies of multidimensional discrete conjugate nets have been initiated in [34] and were followed in [79, 36, 38]. The integrability of circular lattices, which form a distinguished reduction of discrete conjugate nets and in the continuous limit give orthogonal conjugate nets, was first studied geometrically in [17] and then confirmed by other tools [36, 60, 4, 39]. Compelling reasons for such an interpretation were given in [80, 91] on the basis of the computer graphics, and in [11, 9] from the theory of discrete isothermic nets. I met first time Decio Levi in late eighties when he came to Institute of Theoretical Physics of Warsaw University to visit his friend Antoni Sym, who supervised both my master and PhD theses. They worked together on integrable generalization of pseudospherical surfaces, known now as Bianchi surfaces [70]. In the present work we show a harmonious coexistence of two points of view: _Backlund transformations provide integrable discretization_ by Decio Levi, and _soliton theory is surface theory_[108] by Antoni Sym. In the theory of integrable systems it is quite common that the same equations can be derived using different methods, which give different perspective and emphasize different connections. The paper is constructed as follows. In Section 2 we present the classical theory of hyperbolic surfaces in asymptotic parametrization emphasizing their connection with the Moutard equation [83]. The corresponding transformations and their permutability property [89] give rise to discrete asymptotic nets, which coincide with their natural geometric analogs [102]. The next Section 3 is devoted to conjugate nets and their transformations. We give a discretization of the nets starting from their fundamental transformation and exploiting its permutability properties. In Section 4 we present derivation of integrable discrete version of orthogonal conjugate nets on the base of geometric interpretation of the Ribaucour reduction [96, 24] of the fundamental transformation. 
We conclude the paper with additional discussion on geometry of Hirota's discrete Kadomtsev-Petviashvili (KP) system and with some remarks about dispersionless systems. In the paper we tried to present old results, most of them more than one hundred years old, from contemporary perspective and in unified notation. We remark that interpretation of the fundamental transformation and its Ribaucour reduction in terms of vertex operators within the free-fermion formalism of the multicomponent KP hierarchy was the subject of [40, 39], where also the aspects of integrable discretization of conjugate and orthogonal nets were investigated. ## 2. Discretization of asymptotic nets and the Moutard transformation The classical transformation of Bianchi and Backlund for pseudospherical surfaces and the sine-Gordon equation can be considered as a reduction of Weingarten transformation of hyperbolic surfaces in asymptotic parametrization. We devote the present Section to such asymptotic nets and to the Moutard equation [83], which governs the behaviour of their normal vector. We use the standard notation and terminology of the classical theory of surfaces [44], see also [98]. ### Hyperbolic surfaces, the Moutard equation and the Lelieuvre formulas Let \((u,v)\) be local coordinate system on a surface \(\Sigma\) in \(\mathbb{R}^{3}\), and by \(\boldsymbol{r}(u,v)\) denote the position vector of a generic point. The coordinate lines are called asymptotic when in every point their tangent planes coincide with the tangent plane to the surface. Surfaces admitting the asymptotic coordinates are called hyperbolic. In such case we have \[\boldsymbol{r}_{,uu} =a_{1}\boldsymbol{r}_{,u}+b_{1}\boldsymbol{r}_{,v}\;, \tag{2.2}\] \[\boldsymbol{r}_{,vv} =a_{2}\boldsymbol{r}_{,u}+b_{2}\boldsymbol{r}_{,v}\;, \tag{2.1}\] where \(a_{i}\), and \(b_{i}\), \(i=1,2\), are functions of the local coordinates; here and in all the paper by a subscript after comma we denote the partial derivative with respect to the corresponding variable. As a consequence of the compatibility condition \(\boldsymbol{r}_{,uuvv}=\boldsymbol{r}_{,vvuu}\) there exists a function \(\phi\), given up to an additive constant, such that \[a_{1}=\phi_{,u}\;,\qquad b_{2}=\phi_{,v}\;. \tag{2.3}\] By direct calculation one can check that the normal vector \[\boldsymbol{\nu}=\mathrm{e}^{-\phi}\boldsymbol{r}_{,u}\times\boldsymbol{r}_{,v} \tag{2.4}\] satisfies the Moutard equation \[\boldsymbol{\nu}_{,uv}=f\boldsymbol{\nu}\;, \tag{2.5}\] with the potential \[f=\phi_{,uv}-b_{1}a_{2}\;. \tag{2.6}\] Moreover, using eventually the allowed freedom in definition of \(\phi\) and/or changing the orientation, the position vector is given by the Lelieuvre formulas [67] \[\boldsymbol{r}_{,u}=\boldsymbol{\nu}_{,u}\times\boldsymbol{\nu}\;,\qquad \boldsymbol{r}_{,v}=\boldsymbol{\nu}\times\boldsymbol{\nu}_{,v}\;. \tag{2.7}\] ### The Moutard transformation and its permutability property The following result provides a way how to transform solutions of a Moutard equation into new solution of equation of the same form but with different potential. In consequence, given a hyperbolic surface it allows to construct new surface of the same type. 
**Theorem 2.1**.: _[_83_]_ _Given vector-valued solution \(\boldsymbol{\nu}\) of the Moutard equation corresponding to a given potential \(f\), and given scalar solution \(\theta\) of the same equation, then the vector-valued function \(\hat{\boldsymbol{\nu}}\) defined by the compatible equations_ \[(\theta\hat{\boldsymbol{\nu}})_{,u}= \theta_{,u}\boldsymbol{\nu}-\theta\boldsymbol{\nu}_{,u}\;, \tag{2.9}\] \[(\theta\hat{\boldsymbol{\nu}})_{,v}= -\theta_{,v}\boldsymbol{\nu}+\theta\boldsymbol{\nu}_{,v}\;, \tag{2.8}\] _satisfies the Moutard equation with the potential_ \[\hat{f}=\frac{\hat{\theta}_{,uv}}{\hat{\theta}}\;,\qquad\text{where}\qquad \hat{\theta}=\frac{1}{\theta}\;.\] **Corollary 2.2**.: _Notice that \(\boldsymbol{\nu}\) is the Moutard transform of \(\hat{\boldsymbol{\nu}}\) with \(\hat{\theta}\) taken as the transformation function._ **Corollary 2.3**.: _One can check that the surface_ \[\hat{\boldsymbol{r}}=\boldsymbol{r}+\hat{\boldsymbol{\nu}}\times\boldsymbol{ \nu}, \tag{2.10}\] _can be obtained from \(\hat{\boldsymbol{\nu}}\) via the Lelieuvre formulas (2.7). In particular the local parameters \((u,v)\) form an asymptotic coordinate system on the transformed surface._ _Remark_.: The two-parameter family of lines \(\langle\boldsymbol{r},\hat{\boldsymbol{r}}\rangle\), which are tangent to both surfaces \(\Sigma\) and \(\hat{\Sigma}\) at the corresponding points, forms the so called Weingarten congruence. The corresponding transformation between hyperbolic surfaces \(\Sigma\) and \(\hat{\Sigma}\) is called the Weingarten transformation. Let \(\theta^{i}\), \(i=1,2\), be two solutions of the Moutard equation satisfied by the normal \(\boldsymbol{\nu}\), and let \(\boldsymbol{\nu}^{\{i\}}\), \(i=1,2\), be its corresponding two transforms. By \(\theta^{2\{1\}}\) denote also the Moutard transform of \(\theta^{2}\) with respect to \(\theta^{1}\) \[\left(\theta^{1}\begin{pmatrix}\boldsymbol{\nu}^{\{1\}}\\ \theta^{2\{1\}}\end{pmatrix}\right)_{,u}=\theta^{1}_{,u}\begin{pmatrix} \boldsymbol{\nu}\\ \theta^{2}\end{pmatrix}-\theta^{1}\begin{pmatrix}\boldsymbol{\nu}\\ \theta^{2}\end{pmatrix}_{,u},\qquad\left(\theta^{1}\begin{pmatrix}\boldsymbol{ \nu}^{\{1\}}\\ \theta^{2}\{1\}\end{pmatrix}\right)_{,v}=-\theta^{1}_{,v}\begin{pmatrix} \boldsymbol{\nu}\\ \theta^{2}\end{pmatrix}+\theta^{1}\begin{pmatrix}\boldsymbol{\nu}\\ \theta^{2}\end{pmatrix}_{,v}, \tag{2.11}\] and by \(\theta^{1\{2\}}\) denote the Moutard transform of \(\theta^{1}\) with respect to \(\theta^{2}\) \[\left(\theta^{2}\begin{pmatrix}\boldsymbol{\nu}^{\{2\}}\\ \theta^{1\{2\}}\end{pmatrix}\right)_{,u}=\theta^{2}_{,u}\begin{pmatrix} \boldsymbol{\nu}\\ \theta^{1}\end{pmatrix}-\theta^{2}\begin{pmatrix}\boldsymbol{\nu}\\ \theta^{1}\end{pmatrix}_{,u},\qquad\left(\theta^{2}\begin{pmatrix} \boldsymbol{\nu}^{\{2\}}\\ \theta^{1\{2\}}\end{pmatrix}\right)_{,v}=-\theta^{2}_{,v}\begin{pmatrix} \boldsymbol{\nu}\\ \theta^{1}\end{pmatrix}+\theta^{2}\begin{pmatrix}\boldsymbol{\nu}\\ \theta^{1}\end{pmatrix}_{,v}. \tag{2.12}\] In consequence, the transformation formulas (2.10) on the hyperbolic surfaces level \(\Sigma^{\{1\}}\) and \(\Sigma^{\{2\}}\) take the form \[\boldsymbol{r}^{\{1\}}=\boldsymbol{r}+\boldsymbol{\nu}^{\{1\}}\times \boldsymbol{\nu},\qquad\boldsymbol{r}^{\{2\}}=\boldsymbol{r}+\boldsymbol{\nu}^ {\{2\}}\times\boldsymbol{\nu}. 
Let us apply to \(\boldsymbol{\nu}^{\{1\}}\) the Moutard transformation with \(\theta^{2\{1\}}\), \[\left(\theta^{2\{1\}}\boldsymbol{\nu}^{\{1,2\}}\right)_{,u}=\theta^{2\{1\}}_{,u }\boldsymbol{\nu}^{\{1\}}-\theta^{2\{1\}}\boldsymbol{\nu}^{\{1\}}_{,u},\qquad \left(\theta^{2\{1\}}\boldsymbol{\nu}^{\{1,2\}}\right)_{,v}=-\theta^{2\{1\}}_{,v}\boldsymbol{\nu}^{\{1\}}+\theta^{2\{1\}}\boldsymbol{\nu}^{\{1\}}_{,v}, \tag{2.14}\] and to \(\boldsymbol{\nu}^{\{2\}}\) the transformation with \(\theta^{1\{2\}}\) \[\left(\theta^{1\{2\}}\boldsymbol{\nu}^{\{2,1\}}\right)_{,u}=\theta^{1\{2\}}_{,u }\boldsymbol{\nu}^{\{2\}}-\theta^{1\{2\}}\boldsymbol{\nu}^{\{2\}}_{,u},\qquad \left(\theta^{1\{2\}}\boldsymbol{\nu}^{\{2,1\}}\right)_{,v}=-\theta^{1\{2\}} _{,v}\boldsymbol{\nu}^{\{2\}}+\theta^{1\{2\}}\boldsymbol{\nu}^{\{2\}}_{,v}. \tag{2.15}\] We would like both transformations to give the same result \(\boldsymbol{\nu}^{\{2,1\}}=\boldsymbol{\nu}^{\{1,2\}}\). Moreover, since we have \[(\theta^{1}\theta^{2\{1\}}+\theta^{2}\theta^{1\{2\}})_{,u}=0=(\theta^{1} \theta^{2\{1\}}+\theta^{2}\theta^{1\{2\}})_{,v},\] the additive constants in the definitions of \(\theta^{1}\theta^{2\{1\}}\) and of \(\theta^{2}\theta^{1\{2\}}\) can be fixed such that \[\theta^{2}\theta^{1\{2\}}=\theta^{12}=-\theta^{1}\theta^{2\{1\}}. \tag{2.16}\] By eliminating the derivatives of the normal vectors from equations (2.11)-(2.12) and (2.14)-(2.15), and using the identity (2.16), we arrive at the following important result.

**Theorem 2.4** (Permutability of the Moutard transformations).: _The vector-valued function \(\boldsymbol{\nu}^{\{12\}}\) given by the algebraic formula_ \[\boldsymbol{\nu}^{\{12\}}-\boldsymbol{\nu}=\frac{\theta^{1}\theta^{2}}{\theta ^{12}}\left(\boldsymbol{\nu}^{\{1\}}-\boldsymbol{\nu}^{\{2\}}\right) \tag{2.17}\] _is simultaneously the Moutard transform of \(\boldsymbol{\nu}^{\{1\}}\) with respect to \(\theta^{2\{1\}}\) and the Moutard transform of \(\boldsymbol{\nu}^{\{2\}}\) with respect to \(\theta^{1\{2\}}\)._ \[\begin{CD}\boldsymbol{\nu}@>{\theta^{1}}>{}>\boldsymbol{\nu}^{\{1\}}\\ @V{\theta^{2}}V{}V@V{}V{\theta^{2\{1\}}}V\\ \boldsymbol{\nu}^{\{2\}}@>{\theta^{1\{2\}}}>{}>\boldsymbol{\nu}^{\{12\}}.\end{CD}\]

_Remark_.: Actually, because of the free additive parameter in the definition of \(\theta^{12}\), we obtain in this way a one-parameter family of transforms.

**Corollary 2.5**.: _The one-parameter family of vector-valued functions \(\boldsymbol{r}^{\{12\}}\) given by the algebraic formula_ \[\boldsymbol{r}^{\{12\}}=\boldsymbol{r}+\frac{\theta^{1}\theta^{2}}{\theta^{12 }}\boldsymbol{\nu}^{\{1\}}\times\boldsymbol{\nu}^{\{2\}} \tag{2.18}\] _provides simultaneously the Weingarten transforms of the hyperbolic surface \(\boldsymbol{r}^{\{1\}}\) with respect to \(\theta^{2\{1\}}\) and of the hyperbolic surface \(\boldsymbol{r}^{\{2\}}\) with respect to \(\theta^{1\{2\}}\)._

### The Moutard transformation as integrable discretization

The successive application of the Weingarten-Moutard transforms to a given hyperbolic surface \(\Sigma\), taking into account their algebraic superposition formula, allows one to build a two-dimensional lattice \(\Sigma^{m,n}\) of such surfaces. Let us fix a point on \(\Sigma\) and trace the properties of the lattice in \(\mathbb{R}^{3}\) of the corresponding points, represented by vectors \(\boldsymbol{r}^{m,n}\).
**Proposition 2.6**.: _The five points \(\boldsymbol{r}^{m,n}\), \(\boldsymbol{r}^{m\pm 1,n}\), \(\boldsymbol{r}^{m,n\pm 1}\) belong to a common plane._

Proof.: The statement follows from equations (2.13), which imply that the lines \(\langle\boldsymbol{r}^{m,n},\boldsymbol{r}^{m+1,n}\rangle\) are orthogonal to both the vectors \(\boldsymbol{\nu}^{m,n}\) and \(\boldsymbol{\nu}^{m+1,n}\), and the lines \(\langle\boldsymbol{r}^{m,n},\boldsymbol{r}^{m,n+1}\rangle\) are orthogonal to both the vectors \(\boldsymbol{\nu}^{m,n}\) and \(\boldsymbol{\nu}^{m,n+1}\). The common plane of the five points is orthogonal to the vector \(\boldsymbol{\nu}^{m,n}\).

**Corollary 2.7**.: _Equations (2.13) are discrete analogs of the Lelieuvre formulas (2.7), see also [59]._

_Remark_.: The three points \(\boldsymbol{r}^{m,n}\), \(\boldsymbol{r}^{m\pm 1,n}\) define the tangent plane of the first discrete coordinate curve at the point \(\boldsymbol{r}^{m,n}\), and the three points \(\boldsymbol{r}^{m,n}\), \(\boldsymbol{r}^{m,n\pm 1}\) define the tangent plane of the second curve at this point. Both planes coincide with the common plane of Proposition 2.6, which allows one to call the curves discrete asymptotic lines.

**Definition 2.1** ([102, 100]).: A discrete asymptotic net is a map \(\boldsymbol{r}\colon\mathbb{Z}^{2}\to\mathbb{R}^{3}\) such that for arbitrary \((m,n)\in\mathbb{Z}^{2}\) the five points \(\boldsymbol{r}^{m,n}\), \(\boldsymbol{r}^{m\pm 1,n}\), \(\boldsymbol{r}^{m,n\pm 1}\) are coplanar.

**Corollary 2.8**.: _The algebraic superposition formula (2.17) of the Moutard transformations on the level of the normal vector \(\boldsymbol{\nu}^{m,n}\) can be interpreted as the discrete Moutard equation [89]_ \[\boldsymbol{\nu}^{m+1,n+1}-\boldsymbol{\nu}^{m,n}=f^{m,n}\left(\boldsymbol{ \nu}^{m+1,n}-\boldsymbol{\nu}^{m,n+1}\right). \tag{2.19}\]

_Remark_.: To obtain, in the continuum limit, the Moutard equation (2.5) and the Lelieuvre formulas (2.7) from (2.19) and (2.13), one first has to change the orientation of the normal vector according to \(\boldsymbol{\nu}^{m,n}\to(-1)^{n}\boldsymbol{\nu}^{m,n}\).

Integrability of the hyperbolic surfaces can be understood as the existence of transformations satisfying the algebraic permutability principle. It turns out that one can construct analogous transformations for integrable discrete hyperbolic surfaces (discrete asymptotic nets). The corresponding transformation formulas, their geometric interpretation, and their permutability property have been considered in [89, 84, 27]. See also [55] for a development of their theory in the direction of applications in computer graphics.

Hyperbolic surfaces with constant Gauss curvature, i.e. pseudospherical surfaces, are described by the celebrated sine-Gordon equation for the angle between the asymptotic coordinates on such surfaces. Restricting the Weingarten-Moutard transformation to such surfaces we obtain discrete asymptotic nets whose elementary quadrilaterals have opposite sides of equal length, and which provide integrable discrete analogs of pseudospherical surfaces [101, 111, 10]. The corresponding Bianchi permutability theorem for the Backlund transformation of the sine-Gordon equation provides its integrable difference analog [53], and describes the angle between asymptotic coordinates on a discrete pseudospherical surface.
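The construction of discrete asymptotic nets can be illustrated with a short numerical experiment. The sketch below (an added illustration, not part of the original text) generates a solution of the discrete Moutard equation (2.19) from arbitrary initial data, integrates the discrete Lelieuvre formulas (2.13), written here in one consistent orientation convention as \(\boldsymbol{r}^{m+1,n}-\boldsymbol{r}^{m,n}=\boldsymbol{\nu}^{m+1,n}\times\boldsymbol{\nu}^{m,n}\) and \(\boldsymbol{r}^{m,n+1}-\boldsymbol{r}^{m,n}=\boldsymbol{\nu}^{m,n+1}\times\boldsymbol{\nu}^{m,n}\), and checks the planarity of the vertex stars required by Proposition 2.6 and Definition 2.1.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 6, 6
nu = np.zeros((M, N, 3))
nu[:, 0] = rng.normal(size=(M, 3))            # initial normals on the two coordinate axes
nu[0, :] = rng.normal(size=(N, 3))
f = rng.normal(size=(M, N))                   # arbitrary potential f^{m,n}
for m in range(M - 1):                        # discrete Moutard equation (2.19)
    for n in range(N - 1):
        nu[m+1, n+1] = nu[m, n] + f[m, n] * (nu[m+1, n] - nu[m, n+1])

r = np.zeros((M, N, 3))                       # integrate the discrete Lelieuvre formulas (2.13)
for m in range(M - 1):
    r[m+1, 0] = r[m, 0] + np.cross(nu[m+1, 0], nu[m, 0])
for m in range(M):
    for n in range(N - 1):
        r[m, n+1] = r[m, n] + np.cross(nu[m, n+1], nu[m, n])

# planarity of the vertex stars: every edge at r^{m,n} is orthogonal to nu^{m,n}
err = max(abs(np.dot(p - r[m, n], nu[m, n]))
          for m in range(1, M - 1) for n in range(1, N - 1)
          for p in (r[m+1, n], r[m-1, n], r[m, n+1], r[m, n-1]))
print(err)                                    # zero up to rounding errors
```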
The Bianchi surfaces [8], whose integrability was discussed in [70], are hyperbolic surfaces characterized by the property that their Weingarten transformation preserves the Gauss curvature at the corresponding points; see [12, 103, 37, 105] for a discussion of discrete analogs of this and other integrable reductions of asymptotic nets.

### Discrete BKP equation

Integrable nonlinear equations appear in the context of the discrete Moutard equation, prior to any reductions to discrete hyperbolic surfaces, for more than two discrete variables [89, 28]. Consider a map \(\boldsymbol{\nu}\colon\mathbb{Z}^{N}\to\mathbb{R}^{M}\), \(N,M\geq 3\), which satisfies the system of discrete Moutard equations in each pair of variables \[\boldsymbol{\nu}^{\boldsymbol{n}+\boldsymbol{e}_{i}+\boldsymbol{e}_{j}}- \boldsymbol{\nu}^{\boldsymbol{n}}=f_{ij}^{\boldsymbol{n}}\left(\boldsymbol{ \nu}^{\boldsymbol{n}+\boldsymbol{e}_{i}}-\boldsymbol{\nu}^{\boldsymbol{n}+ \boldsymbol{e}_{j}}\right),\qquad 1\leq i\neq j\leq N; \tag{2.20}\] here \(\boldsymbol{n}\in\mathbb{Z}^{N}\), and \(\boldsymbol{e}_{i}\) is the unit vector in the \(i\)th direction, \(i=1,\ldots,N\). Compatibility of the system leads to the following set of nonlinear equations \[1+f_{jk}^{\boldsymbol{n}+\boldsymbol{e}_{i}}(f_{ij}^{\boldsymbol{n}}-f_{ik}^ {\boldsymbol{n}})=f_{ik}^{\boldsymbol{n}+\boldsymbol{e}_{j}}f_{ij}^{ \boldsymbol{n}}=f_{ij}^{\boldsymbol{n}+\boldsymbol{e}_{k}}f_{ik}^{\boldsymbol {n}},\qquad i,j,k\quad\text{distinct},\qquad f_{ji}^{\boldsymbol{n}}=-f_{ij}^{\boldsymbol{n}}. \tag{2.21}\] The second equality implies the existence of a potential \(\tau:\mathbb{Z}^{N}\to\mathbb{R}\), in terms of which the functions \(f_{ij}\) can be written as \[f_{ij}^{\boldsymbol{n}}=\frac{\tau^{\boldsymbol{n}+\boldsymbol{e}_{i}}\tau^ {\boldsymbol{n}+\boldsymbol{e}_{j}}}{\tau^{\boldsymbol{n}}\,\tau^{\boldsymbol {n}+\boldsymbol{e}_{i}+\boldsymbol{e}_{j}}},\qquad i<j. \tag{2.22}\] The first equality can then be rewritten in the form of the system of Miwa's discrete BKP equations [82] \[\tau^{\boldsymbol{n}}\tau^{\boldsymbol{n}+\boldsymbol{e}_{i}+\boldsymbol{e }_{j}+\boldsymbol{e}_{k}}=\tau^{\boldsymbol{n}+\boldsymbol{e}_{i}+\boldsymbol {e}_{j}}\tau^{\boldsymbol{n}+\boldsymbol{e}_{k}}-\tau^{\boldsymbol{n}+ \boldsymbol{e}_{i}+\boldsymbol{e}_{k}}\tau^{\boldsymbol{n}+\boldsymbol{e}_{j }}+\tau^{\boldsymbol{n}+\boldsymbol{e}_{j}+\boldsymbol{e}_{k}}\tau^{ \boldsymbol{n}+\boldsymbol{e}_{i}},\quad 1\leq i<j<k\leq N. \tag{2.23}\] The geometric meaning of the above equations goes beyond the theory of discrete asymptotic nets, and can be incorporated [28] into the theory of discrete conjugate nets [34, 38].

Figure 1. Discrete asymptotic nets

## 3. Multidimensional conjugate nets, their fundamental transformation, and multidimensional lattices of planar quadrilaterals

Conjugate nets on a surface are the second, after asymptotic nets, distinguished coordinate systems studied in depth by the geometers of the XIXth century. They include, as a special subcase, curvature coordinates, whose theory will be studied in the next Section. For classical results on the subject see the works of Gabriel Lame [64], Luigi Bianchi [8] or Gaston Darboux [19]. The relation of conjugate nets on a surface to linear partial differential equations of the second order allows one to transfer special theorems on such equations [66, 72, 83, 49] to the geometric level. The theory of transformations within special classes of such equations/conjugate nets obtained its mature form in the works of Hans Jonas [56] and Luther P.
Eisenhart [45], see also [109, 65, 47]. The Darboux equations, which describe multidimensional conjugate nets, were rediscovered in [112, 113] in the context of soliton theory as the most general partial differential equations integrable by the non-local \(\bar{\partial}\)-dressing method. Moreover, in [57] they were isolated as the simplest equations within the multicomponent KP hierarchy. We start this Section by presenting the basic elements of the theory of conjugate nets on a surface and of their transformations. Then we move to multidimensional nets.

### Conjugate coordinates on a surface, and the Levy transformation

Local coordinates \((u,v)\) on a surface in the space \(\mathbb{R}^{N}\) of arbitrary dimension \(N\geq 3\) are called conjugate if the second mixed derivative of the position vector \(\boldsymbol{r}(u,v)\) is tangent to the surface. The defining equation then takes the form of the _Laplace equation_ \[\boldsymbol{r}_{,uv}=a\boldsymbol{r}_{,u}+b\boldsymbol{r}_{,v}, \tag{3.1}\] where \(a(u,v)\) and \(b(u,v)\) are the corresponding functions of the conjugate parameters. The following considerations allow us to introduce concepts relevant in the general theory of transformations of multidimensional conjugate nets. Let \(\theta\) be a scalar solution of equation (3.1), linearly independent of the components of \(\boldsymbol{r}\); then by direct calculation one can check that the so-called _Levy transforms_ of \(\boldsymbol{r}\), given by \[\boldsymbol{r}^{(u)}=\boldsymbol{r}-\frac{\theta}{\theta_{,u}}\boldsymbol{r}_ {,u},\qquad\boldsymbol{r}^{(v)}=\boldsymbol{r}-\frac{\theta}{\theta_{,v}} \boldsymbol{r}_{,v}, \tag{3.2}\] are new surfaces with \((u,v)\) being local conjugate coordinates. The corresponding Laplace equations (3.1) of the new nets have coefficients \[a^{(u)}=a+\left(\log\frac{\theta}{\theta_{,u}}\right)_{,v},\qquad b^{(u)}=b^{(v)}+\left(\log a^{(u)}\right)_{,u}, \tag{3.3}\] \[a^{(v)}=a^{(u)}+\left(\log b^{(v)}\right)_{,v},\qquad b^{(v)}=b+\left(\log\frac{\theta}{\theta_{,v}}\right)_{,u}, \tag{3.4}\] which can be verified by direct calculation. In the proof it is convenient to show first that \[\boldsymbol{r}_{,v}^{(u)}=a^{(u)}\left(\boldsymbol{r}^{(u)}-\boldsymbol{r}^{ (v)}\right),\qquad\boldsymbol{r}_{,u}^{(v)}=b^{(v)}\left(\boldsymbol{r}^{(v )}-\boldsymbol{r}^{(u)}\right),\] which means that the lines joining the corresponding points of both Levy transforms of the conjugate net \(\boldsymbol{r}\) are simultaneously tangent to the \(v\)-coordinate lines of \(\boldsymbol{r}^{(u)}\) and to the \(u\)-coordinate lines of \(\boldsymbol{r}^{(v)}\), see Figure 2. A few comments are in order:

* Lines through \(\boldsymbol{r}\) in the direction of \(\boldsymbol{r}_{,u}\) form the _\(u\)-tangent congruence_ of the conjugate net. Equivalently, \(\boldsymbol{r}\) is the _\(u\)-focal net_ of the congruence. Similarly one defines the \(v\)-tangent congruence of \(\boldsymbol{r}\), which is its \(v\)-focal net. Using such a terminology one can state that \(\boldsymbol{r}^{(v)}\) is the \(u\)-focal net of the \(v\)-tangent congruence of \(\boldsymbol{r}^{(u)}\), and _vice versa_.
* Notice that we use the notion of congruence of lines in the narrow sense, i.e. the parameters \((u,v)\) of the family define its focal nets. This means that the one-dimensional family of lines parametrized by \(u\) forms a developable surface and is made of tangents to the \(u\)-coordinate lines on the \(u\)-focal net; a similar condition holds for the \(v\)-parameter family of the congruence. We say that the congruence is referred to its developables.
* The unique \(v\)-focal net of the \(u\)-tangent congruence to \(r\), given by (3.5) \[\boldsymbol{r}_{(v)}^{(u)}=\boldsymbol{r}-\frac{1}{b}\boldsymbol{r}_{,u},\] is called the \(uv\)_-Laplace transform of_\(\boldsymbol{r}\). Analogously one defines the \(vu\)-Laplace transform of \(\boldsymbol{r}\) (3.6) \[\boldsymbol{r}_{(u)}^{(v)}=\boldsymbol{r}-\frac{1}{a}\boldsymbol{r}_{,v}.\] In particular the conjugate net \(\boldsymbol{r}^{(u)}\) is the \(uv\)-Laplace transform of \(\boldsymbol{r}^{(v)}\). * The net \(\boldsymbol{r}^{(u)}\) is called _conjugate_ to the \(u\)-tangent congruence of \(\boldsymbol{r}\), similarly the net \(\boldsymbol{r}^{(v)}\) is conjugate to the \(v\)-tangent congruence of \(\boldsymbol{r}\). In such relation the conjugate coordinates on a surface are the parameters which define the focal nets of the congruence. One can reverse the situation and try to find focal nets of a congruence conjugate to a given conjugate net. Such focal nets, denoted by \(\boldsymbol{r}_{(u)}\) and \(\boldsymbol{r}_{(v)}\), are called _adjoint Levy transforms_ of the net. * The conjugate net \(\hat{\boldsymbol{r}}\) is called a _fundamental transform_ of \(\boldsymbol{r}\) when the two-dimensional family of lines joining their corresponding points forms a congruence (in the narrow sense explained above) whose developables cut both nets along the conjugate coordinate lines, i.e. both nets are conjugate to the same congruence. ### Multidimensional conjugate nets and the Darboux equations Consider \(N\)-dimensional submanifold in \(\mathbb{R}^{M}\) with local parameters \(\boldsymbol{u}=(u_{1},\ldots,u_{N})\) satisfying in each pair \((u_{i},u_{j})\), \(i\neq j\), the conjugate net condition \[\boldsymbol{r}_{,ij}=a_{ij}\boldsymbol{r}_{,i}+a_{ji}\boldsymbol{r}_{,j}. \tag{3.7}\] The functions \(a_{ij}\) of the local conjugate parameters cannot be arbitrary, because for \(N>2\) they should satisfy the following compatibility conditions of the above system (3.7) of Laplace equations \[a_{ij,k}=a_{ij}a_{jk}+a_{ik}a_{kj}-a_{ij}a_{ik},\qquad i,j,k\quad\text{distinct}. \tag{3.8}\] The nonlinear _Darboux equations_ (3.8) imply in particular that \(a_{ij,k}=a_{ik,j}\), what allows to introduce potentials \(h_{i}\), called _Lame coefficients_, such that \[a_{ij}=\frac{h_{i,j}}{h_{i}},\qquad i\neq j, \tag{3.9}\] and correspondingly the Laplace system (3.7) takes the form \[\boldsymbol{r}_{,ij}=(\log h_{i})_{,j}\boldsymbol{r}_{,i}+(\log h_{j})_{,i} \boldsymbol{r}_{,j}\qquad i\neq j. \tag{3.10}\] The remaining part of the Darboux equations in Laplace coefficients \(a_{ij}\) can be written in terms of the Lame coefficients as \[h_{i,jk}=\frac{h_{j,k}h_{i,j}}{h_{j}}+\frac{h_{k,j}h_{i,k}}{h_{k}},\qquad i,j, k\quad\text{distinct}. \tag{3.11}\] Following Darboux, let us introduce the suitably scaled tangent vectors \(\boldsymbol{X}_{i}\), \(i=1,\ldots,N\), from equations \[\boldsymbol{r}_{,i}=h_{i}\boldsymbol{X}_{i}. \tag{3.12}\] Then the Laplace equations (3.7) take the particularly simple form \[\boldsymbol{X}_{i,j}=\beta_{ij}\boldsymbol{X}_{j},\qquad i\neq j, \tag{3.13}\] where the _rotation coefficients_\(\beta_{ij}\) are defined by the linear system adjoint to (3.13) \[h_{j,i}=\beta_{ij}h_{i},\qquad i\neq j. \tag{3.14}\] The corresponding version of the Darboux equation reads \[\beta_{ij,k}=\beta_{ik}\beta_{kj},\qquad i,j,k\quad\text{distinct}. \tag{3.15}\] The tangent lines to \(i\)-th coordinate on the conjugate net \(\boldsymbol{r}\) form its \(i\)-th tangent congruence. 
Its \(j\)-th focal net, called \(ij\)-Laplace transform of \(\boldsymbol{r}\), is given by \[\boldsymbol{r}^{(i)}_{(j)}=\boldsymbol{r}-\frac{1}{a_{ji}}\boldsymbol{r}_{,i}. \tag{3.16}\] The Laplace transforms satisfy the following identities \[(\boldsymbol{r}^{(i)}_{(j)})^{(j)}_{(i)}=\boldsymbol{r},\qquad(\boldsymbol{r} ^{(i)}_{(j)})^{(j)}_{(k)}=\boldsymbol{r}^{(i)}_{(k)},\qquad(\boldsymbol{r}^{ (i)}_{(j)})^{(k)}_{(i)}=\boldsymbol{r}^{(k)}_{(j)}. \tag{3.17}\] This means that each generic \(N\)-dimensional conjugate net comes together with whole system of conjugate nets enumerated by points of the \(Q(A_{N-1})\) root lattice. ### The vectorial fundamental (binary Darboux) transformation of conjugate nets The contemporary theory of transformations of conjugate nets and of the Darboux equations is based on the following Lemma, given first in the discrete setting in [79]. The present version is its direct limit. **Lemma 3.1**.: _Given solution \(\beta_{ij}\) of the Darboux equations (3.15), and given solution \(\boldsymbol{Y}_{i}\) of the linear system (3.13) taking values in the (column vector) space \(\mathbb{R}^{K}\), and given solution \(\boldsymbol{Y}^{*}_{i}\) of the adjoint linear system (3.14) taking values in the (row vector) space \(\mathbb{R}^{L}\). 1. There exists the \(K\times L\) matrix-valued potential \(\boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]\) defined by the following compatible system_ \[\boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]_{,i}=\boldsymbol{Y}_{i }\otimes\boldsymbol{Y}^{*}_{i},\qquad i=1,\ldots,N. \tag{3.18}\] _2. If \(K=L\) and the potential \(\boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]\) is invertible, then the functions_ \[\hat{\beta}_{ij}=\beta_{ij}-\boldsymbol{Y}^{*}_{j}\boldsymbol{\Theta}[ \boldsymbol{Y},\boldsymbol{Y}^{*}]^{-1}\boldsymbol{Y}_{i}, \tag{3.19}\] _are new solutions of the Darboux equations. 3. The vector-valued functions_ \[\hat{\boldsymbol{Y}}_{i}=\boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{ *}]^{-1}\boldsymbol{Y}_{i},\qquad\hat{\boldsymbol{Y}}^{*}_{i}=\boldsymbol{Y}^ {*}_{i}\boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]^{-1}, \tag{3.20}\] _are the corresponding new solutions of the linear and the adjoint linear problem equations (3.13)-(3.14). In addition, the new matrix-valued potential is of the form_ \[\boldsymbol{\Theta}[\hat{\boldsymbol{Y}},\hat{\boldsymbol{Y}}^{*}]=C- \boldsymbol{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]^{-1}, \tag{3.21}\] _where \(C\) is a constant operator._ _Remark_.: Notice that equations (3.12) mean that one can write \(\mathbf{r}=\mathbf{\Theta}[\mathbf{X},h]\). _Remark_.: In the theory of Darboux transformations of the KP hierarchy [92] the function \(\mathbf{\Theta}\) is called the squared-eigenfunction potential. By suitable arrangement of the transformation data [38] we arrive at the following version of the above Lemma. **Theorem 3.2**.: _Consider conjugate net \(\mathbf{r}\) with Lame coefficients \(h_{i}\), normalized tangent vectors \(\mathbf{X}_{i}\) and rotation coefficients \(\beta_{ij}\). Given transformation data \(\mathbf{Y}_{i}\), \(\mathbf{Y}_{i}^{*}\) which satisfy point 2. of Lemma 3.1 then_ \[\hat{\mathbf{r}}=\mathbf{r}-\mathbf{\Theta}[\mathbf{X},\mathbf{Y}^{*}]\mathbf{\Theta}[\mathbf{Y},\mathbf{Y}^{* }]^{-1}\mathbf{\Theta}[\mathbf{Y},h], \tag{3.22}\] _is new conjugate net, called the fundamental transform of \(\mathbf{r}\). 
The new rotation coefficients are given by equation (3.19), and the Lame coefficients and normalized tangent vectors read_ \[\hat{h}_{i}=h_{i}-\mathbf{Y}_{i}^{*}\mathbf{\Theta}[\mathbf{Y},\mathbf{Y}^{*}]^{-1}\mathbf{\Theta }[\mathbf{Y},h],\qquad\hat{\mathbf{X}}_{i}=\mathbf{X}_{i}-\mathbf{\Theta}[\mathbf{X},\mathbf{Y}^{*}] \mathbf{\Theta}[\mathbf{Y},\mathbf{Y}^{*}]^{-1}\mathbf{Y}_{i}. \tag{3.23}\] When the transformation data are scalar functions \(Y_{i}\) and \(Y_{i}^{*}\), then \(\theta=\mathbf{\Theta}[Y,h]\) is an additional scalar solution of the Laplace equation of \(\mathbf{r}\). The vector-valued function \(\mathbf{X}^{\prime}=\mathbf{\Theta}[\mathbf{X},Y^{*}]\) defines a new \(N\)-dimensional conjugate net with Lame coefficients \(Y_{i}^{*}\) and the same normalized tangent vectors \(\mathbf{X}_{i}\). The function \(\theta^{\prime}=\mathbf{\Theta}[Y,Y^{*}]\) is a scalar solution of the Laplace equation of \(\mathbf{X}^{\prime}\), and in such a notation the scalar fundamental transformation reads \[\hat{\mathbf{r}}=\mathbf{r}-\frac{\theta}{\theta^{\prime}}\mathbf{X}^{\prime}. \tag{3.24}\] The vector \(\mathbf{X}^{\prime}\) points in the direction of the congruence of the transformation, and is called the _Combescure transform_ of \(\mathbf{r}\). The corresponding \(i\)-th Levy transform \(\mathbf{r}^{(i)}\) of \(\mathbf{r}\), the intersection of the \(i\)-tangents of \(\mathbf{r}\) and \(\hat{\mathbf{r}}\), reads \[\mathbf{r}^{(i)}=\mathbf{r}-\frac{\theta}{\theta_{,i}}\mathbf{r}_{,i}=\mathbf{r}-\frac{\theta }{Y_{i}}\mathbf{X}_{i}, \tag{3.25}\] while the \(i\)-th adjoint Levy transform \(\mathbf{r}_{(i)}\) of \(\mathbf{r}\), whose \(i\)-th tangent congruence is the congruence of the transformation, is given by \[\mathbf{r}_{(i)}=\mathbf{r}-\frac{h_{i}}{Y_{i}^{*}}\mathbf{X}^{\prime}. \tag{3.26}\] See Figure 2 to compare with the two-dimensional case.

### Superpositions of fundamental transformations, and multidimensional lattices of planar quadrilaterals

The vectorial form of the fundamental transformation given in Lemma 3.1 already contains their permutability theorem.

**Theorem 3.3**.: _Assume the following splitting of the data of the vectorial fundamental transformation_ \[\mathbf{Y}_{i}=\begin{pmatrix}\mathbf{Y}_{i}^{a}\\ \mathbf{Y}_{i}^{b}\end{pmatrix},\qquad\mathbf{Y}_{i}^{*}=(\mathbf{Y}_{ai}^{*}\;\mathbf{Y}_{ bi}^{*})\,, \tag{3.27}\] _associated with the partition \(\mathbb{R}^{K}=\mathbb{R}^{K_{a}}\oplus\mathbb{R}^{K_{b}}\), which implies the following splitting of the potentials_ \[\mathbf{\Theta}[\mathbf{Y},h]=\begin{pmatrix}\mathbf{\Theta}[\mathbf{Y}^{a},h]\\ \mathbf{\Theta}[\mathbf{Y}^{b},h]\end{pmatrix},\qquad\mathbf{\Theta}[\mathbf{X},\mathbf{Y}^{*}]= (\mathbf{\Theta}[\mathbf{X},\mathbf{Y}_{a}^{*}]\;\mathbf{\Theta}[\mathbf{X},\mathbf{Y}_{b}^{*}])\,,\] \[\mathbf{\Theta}[\mathbf{Y},\mathbf{Y}^{*}]=\begin{pmatrix}\mathbf{\Theta}[\mathbf{Y}^{a},\mathbf{Y}_{ a}^{*}]&\mathbf{\Theta}[\mathbf{Y}^{a},\mathbf{Y}_{b}^{*}]\\ \mathbf{\Theta}[\mathbf{Y}^{b},\mathbf{Y}_{a}^{*}]&\mathbf{\Theta}[\mathbf{Y}^{b},\mathbf{Y}_{b}^{*}] \end{pmatrix}.\] _Then the vectorial fundamental transformation is equivalent to the following superposition of vectorial fundamental transformations: 1._
Transformation \(\mathbf{r}\to\mathbf{r}^{\{a\}}\) with the data \(\mathbf{Y}_{i}^{a}\), \(\mathbf{Y}_{ai}^{*}\) and the corresponding potentials \(\mathbf{\Theta}[\mathbf{Y}^{a},h]\), \(\mathbf{\Theta}[\mathbf{Y}^{a},\mathbf{Y}_{a}^{*}]\), \(\mathbf{\Theta}[\mathbf{X},\mathbf{Y}_{a}^{*}]\)_ \[\mathbf{r}^{\{a\}} =\mathbf{r}-\mathbf{\Theta}[\mathbf{X},\mathbf{Y}_{a}^{*}]\mathbf{\Theta}[\mathbf{Y}^{a}, \mathbf{Y}_{a}^{*}]^{-1}\mathbf{\Theta}[\mathbf{Y}^{a},h], \tag{3.29}\] \[h_{i}^{\{a\}} =h_{i}-\mathbf{Y}_{ai}^{*}\mathbf{\Theta}[\mathbf{Y}^{a},\mathbf{Y}_{a}^{*}]^{-1} \mathbf{\Theta}[\mathbf{Y}^{a},h],\] (3.30) \[\mathbf{X}_{i}^{\{a\}} =\mathbf{X}_{i}-\mathbf{\Theta}[\mathbf{X},\mathbf{Y}_{a}^{*}]\mathbf{\Theta}[\mathbf{Y} ^{a},\mathbf{Y}_{a}^{*}]^{-1}\mathbf{Y}_{i}^{a}. \tag{3.28}\] _2. Application on the result the vectorial fundamental transformation with the transformed data_ \[\boldsymbol{Y}^{*\{a\}}_{bi} =\boldsymbol{Y}^{*}_{bi}-\boldsymbol{Y}^{*}_{ai}\boldsymbol{\Theta} [\boldsymbol{Y}^{a},\boldsymbol{Y}^{*}_{a}]^{-1}\boldsymbol{\Theta}[\boldsymbol {Y}^{a},\boldsymbol{Y}^{*}_{b}], \tag{3.32}\] \[\boldsymbol{Y}^{b\{a\}}_{i} =\boldsymbol{Y}^{b}_{i}-\boldsymbol{\Theta}[\boldsymbol{Y}^{b}, \boldsymbol{Y}^{*}_{a}]\boldsymbol{\Theta}[\boldsymbol{Y}^{a},\boldsymbol{Y}^ {*}_{a}]^{-1}\boldsymbol{Y}^{a}_{i}, \tag{3.31}\] _and potentials_ \[\boldsymbol{\Theta}[\boldsymbol{Y}^{b},h]^{\{a\}} =\boldsymbol{\Theta}[\boldsymbol{Y}^{b},h]-\boldsymbol{\Theta}[ \boldsymbol{Y}^{b},\boldsymbol{Y}^{*}_{a}]\boldsymbol{\Theta}[\boldsymbol{Y} ^{a},\boldsymbol{Y}^{*}_{a}]^{-1}\boldsymbol{\Theta}[\boldsymbol{Y}^{a},h]= \boldsymbol{\Theta}[\boldsymbol{Y}^{b\{a\}},h^{\{a\}}], \tag{3.34}\] \[\boldsymbol{\Theta}[\boldsymbol{Y}^{b},\boldsymbol{Y}^{*}_{b}]^{ \{a\}} =\boldsymbol{\Theta}[\boldsymbol{Y}^{b},\boldsymbol{Y}^{*}_{b}]- \boldsymbol{\Theta}[\boldsymbol{Y}^{b},\boldsymbol{Y}^{*}_{a}]\boldsymbol{ \Theta}[\boldsymbol{Y}^{a},\boldsymbol{Y}^{*}_{a}]^{-1}\boldsymbol{\Theta}[ \boldsymbol{Y}^{a},\boldsymbol{Y}^{*}_{b}]=\boldsymbol{\Theta}[\boldsymbol{Y} ^{b\{a\}},\boldsymbol{Y}^{*}_{b}],\] (3.35) \[\boldsymbol{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}_{b}]^{ \{a\}} =\boldsymbol{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}_{b}]- \boldsymbol{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}_{a}]\boldsymbol{\Theta}[ \boldsymbol{Y}^{a},\boldsymbol{Y}^{*}_{a}]^{-1}\boldsymbol{\Theta}[ \boldsymbol{Y}^{a},\boldsymbol{Y}^{*}_{b}]=\boldsymbol{\Theta}[\boldsymbol{X} ^{\{a\}},\boldsymbol{Y}^{*}_{b}\{a\}], \tag{3.33}\] _i.e._ \[\hat{\boldsymbol{r}}=\boldsymbol{r}^{\{a,b\}}=\boldsymbol{r}^{\{a\}}- \boldsymbol{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}_{b}]^{\{a\}}\left( \boldsymbol{\Theta}[\boldsymbol{Y}^{b},\boldsymbol{Y}^{*}_{b}]^{\{a\}}\right)^ {-1}\boldsymbol{\Theta}[\boldsymbol{Y}^{b},h]^{\{a\}}. \tag{3.36}\] _Remark_.: The final result (3.36) is independent of the order of making the partial transformations. _Remark_.: The above procedure fixes already the integration constants when integrating equations (3.18) in constructions of the potentials. 
Let us consider the simplest case \(K=2\), when the vectorial fundamental transformation is obtained as a superposition of two scalar transformations \[\boldsymbol{r}^{\{a\}} =\boldsymbol{r}-\frac{\theta^{a}}{\theta^{a}_{a}}\boldsymbol{X}_{a }, \qquad\boldsymbol{X}_{a}=\Theta[\boldsymbol{X},Y^{*}_{a}],\quad\theta^{a}= \Theta[Y^{a},h],\quad\theta^{a}_{a}=\Theta[Y^{a},Y^{*}_{a}], \tag{3.37}\] \[\boldsymbol{r}^{\{b\}} =\boldsymbol{r}-\frac{\theta^{b}}{\theta^{b}_{b}}\boldsymbol{X}_{b }, \qquad\boldsymbol{X}_{b}=\Theta[\boldsymbol{X},Y^{*}_{b}],\quad\theta^{b}= \Theta[Y^{b},h],\quad\theta^{b}_{b}=\Theta[Y^{b},Y^{*}_{b}], \tag{3.38}\] and the final result reads \[\boldsymbol{r}^{\{a,b\}}=\boldsymbol{r}-(\boldsymbol{X}_{a},\boldsymbol{X}_{b })\begin{pmatrix}\theta^{a}_{a}&\theta^{a}_{b}\\ \theta^{b}_{a}&\theta^{b}_{b}\end{pmatrix}^{-1}\begin{pmatrix}\theta^{a}\\ \theta^{b}\end{pmatrix},\qquad\theta^{a}_{b}=\Theta[Y^{a},Y^{*}_{b}],\quad \theta^{b}_{a}=\Theta[Y^{b},Y^{*}_{a}]. \tag{3.39}\] The point \(\boldsymbol{r}^{\{a,b\}}\) belongs to the plane passing through \(\boldsymbol{r}\), \(\boldsymbol{r}^{\{a\}}\), \(\boldsymbol{r}^{\{b\}}\) (the plane containing \(\boldsymbol{r}\) and spanned by \(\boldsymbol{X}_{a}\) and \(\boldsymbol{X}_{b}\)), see Figure 3, and the superposition formula (3.39) can be rewritten in the form of a discrete analogue of the Laplace equation (3.7) \[\boldsymbol{r}^{\{a,b\}}-\boldsymbol{r}=\frac{\theta^{a\{b\}}\theta^{a}_{a}}{ \theta^{a\{b\}}_{a}\theta^{a}}(\boldsymbol{r}^{\{a\}}-\boldsymbol{r})+\frac{ \theta^{b\{a\}}\theta^{b}_{b}}{\theta^{b\{a\}}_{b}\theta^{b}}(\boldsymbol{r}^{\{b \}}-\boldsymbol{r}). \tag{3.40}\] If \(\boldsymbol{r}\), \(\boldsymbol{r}^{\{a\}}\), \(\boldsymbol{r}^{\{b\}}\) are given, then the position of \(\boldsymbol{r}^{\{a,b\}}\) on the plane is arbitrary due to the integration constants in the definitions of \(\theta^{a}_{b}\) and \(\theta^{b}_{a}\). The superposition of three scalar fundamental transformations \[\boldsymbol{r}^{\{a,b,c\}}=\boldsymbol{r}-(\boldsymbol{X}_{a},\boldsymbol{X} _{b},\boldsymbol{X}_{c})\begin{pmatrix}\theta^{a}_{a}&\theta^{a}_{b}&\theta^{a}_ {c}\\ \theta^{b}_{a}&\theta^{b}_{b}&\theta^{b}_{c}\\ \theta^{c}_{a}&\theta^{c}_{b}&\theta^{c}_{c}\end{pmatrix}^{-1}\begin{pmatrix} \theta^{a}\\ \theta^{b}\\ \theta^{c}\end{pmatrix}, \tag{3.41}\] does not generate new integration constants. This result can be found in [45] as _the extended theorem of permutability_. It implies, in particular, that the point \(\boldsymbol{r}^{\{a,b,c\}}\) is uniquely given as the intersection of the three planes \(\langle\boldsymbol{r}^{\{a\}},\boldsymbol{r}^{\{a,b\}},\boldsymbol{r}^{\{a,c\}}\rangle\), \(\langle\boldsymbol{r}^{\{b\}},\boldsymbol{r}^{\{a,b\}},\boldsymbol{r}^{\{b,c\}}\rangle\) and \(\langle\boldsymbol{r}^{\{c\}},\boldsymbol{r}^{\{a,c\}},\boldsymbol{r}^{\{b,c\}}\rangle\); see also [34]. Due to equation (3.37) the vectors \(\boldsymbol{X}_{a}\) can be interpreted as normalized tangent vectors to the lattice directions, with \(-\frac{\theta^{a}}{\theta^{a}_{a}}\) playing the role of the Lame coefficients. The transformation formulas \[\boldsymbol{X}_{b}^{\{a\}}=\boldsymbol{X}_{b}-\frac{\theta^{a}_{b}}{\theta^{a}_{a}} \boldsymbol{X}_{a} \tag{3.42}\] give the discrete analog of the linear problem (3.13), with \(-\frac{\theta^{a}_{b}}{\theta^{a}_{a}}\) playing the role of the rotation coefficients. The corresponding transformation rules provide nonlinear relations, which can be interpreted as discrete Darboux equations.
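The extended permutability can be tested numerically in the scalar case. In the following sketch (an added illustration, not taken from the original text) arbitrary values are chosen for the scalar potentials entering (3.37)-(3.39), and the one-step superposition formula (3.39) is compared with the two-step composition in which the second transformation uses the data transformed as in the scalar versions of (3.31)-(3.35), in either order.

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.normal(size=3)
Xa, Xb = rng.normal(size=3), rng.normal(size=3)      # Theta[X,Y*_a], Theta[X,Y*_b]
ta, tb = rng.normal(size=2)                          # theta^a, theta^b
taa, tab, tba, tbb = rng.normal(size=4)              # theta^a_a, theta^a_b, theta^b_a, theta^b_b

M = np.array([[taa, tab], [tba, tbb]])
r_ab = r - np.column_stack([Xa, Xb]) @ np.linalg.solve(M, np.array([ta, tb]))   # formula (3.39)

def step(r, X1, X2, t1, t2, t11, t12, t21, t22):
    # scalar transformation (3.37) followed by the scalar transformation with the
    # transformed data (scalar versions of (3.31)-(3.35))
    r1 = r - (t1 / t11) * X1
    return r1 - ((t2 - t21 * t1 / t11) / (t22 - t21 * t12 / t11)) * (X2 - (t12 / t11) * X1)

print(np.abs(step(r, Xa, Xb, ta, tb, taa, tab, tba, tbb) - r_ab).max())   # ~0: a-then-b route
print(np.abs(step(r, Xb, Xa, tb, ta, tbb, tba, tab, taa) - r_ab).max())   # ~0: b-then-a route
```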
To close this Section we briefly recapitulate the basic theory of discrete conjugate nets [34], which we have just obtained from the theory of transformations of conjugate nets in the spirit of the works [68, 69].

**Definition 3.1**.: A discrete conjugate net is a map \(\boldsymbol{r}\colon\mathbb{Z}^{N}\to\mathbb{R}^{M}\) of the \(N\)-dimensional integer lattice such that for arbitrary \(\boldsymbol{n}\in\mathbb{Z}^{N}\) and any two directions \(a\neq b\) of the lattice, the vertices \(\boldsymbol{r}^{\boldsymbol{n}}\), \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}}\), \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{b}}\), and \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}+\boldsymbol{e}_{b}}\) of the elementary quadrilaterals are coplanar.

The coplanarity condition can be written in terms of the system of discrete Laplace equations \[\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}+\boldsymbol{e}_{b}}- \boldsymbol{r}^{\boldsymbol{n}}=A_{ab}^{\boldsymbol{n}}\left(\boldsymbol{r}^{ \boldsymbol{n}+\boldsymbol{e}_{a}}-\boldsymbol{r}^{\boldsymbol{n}}\right)+A_{ ba}^{\boldsymbol{n}}\left(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{b}}-\boldsymbol{r}^{ \boldsymbol{n}}\right),\qquad a\neq b. \tag{3.43}\] Due to the compatibility of the system (3.43), the functions \(A_{ab}^{\boldsymbol{n}}\) can be expressed in terms of the discrete Lame coefficients \[A_{ab}^{\boldsymbol{n}}=\frac{H_{a}^{\boldsymbol{n}+\boldsymbol{e}_{b}}}{H_{ a}^{\boldsymbol{n}}}\;,\qquad a\neq b\;, \tag{3.44}\] which satisfy the equations \[H_{c}^{\boldsymbol{n}+\boldsymbol{e}_{a}+\boldsymbol{e}_{b}}-H_{c}^{ \boldsymbol{n}}=\frac{H_{a}^{\boldsymbol{n}+\boldsymbol{e}_{b}+\boldsymbol{e} _{c}}}{H_{a}^{\boldsymbol{n}+\boldsymbol{e}_{c}}}\left(H_{c}^{\boldsymbol{n}+ \boldsymbol{e}_{a}}-H_{c}^{\boldsymbol{n}}\right)+\frac{H_{b}^{\boldsymbol{n} +\boldsymbol{e}_{a}+\boldsymbol{e}_{c}}}{H_{b}^{\boldsymbol{n}+\boldsymbol{e} _{c}}}\left(H_{c}^{\boldsymbol{n}+\boldsymbol{e}_{b}}-H_{c}^{\boldsymbol{n}} \right), \tag{3.45}\] for distinct \(a\), \(b\) and \(c\). If we introduce the suitably scaled tangent vectors \(\boldsymbol{X}_{a}\), \(a=1,...,N\), \[\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}}-\boldsymbol{r}^{ \boldsymbol{n}}=H_{a}^{\boldsymbol{n}}\boldsymbol{X}_{a}^{\boldsymbol{n}}, \tag{3.46}\] then equations (3.43) can be rewritten as a first order system \[\boldsymbol{X}_{a}^{\boldsymbol{n}+\boldsymbol{e}_{b}}-\boldsymbol{X}_{a}^{ \boldsymbol{n}}=Q_{ab}^{\boldsymbol{n}}\boldsymbol{X}_{b}^{\boldsymbol{n}}, \qquad a\neq b\;. \tag{3.47}\] The proportionality factors \(Q_{ab}\), called the discrete rotation coefficients, can be found from the linear equations \[H_{b}^{\boldsymbol{n}+\boldsymbol{e}_{a}}-H_{b}^{\boldsymbol{n}}=H_{a}^{ \boldsymbol{n}+\boldsymbol{e}_{b}}Q_{ab}^{\boldsymbol{n}},\qquad a\neq b\;, \tag{3.48}\] adjoint to (3.47). The compatibility condition for the system (3.47) (or for its adjoint) gives the following form of the discrete Darboux equations \[Q_{ab}^{\boldsymbol{n}+\boldsymbol{e}_{c}}-Q_{ab}^{\boldsymbol{n}}=Q_{ac}^{ \boldsymbol{n}+\boldsymbol{e}_{b}}Q_{cb}^{\boldsymbol{n}},\qquad a,b,c\quad \text{distinct}. \tag{3.49}\] The integrable discretization of the Darboux system was first achieved in [14] within the \(\bar{\partial}\) technique. The discrete analog of a conjugate net on a surface was introduced on a purely geometric basis in [100, 102].
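In two discrete variables, equations (3.43) can be used directly as an evolution scheme: the two initial coordinate polylines and the functions \(A_{12}^{\boldsymbol{n}}\), \(A_{21}^{\boldsymbol{n}}\) may be prescribed arbitrarily, and the resulting lattice has planar elementary quadrilaterals by construction; the constraints (3.45) and (3.49) only appear for three or more variables. A small sketch (an added illustration, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 7, 7
r = np.zeros((M, N, 3))
r[1:, 0] = np.cumsum(rng.normal(size=(M - 1, 3)), axis=0)   # initial polyline in the first direction
r[0, 1:] = np.cumsum(rng.normal(size=(N - 1, 3)), axis=0)   # initial polyline in the second direction
A12, A21 = rng.normal(size=(M, N)), rng.normal(size=(M, N)) # arbitrary coefficients of (3.43)

for m in range(M - 1):                                      # discrete Laplace equation (3.43)
    for n in range(N - 1):
        r[m+1, n+1] = r[m, n] + A12[m, n] * (r[m+1, n] - r[m, n]) + A21[m, n] * (r[m, n+1] - r[m, n])

# every elementary quadrilateral spans a plane (Definition 3.1)
ranks = [np.linalg.matrix_rank(np.stack([r[m+1, n] - r[m, n],
                                         r[m, n+1] - r[m, n],
                                         r[m+1, n+1] - r[m, n]]), tol=1e-9)
         for m in range(M - 1) for n in range(N - 1)]
print(max(ranks))                                           # 2: all quadrilaterals are planar
```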
The connection of discrete conjugate nets with integrability theory was made first in [25] by connecting the Laplace sequence of two-dimensional discrete conjugate nets to Hirota's discrete generalized Toda lattice [54]. Soon after that the discrete analogs of multidimensional conjugate nets were introduced in [34]. In particular it was shown there that the number of discrete variables in the discrete Darboux equations can be arbitrarily large, and that this augmentation does not restrict the solution space of the basic three-dimensional system; this property is known nowadays as multidimensional consistency [3, 85] and is considered the basic concept of the theory of discrete integrable systems [51].

Figure 3. Elementary quadrilateral of the superposition of two fundamental transformations

The Darboux-Backlund transformations of the discrete Darboux equations were formulated first on the algebraic level in [79], and then in [38] the full geometric flavour of the theory was presented together with the interpretation of the transformations on the level of the nonlocal \(\bar{\partial}\)-dressing method, see also [73, 75, 40, 77, 78]. In particular it was pointed out in [38], referring to [68] and other similar works, that for discrete conjugate nets there is no essential difference between transformations and the generation of new dimensions of the lattice. On the other hand, conjugate nets are natural continuous limits of the lattices of planar quadrilaterals, which can be easily seen on the level of their Laplace equations (3.7) and (3.43). Therefore, the principle of obtaining the integrable discretization via the Backlund transformation approach finds a natural explanation in this particular case. This shows that, from the point of view of the theory of integrable systems, the discrete ones seem to be more basic. The transformation theory of discrete conjugate nets can be constructed [38] following the geometric principles of the continuous case. Also here the notion of a congruence of lines (any two neighbouring lines of the family are coplanar) turns out to be crucial. In particular, it implies a natural definition of the focal lattices of such congruences. The simplest congruences are given by tangent lines in a fixed direction of a discrete conjugate net.

_Remark_.: The discrete Weingarten congruences, which are relevant in the theory of transformations of discrete asymptotic nets described in Section 2, do not satisfy the definition of discrete congruences used in the theory of transformations of discrete conjugate nets, as was pointed out to me by Maciej Nieszporski. In fact, as was explained in [27], the discrete Weingarten congruences provide two-dimensional lattices of planar quadrilaterals in the Plucker quadric. Their theory therefore fits into the general scheme of quadratic reductions of discrete conjugate nets [26].

## 4. Curvature coordinates, the Ribaucour transformation, and circular lattices

The notion of conjugate nets is invariant with respect to projective transformations of the ambient space. By imposing an additional geometric structure one can consider the corresponding reduction of the general theory. In the Euclidean space (we work in the standard orthonormal basis where the scalar product of two vectors is given with the help of transposition \(\boldsymbol{u}\cdot\boldsymbol{v}=\boldsymbol{u}^{t}\,\boldsymbol{v}\)) one can consider orthogonal conjugate nets, which turn out to be curvature coordinates on the given submanifold.
The corresponding reduction of the fundamental transformation is provided by the Ribaucour transformation. The classical Ribaucour transformation [96] concerns surfaces in the three-dimensional Euclidean space \(\mathbb{E}^{3}\) such that the lines of curvature (which are both conjugate and orthogonal) correspond, and such that the normals to both surfaces at corresponding points intersect at a point (the center of the Ribaucour sphere) equidistant from both of them. In general one can consider an \(N\)-dimensional submanifold of \(\mathbb{E}^{M}\), \(N\leq M\), parametrized by conjugate and orthogonal coordinates. The orthogonality constraint \[\boldsymbol{r}_{,i}\cdot\boldsymbol{r}_{,j}=0,\qquad i\neq j, \tag{4.1}\] implies that the function \[\rho=\frac{1}{2}\,\boldsymbol{r}\cdot\boldsymbol{r},\] satisfies the Laplace equations of the orthogonal conjugate net \(\boldsymbol{r}\), and the functions \[X_{i}^{\circ}=\boldsymbol{r}\cdot\boldsymbol{X}_{i},\] give the corresponding solution to the linear system (3.13), i.e. \(\rho=\boldsymbol{\Theta}[X^{\circ},h]\). Equivalently, the above facts imply orthogonality of the conjugate net. The corresponding Ribaucour reduction of the vectorial fundamental transformation, compatible with the orthogonality of a given conjugate net, can be constructed using only half of the transformation data; for an analogous but different description see [74]. The following result can be checked directly by calculating derivatives of both sides of the formulas.

**Lemma 4.1**.: _Given a solution \(\boldsymbol{Y}_{i}^{*}\) of the adjoint linear problem (3.14) of the orthogonal conjugate net \(\boldsymbol{r}\), the functions_ \[\boldsymbol{Y}_{i}=\boldsymbol{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}]^{t }\boldsymbol{X}_{i}, \tag{4.2}\] _give a solution to the linear problem (3.13) of the net. Moreover, the integration constants in the construction of the corresponding potentials \(\mathbf{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]\), \(\mathbf{\Theta}[\boldsymbol{Y},h]\) and \(\mathbf{\Theta}[X^{\circ},\boldsymbol{Y}^{*}]\) can be chosen such that_ \[\mathbf{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]^{t}+\mathbf{ \Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]=\mathbf{\Theta}[\boldsymbol{X}, \boldsymbol{Y}^{*}]^{t}\,\mathbf{\Theta}[\boldsymbol{X},\boldsymbol{Y}^{*}], \tag{4.3}\] \[\mathbf{\Theta}[\boldsymbol{Y},h]=\mathbf{\Theta}[\boldsymbol{X}, \boldsymbol{Y}^{*}]^{t}\boldsymbol{r}-\mathbf{\Theta}[X^{\circ},\boldsymbol{Y} ^{*}]^{t}. \tag{4.4}\]

**Theorem 4.2**.: _The vectorial fundamental transformation of an orthogonal conjugate net \(\boldsymbol{r}\) calculated by (3.22) with the above constraints (4.3)-(4.4) imposed is again an orthogonal conjugate net._

The proof is based on the observation that the orthogonality of the conjugate net \(\hat{\boldsymbol{r}}\) is equivalent to the fact that the function \(\frac{1}{2}\hat{\boldsymbol{r}}\cdot\hat{\boldsymbol{r}}\) satisfies the Laplace system of the net. The statement then follows from a direct verification, using the constraints (4.3)-(4.4), showing that \[\frac{1}{2}\hat{\boldsymbol{r}}\cdot\hat{\boldsymbol{r}}=\frac{1}{2} \boldsymbol{r}\cdot\boldsymbol{r}-\mathbf{\Theta}[X^{\circ},\boldsymbol{Y}^{ *}]\mathbf{\Theta}[\boldsymbol{Y},\boldsymbol{Y}^{*}]^{-1}\mathbf{\Theta}[ \boldsymbol{Y},h]=\hat{\rho}, \tag{4.5}\] is the corresponding transform of the solution \(\rho\) of the Laplace system of \(\boldsymbol{r}\). The formulation of the theorem on the permutability of superpositions of vectorial Ribaucour transformations reads as in Theorem 3.3.
One has to check, which can be done by direct calculation, that on the intermediate level the reduction conditions (4.2)-(4.4) are satisfied by the transformed data.

The geometry of the integrable discrete analog of orthogonal conjugate nets follows from an observation of Demoulin, who showed [24] that the vertices \(\boldsymbol{r}\), \(\boldsymbol{r}^{\{a\}}\), \(\boldsymbol{r}^{\{b\}}\), and \(\boldsymbol{r}^{\{a,b\}}\) of the Ribaucour transform are concircular, see Figure 4. In proving this we follow [45], where the fact was shown as an implication of the orthogonality constraint imposed on the fundamental transformation. Given three points \(\boldsymbol{r}\), \(\boldsymbol{r}^{\{a\}}\) and \(\boldsymbol{r}^{\{b\}}\), the coordinates of the center of the circle passing through them are of the form \[\boldsymbol{r}+\lambda\boldsymbol{X}_{a}+\mu\boldsymbol{X}_{b},\] where \(\lambda\) and \(\mu\) are determined by the condition that the lines joining the center to the mid-points of the segments \([\boldsymbol{r},\boldsymbol{r}^{\{a\}}]\) and \([\boldsymbol{r},\boldsymbol{r}^{\{b\}}]\) are perpendicular to these segments. These conditions, due to the transformation formulas (3.37) and the diagonal elements of the reduction condition (4.3) \[2\theta_{a}^{a}=\boldsymbol{X}_{a}\cdot\boldsymbol{X}_{a}, \tag{4.6}\] are reducible to \[\theta^{a}+(\lambda\boldsymbol{X}_{a}+\mu\boldsymbol{X}_{b})\cdot \boldsymbol{X}_{a}=0,\qquad\theta^{b}+(\lambda\boldsymbol{X}_{a}+\mu\boldsymbol {X}_{b})\cdot\boldsymbol{X}_{b}=0. \tag{4.7}\] Analogously, the condition that the line joining the center to the mid-point of the segment \([\boldsymbol{r}^{\{a\}},\boldsymbol{r}^{\{a,b\}}]\) is perpendicular to that segment is \[\left(\boldsymbol{r}^{\{a\}}+\frac{1}{2}\left(\boldsymbol{r}^{\{a,b\}}- \boldsymbol{r}^{\{a\}}\right)-\boldsymbol{r}-\lambda\boldsymbol{X}_{a}-\mu \boldsymbol{X}_{b}\right)\cdot\boldsymbol{X}_{b}^{\{a\}}=0, \tag{4.8}\] which can be verified using the transformation formulas, equations (4.7), and the off-diagonal elements of the reduction condition (4.3) \[\theta_{b}^{a}+\theta_{a}^{b}=\boldsymbol{X}_{a}\cdot\boldsymbol{X}_{b}. \tag{4.9}\]

_Remark_.: In [36] we proved the circularity of the quadrilaterals by showing the equivalence of the constraint with the equation \[\boldsymbol{X}_{a}\cdot\boldsymbol{X}_{b}^{\{a\}}+\boldsymbol{X}_{b}\cdot \boldsymbol{X}_{a}^{\{b\}}=0, \tag{4.10}\] an immediate consequence of the transformation formulas (3.42) and of the condition (4.3).

In the spirit of the works [68, 69] one can conclude that the integrable discrete analog of orthogonal conjugate nets is provided by circular lattices.

**Definition 4.1**.: The integrable discrete analogue of an orthogonal conjugate net is a map \(\boldsymbol{r}\colon\mathbb{Z}^{N}\to\mathbb{E}^{M}\) of the \(N\)-dimensional integer lattice such that for arbitrary \(\boldsymbol{n}\in\mathbb{Z}^{N}\) and any two directions \(a\neq b\) of the lattice, the vertices \(\boldsymbol{r}^{\boldsymbol{n}}\), \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}}\), \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{b}}\), and \(\boldsymbol{r}^{\boldsymbol{n}+\boldsymbol{e}_{a}+\boldsymbol{e}_{b}}\) of elementary quadrilaterals are concircular.

Integrability of the circular reduction of lattices of planar quadrilaterals follows from the fact that the circularity constraint is preserved [17] by the evolution of the lattices expressed by the extended permutability theorem, i.e. if the three quadrilaterals with vertices \[\{\boldsymbol{r},\boldsymbol{r}^{\{a\}},\boldsymbol{r}^{\{b\}},\boldsymbol{r} ^{\{a,b\}}\},\qquad\{\boldsymbol{r},\boldsymbol{r}^{\{a\}},\boldsymbol{r}^{\{ c\}},\boldsymbol{r}^{\{a,c\}}\},\qquad\{\boldsymbol{r},\boldsymbol{r}^{\{b\}}, \boldsymbol{r}^{\{c\}},\boldsymbol{r}^{\{b,c\}}\},\] are circular, then the three circles through the triplets \[\{\boldsymbol{r}^{\{a\}},\boldsymbol{r}^{\{a,b\}},\boldsymbol{r}^{\{a,c\}}\}, \qquad\{\boldsymbol{r}^{\{b\}},\boldsymbol{r}^{\{a,b\}},\boldsymbol{r}^{\{b, c\}}\},\qquad\{\boldsymbol{r}^{\{c\}},\boldsymbol{r}^{\{a,c\}},\boldsymbol{r}^{\{b,c\}}\},\] intersect in the point \(\boldsymbol{r}^{\{a,b,c\}}\), which is equivalent to the Miquel theorem of elementary geometry.
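Demoulin's circularity property can also be verified numerically. In the sketch below (an added illustration, not from the original text) the free data of two scalar Ribaucour transformations are chosen at random subject to the diagonal and off-diagonal parts of the constraint (4.3), i.e. (4.6) and (4.9); the circumcenter is determined from (4.7), and the four vertices of the elementary quadrilateral of Figure 4 turn out to be equidistant from it, hence concircular.

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(size=3)
Xa, Xb = rng.normal(size=3), rng.normal(size=3)
ta, tb, tab = rng.normal(size=3)             # theta^a, theta^b, theta^a_b: free data
taa = 0.5 * Xa @ Xa                          # (4.6):  2 theta^a_a = X_a . X_a
tbb = 0.5 * Xb @ Xb
tba = Xa @ Xb - tab                          # (4.9):  theta^a_b + theta^b_a = X_a . X_b

r_a = r - (ta / taa) * Xa                    # scalar transforms (3.37)
r_b = r - (tb / tbb) * Xb
Mat = np.array([[taa, tab], [tba, tbb]])
r_ab = r - np.column_stack([Xa, Xb]) @ np.linalg.solve(Mat, np.array([ta, tb]))   # (3.39)

G = np.array([[Xa @ Xa, Xa @ Xb], [Xa @ Xb, Xb @ Xb]])
lam, mu = np.linalg.solve(G, np.array([-ta, -tb]))   # center r + lam*Xa + mu*Xb from (4.7)
c = r + lam * Xa + mu * Xb
print([round(float(np.linalg.norm(p - c)), 10) for p in (r, r_a, r_b, r_ab)])   # four equal radii
```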
In view of the multidimensional consistency of lattices of planar quadrilaterals, the above preservation of circularity implies the multidimensional consistency of circular lattices. Soon after the geometric proof of their integrability, it was confirmed by the non-local \(\bar{\partial}\)-dressing technique in [36], by the algebro-geometric construction [4], by the construction of the corresponding reduction of the fundamental transformation [26, 75], and within the free-fermion description of the KP hierarchy [39]. Integrability of other basic reductions of the multidimensional lattice of planar quadrilaterals was investigated in [35], see also [13]. We remark that transformations [89] of the system of discrete Moutard equations (2.20) and of the corresponding Miwa discrete BKP system (2.23) can be obtained as an integrable reduction of the fundamental transformation [28]. For a review of the theory and many other geometric aspects of integrable discrete systems see [13]. Interesting applications of the theory to architectural design and computer graphics are discussed in [94, 95].

## 5. Conclusion and open problems

We presented an interpretation of known results in the theory of discrete asymptotic and discrete conjugate nets from the _discretization by Backlund transformations_ point of view. We collected both the classical formulas of XIXth century differential geometry of surfaces and their transformations, and more recent results from the geometric theory of integrable discrete equations. Darboux-Backlund transformations of difference operators are reviewed in [33]. Old ideas of the differential geometry of surfaces relevant to integrability are still sources of inspiration for contemporary research, see for example [15, 23]. The theory of multidimensional discrete conjugate nets is based on the simple geometric principle of planarity of elementary quadrilaterals. Their integrable reductions often come from a geometric understanding, in the spirit of Klein, of various reductions of projective geometry obtained by introducing absolute objects and the corresponding restrictions of the group of projective transformations. Moreover, basic analytic tools of the theory of integrable systems, like the non-local \(\bar{\partial}\)-dressing method and the algebro-geometric techniques, could be applied to construct such lattices and the corresponding solutions of the discrete Darboux equations in a rather pure form. It turns out, however, surprisingly, that the principle of _coplanarity of four points_ can be replaced, without loss of generality from the integrability viewpoint, by the condition of _collinearity of three points_ [30]. The multidimensional consistency of such lattices is equivalent to the Desargues theorem of projective geometry.

Figure 4. The superposition of two scalar Ribaucour transforms of an orthogonal conjugate net. The vertices \(\boldsymbol{r}\), \(\boldsymbol{r}^{\{a\}}\), \(\boldsymbol{r}^{\{b\}}\), and \(\boldsymbol{r}^{\{a,b\}}\) of the elementary quadrilateral are concircular. For circular quadrilaterals opposite angles sum up to \(\pi\). The center of the circle is the intersection point of the bisectors of all four sides of the quadrilateral.
The collinearity condition for such an \(N\)-dimensional Desargues lattice, in natural homogeneous coordinates of the projective space, can be rewritten in the form \[\boldsymbol{\psi^{n+e_{i}}}-\boldsymbol{\psi^{n+e_{j}}}=u^{\boldsymbol{n}}_{ij} \boldsymbol{\psi^{n}},\qquad 1\leq i\neq j\leq N. \tag{5.1}\] The compatibility condition \[u^{\boldsymbol{n}}_{ji}+u^{\boldsymbol{n}}_{ij}=0,\qquad u^{\boldsymbol{n}}_{ij}+u^{\boldsymbol{n}}_{jk}+u^{\boldsymbol{n}}_{ki}=0,\qquad u^{\boldsymbol{n}}_{ij}u^{\boldsymbol{n+e_{j}}}_{ik}=u^{\boldsymbol{n}}_{ik}u^{\boldsymbol{n+e_{k}}}_{ij},\qquad i,j,k\quad\text{distinct} \tag{5.2}\] can be simplified by introducing the single potential \(\tau\) such that \[u^{\boldsymbol{n}}_{ij}=\frac{\tau^{\boldsymbol{n+e_{i}}+\boldsymbol{e_{j}}} \tau^{\boldsymbol{n}}}{\tau^{\boldsymbol{n+e_{i}}}\tau^{\boldsymbol{n+e_{j}}} }=-u^{\boldsymbol{n}}_{ji},\qquad i<j, \tag{5.3}\] which satisfies the system of Hirota equations [54, 82] \[\tau^{\boldsymbol{n+e_{i}}}\tau^{\boldsymbol{n+e_{j}}+\boldsymbol{e_{k}}}- \tau^{\boldsymbol{n+e_{j}}}\tau^{\boldsymbol{n+e_{i}+e_{k}}}+\tau^{\boldsymbol {n+e_{k}}}\tau^{\boldsymbol{n+e_{i}+e_{j}}}=0,\qquad i<j<k. \tag{5.4}\] As was shown in [30], the \((2N-1)\)-dimensional Hirota system is equivalent to the discrete Darboux equations of an \(N\)-dimensional discrete conjugate net supplemented by the \((N-1)\)-dimensional lattice of its Laplace transforms. The Hirota system (5.4) is probably the most important discrete integrable system from both the theoretical [82] and the practical [62] viewpoint. In particular, its multidimensional consistency gives rise to the full hierarchy of commuting symmetries of the KP equation [22]. Originally it was called the discrete Toda system, which is another sign of the general rule that a single integrable discrete equation can lead, via different limits, to many differential equations. In fact, the original name is related to the theory of two-dimensional lattices of planar quadrilaterals and their Laplace transforms [25], while the other name is related to the Desargues lattices [30]. Darboux-Backlund transformations of the Hirota system were studied in [87, 88], where the fundamental transformation of the theory of discrete conjugate nets appears under the name of the binary Darboux transformation (the Levy transformation is called there the elementary Darboux transformation). Among the distinguished integrable reductions of the KP hierarchy of equations there is the BKP hierarchy, which is encompassed by the single Miwa equation (2.23). The CKP hierarchy [21] leads, via the Darboux-Backlund transformations, to the corresponding reduction [58, 104] of the Hirota system; see also [35, 29] for the geometry of the corresponding reduction of the discrete conjugate nets.
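The equivalence between the Hirota system (5.4) for \(\tau\) and the compatibility conditions (5.2) for the \(u_{ij}\) defined by (5.3) is easy to test numerically on a single elementary cube. In the sketch below (an added illustration, not from the original text) seven values of \(\tau\) are chosen at random, the eighth mid-level value is determined from (5.4), and both the cyclic sum and the multiplicative condition of (5.2) are then satisfied; the multiplicative one in fact holds identically in \(\tau\).

```python
import numpy as np

rng = np.random.default_rng(5)
e = np.eye(3, dtype=int)
tau = {c: rng.uniform(0.5, 2.0)
       for c in [(0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (1,1,1)]}
# the Hirota equation (5.4) with (i,j,k)=(1,2,3) determines tau at (0,1,1)
tau[(0,1,1)] = (tau[(0,1,0)]*tau[(1,0,1)] - tau[(0,0,1)]*tau[(1,1,0)]) / tau[(1,0,0)]

def u(n, i, j):
    # u_{ij} at the lattice point n, eq. (5.3); antisymmetric in i, j
    n = np.array(n)
    val = tau[tuple(n + e[i] + e[j])] * tau[tuple(n)] / (tau[tuple(n + e[i])] * tau[tuple(n + e[j])])
    return val if i < j else -val

print(u((0,0,0), 0, 1) + u((0,0,0), 1, 2) + u((0,0,0), 2, 0))     # ~0: cyclic sum in (5.2)
print(u((0,0,0), 0, 1) * u((0,1,0), 0, 2)
      - u((0,0,0), 0, 2) * u((0,0,1), 0, 1))                      # ~0: multiplicative condition in (5.2)
```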
In parallel to the reductions of the KP hierarchy, which are based on restrictions of the Lie algebra \(\mathfrak{gl}(\infty)\) to its orthogonal and symplectic subalgebras \(\mathfrak{so}(\infty)\) and \(\mathfrak{sp}(\infty)\), on the discrete level one has the corresponding root-lattice and affine Weyl group interpretations [31, 32] of the Hirota system and of the Miwa (discrete BKP) and Kashaev (discrete CKP) equations. Many known integrable systems, both discrete and continuous, can be obtained as its further reductions; see [90, 99] for applications of root lattices and affine Weyl groups to the understanding of the Painleve equations. It may seem therefore that integrable discrete systems are enough to fully understand integrability. There is, however, a distinguished class of integrable differential equations, called dispersionless systems [76, 16, 42, 43, 61, 107], which escapes the above interpretation. Distinguished examples of such systems are the so-called generalized heavenly equations [106, 41], which describe self-dual Einstein spaces [93], and the self-dual Yang-Mills equations [2]; both of them are genuinely four-dimensional.
2305.02201
ChatGPT in education: A discourse analysis of worries and concerns on social media
The rapid advancements in generative AI models present new opportunities in the education sector. However, it is imperative to acknowledge and address the potential risks and concerns that may arise with their use. We analyzed Twitter data to identify key concerns related to the use of ChatGPT in education. We employed BERT-based topic modeling to conduct a discourse analysis and social network analysis to identify influential users in the conversation. While Twitter users generally expressed a positive attitude towards the use of ChatGPT, their concerns converged to five specific categories: academic integrity, impact on learning outcomes and skill development, limitation of capabilities, policy and social concerns, and workforce challenges. We also found that users from the tech, education, and media fields were often implicated in the conversation, while education and tech individual users led the discussion of concerns. Based on these findings, the study provides several implications for policymakers, tech companies and individuals, educators, and media agencies. In summary, our study underscores the importance of responsible and ethical use of AI in education and highlights the need for collaboration among stakeholders to regulate AI policy.
Lingyao Li, Zihui Ma, Lizhou Fan, Sanggyu Lee, Huizi Yu, Libby Hemphill
2023-04-29T22:08:42Z
http://arxiv.org/abs/2305.02201v1
# ChatGPT in education: A discourse analysis of worries and concerns on social media ###### Abstract The rapid advancements in generative AI models present new opportunities in the education sector. However, it is imperative to acknowledge and address the potential risks and concerns that may arise with their use. We analyzed Twitter data to identify key concerns related to the use of ChatGPT in education. We employed BERT-based topic modeling to conduct a discourse analysis and social network analysis to identify influential users in the conversation. While Twitter users generally expressed a positive attitude towards the use of ChatGPT, their concerns converged to five specific categories: academic integrity, impact on learning outcomes and skill development, limitation of capabilities, policy and social concerns, and workforce challenges. We also found that users from the tech, education, and media fields were often implicated in the conversation, while education and tech individual users led the discussion of concerns. Based on these findings, the study provides several implications for policymakers, tech companies and individuals, educators, and media agencies. In summary, our study underscores the importance of responsible and ethical use of AI in education and highlights the need for collaboration among stakeholders to regulate AI policy. ChatGPT; Education; Social media; Topic modeling; Sentiment analysis; Social network. ## 1 Introduction The development of large language models (LLMs) has marked significant progress in the domain of generative artificial intelligence (AI) (Fan et al., 2023; Kasneci et al., 2023). ChatGPT, a chatbot that was launched in November 2022 (OpenAI, 2022b), is one such generative AI model that has shown tremendous potential in language understanding and knowledge retention. Multiple evaluations and tests have validated its capabilities (Dwivedi et al., 2023; Li et al., 2023), particularly in the realm of higher education (Fauzi et al., 2023; Firat, 2023). For example, ChatGPT was able to pass graduate-level exams from law and business schools (Kelly, 2023), and the recently released GPT-4 model achieved top 10% in a law test (Koetsier, 2023). ChatGPT's performance has even been recognized in academic publications, with several papers listing it as a co-author (Mintz, 2023). ChatGPT has brought about opportunities for education. Its ability to provide personalized learning experiences, to assist in creating educational content, and to overcome language barriers can have a significant impact on teaching and learning outcomes (Adiguzel et al., 2023; Chen, 2023). For example, ChatGPT can help teachers generate questions, quizzes, assignments, and interactive educational content, such as games and simulations, that cater to the students' learning styles (Kasneci et al., 2023; Lee, 2023). ChatGPT can also support students to customize learning and provide feedback accordingly (Kasneci et al., 2023). However, the use of ChatGPT in education has raised potential concerns and risks (AlAfnan et al., 2023; Kasneci et al., 2023; Sok & Heng, 2023). One concern is the ethical implications of ChatGPT's ability to write scientific essays (Mhlanga, 2023), which may compromise the authenticity and originality of research (Malik et al., 2023). 
Another issue is the use ChatGPT by students to outsource their writing (Lund et al., 2023), which poses a challenge for academic institutions that rely on plagiarism detection tools to maintain academic integrity (Fijacko et al., 2023; Ventayen, 2023) and potentially undermines students' writing skill development (Kasneci et al., 2023; Sallam, 2023). In addition, the output of ChatGPT can be biased or nonsensical, possibly leading to the dissemination of incorrect information (Baidoo-Anu & Owusu Anash, 2023; Choi et al., 2023). The implementation of ChatGPT in education has sparked a large-scale conversation on social media (Kelly, 2022; Taecharungroj, 2023), allowing individuals to exchange information and accelerate knowledge dissemination. Social media's ability to quickly and broadly disseminate information can facilitate the emergence of critical opinions in a short period, which can be valuable for decision-makers to address concerns (Haque et al., 2022; Li et al., 2022). In addition, social media platforms can promote knowledge sharing and collaboration among policymakers, tech companies, educators, and students (Ahmed et al., 2019), facilitating the development of best practices for responsible AI in education. For example, educators can share their experiences and concerns about integrating AI into the learning process, while tech companies and engineers can offer insights into the latest developments in AI models and strategies. Analyzing crowdsourced opinions through social media offers two significant advantages. First, compared to expert opinions, crowdsourcing provides a more comprehensive and diverse perspective on how the general public perceives concerns based on their experiences or observations. Second, dissecting the social network on social media can reveal important users that are frequently implicated in the conversation. Therefore, this study proposed the following two research questions. * **RQ1 (Concerns)**: What are the key concerns that Twitter users perceive with using ChatGPT in education? * **RQ2 (Accounts)**: Which accounts are implicated in the discussion of these concerns? To address the research questions, we conducted a discourse analysis of Twitter data pertaining to the use of ChatGPT in education. Leveraging topic modeling, we were able to cluster negative sentiment tweets and identify the concerns perceived by Twitter users. Using social network analysis, we were able to investigate opinion leaders and frequently implicated users in the conversation. Through this analysis, our study aims to inform stakeholders of the prevailing concerns associated with the use of ChatGPT in education and to highlight their responsibilities in mitigating potential risks. Our study further emphasizes the crucial role of collaboration among stakeholders towards the development of effective strategies to ensure the responsible and ethical use of generative AI in educational settings. ## 2 Literature review ### The use of Generative AI models in education Generative AI models have garnered significant interest and attention from the public due to their ability to produce content that closely resembles human-generated content. These models can respond to complex and diverse prompts, including images, text, and speech (Dwivedi et al., 2023). Among them, ChatGPT (OpenAI, 2022b) and DALL-E (OpenAI, 2022a) are the two popular GPT-based AI products released by OpenAI in 2022. 
Other Generative AI models like Stable Diffusion from Stability.ai and Lensa have the ability to create user portraits known as Magic Avatars (Pavlik, 2023). Google has recently released a new Generative AI system called Bard powered by Language Model for Dialogue Applications (LaMDA) (Pichai, 2023). These Generative AI models have emerged as promising tools for enhancing learning and teaching processes in education (Dwivedi et al., 2023). A recent study demonstrated the potential of GPT-3 to generate multiple-choice questions and answers for reading comprehension tasks (Dijkstra et al., 2022). Similarly, Bhat et al. (2022) proposed a pipeline for generating assessment questions based on a fine-tuned GPT-3 model to facilitate self-learning. Moreover, conversational agents such as Blender and GPT-3 have been explored for educational dialogues, generating conversational dialogues that convey a deep understanding of the learner (Tack & Piech, 2022). Beyond above applications, Generative AI models have been demonstrated useful in various educational tasks, such as generating code explanations (MacNeil et al., 2022), writing essays (Park et al., 2022), and providing formative feedback on student work (Jia et al., 2021). ### Opportunities and risks of using generative AI models in education Generative AI could benefit the educational sector with personalized and innovative teaching and learning methods. Prior studies have shown that these models can help educators create teaching materials such as quizzes, tests, and worksheets (Abdelghani et al., 2022; Dijkstra et al., 2022; Gabajiwala et al., 2022). They can also help analyze student data, identify learning patterns, and generate valuable insights to refine teaching methods (Bernius et al., 2022; Moore et al., 2022; Zhu et al., 2020). For students, generative AI models can provide interactive and engaging learning experiences that cater to their learning styles and needs. One typical application is to summarize information from multiple sources, which can aid the process of knowledge acquisition and improve learning efficiency (Haleem et al., 2022). They can also benefit students with disabilities by enhancing accessibility and developing more inclusive learning strategies. For example, Kasneci et al. (2023) discussed how ChatGPT can be used with text-to-speech or speech-to-text technologies to assist students with hearing or visual impairments. Although their application is on the rise, concerns have emerged about generative AI's impacts on students, educators, and the educational landscape. Researchers have highlighted the potential risks associated with students' reliance on AI-generated content, which may hinder their critical thinking and problem-solving skills (Iskender, 2023; Kasneci et al., 2023). Additionally, the authenticity and originality of AI-generated content have been questioned, as generative AI models can produce content that mimics human writing, raising concerns about academic integrity (Asler & Waisberg, 2023; Kasneci et al., 2023; Lim et al., 2023). Educators may also be hesitant to adopt generative AI in their teaching practices due to concerns about job security, lack of understanding of the technology, or fear of its potential negative impacts on the education system (Atlas, 2023). To better understand these concerns, we compiled a list of prior work that discusses the ethical and practical concerns of using ChatGPT in education (see **Table 1**). 
\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Author and Year & \multicolumn{1}{c|}{Concerns} \\ \hline AlAfhan et al. (2023) & Discourage writing, impact on student learning and development, challenge to evaluate learning outcomes. \\ \hline Atlas (2023) & Risk of plagiarism, proper attribution (e.g., the source of information), workforce displacement and reskilling; multiple concerns and suggestions for educators (e.g., academic integrity, data privacy and security, ethical considerations, accessibility, transparency, institutional policies, professional development). \\ \hline Baidoo-Anu and Owusu Ansh (2023) & Lack of human interaction, limited understanding, bias in training data, lack of creativity, lack of contextual understanding, limited ability to personalized instruction, privacy. \\ \hline Choi et al. (2023) & Failure to generate sufficient detail, misunderstanding of terms, departure from the material. \\ \hline Halaweh (2023) & Concerns from text generation (e.g., writing, editing), concerns from ideas generation (e.g., critical thinking, originality). \\ \hline Kasneci et al. (2023) & Copyright issues, bias and fairness, impact on critical thinking and problem-solving, lack of understanding and expertise, difficulty to distinguish machine- and human-generated text, cost of training and maintenance, data privacy and security. \\ \hline Megahed et al. (2023) & The quality of AI outputs, biased prediction, ethical questions. \\ \hline Mhlanga (2023) & Respect for privacy, transparency, responsible AI (AI limitations), accuracy of information, replacement of teachers, fairness and non-discrimination. \\ \hline Qadir (2022) & Incorrect information and mistakes, unethical conduct, potential to exacerbate inequalities. \\ \hline Rudolph et al. (2023) & Threat to essay, the relevance or accuracy of the generated information, replacement of teaching jobs. \\ \hline Sallam (2023) & Ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. \\ \hline Sallam et al. (2023) & Risk of plagiarism, copyright issues, the risk of academic dishonesty, lack of personal and emotional interactions, suppressing the development of critical thinking and communication skills. \\ \hline Sok \& Heng (2023) & Academic integrity, unfair learning assessment, inaccurate information, over-reliance on AI. \\ \hline Thorp (2023) & Scientific misconduct. \\ \hline \end{tabular} \end{table} Table 1: Existing concerns regarding the implementation of ChatGPT in education. ### Social media discussion of ChatGPT Most of the studies we reviewed rely on authors' observations or a review of existing literature to identify concerns, which may limit the diversity of perspectives and lack real-world relevance of the discussion. Social media offers a platform for users to share their opinions based on first-hand observation or experience, allowing us to tap into current trends and issues related to the use of ChatGPT and gain a more comprehensive collection of the concerns. A few attempts have leveraged social media data to crowdsource opinions regarding the application of ChatGPT (Feng et al., 2023; Haque et al., 2022). For example, Leiter et al. (2023) collected and analyzed Twitter data to investigate people's perceptions of ChatGPT. 
They found that ChatGPT has been generally well-received on social media, particularly in scientific domains like medical fields, but has been viewed as a threat to the education sector. In another study, Taecharungroj (2023) used Latent Dirichlet Allocation (LDA) topic modeling to examine tweets following ChatGPT's launch. The study identified several domains in which ChatGPT could operate effectively, including creative writing, essay writing, prompt writing, code writing, and question answering. However, the study also highlighted the potential impacts of ChatGPT on both technology and humans, such as concerns about job displacement. Two other studies leveraged discussions from Twitter and TikTok about ChatGPT in education. Tlili et al. (2023) analyzed tweets about ChatGPT using social network analysis and conducted interviews to understand stakeholders' perceptions and users' experiences. Their findings revealed several concerns of using ChatGPT, including cheating, honesty, privacy, and manipulation. Haensch et al. (2023) collected TikTok videos to investigate students' usage and perceptions of ChatGPT. They found that TikTok videos promoted ChatGPT for academic assignments and discussed ways to deceive AI detectors. However, they noted a lack of videos depicting ChatGPT-generated nonsensical or unfaithful content. The authors suggested that this gap in information could affect educators' attitudes towards using ChatGPT for teaching and grading. Our review of existing studies has revealed several research gaps. As mentioned, many studies have relied on expert opinions or literature reviews to summarize concerns related to this topic. This approach may not capture diverse perspectives or reflect a more general discussion from the public. While a few studies have leveraged social media data to crowdsource opinions, two questions remained unclear, as listed in the **Introduction**. Our study stands apart from the two relevant social media studies in two ways. First, we applied Natural Language Processing tools to mine opinions from a vast collection of tweets, thereby providing insights from a broad cross-section of Twitter users. Second, we analyzed influential users and their profile information from the social network, further providing practical implications. To address these gaps, we first used topic modeling to identify concerns expressed in negative-sentiment tweets to gain insights into the public's concerns about using ChatGPT in educational settings. Second, we employed social network analysis to identify opinion leaders and frequently mentioned accounts in the conversation. This information can provide useful insights into who the crowd identifies as stakeholders for the application of ChatGPT in education. ## 3 Data and methods **Figure 1** displays the graphical illustration of our research framework. The process started with data collection, which involved collecting and archiving tweets written in English related to the subject of ChatGPT in education, as elaborated in **Section 3.1**. Subsequently, we used a sentiment model developed on the RoBERTa architecture to classify the sentiment, as detailed in **Section 3.2**. Thereafter, we employed the BERTopic tool to cluster negative tweets into distinct topics, with the aim of eliciting potential concerns from negative tweets, as described in **Section 3.3**. 
Next, we used social network analysis to chart the dissemination network among users, which helps identify key accounts that propagate the concerns and should be apprised of the associated risks, as described in **Section 3.4**. ### Data collection We used the academic version of Twitter Search API and selected fourteen education-related search terms along with the keyword "ChatGPT" to collect relevant tweets. Our search covered tweets posted from December 1, 2022, to March 31, 2023. OpenAI released ChatGPT on November 30, 2022 (OpenAI, 2022b). Specific search parameters are provided in **Table 2**. To ensure consistency in our dataset, we limited our search to English tweets that contained the designated keywords. This decision was based on the capabilities of the NLP tools we used to analyze the tweets. As a result, we collected a total of 247,484 tweets, including original tweets, mentions, replies, and retweets, in which 84,828 were original tweets. Figure 1: Illustration of the research framework. ### Sentiment analysis For sentiment analysis, we applied the "twitter-roberta-base-sentiment-latest" released by Loureiro et al. (2022), which is a built based on its predecessor "twitter-roberta-base-sentiment" model released by Barbieri et al. (2020). Barbieri et al. (2020) selected RoBERTa (Liu et al., 2019) as a pre-training approach due to its top performance in the General Language Understanding Evaluation benchmark (GLUE). RoBERTa is a robustly optimized Bidirectional Encoder Representations (BERT) pre-training approach. It is particularly suitable for Twitter where most tweets are composed of a single sentence (Devlin et al., 2019). In addition, compared to context-free embedding models such as Word2Vec and GloVE, BERT embedding is based on the transformers architecture and relies on an attention mechanism to generate contextualized representation based on the text. The RoBERTa-base sentiment model was fine-tuned for sentiment analysis with the TweetEval benchmark, which specifically focuses on analyzing tweet sentiment. Barbieri et al. (2020) added a dense layer to reduce the dimensions of RoBERTa's last layer to the number of labels in the classification task to prepare the model for sentiment classification. This model has demonstrated its superior performance over FastText and Support Vector Machine (SVM) -based models with n-gram features (Barbieri et al., 2020). Loureiro et al. (2022) further updated the model by training it on a larger corpus of tweets, based on 123.86 million tweets extracted until the end of 2021, compared to its predecessor's 58 million tweets. By leveraging the latest model, we were able to analyze the sentiment of tweets. Tweet examples and their resulting sentiment classifications are presented in **Table 3**. The sentiment classification follows the highest score returned by the softmax layer in the RoBERTa model. We used only the tweets with negative sentiment (N = 70,318) in our analyses. \begin{table} \begin{tabular}{|l|l|} \hline \hline Conditions & Search \\ \hline API tool & Twitter Search API for academic research \\ \hline Search date & December 1, 2022 to March 31, 2023 \\ \hline Search terms & ChatGPT + school; ChatGPT + college; ChatGPT + university; \\ & ChatGPT + education; ChatGPT + student; ChatGPT + teacher; \\ & ChatGPT + learning; ChatGPT + curriculum; ChatGPT + class; \\ & ChatGPT + exam; ChatGPT + homework; ChatGPT + teaching; \\ & ChatGPT + academia; ChatGPT + academic. 
\\ \hline \hline \end{tabular} \end{table} Table 2: Twitter API search conditions. \begin{table} \begin{tabular}{|c|c|c|} \hline \hline Tweet & Sentiment Score & Sentiment \\ \hline It’s easy to underestimate the impact on education ChatGPT & Negative: 0.094 & Neutral \\ will have. I asked my kids what concept they were struggling & Neutral: 0.536 & \\ with understanding at school. Like we’ve seen with YouTube & Positive: 0.370 & \\ or Khan Academy, supplementing their edu with tools like this & & \\ can make them smarter than we ever were. & & \\ \hline \hline \#ChatGPT is fascinating for AI in our conversations with customer support, assistants etc. to write our homework, blog articles, grant proposals etc. Some even speculate that we may be a & Negative: 0.010 & Positive \\ \hline \hline \end{tabular} \end{table} Table 3: Examples of tweet sentiment classification. few years away from this technology replacing search engines like #Google. I once had a great engineering professor (Hi Prof. Lasky!). But he was old school. He introduced himself with this photo, and would assign problems that we were required to solve with a slide rule. This is a thread about #ChatGPT and AI tool adoption in math/engineering. The software recommendations that I have seen so far from chatgpt are often incorrect. Sometimes obvious (an incorrect method name) but also sy (timing). These mistakes are often caught by tools but people say this is a tool. In reality: it's homework. Not buying today. Professors Caught Students Cheating on College Essays With ChatGPT. Negative: 0.762 Negative Neutral: 0.228 Positive: 0.010 ### BERT-based topic modeling Topic modeling enables the identification of semantic themes in vast quantities of text data, such as social media data. We employed the BERTopic tool, which involves applying BERT embedding to extract semantically relevant sentence embeddings from tweets. As mentioned, BERT embedding offers a distinct advantage due to its contextualized representation. Specifically, we utilized the Sentence-BERT (SBERT) Python package for this task (Reimers & Gurevych, 2019). Given that the BERT embedding converts tweets into high-dimensional vectors, we employed a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP), as proposed by McInnes et al. (2018). UMAP can help mitigate the "curse of dimensionality" while preserving the local and global structure of the dataset. This feature of UMAP is particularly useful for constructing topic models that rely on the structural similarities of word vectors (McInnes et al., 2018). After that, we employed the Scikit-learn Python package to apply K-Means clustering and group similar sentence embeddings into topics (Buitinck et al., 2013). We chose K-Means clustering because of its simplicity, computational efficiency, and effectiveness in handling large datasets. We experimented with various cluster numbers, including 50, 100, and 200, to identify the optimal number of clusters for our analysis. Our results revealed that 50 or 100 clusters produced clusters that mixed several topics. 200 clusters provided more coherent and separable clusters. The final stage of our topic modeling process involves the representation of topics. We used the count vectorizer within the Scikit-learn Python package to tokenize the topics. We then applied class-based Term Frequency-Inverse Document Frequency (c-TF-IDF) to extract the topical keywords and representative tweets from each cluster (Grootendorst, 2022). 
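The following sketch illustrates, at a high level, how the sentiment-filtering and topic-clustering steps described in Sections 3.2 and 3.3 could be chained together in Python. It is a minimal illustration rather than the code used in this study: the example tweets, the Hugging Face model identifier ("cardiffnlp/" prefix), the SBERT checkpoint, the UMAP and K-Means settings, and the simplified class-based TF-IDF weighting are assumptions standing in for the exact configuration reported above.

```python
# Minimal sketch of the NLP pipeline (assumed models and parameters, not the authors' code).
import numpy as np
import umap
from transformers import pipeline
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder input; in the study this would be the 84,828 original tweets (Section 3.1).
tweets = [
    "Professors Caught Students Cheating on College Essays With ChatGPT.",
    "Teachers are on alert for inevitable cheating after release of ChatGPT.",
    "How #ChatGPT robs students of motivation to write and think for themselves.",
    "ChatGPT is the nail in the coffin for the current Education System.",
    "ChatGPT fails India civil service exam.",
    "NYC Bans Students and Teachers from Using ChatGPT.",
]

# Step 1 (Section 3.2): RoBERTa-based sentiment classification; keep only negative tweets.
# This checkpoint returns the labels "negative", "neutral", and "positive".
sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment-latest")
negative = [t for t, out in zip(tweets, sentiment(tweets)) if out["label"] == "negative"]

# Step 2 (Section 3.3): SBERT embeddings -> UMAP reduction -> K-Means clustering into topics.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(negative)  # assumed SBERT checkpoint
reduced = umap.UMAP(n_neighbors=5, n_components=2, metric="cosine",
                    random_state=42).fit_transform(embeddings)
n_topics = min(200, len(negative))   # 200 clusters in the study; capped here for the toy input
topic_ids = KMeans(n_clusters=n_topics, n_init=10, random_state=42).fit_predict(reduced)

# Step 3: class-based TF-IDF-style keywords, computed on one concatenated document per topic.
docs = [" ".join(t for t, k in zip(negative, topic_ids) if k == c) for c in range(n_topics)]
vectorizer = CountVectorizer(stop_words="english").fit(docs)
matrix = vectorizer.transform(docs).toarray()
words = np.array(vectorizer.get_feature_names_out())
tf = matrix / matrix.sum(axis=1, keepdims=True)                        # term frequency per topic
weight = np.log(1 + matrix.sum(axis=1).mean() / matrix.sum(axis=0))    # simplified c-TF-IDF weight
for c, row in enumerate(tf * weight):
    print(c, words[row.argsort()[::-1][:10]])                          # top-10 keywords per topic
```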
This process enabled us to interpret the themes from clustered topics and assign different topics into a theme (i.e., "category" in our following analysis). ### 3.4 Social network analysis **Figure 2(a)** illustrates the relationships among Twitter users based on mentions, retweets, quotes, and replies. We focused on two types of interactions: mentions and retweets. A mention is a tweet that includes another user's name in the text, and when a user is mentioned, they receive a notification from Twitter. A retweet is a repost of another user's tweet (Twitter Inc, 2023). By analyzing mentions, we aimed to identify users who were frequently implicated in the conversation and thus should be responsive to the concerns. By analyzing retweets, we aimed to identify the opinion leaders in communicating risks and concerns to a broader audience. We utilized degree centrality to investigate communication patterns in Twitter social networks (Powell, 2015). **Figure 2(b)** illustrates the degree of centrality of each Twitter user, which reflects the number of connections (edges) a user has with other users (vertices). Graph theory introduces three types of centralities in a network graph: in-degree, out-degree, and betweenness centrality (Powell, 2015). We focused on in-degree centrality in this study as it allows us to identify the influential users (those who receive attention from other users). To analyze the social network of Twitter users, we employed NodeXL, a network analysis and visualization software package embedded with Microsoft Excel (Smith et al., 2010). With NodeXL, we selected the Clauset, Newman, and Moore (CNM) algorithm to plot the community structure (Clauset et al., 2004) and the Fruchterman-Reingold layout to display the network (Fruchterman & Reingold, 1991). Figure 2: Illustrations of Twitter interactions and degree centrality in Graph Theory (adapted from Kim et al. (2018) and Li et al. (2021)). (a) Types of Twitter interactions. (b) Degree centrality. ## 4 Results The results section consists of three parts. In **Section 4.1**, we used the RoBERTa tool to analyze sentiment and then identify events that were associated with sentiment changes. This sets the context for the discourse analysis and provides an overview of sentiment trends. In **Section 4.2**, we utilized BERTopic modeling to perform discourse analysis. This section addressed **RQ1** by identifying the concerns that garnered the most attention within Twitter communities. In **Section 4.3**, we employed the NodeXL tool to describe the social network of Twitter users who engaged in the conversation. This section answers **RQ2** by identifying influential users. ### Sentiment trends and analysis **Figure 3** displays the sentiment trend during the study period. Out of the 247,484 tweets, 49,528, 127,638, and 70,318 tweets were identified as negative, neutral, and positive, respectively. Similarly, out of the 84,828 original tweets, 16,011 were negative, 42,495 were neutral, and 26,322 were positive. The sentiment analysis suggests that Twitter users have a generally positive attitude towards the application of ChatGPT in education, as indicated by the orange line consistently positioned above the blue line in **Figure 3**. To better understand the sentiment trends, we identified and annotated a few noteworthy events that likely triggered widespread discussion on Twitter. 
One such event occurred on March 14, 2023, with the release of OpenAI's highly anticipated GPT-4 model (OpenAI, 2023), resulting in a positive sentiment across Twitter. Similarly, when ChatGPT exceeded 100 million users in early February (Milmo, 2023), it sparked a wide discussion that persisted for several days on Twitter. However, we also observed several significant events associated with negative sentiment in public attitudes. The first one occurred around December 4, 2022, at the beginning of the study period, as many tweets discussed the potential negative effects of ChatGPT on the education system (Meckler and Verma, 2022; Stokel-Walker, 2022). Teachers expressed concern that this technology could change academia, leading to cheating and other negative impacts on learning. Later on, in late January, these worries resurfaced as news broke that ChatGPT had passed MBA exams and other tests from law and business schools (Kelly, 2023; Rosenblatt, 2023). Furthermore, schools and universities announcing ChatGPT bans in January (Johnson, 2023) also contributed to the negative sentiment. The largest negative sentiment peak was observed at the end of March, specifically on March 28, 2023. Many tech leaders, including Elon Musk and Steve Wozniak, signed an open letter by the Future of Life Institute (Future of Life Institute, 2023), calling for an immediate pause on giant AI experiments like ChatGPT, citing "profound risks to society and humanity" (Hurst, 2023). This event contributed to the dramatic increase in negative sentiment towards ChatGPT. Overall, these events serve as useful markers in contextualizing and interpreting our findings regarding public sentiment towards the use of ChatGPT in education. Figure 3: Twitter sentiment trend and significant events during the study period. ### Discourse analysis of concerns We employed BERTopic to cluster 200 topics from 16,011 original negative-sentiment tweets. Subsequently, we manually examined the topics and categorized them into distinct categories. For consistency, we used the term "topic" to refer to the clusters returned by BERTopic and the term "category" to refer to the manually identified category information. Our categorization was informed by two sources. First, we extracted relevant categories from our literature review, as presented in **Table 1** in **Section 2.2**. Drawing on the insights from prior works, we identified six overarching categories that encapsulated the concerns expressed in the negative tweets. * **(A) Academic integrity**: topics that describe ethical and moral concerns in academic activities, such as unethical practices, scientific misconduct, the risk of academic dishonesty, and various forms of cheating and plagiarism in **Table 1**. * **(I) Impact on learning outcomes and skill development**: topics that describe negative impact on learning and skill development. This includes the impact on critical thinking, creativity, problem-solving abilities, as well as writing and coding skills in **Table 1**. * **(L) Limitation of AI capabilities**: topics that describe the limited capabilities of AI-generated information, such as biased outcomes, lack of contextual understanding, limited ability to personalize instruction, inaccurate information, misinformation, misunderstanding, failure to generate sufficient detail in **Table 1**. * **(S) Security and privacy**: topics that describe data security concerns, such as data privacy and cybersecurity issues in **Table 1**. 
* **(W) Workforce challenges**: topics that describe potential impacts on jobs, such as replacement of teachers, devaluation of job training, as well as workforce displacement and reskilling in **Table 1**. Second, after reviewing the keywords and a sample of tweets from each of the 200 topics, we identified three additional categories, as presented below. We noted that the "(M) Miscellaneous" category includes topics that contain the key search terms but are not relevant to ChatGPT application in education, such as general AI concerns, tech company competition, challenges of deep learning models, or tweets with sentiment classification errors. * **(G) General negativity**: topics that describe a generally negative attitude towards the application of ChatGPT in education, such as disruption to the education system and exposing weakness of current education. * **(O) Operation and management issues**: topics that pertain to the operation of ChatGPT, such as the system breakdowns and charges incurred during usage. * **(P) Policy and social concerns**: topics that describe the policies and social concerns, such as blocks and bans on the use of ChatGPT, as well as sensitive issues that may cause a social impact. * **(M) Miscellaneous**: These are topics that do not fit into any of the above categories. They may cover general concerns related to AI, deep learning models, data bias, or topics that arise from incorrect sentiment classification. After finalizing the categories, we manually reviewed the keywords (generated by the c-TF-IDF method) that represented the topics and examined a sample of tweets under each topic. Two authors conducted a discussion for each clustered topic and assigned it to one of the nine predetermined categories. Topics that fell under the "(M) Miscellaneous" category, which did not pertain to education settings, were excluded from subsequent analysis. **Figure 4(a)** shows the manual classification results of the 200 topics returned by BERTopic. Using this classification, we analyzed the distribution of the original negative-sentiment 16,011 tweets, as presented in **Figure 4(b)**. Our analysis identified five frequently discussed concerns among Twitter users, including "(A) Academic integrity," "(I) Impact on learning outcomes and skill development," "(L) Limitation of capabilities," "(P) Policy and social concerns," and "(W) Workforce challenges." We sought to examine the temporal patterns of concerns surrounding ChatGPT in education. **Figure 5** illustrates the temporal distribution of negative tweets related to these concerns. At the beginning of the study period, a surge of concerns arose regarding the potential disruption that ChatGPT could bring to education. There were also concerns about its impact on learning and skill development. In early January, when the use of ChatGPT was blocked in all New York City schools (**Figure 3**), a discussion about policy and social concerns related to ChatGPT in education emerged (i.e., could ChatGPT be used in classrooms or exams?). When ChatGPT passed MBA and law exams (**Figure 3**), Twitter users expressed concerns about its impact on the workforce, which could potentially diminish the value of education. Later in April, when tech leaders called for a pause on AI, discussions shifted towards the potential limitations of generative AI's capabilities. 
Our findings suggest that concerns surrounding ChatGPT in education could shift over time, with different issues taking precedence at different times depending on policies enacted and capabilities identified. Figure 4: The distribution of categories based on (a) 200 topics clustered by BERTopic and (b) 10611 original negative-sentiment tweets. **Table 4** provides details about each category, including its keywords and representative tweets. This data provides detail about the concerns and worries Twitter users expressed under each general category of concern. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline \multirow{2}{*}{Category} & Topic No. & \multicolumn{2}{c|}{Keywords} & \multicolumn{2}{c|}{Representative Tweet} \\ \hline (A) Academic integrity & 26 & ai - cheat - cheating - students - concerns - using - increase - passes - head on - over & Concerns over students using AI to cheat increase after ChatGPT passes Wharton & Concerns over students using AI to cheat increase after ChatGPT passes Wharton \\ \cline{2-4} & 48 & plagiarism - tool - decide - whether - engine - ai - rethink - chatbot - professors - princeton & ChatGPT Is Making Universities Rethink Plagiarism. Students and professors can’t decide whether the AI chatbot is a research tool, or a cheating engine. \#deth \#ILoveEdTech \#ImFutureReady \#learning \#AI \\ \cline{2-4} & 137 & florida - scandal - erupts - elite - program - high - accuses - inside - cheating - school & ChatGPT cheating scandal erupts inside elite program at Florida high school. \\ \cline{2-4} & 152 & alert - inevitable - release - after - teachers - cheating - are - on - for - of & Teachers are on alert for inevitable cheating after release of ChatGPT. \\ \hline (G) General negativity & 8 & education - system - we - our - world - teachers - are - will - already - change & ChatGPT is the nail in the coffin for the current Education System. RIP. \#AI \#Education \\ \cline{2-4} & 20 & education - system - will - pandemic - disrupt - be - we - the - already - going & ChatGPT will be truly disruptive for education. \\ \cline{2-4} & 108 & assessment - assessments - compliance - measure - deny - lazy - rethink - daughter - threat - time & Time to rethink higher education assessments. ChatGPT is disrupting everything including assessments. \\ \hline \end{tabular} \end{table} Table 4: Representative topics and tweet examples within each category. Figure 5: The trends of categories based on all the 49528 negative tweets. \begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline (I) Impact on learning outcomes and skill development & 2 & code - coding - python - programming - learning - learn - it - me - class - ask & \begin{tabular}{l} \(\underline{\text{@scrumtuous most administrators are too worried about the ChatGPT threat to consider teaching real skills like coding.}}\) \\ \end{tabular} \\ \cline{2-4} & 51 & essay - essays - write - tradition - center - humanistic - writing - undergraduate - ground - student & The essay, in particular the undergraduate essay, has been the center of humanistic teacher children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. \\ \cline{2-4} & 92 & robs - motivation - write - writing - themselves - papers - term - prowess - technical - short & How \#ChatGPT robs students of motivation to write and think for themselves. 
\\ \cline{2-4} & 128 & critical - thinking - artists - threat - panic - provider - eit - education - beans - backgrounds & If the fucking curriculum you teach can be solved by ChatGPT then you're not teaching critical thinking anyway! \\ \hline (L) Limitation of capabilities & 33 & sources - academic - papers - source - articles - review - cite - research - references - up & \begin{tabular}{l} \(\underline{\text{@jtLOL Based off my experiments with}}\) \\ ChatGPT, it's A LOT like journalists. It will literally make shit up. I asked questions for which I already knew the answers and it was not only wrong, but when I asked for sources, it threw back books; academic papers that did not exist. \\ \cline{2-4} & 40 & machine - learning - months - model - tuned - input - dimwit - march - clown - limitations & In case you were wondering, ChatGPT is not a very good machine learning engineer. \\ \cline{2-4} & 70 & answers - answer - wrong - correct - mistakes - question - questions - immediately - but - errors & ChatGPT is like a lazy student who has wrong answers but always has the answers. \\ \cline{2-4} & 99 & fails - civil - exam - failed - india - competitive - service - biggest - exceeds - ka & ChatGPT fails India civil service exam. \\ \hline (O) Operation and management issues & 109 & money - month - free - tutors - plus - evil - you - cost - your - not & ChatGPT being \$42/month is bad for all. Maybe not so for westerners but for 3rd world countries, IT startups and especially for students learning new skills is just way too high!! \\ \cline{2-4} & 129 & fudan - platform - crashes - team - chatgpt style - launch - hours - apologises - after - china & China Fudan University team apologises after ChatGPT-style platform crashes hours after launch. \\ \cline{2-4} & 164 & homework - ate - do - my - need - godsend - fun fact - hogging - dbo - tempted & Why is ChatGPT at capacity the moment that I legitimately need help with my homework \\ \hline (P) Policy and social concerns & 58 & chatbot - inaccessible - impacts - devices - networks - negative - spokesperson - york - concerns - banned & NYC Bans Students and Teachers from Using ChatGPT. \\ \hline \end{tabular} Under the "(G) General negativity" category, Twitter users expressed general concerns about the potential risks of ChatGPT in education. For example, some users (e.g., Topic 20 in **Table 4**) highlighted how this technology could disrupt the education system. Others worried that AI could ultimately replace the education system (e.g., Topics 8 in **Table 4**), prompting educators to reconsider assessment (e.g., Topic 108 in **Table 4**). One specific concern addressed by Twitter communities is ChatGPT's impact on "(A) Academic integrity" in education. Users reported multiple instances of cheating with ChatGPT in academics (e.g., Topics 26 in **Table 4**). Concerns also existed about how ChatGPT and related AI tools could undermine the credibility of online exams and assignments, as the risk of cheating increased with the proliferation of these technologies (e.g., Topics 26 and 152 in **Table 4**). Moreover, the lack of understanding about the acceptable and unethical use of ChatGPT in an educational context, may further exacerbate these concerns (e.g., Topic 48 in **Table 4**). In addition, we observed that the category of "(I) Impact on learning outcomes and skill development" was widely discussed within Twitter communities (**Figure 4**). 
Some users cautioned against using it to plagiarize or complete assignments, such as writing essays (e.g., Topics 51 in **Table 4**) or coding assignments (e.g., Topic 2 in **Table 4**). Using generative AI instead of writing prose and code one's self could undermine important abilities, such as critical thinking, communication, and problem-solving (e.g., Topics 92 and 128 in **Table 4**), raising concerns about the potential impact of ChatGPT on skill development. Another widely discussed concern is the limitation of ChatGPT's generated information or understanding of human interactions, as illustrated in the category of "(L) Limitation of capabilities." Many users reported that ChatGPT generated nonsensical or false information (e.g., Topics 33 and 70 in **Table 4**). In particular, ChatGPT could generate resources that did not exist (Topics 33 in **Table 4**). Others doubted about ChatGPT's effectiveness as a resource for learning certain skills, such as machine learning (e.g., Topic 40 in **Table 4**) or civil service (e.g., Topic 99 in **Table 4**). Our analysis further revealed two significant concerns expressed by Twitter users. The first one falls under the category of "(P) Policy and Social concerns." Many academic institutions have implemented policies to regulate the use of ChatGPT due to the fear of cheating or impact on learning. For instance, New York City announced the prohibition of ChatGPT in schools (e.g., Topic 58 in **Table 4**). Similarly, Hong Kong universities imposed bans on the use of ChatGPT (Topic 158 in **Table 4**). Within this category, our analysis highlights that Twitter communities engaged in extensive discussions on the ethical and moral implications of using AI-generated content in sensitive situations. For instance, one discussion revolved around whether it is appropriate to use ChatGPT to write a consolation letter to victims of mass shootings (e.g., Topic 65 in **Table 4**). The second one falls under the category of "(W) Workforce challenges." ChatGPT's potential to offer virtual intelligent tutoring services and provide students with feedback has led to debates among educators about the possibility of being supplanted by AI chatbots (e.g., Topic 39 in **Table 4**). This concern was not limited to the education sector, as users in other fields also worried that ChatGPT could devalue their job training, such as MBA training (e.g., Topic 41 in **Table 4**). Others were concerned about ChatGPT's potential to replace middle-class jobs or white-collar jobs (e.g., Topics 35 and 177 in **Table 4**). Last, our topic analysis reveals two additional concerns that received less attention but are still significant. One concern is related to "(O) Operation and management issues," while the other concern is "(S) Security and privacy." Concerns surrounding the operation and management mainly include the cost of premium service, stability, and capacity limitations (e.g., Topics 109, 129, and 164 in **Table 4**). These concerns reflect the challenges of maintaining a high-quality AI service that meets the needs of users. In terms of security and privacy concerns, we identified only one topic. Some users expressed concerns about the potential inadvertent disclosure of personal information and chat histories. However, we also observed a few tweets that fall into other categories but imply security and privacy issues, which is discussed in the **Limitations** section. 
### 4.3 Social network analysis As discussed in **Section 3.4**, we examined two types of relationships on Twitter: mentions and retweets. The network of mentions helps determine who was frequently implicated in the conversation and therefore potentially responsible for informing policies around ChatGPT. The network of retweets helps identify who led or disseminated the concerns. These insights help clarify the sources and information dissemination on Twitter. We utilized NodeXL to generate the networks of mentions and retweets, as presented in **Figures 6** and 7, respectively. **Table 5** summarizes the statistics for the social networks of mentions and retweets. Explanations for each network metric are listed below. * Vertices: Twitter users included in the network. * Edges: Connections between two Twitter users, such as retweet and mention. * Duplicated edges: One user mentions or retweets another user multiple times. * Self-loops: Users mention or retweet their own tweets that form self-loops. * Connected components: A set of users in the network that are linked to each other by edges, forming clusters within the social network. * Geodesic distance: The shortest path between two Twitter users, measured by the least number of edges connecting them in the network. The network of mentions (**Figure 6**) comprises 11,478 vertices (i.e., Twitter users who either mentioned or were mentioned by other users), 12,264 edges, and 2,349 connected components, which are clusters within the network. The average distance between any two vertices, also known as the geodesic distance, is 8.27. In contrast, the network of retweets (**Figure 7**) comprises 32,806 vertices (i.e., Twitter users who either retweeted or were retweeted by other users), 33,517 edges, and 1,499 connected components, with an average geodesic distance of 4.93. Overall, the network of mentions has fewer users and edges, more connected components, and a greater geodesic distance than the network of retweets. It is also less dense than the network of retweets and has fewer clusters with a single user dominating the cluster. This suggests that there are fewer interactions between users in the network of mentions, potentially because those who were mentioned did not respond to those who mentioned them. \begin{table} \begin{tabular}{l|l|l} \hline Network metric & Network of mentions & Network of retweets \\ \hline Network type & Directed & Directed \\ \hline Vertices (i.e., Twitter users) & 11,478 & 32,806 \\ \hline Total edges & 12,264 & 33,517 \\ \hline Duplicated edges & 980 & 727 \\ \hline Unique edges & 11,284 & 32,790 \\ \hline Connected components & 2349 & 1499 \\ \hline Self-loops & 155 & 199 \\ \hline Max. geodesic distance & 26 & 21 \\ \hline Avg. geodesic distance & 8.267 & 4.925 \\ \hline \end{tabular} \end{table} Table 5: Social network statistics for the networks of mentions and retweets. There are several noteworthy observations regarding the users. First, a few users appear to be "centered" within the cluster, surrounded by a large number of users. These "centered" users possess the highest in-degree centrality, as they were often retweeted or mentioned by other participants in the conversation. Figure 6: The social network of mentions. Figure 7: The social network of retweets. In some clusters, such as user 1, user 4, user 3, and user 2 in G1, G2, G3, and G4, respectively, a single user may hold complete control over the entire cluster (See **Figure 7**). 
In other clusters, several users co-locate in the cluster, as illustrated by user 1, user 8, user 14, and user 15 in G1 within **Figure 6**, suggesting that their tweets share common outreach and responses. We also observed that users within the network of retweets are more concentrated in comparison to those within the network of mentions, as denoted by a lower ratio of connected components to vertices and a smaller average geodesic distance. This implies that a few users' tweets were repeatedly retweeted within the community, whereas users' mentions were more diffuse. Nevertheless, many users were only mentioned or retweeted once within both networks. **Tables 6** and **7** list the top 30 users with the highest in-degree centrality within each network. To provide more insights into these users, we examined two attributes: verification status and account type. Verification status indicates whether Twitter has verified an account as belonging to a public figure or organization of public interest, such as government agencies, politicians, journalists, media personalities, and other influential individuals. The second attribute is account type, which was manually determined based on our judgment of users' account descriptions. We classified users into either organizational or individual accounts with their specialized areas, as listed below. Due to privacy considerations, we replaced usernames with "user#" in the tables. * Organizational accounts: accounts run by organizations, further categorized by area, such as tech (org_tech), media (org_media), and other (org_other) in **Tables 6** and **7**. * Individual accounts: accounts run by individuals, further categorized by area, such as tech (ind_tech), education (ind_edu), media (ind_media), politics (ind_politic), and other (ind_other) in **Tables 6** and **7**. Based on the categories of retweets (see **Table 7**), tweets within the five identified categories were more likely to receive attention from the community, while the other categories were not as prominent. (Notes: (1) The "Category" column in **Table 6** implies the number of mentions of a user. (2) It should be noted that the sum of mentions is not equal to the in-degree centrality in **Table 6**. 
This is because in-degree centrality only counts unique edges between two users (unique edges are mentions by one other account; even if account A mentions account B twice, the in-degree for account B is 1), and includes tweets labeled as "(M) Miscellaneous.") \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \hline User & In-degree & Group & Verified & Account & Categories \\ \hline User 1 & 386 & G1 & True & org\_tec & W(124), G(109), I(53), L(50), A(19), & \\ & & & & & P(13), O(4) & \\ \hline User 2 & 213 & G4 & False & ind tech & L(219), I(5) \\ \hline User 3 & 210 & G4 & True & org tech & L(205), A(2), I(2), W(1), P(1) \\ \hline User 4 & 207 & G4 & False & org tech & L(204) \\ \hline User 5 & 202 & G4 & False & org tech & L(204) \\ \hline User 6 & 174 & G5 & False & org tech & I(173) \\ \hline User 7 & 154 & G2 & True & ind\_tech & L(53), G(31), I(13), A(12), W(11), & \\ & & & & & P(11), O(3) & \\ \hline User 8 & 125 & G1 & True & ind tech & G(93), L(15), P(5), I(4), W(3), O(1) \\ \hline User 9 & 124 & G2 & True & org media & I(114), G(2), L(1) \\ \hline User 10 & 113 & G8 & True & ind\_tech & W(65), I(16), G(10), A(9), L(6), P(6), & O(1) \\ \hline User 11 & 88 & G9 & False & org media & I(83), G(1), L(1) \\ \hline User 12 & 80 & G20 & True & ind edu & L(80) \\ \hline User 13 & 71 & G8 & False & ind edu & W(64), I(4), L(2), A(1) \\ \hline User 14 & 71 & G1 & True & ind tech & G(70) \\ \hline User 15 & 70 & G1 & False & ind tech & G(70) \\ \hline User 16 & 67 & G21 & False & ind edu & W(68) \\ \hline User 17 & 51 & G23 & True & org\_media & A(50) \\ \hline User 18 & 49 & G22 & False & ind other & L(16), G(10), W(10), I(5), A(3), P(2) \\ \hline User 19 & 47 & G16 & True & org media & A(40), I(2), L(20, W(2), P(2), G(1)) \\ \hline User 20 & 42 & G25 & True & ind edu & I(46) \\ \hline User 21 & 42 & G25 & False & ind edu & I(46) \\ \hline User 22 & 42 & G25 & False & ind edu & I(46) \\ \hline User 23 & 42 & G25 & False & ind edu & I(46) \\ \hline User 24 & 42 & G25 & False & ind edu & I(46) \\ \hline User 25 & 42 & G25 & False & ind edu & I(43) \\ \hline User 26 & 41 & G27 & True & org media & A(41) \\ \hline User 27 & 40 & G2 & True & org other & G(14), W(9), L(7), I(5), A(2), P(1) \\ \hline User 28 & 36 & G14 & False & org tech & L(20), G(19), I(8), A(6), W(6), P(4) \\ \hline User 29 & 33 & G11 & False & ind tech & A(31), G(1), I(1) \\ \hline User 30 & 29 & G7 & False & ind tech & P(22), I(12), A(4), G(1) \\ \hline \end{tabular} \end{table} Table 6: Top 30 users based on in-degree centrality in the network of mentions. (Note: the "Category" column in **Table 7** implies the number of tweets that the user posted. User 23 only has one tweet fallen into the "(M) Miscellaneous" category.) In addition to analyzing the network of the top 30 mentioned users, as shown in **Table 6**, we sought to examine user patterns on a broader scale. To achieve this, we conducted further investigations on accounts with an in-degree larger than 12, resulting in a total of 105 users. Upon manually reviewing their profiles, there were four users who had either deactivated their accounts or had posted zero tweets. As a result, we included 101 users to explore the patterns from mentioned users, as presented in **Figure 8**. 
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline User & In-degree & Group & Verified & Account & Categories \\ \hline User 1 & 9042 & G1 & False & ind edu & I(4), L(4), G(1), W(1), A(1) \\ \hline User 2 & 1562 & G4 & False & ind tech & W(1) \\ \hline User 3 & 1489 & G3 & False & ind tech & G(1) \\ \hline User 4 & 1295 & G2 & False & ind edu & W(1), I(1) \\ \hline User 5 & 1287 & G5 & False & ind edu & L(1) \\ \hline User 6 & 725 & G9 & False & ind other & G(1) \\ \hline User 7 & 575 & G10 & False & ind other & G(2), W(1) \\ \hline User 8 & 520 & G11 & False & ind tech & W(2) \\ \hline User 9 & 460 & G8 & True & ind politic & P(1) \\ \hline User 10 & 354 & G14 & False & ind other & L(1), I(1) \\ \hline User 11 & 343 & G17 & False & ind edu & L(1) \\ \hline User 12 & 285 & G12 & True & ind edu & I(1), G(1), A(1) \\ \hline User 13 & 281 & G18 & True & ind media & I(1) \\ \hline User 14 & 224 & G13 & False & org media & L(1) \\ \hline User 15 & 211 & G6 & False & ind edu & A(1) \\ \hline User 16 & 206 & G24 & False & ind edu & L(1) \\ \hline User 17 & 198 & G7 & False & ind edu & I(2) \\ \hline User 18 & 189 & G7 & False & ind edu & G(1) \\ \hline User 19 & 186 & G15 & True & ind edu & G(3), I(2), A(1), L(1), P(1) \\ \hline User 20 & 180 & G30 & False & ind tech & I(1), P(1) \\ \hline User 21 & 179 & G13 & False & ind tech & P(1) \\ \hline User 22 & 156 & G22 & False & ind tech & P(1) \\ \hline User 23 & 154 & G23 & False & ind tech & M(1) \\ \hline User 24 & 129 & G7 & False & ind edu & G(1) \\ \hline User 25 & 112 & G6 & False & ind tech & G(1) \\ \hline User 26 & 104 & G16 & False & ind edu & L(1) \\ \hline User 27 & 103 & G21 & False & ind tech & W(1), P(1) \\ \hline User 28 & 97 & G6 & False & ind edu & W(1) \\ \hline User 29 & 96 & G7 & False & ind edu & G(1) \\ \hline User 30 & 93 & G15 & True & ind edu & I(1) \\ \hline \end{tabular} \end{table} Table 7: Top 30 users based on in-degree centrality in the network of retweets. Upon analyzing **Figure 8(a)-(c)**, we discovered that more than half of the top mentioned users (58 out of 101) were non-verified and individual accounts (55 out of 101). In terms of account area, there was an almost equal distribution of top mentioned users from the tech, education, and media sectors. However, government portals and politicians, who we expected to be implicated in the discussion, were rarely mentioned in this list. Further manual examination of these accounts revealed that many of these tech accounts are from big tech companies such as OpenAI, Google, and Microsoft or influential figures like Elon Musk and Sam Altman. The education accounts are mainly university professors, and the media accounts are from news agencies, such as the _New York Times_ and the _Washington Post_. Upon examination of **Figure 8(d)**, we identified that the top users were predominantly mentioned in two categories: (L) Limitation of capabilities (mentioned 1221 times) and (I) Impact on learning outcomes and skill development (mentioned 1091 times). After manual examination, we found that the most frequently mentioned three accounts for both categories are from the tech area. This pattern persists across other categories, with the exception of (A) academic integrity, where media accounts take center stage. Figure 8: Distribution of top 101 mentioned users in the network. (a) Count by verification. (b) Count by type. (c) Count by area. (d) Mentions by category. 
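The degree-centrality measures and network statistics reported in Table 5 and Tables 6-7 were produced with NodeXL. As an illustration only, the sketch below shows how a comparable mention (or retweet) network and its in-degree centrality could be computed with the networkx Python package; the toy edge list and user names are assumptions, and the CNM community detection and Fruchterman-Reingold layout used in Section 3.4 are not reproduced here.

```python
# Illustrative sketch with networkx (not the NodeXL workflow used in the study).
import networkx as nx

# Assumed edge list: (source, target) pairs in which the source mentions or retweets
# the target, extracted from the collected tweets (Section 3.1).
edges = [
    ("user_a", "user_1"), ("user_b", "user_1"), ("user_b", "user_1"),  # duplicated edge
    ("user_c", "user_2"), ("user_a", "user_2"), ("user_1", "user_1"),  # last pair is a self-loop
]

G = nx.DiGraph()
G.add_edges_from(edges)  # duplicate (source, target) pairs collapse into a single unique edge

# In-degree: how many distinct users mention/retweet a given account
# (the measure used to rank the top 30 users in Tables 6 and 7).
in_degree = dict(G.in_degree())
top_users = sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True)[:30]

# Network-level statistics analogous to Table 5.
stats = {
    "vertices": G.number_of_nodes(),
    "unique_edges": G.number_of_edges(),
    "self_loops": nx.number_of_selfloops(G),
    "connected_components": nx.number_weakly_connected_components(G),
}
print(top_users)
print(stats)
```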
## 5 Discussion As an emerging technology, ChatGPT has the potential to bring significant opportunities to the education sector, but it also poses certain risks. To better understand how the public perceives these risks and concerns, we conducted a study to identify concerns about the ChatGPT application in education as expressed in social media. Specifically, we chose a crowdsourcing approach with Twitter data, aiming to (1) tap into a diverse range of perspectives from the public and (2) identify users who were frequently implicated and who communicated the concerns in the conversation. Our study sheds light on the public's perceptions of key issues and considerations that need to be addressed for the responsible deployment of AI applications in education. Below are the key findings. ### 1. Twitter users expressed generally positive attitudes towards using ChatGPT in education The sentiment analysis indicates that a significant majority of users hold a positive view of ChatGPT's potential applications in education. Despite the relatively recent availability of this technology, many Twitter users appear to be enthusiastic about the ways in which it could improve the learning experience for students and teachers alike. This observation aligns with Tlili et al. (2023), showing that Twitter users showed a generally positive sentiment in the month following the release of ChatGPT. This consistency suggests that Twitter users continued to view the application of ChatGPT in education positively. In addition, our findings suggest that major events can influence public perception of ChatGPT's potential benefits and risks. For instance, when ChatGPT passed exams typically taken by human professionals, such as MBA and law exams (Kelly, 2023; Rosenblatt, 2023), some users voiced concerns about its impact on the job market and the value of educational training. We also observed variability in sentiment after updates from the model creators and when tech industry professionals made public comments about generative AI. ### 2. Twitter users mainly focused on five specific concerns In response to the first research question, our findings reveal that Twitter users mainly discussed five specific concerns: "academic integrity," "impact on learning outcomes and skill development," "limitation of capabilities," "policy and social concerns," and "workforce challenges." These concerns echo prior work, which explains expert opinions (AlAfnan et al., 2023; Mhlanga, 2023; Mintz, 2023; Qadir, 2022; Sok & Heng, 2023). Twitter users identified ChatGPT's potential risks to academic integrity, writing and coding skills, limited capabilities, and certain job markets. Twitter users also expressed their unique perspectives based on their experiences with the technology. For instance, many users reported instances of ChatGPT generating misinformation, false references, and incorrect answers, which is also consistent with literature we reviewed (Baidoo-Anu & Owusu Ansah, 2023; Qadir, 2022). Other users noted that ChatGPT can be at capacity or break down, leading to frustration and limitations in usage. Users also expressed concerns about the limitations of ChatGPT in specialized areas, such as machine learning or civil service exams, where it may provide limited or insufficient information. We observed discussion on Twitter regarding the banning of ChatGPT in schools and universities due to concerns about its impact on academic integrity and learning outcomes. 
Last, many users expressed concerns about negative social consequences that could arise from using ChatGPT, such as using it to draft insincere or impersonal letters in sensitive situations (Korn, 2023). ## 3 Tech, education, and media users were highly implicated in the conversation In response to the second research question, our analysis of social networks first revealed that Twitter users mentioned others from the tech, education, and media sectors. Specifically, we observed that accounts from big tech companies or influential individuals in the tech sector were often mentioned. Professors were the key players in the education sector, and popular news agencies led the media sector. However, we found no significant difference in the level of influence between verified or unverified accounts, nor did account type (i.e., individual or organizational) make a significant impact on the conversation's direction. Politicians and governmental agents were barely mentioned in the conversation. Together these findings indicate that Twitter is not actively discussing regulation or government oversight of generative AI. Instead, Twitter is trying to understand the technology (i.e., by engaging academic experts), to follow related news, and to keep up with tech industry conversations. ## 4 Education and tech individual users drove the distribution of concerns Our analysis of social networks also revealed that individual users from education and tech sectors played a vital role in driving the discussion about concerns. Specifically, we identified 30 users whose tweets about concerns were retweeted most often. Among them, a substantial proportion of the top users who drove the conversation on ChatGPT's use in education belonged to the education sector, specifically professors. This observation indicates that professors hold significant influence in shaping the discourse around education, and their concerns on ChatGPT's application in education were well-received by the public. We recognize that a fundamental role of academia is to question and critique technological advances, and professors may be over-represented in the population of users who express concerns about ChatGPT. ### Practical implications Our study suggests several practical implications for various stakeholders, including policymakers, tech companies and individuals, educators, and media agents. In particular, these practical implications are from the general public's perspective on social media. These implications center on the need for a broader discussion on the integration of AI with education, with a focus on preparing individuals to work alongside these technologies effectively. ## 1 Policymakers should take a more proactive role in the conversation about AI in education Our findings reveal that government officials and politicians rarely participated in the discussion, as evidenced by the low number of accounts implicated (the network of mentions) or distributing the concerns (the network of retweets). However, on April 11, 2023, the Biden administration called for public comments on potential accountability measures for AI systems (Tracy, 2023), reflecting a growing interest in AI regulation. In addition, the National Telecommunications and Information Administration agency advised the White House to take actions on telecommunications and information policy (Shepardson & Bartz, 2023). 
Based on the guidance outlined by Carnegie Mellon University in 2020 (CMU Block Center for Technology and Society, 2020), we suggest that government agencies should take the role of policymakers in shaping AI policies. Specifically, the paper suggested that the federal government should update the regulatory guidance of existing agencies, such as the Securities and Exchange Commission (SEC), the Consumer Financial Protection Bureau (CFPB), and the Food and Drug Administration (FDA), to account for AI-assisted decision-making. Regarding generative AI policy, a recent report indicates that the US government has taken steps to regulate generative AI primarily through the Federal Trade Commission (FTC) (Ryan-Mosley, 2023). In educational settings, we recommend that policymakers take a more proactive role in the conversation about AI policy. In particular, they could address the implications of generative AI models in education and develop clear guidelines for their use. Such guidelines should also consider the potential social impacts and academic integrity concerns associated with AI-generated documents, such as the "policy and social concerns" and "academic integrity" themes that emerged in our sample. Finally, we urge policymakers to carefully evaluate how AI models may challenge the current education system, particularly in terms of assessment methods.

## 2 Tech companies should take responsibility for improving the capabilities and collaborating with policymakers and educators to determine policies

Our analysis of the social network revealed that both organizational and individual tech accounts were frequently mentioned. These accounts include prominent tech companies such as Google, Microsoft, and OpenAI, and influential individuals in the tech industry such as Elon Musk and Sam Altman. We observed that these tech accounts were mostly referenced in the "limitation of capabilities" category. As a result, we recommend that these tech companies focus on improving their AI models' performance and developing responsible and effective educational tools that incorporate AI. In addition, given that these tech accounts were frequently mentioned across all categories, we suggest that they collaborate with educators and policymakers to ensure the safe and ethical use of AI in education. We also noticed that some users have encountered issues when using ChatGPT, such as breakdowns or limitations in capabilities, and that some users cannot afford to use the premium version. Tech companies may also need to address operational and management issues to provide users with a more seamless experience when using AI products. Their prominence in the conversation about concerns suggests that Twitter users see tech companies and leaders as stakeholders. Twitter users have recommendations for improvement and valid concerns that tech companies can leverage to improve AI and its utility. They also have a responsibility to work with policymakers, within government and civil society, to communicate limitations and appropriate use of generative AI.

## 3 Educators should voice their concerns and leverage AI as a learning tool in the classroom

As the main users of ChatGPT in education, we recommend that educators share their concerns about AI in education on social media. Our analysis of the retweet network indicates that professors' views were widely retweeted by Twitter users, implying that the public trusted, or at least considered, their opinions on the topic.
We also observed that discussions around academic integrity and AI policy were prevalent in Twitter communities. Rather than prohibiting the use of AI tools in education, it might be more beneficial to incorporate them into conventional teaching methodologies and revise guidelines regarding academic misconduct. Our suggestion aligns with an opinion article published in the _New York Times_(Roose, 2023). Our analysis reveals that many individuals perceive ChatGPT as a tool that has the potential to augment their learning experiences, similar to calculators or other educational aids. In addition, we suggest that educators should focus on guiding students in the appropriate use of the AI tools rather than trying to ban them outright. Specifically, students should be taught how to navigate the technology and apply critical thinking and problem-solving skills to their use (Sun & Hoelscher, 2023). ## 4 Media agencies should play an important role in supervision Our findings show that media accounts were frequently mentioned in the conversation, and sentiment changes were associated with reported events. We recommend that media agencies provide accurate and impartial coverage of ChatGPT, including its capabilities, limitations, and potential applications in education. Topics such as whether AI could be used to create documents in sensitive situations or if it has a place in the classroom should be carefully examined and reported on by media agents. Doing so can not only help to foster informed public discussion but also supervise responsible and ethical development of AI in education. ### Limitations and future work While our data-driven crowdsourcing approach to characterizing public response to generative AI in education has provided valuable insights, it's important to note several limitations. First, our data collection spans only four months after the release of the ChatGPT application. It's possible that some people's attitudes towards ChatGPT may change with a longer period of usage, particularly as new GPT4 models and policies for integrating AI models in education emerge. Moreover, there could be new risks associated with using Generative AI models in education that arise over time. In response to this limitation, an area of ongoing and future work could involve the continued collection of social media data to monitor emerging sentiment and concerns among the public. Another limitation comes from sentiment analysis. First, the accuracy of the sentiment analysis is contingent on the capability of the RoBERTa-base model. Given that we did not train the model specifically for sentiment analysis in this context, it is possible that the model may incorrectly detect sentiment in some cases. Last, the sentiment score was calculated based on tweet units, but it's worth noting that a tweet may include conflicting sentiments in different parts. To address this limitation, one future study could investigate other sentiment detection models, such as BERTweet (Nguyen et al., 2020), and compare their performance. There are two limitations associated with the topic modeling approach employed (BERTopic). First, the assumption that each document contains only one topic does not always hold in tweets. As a result, BERTopic can return suboptimal representations of documents. Second, we observed that certain clustered topics may encompass tweets pertaining to different categories, possibly because BERTopic clusters tweets based solely on their textual similarity. 
In particular, those clustered topics that contain a significant number of tweets are more likely to include descriptions of multiple categories. To address this limitation, future work could consider exploring other LLMs such as integrating GPT-based models (Chen et al., 2021) with the topic modeling process. While crowdsourcing can help mitigate biases that may arise in open-ended surveys, it's important to acknowledge that relying on public comments about the ChatGPT application in education introduces its own biases. People who write tweets are not representative of the general public. For example, research has shown that young, educated, and urbanized individuals are more likely to post comments on social media platforms due to their familiarity with social media (Barbera and Rivero, 2015; Mislove et al., 2021). Additionally, some individuals may express their opinions on other social media platforms such as Facebook or Tiktok and may use videos or images to convey their attitudes towards ChatGPT applications. These factors could potentially affect the quality of the data preparation and introduce biases into the results. A prior study shows some potential of using TikTok videos to capture people's opinions regarding the use of ChatGPT (Haensch et al., 2023). Therefore, another possible avenue for future research could include data from other social media platforms, such as Facebook or TikTok, or from more representative sources such as probability sample surveys. Each platform and method has potential trade-offs, and we acknowledge these limitations of the Twitter sample. ## 6 Conclusions Generative AI models have the potential to revolutionize education but also present risks that must be carefully considered. The emergence of ChatGPT has sparked a significant amount of discussion on social media about its potential applications in education. To contribute to this conversation, our study uses a crowdsourcing approach to identify concerns by analyzing discourse on Twitter. Specifically, we employ BERT-based sentiment and topic modeling techniques to identify concerns related to the use of ChatGPT and use social network theory to identify key accounts frequently implicated in the discussion. The sentiment analysis indicates that Twitter users have an overall positive attitude towards the use of ChatGPT in education. However, we note that sentiment changes are often associated with significant events that occur within the conversation. Our topic analysis highlights five key areas of concern that emerged from negative tweets: academic integrity, impact on learning outcomes and skill development, limitation of capabilities, policy and social concerns, and workforce challenges. Our social network analysis shows that users from the fields of tech, education, and media were highly implicated in the conversation, while education and tech individual users played a crucial role in leading the diffusion of concerns to broader audiences. Taken together, our discourse analysis underscores the urgent need for collaboration among policymakers, tech companies and individuals, educators, and media agencies to establish guidelines for the use of AI in education. While generative AI models offer significant opportunities for enhancing learning, we must address the identified concerns and risks in a responsible and ethical manner. By working together, we can develop effective policies and guidelines that ensure the responsible use of AI in education for the benefit of all stakeholders. 
Author contributions * Original Draft, Writing - Review & Editing. * Original Draft, Writing - Review & Editing. * Original Draft, Writing - Review & Editing. * Original Draft, Writing - Review & Editing. * Original Draft, Writing - Review & Editing. * Review & Editing, Project Administration, Resources. Declaration of competing interest The authors declare that they have no known competing interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement This material is based upon work supported by the National Science Foundation under grant no. 1928434.
2304.02128
Weierstrass Semigroup, Pure Gaps and Codes on Function Fields
We determine the Weierstrass semigroup at one and two totally ramified places in a Kummer extension defined by the affine equation $y^{m}=\prod_{i=1}^{r} (x-\alpha_i)^{\lambda_i}$ over $K$, the algebraic closure of $\mathbb{F}_q$, where $\alpha_1, \dots, \alpha_r\in K$ are pairwise distinct elements, and $\gcd(m, \sum_{i=1}^{r}\lambda_i)=1$. For an arbitrary function field, from the knowledge of the minimal generating set of the Weierstrass semigroup at two rational places, the set of pure gaps is characterized. We apply these results to construct algebraic geometry codes over certain function fields with many rational places.
Alonso S. Castellanos, Erik A. R. Mendoza, Luciane Quoos
2023-04-04T21:22:40Z
http://arxiv.org/abs/2304.02128v2
# Weierstrass semigroups, pure gaps and codes on function fields

###### Abstract.

For an arbitrary function field, from the knowledge of the minimal generating set of the Weierstrass semigroup at two rational places, the set of pure gaps is characterized. Furthermore, we determine the Weierstrass semigroup at one and two totally ramified places in a Kummer extension defined by the affine equation \(y^{m}=\prod_{i=1}^{r}(x-\alpha_{i})^{\lambda_{i}}\) over \(K\), the algebraic closure of \(\mathbb{F}_{q}\), where \(\alpha_{1},\ldots,\alpha_{r}\in K\) are pairwise distinct elements, \(1\leq\lambda_{i}<m\), and \(\gcd(m,\sum_{i=1}^{r}\lambda_{i})=1\). We apply these results to construct algebraic geometry codes over certain function fields with many rational places. For one-point codes we obtain families of codes with exact parameters.

**Keywords**: Kummer extensions, Weierstrass semigroups, Pure gaps, AG codes.

**Mathematics Subject Classification (2010)**: 94B27, 11G20, 14H55.

The first author was partially supported by FAPEMIG. The second author was partially supported by FAPERJ/RJ-Brazil 201.650/2021. The third author thanks FAPERJ 260003/001703/2021 - APQ1, CNPQ PQ 302727/2019-1 and CAPES MATH AMSUD 88881.647739/2021-01 for the partial support.

## 1. Introduction

Let \(q\) be a power of a prime \(p\), let \(\mathbb{F}_{q}\) be the finite field with \(q\) elements, and let \(K\) be the algebraic closure of \(\mathbb{F}_{q}\). We consider the curve \(\mathcal{X}\) defined over \(K\) by the affine equation \[\mathcal{X}:\quad y^{m}:=f(x)=\prod_{i=1}^{r}(x-\alpha_{i})^{\lambda_{i}},\quad\lambda_{i}\in\mathbb{N},\quad 1\leq\lambda_{i}<m\quad\text{and}\quad p\nmid m, \tag{1}\] where \(\alpha_{1},\ldots,\alpha_{r}\in K\) are pairwise distinct elements, \(\lambda_{0}:=\sum_{i=1}^{r}\lambda_{i}\), and \(\gcd(m,\lambda_{0})=1\); its function field is a _Kummer extension_ of the rational function field \(K(x)\). The investigation of Kummer extensions has attracted attention in recent years, see [7, 17, 24, 29, 36, 37] for codes and semigroups, see [10] for towers of function fields, and see [9, 19] for sequences over finite fields with high non-linear complexity. An important object in the theory of AG codes, related to the local structure of a function field \(F\), is the Weierstrass semigroup at one or two rational places \(P_{1},P_{2}\), defined by \[H(P_{1})=\{s\in\mathbb{N}\mid(z)_{\infty}=sP_{1}\text{ for some }z\in F\}\] and \[H(P_{1},P_{2})=\{(s_{1},s_{2})\in\mathbb{N}^{2}\mid(z)_{\infty}=s_{1}P_{1}+s_{2}P_{2}\text{ for some }z\in F\},\] respectively. The complement sets \(G(P_{1}):=\mathbb{N}\setminus H(P_{1})\) and \(G(P_{1},P_{2}):=\mathbb{N}^{2}\setminus H(P_{1},P_{2})\) are called the sets of gaps. Weierstrass semigroups have been used to analyze the minimum distance, redundancy, and construction of AG codes with good parameters, see e.g. [6, 12, 27, 28, 33]. Explicit constructions of AG codes (in one and two points) over maximal curves can be found in [12] for the Hermitian curve, in [7, 34] for a generalization of the Hermitian curve, in [8] for codes over the \(GK\) curve, and in [26] for the \(BM\) curve. Many of the obtained constructions have examples of codes with good parameters. In 2001, Homma and Kim [23] investigated two-point codes over the Hermitian curve and introduced the concept of _pure gaps_, which turned out to be very useful for the improvement of the minimum distance of an AG code, see Theorem 3.2. These ideas were generalized to many rational points by Carvalho and Torres, see [6], and applied in recent publications such as [5] and [24].
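To illustrate these notions on a classical example: for the Hermitian function field \(\mathbb{F}_{q^{2}}(x,y)\) with \(y^{q+1}=x^{q}+x\), the functions \(x\) and \(y\) have pole divisors \((x)_{\infty}=(q+1)Q_{\infty}\) and \((y)_{\infty}=qQ_{\infty}\) at the only place at infinity, so that \[H(Q_{\infty})=\langle q,q+1\rangle\quad\text{and}\quad\#G(Q_{\infty})=\frac{q(q-1)}{2}=g;\] for \(q=3\) one gets \(G(Q_{\infty})=\{1,2,5\}\).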
Kummer extensions as in (1) where all the multiplicities \(\lambda_{1}=\lambda_{2}=\cdots=\lambda_{r}\) are equal have been an object of interest concerning the theory of Weierstrass semigroups and codes, see [23, 29] and [37]. In this case, for two totally ramified places in the Kummer extension, the Weierstrass semigroup and the minimal generating set were completely determined, see [3, 7] and [36]. The study of Kummer extensions as in (1) where not all the multiplicities \(\lambda_{1},\lambda_{2},\dots,\lambda_{r}\) are equal is a challenging problem and was just recently explored. In this case, for a totally ramified place \(P\) in the Kummer extension, the authors in [1] provided an arithmetical criterion to determine if a positive integer is an element of the gap set \(G(P)\). In [30], Mendoza explicitly describes the Weierstrass semigroup and the gap set at the only place at infinity. In this work, we explore Kummer extensions as described in (1). We provide an explicit description of the gap set at any totally ramified place and determine the minimal generating set at two totally ramified places. We apply the obtained results to construct one-point AG codes with exact parameters. In particular, we obtain a family of AG codes with Singleton defect \(\delta=N+1-k-d=2\), see Remark 5.6. For two rational places in an arbitrary function field, we present a characterization of the pure gap set in terms of the minimal generating set of the Weierstrass semigroup, see Proposition 3.3. This characterization was very helpful for the applications. As a consequence, we determine pure gaps in two rational places over a maximal curve and construct codes with good parameters from Theorem 3.2, see Table 2. We organize the paper as follows. Section 2 contains general results from function field theory, Weierstrass semigroups, and basic facts related to AG codes. In Section 3, we characterize the pure gap set \(G_{0}(P_{1},P_{2})\) at two rational places \(P_{1}\) and \(P_{2}\) from the minimal generating set \(\Gamma(P_{1},P_{2})\) for any function field (see Proposition 3.3). In Section 4, we provide an explicit description of the gap set \(G(P)\) at any totally ramified place \(P\) in a Kummer extension as in (1) (see Propositions 4.1, 4.2 and 4.3). We also compute the minimal generating set \(\Gamma(P_{1},P_{2})\) at two totally ramified places \(P_{1}\) and \(P_{2}\) in a Kummer extension (see Propositions 4.4 and 4.5). In Section 5, we apply the results in the previous section to construct one-point AG codes. More specifically, we construct one-point AG codes over a general family of Kummer extensions (see Theorem 5.1). In the same section, AG codes over particular function fields with many rational places are constructed, giving in all cases the exact value of their parameters (see Corollaries 5.2, 5.3, 5.4 and Proposition 5.5). In Section 6, using the characterization of pure gaps in two rational places given in Proposition 3.3, we compute pure gaps at two rational places over a certain maximal function field and construct two-point AG codes (see Proposition 6.1, and Propositions 6.2 and 6.3 respectively). Finally, in Section 7 we compare the relative parameters of the two-point AG codes obtained in Section 6 with the parameters of one-point AG codes.

## 2. Preliminaries and notation

Throughout this article, we let \(q\) be a power of a prime \(p\), \(\mathbb{F}_{q}\) be the finite field with \(q\) elements, and \(K\) be the algebraic closure of \(\mathbb{F}_{q}\).
For integers \(a,b\), we denote by \((a,b)\) the greatest common divisor of \(a\) and \(b\). For \(c\in\mathbb{R}\) a real number, we denote by \(\lfloor c\rfloor\), \(\lceil c\rceil\) and \(\{c\}\) the floor, ceiling and fractional part functions of \(c\) respectively. We also let \(\mathbb{N}=\{0,1,\dots\}\) be the set of natural numbers.

### Function Fields and Weierstrass semigroups

Let \(F/K\) be a function field of one variable of genus \(g=g(F)\). We denote by \(\mathcal{P}_{F}\) the set of places in \(F\), by \(\Omega_{F}\) the space of differential forms in \(F\), by \(\nu_{P}\) the discrete valuation of \(F/K\) associated to the place \(P\in\mathcal{P}_{F}\), and by \(\operatorname{Div}(F)\) the free abelian group generated by the places in \(F\). An element in \(\operatorname{Div}(F)\) is called a divisor. For a function \(z\in F\) we let \((z)_{F},(z)_{\infty}\) and \((z)_{0}\) stand for the principal, pole and zero divisors of the function \(z\) in \(F\) respectively. Given a divisor \(G\in\operatorname{Div}(F)\) of \(F/K\), we have the following two vector spaces associated to \(G\): the Riemann-Roch space \[\mathcal{L}(G)=\{z\in F\mid(z)_{F}+G\geq 0\}\cup\{0\}\] with dimension \(\ell(G)\) as a vector space over \(K\), and the space of differentials given by \[\Omega(G)=\{\omega\in\Omega_{F}\mid(\omega)_{F}\geq G\}\cup\{0\}.\] Now we introduce the notion of Weierstrass semigroup, which plays an important role in the study of codes. For a place \(P\in\mathcal{P}_{F}\), the _Weierstrass semigroup_ at \(P\) is defined by \[H(P)=\{s\in\mathbb{N}\mid(z)_{\infty}=sP\text{ for some }z\in F\}.\] We say that a non-negative integer \(s\) is a non-gap at \(P\) if \(s\in H(P)\). An element in the complement set \(G(P):=\mathbb{N}\setminus H(P)\) is called a gap at \(P\). For a function field \(F/K\) of genus \(g>0\), the number of gaps is always finite, in fact \(\#G(P)=g\). The Weierstrass semigroup \(H(P)\) at one place admits a generalization for two places. Let \(P_{1}\) and \(P_{2}\) be distinct places in \(F\). We define the Weierstrass semigroup associated to \(P_{1},P_{2}\) by \[H(P_{1},P_{2})=\{(s_{1},s_{2})\in\mathbb{N}^{2}\mid(z)_{\infty}=s_{1}P_{1}+s_{2}P_{2}\text{ for some }z\in F\}.\] Analogously to the case of one place, the elements of the set \(G(P_{1},P_{2}):=\mathbb{N}^{2}\setminus H(P_{1},P_{2})\) are called gaps at \(P_{1},P_{2}\). Gaps can be characterized using Riemann-Roch spaces, that is, a pair \((s_{1},s_{2})\in\mathbb{N}^{2}\) is a gap at \(P_{1},P_{2}\) if and only if \[\ell\left(s_{1}P_{1}+s_{2}P_{2}\right)=\ell\left(s_{1}P_{1}+s_{2}P_{2}-P_{j}\right)\text{ for some }j\in\{1,2\}.\] The set of gaps at two places \(P_{1},P_{2}\) can be obtained from the gaps at \(P_{1}\) and \(P_{2}\) as follows. Suppose that \(G(P_{1})=\{\beta_{1}<\beta_{2}<\cdots<\beta_{g}\}\) and \(G(P_{2})=\{\gamma_{1}<\gamma_{2}<\cdots<\gamma_{g}\}\). For each \(i\), we let \(n_{\beta_{i}}=\min\{\gamma\in\mathbb{N}\mid(\beta_{i},\gamma)\in H(P_{1},P_{2})\}\). From [25, Lemma 2.6], we have the equality \(\{n_{\beta}\mid\beta\in G(P_{1})\}=G(P_{2})\), and therefore there exists a permutation \(\sigma\) of the set \(\{1,2,\ldots,g\}\) such that \(n_{\beta_{i}}=\gamma_{\sigma(i)}\). The graph of the bijective map between \(G(P_{1})\) and \(G(P_{2})\) defining the permutation \(\sigma\) is the set \(\Gamma(P_{1},P_{2})=\{(\beta_{i},\gamma_{\sigma(i)})\mid i=1,\ldots,g\}\). The following lemma characterizes the set \(\Gamma(P_{1},P_{2})\).
**Lemma 2.1**.: _[_22_, Lemma 2]_ _Let \(\Gamma\) be a subset of \((G(P_{1})\times G(P_{2}))\cap H(P_{1},P_{2})\). If there exists a permutation \(\tau\) of \(\{1,2,\ldots,g\}\) such that \(\Gamma=\{(\beta_{i},\gamma_{\tau(i)})\mid i=1,\ldots,g\}\), then \(\Gamma=\Gamma(P_{1},P_{2})\)._ For \(\mathbf{x}=(\beta_{1},\gamma_{1})\) and \(\mathbf{y}=(\beta_{2},\gamma_{2})\), the least upper bound of \(\mathbf{x}\) and \(\mathbf{y}\) is defined as \(\operatorname{lub}(\mathbf{x},\mathbf{y})=(\max\{\beta_{1},\beta_{2}\},\max\{\gamma_{1},\gamma_{2}\})\). The following result shows that it is enough to determine \(\Gamma(P_{1},P_{2})\) to compute the Weierstrass semigroup \(H(P_{1},P_{2})\). **Lemma 2.2**.: _[_25_, Lemma 2.2]_ _Let \(P_{1}\) and \(P_{2}\) be two distinct places in \(F\). Then_ \[H(P_{1},P_{2})=\{\operatorname{lub}(\mathbf{x},\mathbf{y})\mid\mathbf{x},\mathbf{y}\in\Gamma(P_{1},P_{2})\cup(H(P_{1})\times\{0\})\cup(\{0\}\times H(P_{2}))\}.\] In this sense, the set \(\Gamma(P_{1},P_{2})\) is called the _minimal generating set_ of the Weierstrass semigroup \(H(P_{1},P_{2})\). This set was computed in [8, 28] for some places in families of function fields and used to provide codes with good parameters. The next lemma will be an important tool in the computation of pure gaps (see Definition 3.1). **Lemma 2.3**.: _[_13_, Noether's Reduction Lemma]_ _Let \(D\) be a divisor, \(P\in\mathcal{P}_{F}\) and let \(W\) be a canonical divisor. If \(\ell(D)>0\) and \(\ell(W-D-P)\neq\ell(W-D)\), then \(\ell(D+P)=\ell(D)\)._

### Algebraic Geometry Codes

For a function field \(F/\mathbb{F}_{q}\) with full constant field \(\mathbb{F}_{q}\), we say that a place \(P\) is rational if it has degree one. In [35], Goppa's construction of linear codes over a function field \(F/\mathbb{F}_{q}\) of genus \(g\) is described as follows. Let \(P_{1},\ldots,P_{N}\) be pairwise distinct rational places in \(F\) and \(D:=P_{1}+\cdots+P_{N}\). Consider another divisor \(G\) of \(F\) such that \(\operatorname{supp}(D)\,\cap\,\operatorname{supp}(G)=\emptyset\). Associated to the divisors \(D\) and \(G\) we have the linear algebraic geometry code \(C_{\mathcal{L}}(D,G)\) and the differential algebraic geometry code \(C_{\Omega}(D,G)\) defined as \[C_{\mathcal{L}}(D,G)=\{(f(P_{1}),\ldots,f(P_{N}))\mid f\in\mathcal{L}(G)\}\subseteq\mathbb{F}_{q}^{N}\] and \[C_{\Omega}(D,G)=\{(\operatorname{res}_{P_{1}}(\omega),\ldots,\operatorname{res}_{P_{N}}(\omega))\mid\omega\in\Omega(G-D)\}\subseteq\mathbb{F}_{q}^{N}.\] The parameters of these codes are: \(N\) is the length of the code, \(k\) its dimension over \(\mathbb{F}_{q}\), and \(d\) its minimum (Hamming) distance. We say that the code is an \([N,k,d]\)-code (AG code). These codes are dual to each other, that is, \(C_{\mathcal{L}}(D,G)^{\perp}=C_{\Omega}(D,G)\). In what follows we have the classical lower bounds for the minimum distance of the linear and differential codes. **Proposition 2.4**.: _[_35_, Corollary 2.2.3 and Theorem 2.2.7]_ _Given the AG codes \(C_{\mathcal{L}}(D,G)\) and \(C_{\Omega}(D,G)\) with parameters \([N,k,d]\) and \([N,k_{\Omega},d_{\Omega}]\) respectively, we have that if \(2g-2<\deg(G)<N\), then_ \[k=\deg(G)+1-g\quad\text{and}\quad d\geq N-\deg(G),\] _and_ \[k_{\Omega}=N-\deg(G)-1+g\quad\text{and}\quad d_{\Omega}\geq\deg(G)-(2g-2).\] The well-known Singleton bound on a linear \([N,k,d]\)-code establishes that \(k+d\leq N+1\).
The Singleton defect of the code is defined by \(\delta=N+1-k-d\geq 0\) and can be used to measure how good the code is, that is, the smaller the Singleton defect, the better the code. Now, we present a result that can be used to improve the lower bound for the minimum distance of an AG code. **Theorem 2.5**.: _[_16_, Theorem 3]_ _Suppose that \(\gamma-t,\gamma-t+1,\ldots,\gamma-1,\gamma\) is a sequence of \(t+1\) consecutive gaps at a rational place \(Q\). Let \(G=\gamma Q\) and \(D=P_{1}+P_{2}+\cdots+P_{N}\), where \(P_{i}\) is a rational place not in the support of \(G\) for each \(i=1,\ldots,N\). If the code \(C_{\mathcal{L}}(D,G)\) has positive dimension, then its minimum distance \(d\) satisfies_ \[d\geq N-\deg(G)+t+1.\]

## 3. Pure Gaps in Function Fields

In this section, we characterize the set of pure gaps at two rational places over an arbitrary function field \(F/\mathbb{F}_{q}\). The notion of pure gaps at two places in a function field was introduced by Homma and Kim in [23]. **Definition 3.1**.: _The pair of natural numbers \((s_{1},s_{2})\) is a pure gap at the places \(P_{1},P_{2}\) if it satisfies_ \[\ell\left(s_{1}P_{1}+s_{2}P_{2}\right)=\ell\left(s_{1}P_{1}+s_{2}P_{2}-P_{j}\right)\text{ for all }j\in\left\{1,2\right\}.\] _We denote the pure gap set at \(P_{1},P_{2}\) by \(G_{0}(P_{1},P_{2})\)._ Equivalently, by [23, Lemma 2.3], we have that the pair \((s_{1},s_{2})\) is a pure gap if \[\ell\left(s_{1}P_{1}+s_{2}P_{2}\right)=\ell\left((s_{1}-1)P_{1}+(s_{2}-1)P_{2}\right).\] Homma and Kim used this notion to provide a lower bound for the minimum distance of two-point differential AG codes. **Theorem 3.2**.: _[_23_, Theorem 3.3]_ _Let \(P_{1},\ldots,P_{N},Q_{1},Q_{2}\) be pairwise distinct \(\mathbb{F}_{q}\)-rational places on the function field \(F/\mathbb{F}_{q}\) of genus \(g\). Let \((\alpha_{1},\alpha_{2}),(\beta_{1},\beta_{2})\) in \(\mathbb{N}^{2}\) be such that \(\alpha_{i}\leq\beta_{i}\) for \(i=1,2\). Suppose each pair \((\gamma_{1},\gamma_{2})\) with \(\alpha_{i}\leq\gamma_{i}\leq\beta_{i}\) for \(i=1,2\) is a pure gap at \(Q_{1},Q_{2}\). Consider the divisors \(D=P_{1}+\cdots+P_{N}\) and \(G=\sum_{i=1}^{2}(\alpha_{i}+\beta_{i}-1)Q_{i}\). Then the minimum distance \(d\) of the code \(C_{\Omega}(D,G)\) satisfies_ \[d\geq\deg(G)-(2g-2)+\sum_{i=1}^{2}(\beta_{i}-\alpha_{i})+2.\] In the following we show a way to characterize pure gaps at two rational places \(P_{1},P_{2}\) in \(F\) from the minimal generating set \(\Gamma(P_{1},P_{2})\). For this we need the following definition: given two pairs \(\mathbf{x}=(\beta_{1},\gamma_{1})\) and \(\mathbf{y}=(\beta_{2},\gamma_{2})\) in \(\mathbb{N}^{2}\), the _greatest lower bound_ of \(\mathbf{x}\) and \(\mathbf{y}\) is defined as \[\operatorname{glb}(\mathbf{x},\mathbf{y})=(\min\{\beta_{1},\beta_{2}\},\min\{\gamma_{1},\gamma_{2}\}).\] **Proposition 3.3**.: _Let \(P_{1}\) and \(P_{2}\) be two distinct rational places in the algebraic function field \(F/\mathbb{F}_{q}\). Then the set of pure gaps \(G_{0}(P_{1},P_{2})\) at \(P_{1},P_{2}\) is given by_ \[G_{0}(P_{1},P_{2})=\{\operatorname{glb}(\mathbf{x},\mathbf{y})\mid\mathbf{x},\mathbf{y}\in\Gamma(P_{1},P_{2})\}\setminus\Gamma(P_{1},P_{2}).\] Proof.: Let \(G(P_{1})=\{\beta_{1}<\beta_{2}<\cdots<\beta_{g}\}\) and \(G(P_{2})=\{\gamma_{1}<\gamma_{2}<\cdots<\gamma_{g}\}\) be the sets of gaps at \(P_{1}\) and \(P_{2}\), respectively.
Then there exists a permutation \(\sigma\) of \(\{1,\ldots,g\}\) such that \[\Gamma(P_{1},P_{2})=\{(\beta_{i},\gamma_{\sigma(i)})\mid i=1,\ldots,g\}.\] From [23, Theorem 2.1], the set of pure gaps can be characterized as \[G_{0}(P_{1},P_{2})=\{(\beta_{i},\gamma_{j})\mid i<\sigma^{-1}(j)\text{ and }j<\sigma(i)\}.\] Let \((\beta_{i},\gamma_{j})\) be an element of \(G_{0}(P_{1},P_{2})\). Then \(\beta_{i}\in G(P_{1})\), \(\gamma_{j}\in G(P_{2})\), and from the definition of the minimal generating set \(\Gamma(P_{1},P_{2})=\{(\beta_{i},\gamma_{\sigma(i)})\mid i=1,\ldots,g\}\) we have that \((\beta_{i},\gamma_{\sigma(i)})\) and \((\beta_{\sigma^{-1}(j)},\gamma_{j})\) are elements of \(\Gamma(P_{1},P_{2})\). Since \(i<\sigma^{-1}(j)\) and \(j<\sigma(i)\), it follows that \((\beta_{i},\gamma_{j})=\operatorname{glb}((\beta_{i},\gamma_{\sigma(i)}),(\beta_{\sigma^{-1}(j)},\gamma_{j}))\) and \((\beta_{i},\gamma_{j})\not\in\Gamma(P_{1},P_{2})\). On the other hand, let \(\mathbf{x}=(\beta_{k},\gamma_{\sigma(k)})\) and \(\mathbf{y}=(\beta_{l},\gamma_{\sigma(l)})\) be elements in \(\Gamma(P_{1},P_{2})\) such that \(\operatorname{glb}(\mathbf{x},\mathbf{y})\not\in\Gamma(P_{1},P_{2})\). Without loss of generality, suppose that \(k<l\). If \(\sigma(k)\leq\sigma(l)\), then \(\operatorname{glb}(\mathbf{x},\mathbf{y})=\mathbf{x}\in\Gamma(P_{1},P_{2})\), a contradiction. Therefore \(\sigma(k)>\sigma(l)\) and \(\operatorname{glb}(\mathbf{x},\mathbf{y})=(\beta_{k},\gamma_{\sigma(l)})\), where \(k<\sigma^{-1}(\sigma(l))\) and \(\sigma(l)<\sigma(k)\), that is, \(\operatorname{glb}(\mathbf{x},\mathbf{y})\in G_{0}(P_{1},P_{2})\).

## 4. The Weierstrass semigroup at one and two rational places over a Kummer extension

Consider the curve \(\mathcal{X}\) defined by the affine equation \[\mathcal{X}:\quad y^{m}:=f(x)=\prod_{i=1}^{r}(x-\alpha_{i})^{\lambda_{i}},\quad\lambda_{i}\in\mathbb{N},\quad 1\leq\lambda_{i}<m\quad\text{and}\quad p\nmid m, \tag{3}\] where \(\alpha_{1},\ldots,\alpha_{r}\in K\) are pairwise distinct elements, \(\lambda_{0}:=\sum_{i=1}^{r}\lambda_{i}\) and \((m,\lambda_{0})=1\). Let \(\mathcal{F}=K(\mathcal{X})\) be its function field. Then \(\mathcal{F}/K(x)\) is a Kummer extension with exactly one place at infinity. By [35, Proposition 3.7.3], its genus \(g(\mathcal{X})\) is given by \[g(\mathcal{X})=\frac{m(r-1)+1-\sum_{i=1}^{r}(m,\lambda_{i})}{2}.\] For \(i=1,\ldots,r\), let \(P_{i}\) and \(P_{\infty}\) be the places in \(\mathcal{P}_{K(x)}\) corresponding to the zero of \(x-\alpha_{i}\) and the pole of \(x\) respectively. If \((m,\lambda_{i})=1\) we denote by \(Q_{i}\) the only place in \(\mathcal{F}\) over \(P_{i}\) and by \(Q_{\infty}\) the only place over \(P_{\infty}\). Suppose for a moment that all the multiplicities \(\lambda:=\lambda_{1}=\lambda_{2}=\cdots=\lambda_{r}\) are the same, that is, \(y^{m}=f(x)^{\lambda}\) where \(f(x)\) is a separable polynomial over \(K\) and \((m,r\lambda)=1\). In this case, since \((m,\lambda)=1\), the function field of \(\mathcal{X}\) is isomorphic to \(K(x,z)\) where \(z^{m}=f(x)\) for \(z=y^{r_{1}}f^{r_{2}}\) and \(r_{1},r_{2}\) integers satisfying \(r_{1}\lambda+r_{2}m=1\). So, without loss of generality, if the multiplicities of the roots of \(f(x)\) are the same and co-prime with \(m\), we may take all of them equal to \(1\). In this section, we compute the Weierstrass semigroup at any totally ramified place in the extension \(\mathcal{F}/K(x)\).
Furthermore, for two totally ramified places in the extension \(\mathcal{F}/K(x)\), we determine the minimal generating set of the corresponding Weierstrass semigroup. To describe the minimal generating set at two totally ramified places in the extension \(\mathcal{F}/K(x)\), we start by providing another description of the gap set \(G(Q_{\infty})\) given in [30, Proposition 4.1], and computing the gap set at a totally ramified place \(Q_{\ell}\in\mathcal{P}_{\mathcal{F}}\), where \(Q_{\ell}\neq Q_{\infty}\). **Proposition 4.1**.: _The gap set at the only place at infinity \(Q_{\infty}\) of \(\mathcal{F}\) is given by_ \[G(Q_{\infty})=\left\{mj-i\lambda_{0}\mid 1\leq i\leq m-1,\,\left\lceil\frac{i \lambda_{0}}{m}\right\rceil\leq j\leq\sum_{\ell=1}^{r}\left\lceil\frac{i\lambda _{\ell}}{m}\right\rceil-1\right\}.\] Proof.: Define the set \[G:=\left\{mj-i\lambda_{0}\mid 1\leq i\leq m-1,\,\left\lceil\frac{i\lambda_{0}}{m} \right\rceil\leq j\leq\sum_{\ell=1}^{r}\left\lceil\frac{i\lambda_{\ell}}{m} \right\rceil-1\right\}.\] For \(mj-i\lambda_{0}\in G\), let \(t\) be the unique element in \(\{0,\ldots,m-1\}\) such that \(mj-i\lambda_{0}-t\lambda_{0}\equiv 0\mod m\). Since \((m,\lambda_{0})=1\) we get \(-i\equiv t\mod m\), so \(\{\frac{t\lambda_{\ell}}{m}\}=\{\frac{-i\lambda_{\ell}}{m}\}\) for \(1\leq\ell\leq r\). Then \[\sum_{\ell=1}^{r}\left\{\frac{t\lambda_{\ell}}{m}\right\}=\sum_{\ell=1}^{r} \left\{\frac{-i\lambda_{\ell}}{m}\right\}=\sum_{\ell=1}^{r}\left(-\frac{i \lambda_{\ell}}{m}-\left\lfloor-\frac{i\lambda_{\ell}}{m}\right\rfloor\right)= -\frac{i\lambda_{0}}{m}+\sum_{\ell=1}^{r}\left\lceil\frac{i\lambda_{\ell}}{m} \right\rceil.\] From the definition of \(G\) we have \[\sum_{\ell=1}^{r}\left\{\frac{t\lambda_{\ell}}{m}\right\}=-\frac{i\lambda_{0}} {m}+\sum_{\ell=1}^{r}\left\lceil\frac{i\lambda_{\ell}}{m}\right\rceil>-\left \lceil\frac{i\lambda_{0}}{m}\right\rceil+\sum_{\ell=1}^{r}\left\lceil\frac{i \lambda_{\ell}}{m}\right\rceil\geq j-\left\lfloor\frac{i\lambda_{0}}{m} \right\rceil=\left\lceil\frac{mj-i\lambda_{0}}{m}\right\rceil.\] Applying [1, Corollary 3.6], we conclude that \(mj-i\lambda_{0}\in G(Q_{\infty})\). This yields \(G\subseteq G(Q_{\infty})\). On the other hand, since \(\#\{1\leq s\leq m-1:m\) divides \(s\lambda_{\ell}\}=(m,\lambda_{\ell})-1\) for \(1\leq\ell\leq r\), we have that \[\#G =\sum_{i=1}^{m-1}\left[\left(\sum_{\ell=1}^{r}\left\lceil\frac{i \lambda_{\ell}}{m}\right\rceil\right)-\left\lceil\frac{i\lambda_{0}}{m} \right\rceil\right]\] \[=\sum_{\ell=1}^{r}\sum_{i=1}^{m-1}\left\lceil\frac{i\lambda_{\ell }}{m}\right\rceil-\sum_{i=1}^{m-1}\left\lceil\frac{i\lambda_{0}}{m}\right\rceil\] \[=\sum_{\ell=1}^{r}\left(m-(m,\lambda_{\ell})+\sum_{i=1}^{m-1} \left\lfloor\frac{i\lambda_{\ell}}{m}\right\rfloor\right)-\frac{(m-1)( \lambda_{0}+1)}{2}\] \[=\sum_{\ell=1}^{r}\frac{(m-1)(\lambda_{\ell}-1)-(m,\lambda_{\ell })-1+2m}{2}-\frac{(m-1)(\lambda_{0}+1)}{2}\] \[=\frac{m(r-1)+1-\sum_{\ell=1}^{r}(m,\lambda_{\ell})}{2}\] \[=g(\mathcal{X}),\] and this concludes the desired result \(G(Q_{\infty})=G\). In the next proposition we compute the gap set at \(Q_{\ell}\), a totally ramified place in the extension \(\mathcal{F}/K(x)\) different from \(Q_{\infty}\). **Proposition 4.2**.: _Suppose that \((m,\lambda_{\ell})=1\) for some \(1\leq\ell\leq r\) and let \(1\leq\lambda\leq m-1\) be the inverse of \(\lambda_{\ell}\) modulo \(m\). Let \(Q_{\ell}\in\mathcal{P}_{\mathcal{F}}\) be the unique extension of \(P_{\ell}\). 
Then_ \[G(Q_{\ell})=\left\{i+mj\mid 1\leq i\leq m-1,\,0\leq j\leq\left(\sum_{k=1}^{r} \biggl{\lceil}\frac{i\lambda\lambda_{k}}{m}\biggr{\rceil}\right)-\biggl{\lceil} \frac{i\lambda\lambda_{0}}{m}\biggr{\rceil}-1\right\}.\] Proof.: We start by defining the set \[G:=\left\{i+mj\mid 1\leq i\leq m-1,\,0\leq j\leq\left(\sum_{k=1}^{r}\biggl{\lceil} \frac{i\lambda\lambda_{k}}{m}\biggr{\rceil}\right)-\biggl{\lceil}\frac{i \lambda\lambda_{0}}{m}\biggr{\rceil}-1\right\}.\] For \(i+mj\in G\), let \(t\) be the unique element in \(\{0,\ldots,m-1\}\) such that \(i+mj+t\lambda_{\ell}\equiv 0\mod m\). So \(-i\lambda\equiv t\mod m\) and we get \(\{\frac{t\lambda_{k}}{m}\}=\{\frac{-i\lambda\lambda_{k}}{m}\}\) for \(1\leq k\leq r\). Now we have \[\sum_{k=1}^{r}\left\{\frac{t\lambda_{k}}{m}\right\}=\sum_{k=1}^{r}\left\{ \frac{-i\lambda\lambda_{k}}{m}\right\}=\sum_{k=1}^{r}\left(-\frac{i\lambda \lambda_{k}}{m}-\biggl{\lfloor}\frac{-i\lambda\lambda_{k}}{m}\biggr{\rfloor} \right)=-\frac{i\lambda\lambda_{0}}{m}+\sum_{k=1}^{r}\biggl{\lceil}\frac{i \lambda\lambda_{k}}{m}\biggr{\rceil}.\] Since \((m,\lambda)=(m,\lambda_{0})=1\), it follows that \[\sum_{k=1}^{r}\left\{\frac{t\lambda_{k}}{m}\right\}=-\frac{i\lambda\lambda_{0 }}{m}+\sum_{k=1}^{r}\biggl{\lceil}\frac{i\lambda\lambda_{k}}{m}\biggr{\rceil} >-\biggl{\lceil}\frac{i\lambda\lambda_{0}}{m}\biggr{\rceil}+\sum_{k=1}^{r} \biggl{\lceil}\frac{i\lambda\lambda_{k}}{m}\biggr{\rceil}\geq j+1=\biggl{\lceil} \frac{i+mj}{m}\biggr{\rceil}.\] Thus, from [1, Corollary 3.6], it follows that \(G\subseteq G(Q_{\ell})\). Similarly to the proof of Proposition 4.1, it can be proved that \(\#G=g(\mathcal{X})\) and therefore \(G(Q_{\ell})=G\). Furthermore, for the case \(\lambda_{1}=\lambda_{2}=\cdots=\lambda_{r}=1\), we give another description of the gap set \(G(Q)\) at a totally ramified place \(Q\neq Q_{\infty}\) in \(\mathcal{F}/K(x)\). With this new characterization it will be easier to identify all consecutive sequences of gaps. **Proposition 4.3**.: _Suppose that \(\lambda_{1}=\lambda_{2}=\cdots=\lambda_{r}=1\) and let \(Q\) be a totally ramified place in \(\mathcal{F}/K(x),Q\neq Q_{\infty}\). Then_ \[G(Q)=\left\{mj-i\mid 1\leq j\leq r-1,\,\biggl{\lfloor}\frac{jm}{r}\biggr{\rfloor} +1\leq i\leq m-1\right\}.\] Proof.: First of all, define the set \[G:=\left\{mj-i\mid 1\leq j\leq r-1,\,\biggl{\lfloor}\frac{jm}{r}\biggr{\rfloor} +1\leq i\leq m-1\right\}\,,\] and note that \[\#G=\sum_{j=1}^{r-1}\left(m-1-\left\lfloor\frac{jm}{r}\right\rfloor\right)=( m-1)(r-1)-\sum_{j=1}^{r-1}\biggl{\lfloor}\frac{jm}{r}\biggr{\rfloor}= \frac{(m-1)(r-1)}{2}=g(\mathcal{X}).\] On the other hand, for each \(mj-i\in G\), let \(t\) be the unique element in \(\{0,\ldots,m-1\}\) such that \(mj-i+t\equiv 0\mod m\), then \(t=i\). Moreover, since \(mj/r<\lfloor mj/r\rfloor+1\leq i\), we have \(j<ir/m\) and \[\sum_{\ell=1}^{r}\left\{\frac{t\lambda_{\ell}}{m}\right\}=\frac{ir}{m}>j= \biggl{\lceil}\frac{mj-i}{m}\biggr{\rceil}.\] From [1, Corollary 3.6], we obtain that \(G\subseteq G(Q)\) and therefore \(G=G(Q)\). Now we describe the minimal generating set for the Weierstrass semigroup at two totally ramified places in \(\mathcal{F}\) with the same multiplicity. **Proposition 4.4**.: _Suppose that \(\lambda_{\ell_{1}}=\lambda_{\ell_{2}}\) and \((m,\lambda_{\ell_{1}})=1\) for some \(1\leq\ell_{1},\ell_{2}\leq r\). Let \(\lambda\) be the inverse of \(\lambda_{\ell_{1}}\) modulo \(m\), and \(Q_{\ell_{s}}\in\mathcal{P}_{\mathcal{F}}\) be the unique extension of \(P_{\ell_{s}}\) for \(s=1,2\). 
Then_ \[\Gamma(Q_{\ell_{1}},Q_{\ell_{2}})=\bigg{\{}(i+mj_{1},i+mj_{2})\in \mathbb{N}^{2}\mid 1\leq i\leq m-1,\,j_{1}\geq 0,\,j_{2}\geq 0,\\ j_{1}+j_{2}=\left(\sum_{k=1}^{r}\left\lceil\frac{i\lambda\lambda _{k}}{m}\right\rceil\right)-\left\lceil\frac{i\lambda\lambda_{0}}{m}\right\rceil -1\bigg{\}}.\] Proof.: Without loss of generality, suppose that \(\ell_{1}=1\) and \(\ell_{2}=2\). Define the set \[\Gamma:=\bigg{\{}(i+mj_{1},i+mj_{2})\in\mathbb{N}^{2}\mid 1\leq i \leq m-1,\,j_{1}\geq 0,\,j_{2}\geq 0,\\ j_{1}+j_{2}=\left(\sum_{k=1}^{r}\left\lceil\frac{i\lambda\lambda _{k}}{m}\right\rceil\right)-\left\lceil\frac{i\lambda\lambda_{0}}{m}\right\rceil -1\bigg{\}}.\] We are going to prove that \(\Gamma=\Gamma(Q_{1},Q_{2})\). For \(k=1,\ldots,r\), from [35, Proposition 3.7.3], we have the principal divisors \[\begin{split}&(x-\alpha_{k})_{\mathcal{F}}=\frac{m}{(m,\lambda_{k} )}\sum_{Q\in\mathcal{P}_{\mathcal{F}},\,Q|P_{k}}Q-mQ_{\infty}\quad\text{and}\\ &(y)_{\mathcal{F}}=\sum_{k=1}^{r}\frac{\lambda_{k}}{(m,\lambda_{ k})}\sum_{Q\in\mathcal{P}_{\mathcal{F}},\,Q|P_{k}}Q-\lambda_{0}Q_{\infty}.\end{split} \tag{4}\] On the other hand, since \((m,\lambda_{1})=1\), then there exists \(\beta\in\mathbb{Z}\) such that \(\beta m+\lambda\lambda_{1}=1\). Given the tuple \((i+mj_{1},i+mj_{2})\) in \(\Gamma\), after some computations, we have the following divisor \[\begin{split}&\left((x-\alpha_{1})^{-(j_{1}+\beta i)}(x-\alpha_{ 2})^{-(j_{2}+\beta i)}y^{-i\lambda}\prod_{k=3}^{r}(x-\alpha_{k})^{\left\lceil \frac{i\lambda\lambda_{k}}{m}\right\rceil}\right)_{\mathcal{F}}\\ &=\sum_{k=3}^{r}\frac{m\left\lceil\frac{i\lambda\lambda_{k}}{m} \right\rceil-i\lambda\lambda_{k}}{(m,\lambda_{k})}\sum_{Q\in\mathcal{P}_{ \mathcal{F}},\,Q|P_{k}}Q+\left(i\lambda\lambda_{0}-m\left\lfloor\frac{i \lambda\lambda_{0}}{m}\right\rceil\right)Q_{\infty}\\ &-(i+mj_{1})Q_{1}-(i+mj_{2})Q_{2},\end{split}\] where \(m\lceil i\lambda\lambda_{k}/m\rceil-i\lambda\lambda_{k}\), \(i\lambda\lambda_{0}-m\lfloor i\lambda\lambda_{0}/m\rfloor\), \(i+mj_{1}\), and \(i+mj_{2}\) are non-negative integers. This proves that \(\Gamma\subseteq H(Q_{1},Q_{2})\). On the other hand, since \(j_{1},j_{2}\geq 0\) and \(j_{1}+j_{2}=\left(\sum_{k=1}^{r}\left\lceil\frac{i\lambda\lambda_{k}}{m} \right\rceil\right)-\left\lceil\frac{i\lambda\lambda_{0}}{m}\right\rceil-1\), we have \[0\leq j_{s}\leq\left(\sum_{k=1}^{r}\left\lceil\frac{i\lambda\lambda_{k}}{m} \right\rceil\right)-\left\lceil\frac{i\lambda\lambda_{0}}{m}\right\rceil-1 \quad\text{for $s=1,2$.}\] From Proposition 4.2, we conclude that \(\Gamma\subseteq G(Q_{1})\times G(Q_{2})\). Therefore \(\Gamma\subseteq(G(Q_{1})\times G(Q_{2}))\cap H(Q_{1},Q_{2})\). Moreover, again from Proposition 4.2, the set \(\Gamma\) can be seen as the graph of the bijective map \(\theta:G(Q_{1})\to G(Q_{2})\) given by \(\theta(i+mj_{1})=i+mj_{2}\), which defines a permutation \(\tau\) of the set \(\{1,\dots,g(\mathcal{X})\}\). From Lemma 2.1 we conclude that \(\Gamma=\Gamma(Q_{1},Q_{2})\). In particular, if \(\lambda_{1}=\lambda_{2}=\dots=\lambda_{r}\) in the Proposition 4.4, we obtain the description of the minimal generating set given in [36, Theorem 8]. **Proposition 4.5**.: _Suppose that \(\lambda_{\ell}=1\) for some \(1\leq\ell\leq r\) and let \(Q_{\ell}\in\mathcal{P}_{\mathcal{F}}\) be the unique extension of \(P_{\ell}\). 
Then_ \[\Gamma(Q_{\infty},Q_{\ell})=\bigg{\{}(mj_{1}-i\lambda_{0},i+mj_{2})\in\mathbb{N}^{2}\mid 1\leq i\leq m-1,\,j_{1}\geq\bigg{\lceil}\frac{i\lambda_{0}}{m}\bigg{\rceil},\,j_{2}\geq 0,\] \[j_{1}+j_{2}=\sum_{k=1}^{r}\biggl{\lceil}\frac{i\lambda_{k}}{m}\bigg{\rceil}-1\bigg{\}}.\] Proof.: Without loss of generality, suppose that \(\ell=1\) and define the set \[\Gamma:=\bigg{\{}(mj_{1}-i\lambda_{0},i+mj_{2})\in\mathbb{N}^{2}\mid 1\leq i\leq m-1,\,j_{1}\geq\bigg{\lceil}\frac{i\lambda_{0}}{m}\bigg{\rceil},\,j_{2}\geq 0,\] \[j_{1}+j_{2}=\sum_{k=1}^{r}\biggl{\lceil}\frac{i\lambda_{k}}{m}\bigg{\rceil}-1\bigg{\}}.\] Since \(j_{1}\geq\big{\lceil}\frac{i\lambda_{0}}{m}\big{\rceil}\), \(j_{2}\geq 0\), and \(j_{1}+j_{2}=\sum_{k=1}^{r}\bigl{\lceil}\frac{i\lambda_{k}}{m}\big{\rceil}-1\), we have \[\biggl{\lceil}\frac{i\lambda_{0}}{m}\bigg{\rceil}\leq j_{1}\leq\sum_{k=1}^{r}\biggl{\lceil}\frac{i\lambda_{k}}{m}\bigg{\rceil}-1\quad\text{and}\quad 0\leq j_{2}\leq\left(\sum_{k=1}^{r}\biggl{\lceil}\frac{i\lambda_{k}}{m}\bigg{\rceil}\right)-\bigg{\lceil}\frac{i\lambda_{0}}{m}\bigg{\rceil}-1.\] From Propositions 4.1 and 4.2, it follows that \(\Gamma\subseteq G(Q_{\infty})\times G(Q_{1})\). On the other hand, for \((mj_{1}-i\lambda_{0},i+mj_{2})\in\Gamma\) and from (4), we have the following divisor \[\left((x-\alpha_{1})^{-j_{2}}y^{-i}\prod_{k=2}^{r}(x-\alpha_{k})^{\left\lceil\frac{i\lambda_{k}}{m}\right\rceil}\right)_{\mathcal{F}}\] \[=\sum_{k=2}^{r}\frac{m\bigl{\lceil}\frac{i\lambda_{k}}{m}\big{\rceil}-i\lambda_{k}}{(m,\lambda_{k})}\sum_{Q\in\mathcal{P}_{\mathcal{F}},\,Q|P_{k}}Q-(mj_{1}-i\lambda_{0})Q_{\infty}-(i+mj_{2})Q_{1}\] and therefore \(\Gamma\subseteq H(Q_{\infty},Q_{1})\). Thus, \(\Gamma\subseteq(G(Q_{\infty})\times G(Q_{1}))\cap H(Q_{\infty},Q_{1})\). Similarly, as in the proof of Proposition 4.4, \(\Gamma\) is the graph of the bijective map \(\theta:G(Q_{\infty})\to G(Q_{1})\) given by \(\theta(mj_{1}-i\lambda_{0})=i+mj_{2}\), which defines a permutation \(\tau\) of the set \(\{1,\dots,g(\mathcal{X})\}\) that satisfies the conditions of Lemma 2.1. It follows that \(\Gamma=\Gamma(Q_{\infty},Q_{1})\).

## 5. One-point Codes

In this section we construct one-point AG codes over Kummer extensions. We start by presenting a general construction of linear codes using the results obtained in the previous sections. As a consequence, we construct three families of one-point AG codes on function fields with many rational places and provide the exact value of their parameters. **Theorem 5.1**.: _Let \(\mathcal{X}\) be the curve defined by \(y^{m}=f(x)\), where \(f(x)\in\mathbb{F}_{q}[x]\) is a separable polynomial of degree \(r\geq 3\). Let \(Q\in\mathcal{P}_{\mathbb{F}_{q}(\mathcal{X})}\) be a totally ramified place in the extension \(\mathbb{F}_{q}(\mathcal{X})/\mathbb{F}_{q}(x)\) such that \(Q\neq Q_{\infty}\). For \(a\in\{2,\ldots,r-1\}\), define the divisors_ \[G_{a}:=\left(am-\left\lfloor\frac{am}{r}\right\rfloor-1\right)Q\quad\text{and}\quad D:=\sum_{Q^{\prime}\in\mathcal{X}(\mathbb{F}_{q}),Q^{\prime}\neq Q}Q^{\prime},\] _where \(\mathcal{X}(\mathbb{F}_{q})\) is the set of \(\mathbb{F}_{q}\)-rational places on the function field \(\mathbb{F}_{q}(\mathcal{X})\), and assume that \(\deg(G_{a})<N:=\deg(D)\).
Then the linear AG code \(C_{\mathcal{L}}(D,G_{a})\) has parameters_ \[\left[N,a+\sum_{i=1}^{a-1}\biggl{\lfloor}\frac{im}{r}\biggr{\rfloor},d\geq N-m(a-1)\right].\] _In addition, if \(\#\{\gamma\in\mathbb{F}_{q}\mid P_{\gamma}\in\mathcal{P}_{\mathbb{F}_{q}(x)}\) splits completely in \(\mathbb{F}_{q}(\mathcal{X})/\mathbb{F}_{q}(x)\}\geq a-1\), then the minimum distance of the linear code \(C_{\mathcal{L}}(D,G_{a})\) is exactly \(d=N-m(a-1)\)._ Proof.: From Proposition 4.3, the gap set \(G(Q)\) can be decomposed as the disjoint union of the following consecutive sequences of gaps \[(j-1)m+1,(j-1)m+2,\ldots,(j-1)m+\left(m-\left\lfloor\frac{jm}{r}\right\rfloor-1\right) \tag{5}\] of length \(m-\left\lfloor\frac{jm}{r}\right\rfloor-1\) for \(1\leq j\leq r-1\). Given \(a\) in \(\{2,\ldots,r-1\}\), from (5), we deduce that \[\ell(G_{a})=\ell\left(\left(am-\left\lfloor\frac{am}{r}\right\rfloor-1\right)Q\right)=a+\sum_{i=1}^{a-1}\biggl{\lfloor}\frac{im}{r}\biggr{\rfloor}.\] Thus, since \(\deg(G_{a})<\deg(D)\) and from Theorem 2.5, the AG code \(C_{\mathcal{L}}(D,G_{a})\) has parameters \[\left[N,a+\sum_{i=1}^{a-1}\biggl{\lfloor}\frac{im}{r}\biggr{\rfloor},d\geq N-m(a-1)\right].\] Now, let \(S:=\{\gamma\in\mathbb{F}_{q}\mid P_{\gamma}\in\mathcal{P}_{\mathbb{F}_{q}(x)}\) splits completely in the extension \(\mathbb{F}_{q}(\mathcal{X})/\mathbb{F}_{q}(x)\}\) and suppose that \(\#S\geq a-1\). For \(2\leq a\leq r-1\), consider the function \[z:=(x-\beta)^{1-a}\prod_{i=1}^{a-1}(x-\gamma_{i}),\] where \(\gamma_{i}\in S\) and \(\beta\in\mathbb{F}_{q}\) is such that \(Q\) is the only place over \(P_{\beta}\in\mathcal{P}_{\mathbb{F}_{q}(x)}\). Then \(z\) is in \(\mathcal{L}((am-\lfloor am/r\rfloor-1)Q)\) and has exactly \(m(a-1)\) distinct zeros. The weight of the corresponding codeword is \(N-m(a-1)\) and the result follows. In the following, we apply Theorem 5.1 to construct one-point AG codes on function fields with many rational places. Let \(q=p^{n}\), \(p\) prime and \(n\geq 2\). In [2] Abdon and Garcia showed that there exists a unique \(\mathbb{F}_{q^{2}}\)-maximal function field with genus \(q(q/p-1)/2\) having a place such that \(q/p\) is a non-gap at this place. We present a first family of codes over this function field. **Corollary 5.2**.: _Let \(q=p^{n}\), \(p\) prime, and \(n\geq 2\) such that \(3\leq q/p\). For each \(2\leq a\leq q/p-1\), there exists an AG code over \(\mathbb{F}_{q^{2}}\) with parameters_ \[\left[\frac{q^{3}}{p},a+\frac{pa(a-1)}{2},\frac{q^{3}}{p}-(q+1)(a-1)\right].\] Proof.: Consider the \(\mathbb{F}_{q^{2}}\)-maximal function field of the curve defined by the affine equation \[cy^{q+1}=x^{q/p}+x^{q/p^{2}}+\cdots+x^{p}+x,\quad c^{q}+c=0\text{ and }c\neq 0\] with genus \(g=q(q/p-1)/2\). From Theorem 5.1 the result follows. In the following result we present one-point AG codes over a generalization of the Hermitian function field given by Garcia in [15, Example 1.3]. **Corollary 5.3**.: _For \(q\geq 3\), \(n\) an odd integer, and \(2\leq a\leq q-1\), there exists an AG code over \(\mathbb{F}_{q^{2n}}\) with parameters_ \[\left[q^{2n+1},a+\frac{a(a-1)q^{n-1}}{2},q^{2n+1}-(q^{n}+1)(a-1)\right].\] Proof.: Consider the \(\mathbb{F}_{q^{2n}}\)-maximal function field of the curve defined by the equation \[y^{q^{n}+1}=x^{q}+x.\] This function field has genus \(g=q^{n}(q-1)/2\) and note that when \(n=1\) we get the Hermitian function field. Using Theorem 5.1, we obtain one-point codes with the desired parameters.
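For a concrete instance of Corollary 5.3, take \(q=4\) and \(n=1\), so that the underlying function field is the Hermitian function field over \(\mathbb{F}_{16}\), of genus \(6\) and with \(N=q^{2n+1}=64\); for \(a=2\) the corollary yields a \([64,3,59]\) code over \(\mathbb{F}_{16}\). Here \(\deg(G_{a})=2m-\lfloor 2m/r\rfloor-1=7\), so the minimum distance exceeds the designed distance \(N-\deg(G_{a})=57\) by \(2\), the gain coming from the two consecutive gaps \(6,7\) at \(Q\) used in Theorem 2.5.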
As a last application, we construct one-point AG codes over the function field of the Norm-Trace curve. This function field was studied in detail by Geil in [18]. **Corollary 5.4**.: _For \(q\geq 3,n\geq 2,\) and \(2\leq a\leq q^{n-1}-1\), we obtain one-point AG codes over \(\mathbb{F}_{q^{n}}\) with parameters_ \[\left[q^{2n-1},\frac{a(a+1)}{2}+\sum_{i=1}^{a-1}\biggl{\lfloor}\frac{i(q^{n- 1}-1)}{q^{n-1}(q-1)}\biggr{\rfloor},q^{2n-1}-\frac{(a-1)(q^{n}-1)}{q-1}\right].\] Proof.: Let \(n\geq 2\) be an integer. The Norm-Trace curve over \(\mathbb{F}_{q^{n}}\) is defined by \[y^{\frac{q^{n}-1}{q-1}}=x^{q^{n-1}}+x^{q^{n-2}}+\cdots+x.\] Its function field has genus \(g=\frac{q(q^{n-1}-1)^{2}}{2(q-1)}\) and \(q^{2n-1}+1\) rational places over \(\mathbb{F}_{q^{n}}\). The desired result follows immediately by applying Theorem 5.1 to this function field. Now we provide a family of AG codes with exact parameters over the function field \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})\), where \(\mathcal{Y}_{m}\) is defined by the Equation (6). Let \(q\geq 4\) even, \(n\geq 3\) be an odd integer, and \(m\geq 2\) be an integer such that \(m\) divides \(q^{n}+1\) and \(q+1\) divides \(m\). Consider the curve given by the affine equation \[\mathcal{Y}_{m}:\quad y^{m}=x(x+1)\left(\frac{x^{q-1}+1}{x+1}\right)^{q+1}. \tag{6}\] This curve is a subcover of the Beelen-Montanucci curve [4] and first appeared in [31, Theorem 3.1]. Its function field \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})\) is maximal over \(\mathbb{F}_{q^{2n}}\), has genus \[g=\frac{m(q-1)-q^{2}+q+1}{2}\] and the number of rational places is \[\#\mathcal{Y}_{m}(\mathbb{F}_{q^{2n}})=q^{2n}-q^{n+2}+(m+1)q^{n+1}-(m-1)q^{n}+1.\] The only totally ramified places in the extension \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})/\mathbb{F}_{q^{2n}}(x)\) are the places \(Q_{\infty}\) and \(Q_{\alpha}\) that lie over the places \(P_{\infty}\) and \(P_{\alpha}\) in \(\mathcal{P}_{\mathbb{F}_{q^{2n}}(x)}\) for \(\alpha\in\{0,1\}\). For \(\beta\) in \(\mathbb{F}_{q}\setminus\{0,1\}\), the place \(P_{\beta}\in\mathcal{P}_{\mathbb{F}_{q^{2n}}(x)}\) has exactly \(q+1\) rational places in \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})\) over \(P_{\beta}\). From [32, Theorem 4], for each \(\gamma\in\mathbb{F}_{q^{2n}}\setminus\mathbb{F}_{q}\) the place \(P_{\gamma}\in\mathcal{P}_{\mathbb{F}_{q^{2n}}(x)}\) has exactly none or \(m\) rational places in \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})\) over \(P_{\gamma}\). Let \(u\) be the number of elements \(\gamma\in\mathbb{F}_{q^{2n}}\setminus\mathbb{F}_{q}\) such that \(P_{\gamma}\in\mathcal{P}_{\mathbb{F}_{q^{2n}}(x)}\) has exactly \(m\) rational places in \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})\) over \(P_{\gamma}\). Then \[\#\mathcal{Y}_{m}(\mathbb{F}_{q^{2n}})=3+(q+1)(q-2)+um\] and we conclude \(u=(q^{n}-q^{2}+q)(q^{n}+1)/m+q^{n+1}-q^{n}\). For \(q-1\leq k\leq u\) consider the divisors \[G:=kmQ_{\infty}\quad\text{and}\quad D:=\sum_{Q\in\mathcal{Y}_{m}(\mathbb{F}_{ q^{2n}}),\,Q\neq Q_{\infty}}Q,\] and the code \(C_{\mathcal{L}}(D,G)\). This code has length \[N:=\deg(D)=q^{2n}-q^{n+2}+(m+1)q^{n+1}-(m-1)q^{n}\] and its minimum distance is \(d=N-km\). In fact, the function \(z_{k}=\prod_{i=1}^{k}(x-\gamma_{i})\) in \(\mathcal{L}(kmQ_{\infty})\), where \(\gamma_{i}\in\mathbb{F}_{q^{2n}}\setminus\mathbb{F}_{q}\) is such that \(P_{\gamma_{i}}\) splits completely in \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{m})/\mathbb{F}_{q^{2n}}(x)\), has exactly \(km\) distinct zeros and the weight of the corresponding codeword is \(N-km\). 
Also, since \(2g-2<km<N\), the code has dimension \(km+1-g\). We summarize this result in the next proposition. **Proposition 5.5**.: _Let \(q\geq 4\) be even, \(n\geq 3\) odd, and \(m\geq 2\) be an integer such that \(m\) divides \(q^{n}+1\) and \(q+1\) divides \(m\). For \(q-1\leq k\leq(q^{n}-q^{2}+q)(q^{n}+1)/m+q^{n+1}-q^{n}\), it exists a linear code over \(\mathbb{F}_{q^{2n}}\) with parameters_ \[[N,km+1-((q-1)m-q^{2}+q+1)/2,N-km],\] _where \(N=q^{2n}-q^{n+2}+(m+1)q^{n+1}-(m-1)q^{n}\)._ **Remark 5.6**.: _In particular, for \(m=q+1\) we have a code with parameters_ \[[q^{2n}+q^{n+1},k(q+1)+1-q/2,q^{2n}+q^{n+1}-k(q+1)]\] _and singleton defect \(\delta=q/2\). We notice that for \(q=4\) we get a code over \(\mathbb{F}_{q^{2n}}\) with singleton defect \(\delta=2\)._ ## 6. Two-point Codes Previously, in Propositions 4.4 and 4.5 it was obtained a description of the minimal generating set at two totally ramified places in a Kummer extension. Now, benefiting from these descriptions and applying Proposition 3.3 and Theorem 3.2, we construct two-point AG codes on the subcover of the Beelen-Montanucci curve described in (6). For two-point AG codes on the Beelen-Montanucci curve, see [26]. For \(q\geq 4\) even, consider the curve in (6) for \(m=q^{n}+1\), that is, \[\mathcal{Y}_{q^{n}+1}:\quad y^{q^{n}+1}=x(x+1)\left(\frac{x^{q-1}+1}{x+1} \right)^{q+1}, \tag{7}\] and let \(\mathbb{F}_{q^{2n}}(x,y)\) be its function field. Fix the place \(Q_{\infty}\), and the other two totally ramified places of degree one \(Q_{0},Q_{1}\) in the extension \(\mathbb{F}_{q^{2n}}(x,y)/\mathbb{F}_{q^{2n}}(x)\). By Propositions 4.4 and 4.5 we have the following descriptions of the minimal generating sets \[\Gamma(Q_{0},Q_{1})=\bigg{\{}(i+mj_{1},i+mj_{2})\mid 1\leq i\leq m-1, \;j_{1}\geq 0,\;j_{2}\geq 0,\text{ and }\\ j_{1}+j_{2}=(q-2)\bigg{[}\frac{i}{s}\bigg{]}+1-\bigg{[}\frac{i(q ^{2}-q)}{m}\bigg{]}\bigg{\}} \tag{8}\] and \[\Gamma(Q_{\infty},Q)=\bigg{\{}(mj_{1}-i(q^{2}-q),i+mj_{2})\mid 1 \leq i\leq m-1,\;j_{1}\geq\bigg{[}\frac{i(q^{2}-q)}{m}\bigg{]}\,,\;j_{2}\geq 0,\\ \text{ and }j_{1}+j_{2}=(q-2)\left\lceil\frac{i}{s}\right\rceil+1 \bigg{\}}, \tag{9}\] where \(Q\in\{Q_{0},Q_{1}\}\). Furthermore, we notice that the divisor \[W:=(2g-2)Q_{\infty}=(q^{n+1}-q^{n}-q^{2}+2q-2)Q_{\infty}\] is a canonical divisor of the function field \(\mathbb{F}_{q^{2n}}(x,y)\). In fact, from [30, Theorem 4.4], the Weierstrass semigroup \(H(Q_{\infty})\) is symmetric and therefore \(2g-1\in G(Q_{\infty})\). From the Riemann-Roch Theorem, \(\ell(uQ_{\infty})=u+1-g\) for \(u\geq 2g-1\) and therefore \[\ell(W)=\ell((2g-2)Q_{\infty})=\ell((2g-1)Q_{\infty})=g.\] With the same notation above, we present pure gaps at two rational places on the function field of the curve \(\mathcal{Y}_{q^{n}+1}\). **Proposition 6.1**.: _Let \(q\geq 4\) be even, \(n\geq 3\) be an odd integer, \(s=\frac{q^{n}+1}{q+1}\), and \(Q\in\{Q_{0},Q_{1}\}\) a totally ramified place in the extension \(\mathbb{F}_{q^{2n}}(x,y)/\mathbb{F}_{q^{2n}}(x)\) as in (7). Then_ 1. _For_ \(0\leq a\leq s-2\) _and_ \(1\leq b\leq s-1-a\) _we have_ \[((q^{n}+1)(q-1)-(s-a)(q^{2}-q),b)\in G_{0}(Q_{\infty},Q).\] 2. _For_ \(1\leq a_{i}\leq q+1\) _and_ \(1\leq b_{i}\leq(q^{2}-q-2a_{i})\frac{(q^{n-1}-1)}{q^{2}-1}\) _for_ \(i=1,2\) _we have_ \[\left(\frac{a_{2}(q^{n}-2q^{n-1}+1)}{q-1}-b_{2},\frac{a_{1}(q^{n}-2q^{n-1}+1)}{ q-1}-b_{1}\right)\in G_{0}(Q_{0},Q_{1}).\] 3. 
_For_ \(1\leq a_{2}\leq a_{1}\leq q+1\)_,_ * \(0\leq b_{1}\leq\begin{cases}2a_{1}\frac{(q^{n-1}-1)}{q^{2}-1},\text{ if }1\leq a_{1}\leq\frac{q}{2},\\ \frac{q^{n-1}-q}{q-1},\text{ if }\frac{q}{2}+1\leq a_{1}\leq q+1,\end{cases}\)__ * \(1\leq b_{2}\leq(q^{2}-q-2a_{2})\frac{(q^{n-1}-1)}{q^{2}-1},\)__ _the pairs_ \[\left(\frac{a_{1}(q^{n}-2q^{n-1}+1)}{q-1}+b_{1},\frac{a_{2}(q^{n}-2q^{n-1}+1)}{ q-1}-b_{2}\right)\] _and_ \[\left(\frac{a_{2}(q^{n}-2q^{n-1}+1)}{q-1}-b_{2},\frac{a_{1}(q^{n}-2q^{n-1}+1)}{ q-1}+b_{1}\right)\] _are pure gaps at_ \(Q_{0},Q_{1}\)_._ Proof.: For the first item, choosing \(i=s-a,j_{1}=q-1\), and \(j_{2}=0\) in Equation (9), we have that for \(0\leq a\leq s-1\) \[\mathbf{u}_{a}:=((q^{n}+1)(q-1)-(s-a)(q^{2}-q),s-a)\in\Gamma(Q_{\infty},Q).\] Thus, from Proposition 3.3 we obtain that \[\text{glb}(\mathbf{u}_{a},\mathbf{u}_{s-b})=((q^{n}+1)(q-1)-(s-a)(q^{2}-q),b) \in G_{0}(Q_{\infty},Q)\] for \(0\leq a\leq s-2\) and \(1\leq b\leq s-1-a\). Now we are going to divide the proof of the second and third items into two steps. For simplicity let \(M:=\frac{q^{n}-2q^{n-1}+1}{q-1}\). **Claim 1:** For \(1\leq a\leq q+1\) and \(0\leq b\leq\tilde{M}:=\min\left\{2a\frac{(q^{n-1}-1)}{q^{2}-1},\left\lceil \frac{q^{n}+1-2a}{q^{2}-q}\right\rceil-1\right\}\), we have \[\mathbf{w}_{a,b}:=(aM+b,aM+b)\in\Gamma(Q_{0},Q_{1}).\] Proof of Claim 1: Given \(a\) and \(b\), choose the values \(j_{1}=0\), \(j_{2}=0\), and \(i=aM+b\) in the description of the set \(\Gamma(Q_{0},Q_{1})\) given in (8). Then we are left to prove that and \(\left(q-2\right)\left\lceil\frac{i}{s}\right\rceil+1-\left\lceil\frac{i(q^{2}-q)}{q ^{n}+1}\right\rceil=0\). Note that \[\left\lceil\frac{i}{s}\right\rceil =\left\lceil\frac{a(q^{n}-2q^{n-1}+1)(q+1)}{(q-1)(q^{n}+1)}+\frac{b (q+1)}{q^{n}+1}\right\rceil\] \[=\left\lceil\frac{a(q^{n+1}-q^{n}-2q^{n-1}+q+1)}{q^{n+1}-q^{n}+q- 1}+\frac{b(q^{2}-1)}{(q^{n}+1)(q-1)}\right\rceil\] \[=\left\lceil a-\frac{2a(q^{n-1}-1)}{(q^{n}+1)(q-1)}+\frac{b(q^{2}- 1)}{(q^{n}+1)(q-1)}\right\rceil\] \[=a+\left\lceil\frac{b(q^{2}-1)-2a(q^{n-1}-1)}{(q^{n}+1)(q-1)} \right\rceil=a,\] since \(0\leq b\leq\frac{2a(q^{n-1}-1)}{q^{2}-1}\). Analogously, \[\left\lceil\frac{i(q^{2}-q)}{q^{n}+1}\right\rceil =\left\lceil\frac{a(q^{n}-2q^{n-1}+1)(q^{2}-q)}{(q-1)(q^{n}+1)}+ \frac{b(q^{2}-q)}{q^{n}+1}\right\rceil\] \[=\left\lceil\frac{a(q^{n+1}-2q^{n}+q)}{q^{n}+1}+\frac{b(q^{2}-q)} {q^{n}+1}\right\rceil\] \[=a(q-2)+\left\lceil\frac{2a+b(q^{2}-q)}{q^{n}+1}\right\rceil=a(q -2)+1,\] since \(0\leq b\leq\left\lceil\frac{q^{n}+1-2a}{q^{2}-q}\right\rceil-1\). We now compute the minimum \(\tilde{M}\). For \(n\geq 3\) we are going to show that \[\tilde{M}=\begin{cases}2a\frac{(q^{n-1}-1)}{q^{2}-1},&\text{ if }1\leq a\leq \frac{q}{2},\\ \frac{q^{n-1}-q}{q-1},&\text{ if }\frac{q}{2}+1\leq a\leq q+1.\end{cases}\] In fact, at first let \(1\leq a\leq\frac{q}{2}\), then \[\left\lceil\frac{q^{n}+1-2a}{q^{2}-q}\right\rceil-1\geq\frac{q^{n}+1-q}{q^{2} -q}-1=\frac{q^{n}+1-q^{2}}{q^{2}-q}\geq\frac{q^{n}-q}{q^{2}-1}\geq 2a\frac{(q^{n-1}- 1)}{q^{2}-1}.\] On the other hand, if \(\frac{q}{2}+1\leq a\leq q+1\), then \[2a\frac{(q^{n-1}-1)}{q^{2}-1}\geq(q+2)\frac{q^{n-1}-1}{q^{2}-1}\geq\frac{q^{n} -q-1}{q^{2}-q}\geq\left\lceil\frac{q^{n}+1-2a}{q^{2}-q}\right\rceil-1=\sum_{i= 1}^{n-2}q^{i}=\frac{q^{n-1}-q}{q-1}.\] Then it is easy to conclude that \(i=aM+b\leq q^{n}\). The claim follows. 
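Since the proof of Claim 1 rests entirely on the two ceiling identities computed above, a brute-force check of a concrete instance is a useful safeguard. The sketch below is ours (not part of the paper); it verifies both identities for \(q=4\) and \(n=3\), the instance used later in the tables of Section 7, for every admissible pair \((a,b)\).

```python
# Brute-force check (ours) of the ceiling identities in the proof of Claim 1 for q = 4, n = 3.
from math import ceil

q, n = 4, 3
s = (q**n + 1) // (q + 1)                      # s = 13
M = (q**n - 2 * q**(n - 1) + 1) // (q - 1)     # M = (q^n - 2q^{n-1} + 1)/(q - 1) = 11

def b_bound(a):
    # the minimum M~ computed at the end of the proof of Claim 1
    if a <= q // 2:
        return 2 * a * (q**(n - 1) - 1) // (q**2 - 1)
    return (q**(n - 1) - q) // (q - 1)

for a in range(1, q + 2):
    for b in range(0, b_bound(a) + 1):
        i = a * M + b
        assert ceil(i / s) == a
        assert ceil(i * (q**2 - q) / (q**n + 1)) == a * (q - 2) + 1
print("Claim 1 ceiling identities hold for q = 4, n = 3")
```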
**Claim 2**: For \(1\leq a\leq q+1\) and \(1\leq b\leq(q^{2}-q-2a)\frac{(q^{n-1}-1)}{q^{2}-1}\), the pair \[\mathbf{u}_{a,b}:=(aM-b,aM-b)\in G_{0}(Q_{0},Q_{1}).\] Proof of Claim 2:.: From definition of pure gap we must prove that \(\mathcal{L}(E)=\mathcal{L}(E-Q_{i})\), where \(E=\left(aM-b\right)\left(Q_{0}+Q_{1}\right)\), for \(i=0,1\). The cases \(i=0,1\) are analogous, and then we prove only for \(i=0\). Given \(a\) and \(b\), define the function \[\mu:=y^{aM-b-1}(x+1)\prod_{\alpha\in\mathbb{F}_{q}\setminus\{0,1\}}(x-\alpha)^{1- a}\in\mathbb{F}_{q^{2n}}(x,y).\] After some computations, we obtain that the principal divisor of \(\mu\) in \(\mathbb{F}_{q^{2n}}(x,y)\) is \[(\mu)_{\mathbb{F}_{q^{2n}}(x,y)} =(aM-b-1)\,Q_{0}+(aM+q^{n}-b)\,Q_{1}\] \[\quad+\left(\frac{(q^{n-1}-1)(q^{2}-q-2a)}{q^{2}-1}-b\right)\sum_ {Q|P_{\alpha},\,\alpha\in\mathbb{F}_{q}\setminus\{0,1\}}Q\] \[\quad-\left(q^{n+1}-q^{n}+q-1+2a-(b+1)(q^{2}-q)\right)Q_{\infty}.\] We obtain that \(\mu\in\mathcal{L}(W-E+Q_{0})\setminus\mathcal{L}(W-E)\), where \(W=(q^{n+1}-q^{n}-q^{2}+2q-2)Q_{\infty}\) is a canonical divisor of the function field \(\mathbb{F}_{q^{2n}}(x,y)\). From Lemma 2.3, we conclude that \(\mathcal{L}(E)=\mathcal{L}(E-Q_{0})\). This completes the proof of the second claim. Since \(\mathbf{u}_{a,b}\) is a pure gap at \(Q_{0},Q_{1}\), from (8) and Proposition 3.3, it follows that there exists positive integers \(j_{1},j_{2}\) such that \(\mathbf{u}_{a,b}=\operatorname{glb}(\mathbf{v}_{a,b}^{1},\mathbf{v}_{a,b}^{2})\) for the pairs \[\mathbf{v}_{a,b}^{1} :=(aM-b+j_{1}(q^{n}+1),aM-b)\text{ and }\] \[\mathbf{v}_{a,b}^{2} :=(aM-b,aM-b+j_{2}(q^{n}+1))\] in \(\Gamma(Q_{0},Q_{1})\). Thus, for \(1\leq a_{k}\leq q+1\), \(1\leq b_{k}\leq(q^{2}-q-2a_{k})\frac{(q^{n-1}-1)}{q^{2}-1}\) for \(k=1,2\) we have \[\operatorname{glb}(\mathbf{v}_{a_{1},b_{1}}^{1},\mathbf{v}_{a_{2},b_{2}}^{2}) =(a_{2}M-b_{2},a_{1}M-b_{1})\in G_{0}(Q_{0},Q_{1}).\] On the other hand, for \(1\leq a_{2}\leq a_{1}\leq q+1\), \(0\leq b_{1}\leq\min\left\{\frac{2a_{1}(q^{n-1}-1)}{q^{2}-1},\left\lceil\frac {q^{n}+1-2a_{1}}{q^{2}-q}\right\rceil-1\right\}\), and \(1\leq b_{2}\leq(q^{2}-q-2a_{2})\frac{(q^{n-1}-1)}{q^{2}-1}\) we obtain that \[\operatorname{glb}(\mathbf{w}_{a_{1},b_{1}},\mathbf{v}_{a_{2},b_{2}}^{1})=(a_ {1}M+b_{1},a_{2}M-b_{2})\in G_{0}(Q_{0},Q_{1})\] and \[\operatorname{glb}(\mathbf{w}_{a_{1},b_{1}},\mathbf{v}_{a_{2},b_{2}}^{2})=(a_ {2}M-b_{2},a_{1}M+b_{1})\in G_{0}(Q_{0},Q_{1}).\] Note that Proposition 3.3 was the fundamental key for determining the pure gaps given in the previous proposition. To see a geometric interpretation of Proposition 3.3, in Figure 1 we determine the pure gap set \(G_{0}(Q_{0},Q_{1})\) from the minimal generating set \(\Gamma(Q_{0},Q_{1})\) on the curve \(\mathcal{Y}_{q^{n}+1}\) defined in (7) for the case \(q=4\) and \(n=3\). To finish this section, we construct two-point AG codes using Proposition 6.1 and Theorem 3.2. **Proposition 6.2**.: _Let \(q\geq 4\) be even and \(n\geq 3\) be an odd integer. 
Then for_ \[\left\lfloor\frac{q^{n+2}-2q^{n+1}-q^{3}+q^{2}+1}{2q^{3}-3q-1}\right\rfloor+1 \leq a\leq\frac{q^{n}-2q-1}{q+1}\] it exists a \([N,k,d]\)-code over \(\mathbb{F}_{q^{2n}}\) with parameters_ \[N=q^{2n+1}-q^{n+2}+2q^{n+1}-1,\] \[k=q^{2n+1}-q^{n+2}+\frac{5q^{n+2}+q^{n}-q^{3}+q^{2}-2q+2}{2(q+1)} -a(2q^{2}-2q-1),\text{ and}\] \[d\geq 2a(q^{2}-q-1)-\frac{q^{2}(q^{n}-2q^{n-1}-q^{n-2}-q+1)}{q+1}.\] Proof.: Consider \(Q_{\infty}\) the only place at infinity of the function field \(\mathbb{F}_{q^{2n}}(x,y)\) and \(Q\) any other totally ramified place of degree one in the extension \(\mathbb{F}_{q^{2n}}(x,y)/\mathbb{F}_{q^{2n}}(x)\). Let \(s=\frac{q^{n}+1}{q+1}\). From item \(i)\) in Proposition 6.1, we have the following pure gaps at \(Q_{\infty},Q\) for the values of \(b=1\) and \(b=s-1-a\) \[((q^{n}+1)(q-1)-(s-a)(q^{2}-q),1)\text{ and }((q^{n}+1)(q-1)-(s-a)(q^{2}-q),s-1- a). \tag{10}\] Define the divisors \[G:=(2(q^{n}+1)(q-1)-2(s-a)(q^{2}-q)-1)Q_{\infty}+(s-1-a)Q,\] and \[D:=\sum_{Q^{\prime}\in\mathcal{Y}_{q^{n}+1}(\mathbb{F}_{q^{2}}n)\setminus\{Q_{ \infty},Q\}}Q^{\prime}.\] Then \(\deg(G)<\deg(D)=q^{2n+1}-q^{n+2}+2q^{n+1}-1\) and \[\deg(G) =2(q^{n+1}-q^{n}+q-2)-(s-a)(2q^{2}-2q-1)\] \[>2(q^{n+1}-q^{n}+q-2)-\left(s-\frac{q^{n+2}-2q^{n+1}-q^{3}+q^{2}+ 1}{(q+1)(2q^{2}-2q-1)}\right)(2q^{2}-2q-1)\] \[=2(q^{n+1}-q^{n}+q-2)-s(2q^{2}-2q-1)+\frac{q^{n+2}-2q^{n+1}-q^{3}+ q^{2}+1}{q+1}\] \[=\frac{q^{n+2}-q^{n}-q^{3}+q^{2}-2}{q+1}\] \[=2g(\mathcal{Y}_{q^{n+1}})-2.\] So the AG code \(C_{\Omega}(D,G)\) has dimension \[k_{\Omega} =\deg(D)+g-1-\deg(G)\] \[=q^{2n+1}-q^{n+2}+\frac{5q^{n+2}+q^{n}-q^{3}+q^{2}-2q+2}{2(q+1)}- a(2q^{2}-2q-1).\] Since \(((q^{n}+1)(q-1)-(s-a)(q^{2}-q),b)\) is a pure gap at \(Q_{\infty},Q\) for any \(1\leq b\leq s-1-a\), from Proposition 6.1 and Theorem 3.2, it follows that the AG code \(C_{\Omega}(D,G)\) has minimum distance \[d_{\Omega} \geq\deg(G)-(2g-2)+s-a\] \[=2a(q^{2}-q-1)-\frac{q^{2}(q^{n}-2q^{n-1}-q^{n-2}-q+1)}{q+1}.\] **Proposition 6.3**.: _Let \(n\geq 3\) be an odd integer and \(u=2^{n}\). For \(\frac{4u^{2}+9}{5}\leq c_{1}\leq\frac{11u^{2}+4}{12}\) and \(\frac{4u^{2}+9}{5}\leq c_{2}\leq\frac{5u^{2}+4}{6}\), there exists an AG code over \(\mathbb{F}_{u^{4}}\) with parameters_ \[\left[4u^{4}-8u^{2}-1,4u^{4}-\frac{33u^{2}}{4}-5-c_{1}-c_{2},d\geq\frac{u^{2}} {2}+12\right].\] Proof.: Let \(M=\frac{u^{2}+2}{6}\), \(R=\frac{u^{2}-4}{60}\), and \(b=\frac{u^{2}-16}{12}\). For \(q=4\) and \(a_{1}=a_{2}=q+1\) in items \(ii)\) and \(iii)\) of Proposition 6.1, we deduce that the elements of the set \[\left\{(n_{1},n_{2})\in\mathbb{N}^{2}\mid 5M-2R\leq n_{1}\leq 5M+b,\,5M-2R\leq n _{2}\leq 5M-1\right\}\] are pure gaps at \(Q_{0},Q_{1}\), where \(Q_{0},Q_{1}\) are the totally ramified places in \(\mathbb{F}_{u^{4}}(\mathcal{Y}_{u^{2}+1})/\mathbb{F}_{u^{4}}(x)\) distinct from \(Q_{\infty}\). Consider the pairs \[(c_{1},c_{2})\quad\text{and}\quad(5M+b,5M-1), \tag{11}\] where \(5M-2R\leq c_{1}\leq 5M+b\) and \(5M-2R\leq c_{2}\leq 5M-1\), and the divisors \[G:=(5M+b+c_{1}-1)Q_{0}+(5M+c_{2}-2)Q_{1}\] and \[D:=\sum_{Q^{\prime}\in\mathcal{Y}_{u^{2}+1}(\mathbb{F}_{u^{4}})\setminus\{Q_{ 0},Q_{1}\}}Q^{\prime}.\] The pairs given in (11) are pure gaps at \(Q_{0},Q_{1}\) and satisfy the conditions of Theorem 3.2. 
Furthermore, since \(\deg(G)<\deg(D)=4u^{4}-8u^{2}-1\) and \[\deg(G)=10M+c_{1}+c_{2}+b-3\geq 20M-4R+b-3>2g(\mathcal{Y}_{u^{2}+1})-2,\] we conclude that the AG code \(C_{\Omega}(D,G)\) over \(\mathbb{F}_{u^{4}}\) has dimension \[k=\deg(D)+g-1-\deg(G)=4u^{4}-\frac{33u^{2}}{4}-5-c_{1}-c_{2},\] and minimum distance satisfying \[d\geq\deg(G)-(2g-2)+1+b+10M-c_{1}-c_{2}=\frac{u^{2}}{2}+12.\] ## 7. Some tables of codes In this section, we compare the relative parameters of two-point AG codes over the function field \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{q^{n}+1})\) obtained in Propositions 6.2 and 6.3, with the relative parameters of one-point AG codes over the same function field obtained using the order bound. Let \(P\) be a rational place in a function field \(F/\mathbb{F}_{q}\) and set \[H(P)=\{\rho_{1}:=0<\rho_{2}<\dots\}\] the Weierstrass semigroup at \(P\). The _Feng-Rao designed minimum distance_ or the _order bound_ of \(H(P)\) is defined by the function \(d_{ORD}:\mathbb{N}\rightarrow\mathbb{N}\) given by \[d_{ORD}(\ell):=\min\{\nu_{m}\mid m\geq\ell\},\] where \(\nu_{\ell}:=\#\{(i,j)\in\mathbb{N}^{2}\mid\rho_{i}+\rho_{j}=\rho_{\ell+1}\}\). In general, we have that \(d_{ORD}(\ell)\geq\ell+1-g\) and the equality holds if \(\rho_{\ell}\geq 4g-1\), see [21, Theorem 5.24]. For one-point differential AG codes, we can use the order bound for obtain a lower bound for the minimum distance. **Theorem 7.1**.: _[_21_, Theorem 4.13]_ _Consider the one-point code \(C_{\ell}:=C_{\mathcal{L}}(P_{1}+\dots+P_{N},\rho_{\ell}P)^{\perp}\), where \(P,P_{1},\dots,P_{N}\) are distinct \(\mathbb{F}_{q}\)-rational places in \(F\) and \(\rho_{\ell}\in H(P)\) is such that \(N>\rho_{\ell}\). Then \(C_{\ell}\) is an \([N,N-\ell,\geq d_{ORD}(\ell)]\)-code over \(\mathbb{F}_{q}\)._ For \(Q\) a totally ramified place in the extension \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{q^{n}+1})/\mathbb{F}_{q^{2n}}(x)\) such that \(Q\neq Q_{\infty}\), we can use the description of the gap set \(G(Q)\) given in Proposition 4.2 and the order bound described in Theorem 7.1 to obtain one-point AG codes over the function field \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{q^{n}+1})\). In Table 1, using the package NumericalSgps [11] of the software GAP [14], we present parameters of one-point AG codes over \(\mathbb{F}_{q^{2n}}(\mathcal{Y}_{q^{n}+1})\) for \(q=4\) and \(n=3\). These codes have length \(N=15872\) and are defined over \(\mathbb{F}_{2^{12}}\). On the other hand, in Table 2 we present two-point AG codes over \(\mathbb{F}_{2^{12}}\) of length \(N=15871\) obtained from Proposition 6.2 (for \(q=4\) and \(n=3\)) and Proposition 6.3 (for \(n=3\)). In all cases we obtain better relative parameters with respect to the one-point AG codes obtained on Table 1.
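For readers who want to reproduce computations like those behind Table 1, the order bound of Theorem 7.1 only needs the list of non-gaps at the chosen place. The sketch below is our own minimal illustration (it is not the GAP/NumericalSgps code used for the tables): it enumerates a numerical semigroup from its generators and evaluates \(d_{ORD}(\ell)\), truncating the minimum over \(m\geq\ell\) at a fixed search limit; the semigroup \(\langle 4,5\rangle\) (the Weierstrass semigroup at the place at infinity of the Hermitian curve over \(\mathbb{F}_{16}\)) is used as a toy input.

```python
# Our own minimal sketch of the Feng-Rao order bound d_ORD for a numerical
# semigroup given by its generators.
def semigroup(gens, limit):
    """Sorted list of semigroup elements <= limit: rho_1 = 0 < rho_2 < ..."""
    elems = {0}
    changed = True
    while changed:
        new = {e + g for e in elems for g in gens if e + g <= limit}
        changed = not new <= elems
        elems |= new
    return sorted(elems)

def d_ord(gens, ell, limit=200):
    rho = semigroup(gens, limit)
    elem_set = set(rho)
    def nu(m):
        # nu_m = #{(i, j) : rho_i + rho_j = rho_{m+1}}, with rho_{m+1} = rho[m] (0-based)
        target = rho[m]
        return sum(1 for r in rho if r <= target and target - r in elem_set)
    # d_ORD(ell) = min{nu_m : m >= ell}; the search is truncated at `limit`
    return min(nu(m) for m in range(ell, len(rho)))

print([d_ord([4, 5], ell) for ell in range(1, 11)])
```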
2303.00940
Sampling over Union of Joins
Data scientists often draw on multiple relational data sources for analysis. A standard assumption in learning and approximate query answering is that the data is a uniform and independent sample of the underlying distribution. To avoid the cost of join and union, given a set of joins, we study the problem of obtaining a random sample from the union of joins without performing the full join and union. We present a general framework for random sampling over the set union of chain, acyclic, and cyclic joins, with sample uniformity and independence guarantees. We study the novel problem of the union of joins size evaluation and propose two approximation methods based on histograms of columns and random walks on data. We propose an online union sampling framework that initializes with cheap-to-calculate parameter approximations and refines them on the fly during sampling. We evaluate our framework on workloads from the TPC-H benchmark and explore the trade-off of the accuracy of union approximation and sampling efficiency.
Yurong Liu, Yunlong Xu, Fatemeh Nargesian
2023-03-02T03:27:52Z
http://arxiv.org/abs/2303.00940v2
# Sampling over Union of Joins ###### Abstract. Data scientists often draw on multiple relational data sources for analysis. A standard assumption in learning and approximate query answering is that the data is a uniform and independent sample of the underlying distribution. To avoid the cost of join and union, given a set of joins, we study the problem of obtaining a random sample from the union of joins without performing the full join and union. We present a general framework for random sampling over the set union of chain, acyclic, and cyclic joins, with sample uniformity and independence guarantees. We study the novel problem of union of joins size evaluation and propose two approximation methods based on histograms of columns and random walks on data. We propose an online union sampling framework that initializes with cheap-to-calculate parameter approximations and refines them on the fly during sampling. We evaluate our framework on workloads from the TPC-H benchmark and explore the trade-off of the accuracy of union approximation and sampling efficiency. The source code, data, and/or other artifacts have been made available at [https://github.com/DataIntelligenceCrew/sample-union-joins.git](https://github.com/DataIntelligenceCrew/sample-union-joins.git). + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition such as RippleJoin (Ripple, 2017) and WanderJoin (WanderJoin, 2017) manage to use non-random/independent and random/dependent samples, respectively. Other techniques for sampling over join apply the accept/reject sampling paradigm to guarantee i.i.d (Ripple, 2017; Sohn, 2018). The most recent work by Zhao et al. proposes a framework for sampling over one join that handles general multi-way joins (Zhao et al., 2018). The motivation of random sampling over join is tightly connected to join size estimation which has also been a point of interest in the database community due to its application to query optimization (Zhao et al., 2018; Zhao et al., 2018; Zhao et al., 2018). **Example 2**.: _Continuing with Ex. 1, the second challenge is to union join samples such that uniformity is guaranteed, i.e., each tuple has the probability \(\frac{1}{|I_{W}\cup|I_{E}\cup|J_{MW}|}\) of being in the final sample. A naive solution is to union samples of joins, obtained in an offline manner. Suppose we apply an off-the-shelf sampling over join algorithm and obtain samples \(S_{W},S_{E},\) and \(S_{MW}\) from \(J_{W},J_{E},\) and \(J_{MW}\), respectively. We have \(P(t\in S_{W})=1/|S_{W}|,P(t^{\prime}\in S_{E})=1/|S_{E}|,\) and \(P(t^{\prime\prime}\in S_{MW})=1/|S_{MW}|.\) It is easy to show that \(U=J_{W}\cup J_{E}\cup J_{MW}\) does not guarantee uniformity and tuples have unequal probability of appearing in \(U.\) Consider the contradicting example of \(r\in S_{E},\)\(r\notin S_{W},r\notin S_{MW},\) we have \(P(r\in U)=1/|S_{E}|,\) however, if \(r\in S_{E}\cap S_{W}\cap S_{MW},\) we get \(P(r\in U)=(\frac{1}{|le|}+\frac{1}{|w|}+\frac{1}{|Mw|})\cdot\frac{|S_{E}\cap S _{MW}|}{|U|},\) because we do set union and keep one instance of overlapping tuples. An accept/reject sampling algorithm can help to adjust this probability to obtain \(1/|U|,\) however, as we show in SS 2, the algorithm needs to know apriori, the size of each join and their union, which requires the overlap size of all combinations of \(J_{E},\)\(J_{W},\) and \(J_{MW}.\) One idea may be to estimate the overlaps and unions from the samples. 
However, that would not be a viable option, since just like joining samples or relations, the probability of obtaining samples from the overlapping regions of joins is low._ In this paper, we present a generic framework for random sampling over the union of joins. In particular, we consider sampling set union with replacement. Sampling from the disjoint union is a straightforward extension of the set union. The classic join sampling (Ripple, 2017; Zhao et al., 2018) and the recently revisited framework (Zhao et al., 2018) consider random sampling _with replacement_ over join. Another relevant problem is the random enumeration of the result of the union of acyclic conjunctive queries (Zhao et al., 2018). The intermediate results of a random query result enumeration algorithm can be considered as a random sample from the union _without replacement_ which is different than our problem. Moreover, in this paper, we study union sampling over a larger class of joins (chain, cyclic, and acyclic). In SS 3.2, we provide an elaborate discussion and analytical comparison of this line of work with our framework. There are several challenges to addressing the sampling over the union of joins problem. First, unioning random samples from joins does not guarantee uniformity. Our solution is an accept/reject sampling algorithm that defines Bernoulli and non-Bernoulli probability distributions for selecting joins. The latter mimics the behavior of union calculation. Second, it turned out that to guarantee uniformity, the sampling framework needs to know the size of each join and the size of the union of joins apriori. Although the problem of set union size approximation (Zhao et al., 2018; Zhao et al., 2018; Zhao et al., 2018) and its online extension to streams (Zhao et al., 2018; Zhao et al., 2018; Zhao et al., 2018; Zhao et al., 2018) have been extensively studied in the approximate counting literature, to the best of our knowledge, there is no study that addresses the problem of approximating the union size of joins without performing the full join and overlap. Third, histogram-based estimation requires knowing the overlap of an exponential number of sets of joins, each set in the powerset of joins. We reduce the space of calculation by reformulating the problem to use smaller-unit statistics, called \(k\)-overlaps, of each join, which is the size of the subset of a join result that is shared with _exactly_\((k-1)\) other joins. Next, we propose two instantiations of the framework for estimating the overlap of joins with an arbitrary number of relations and all join types (chain, cyclic, and acyclic): a histogram-based method and a random-walk method. The histogram-based technique is cheap and requires knowing limited statistics of joins. It may incur a loose bound, thus, a high rejection rate, under circumstances. The histogram-based method is highly suitable for data in the wild or scenarios, such as data markets, where limited metadata is available but access to the whole data is infeasible. The random-walk method is accurate in estimating parameters and results in low delay. It needs sampling for parameters warm-up and provides theoretical guarantees. To balance the trade-off of parameter estimation cost and sampling efficiency, we propose an online-union sampling algorithm that initializes and updates parameters with the histogram-based and random-walk methods, respectively, and reuses the samples obtained during random-walk while ensuring uniformity. 
In this paper, we make the following contributions: * We present the problem of random sampling over the union of joins. * We design a framework for sampling over the set union of joins of types chain, cyclic, and acyclic (SS 2). Any instantiation of the framework always returns uniform and independent samples from the full result (Theorem 1) but with different sampling efficiency (SS 6.2). * We design histogram-based (SS 4) and random-walk (SS 6) methods to bound the size of overlap of any collection of chain, acyclic, or cyclic joins. * We present an online-union sampling technique that balances the latency and warm-up cost trade-off (SS 6.2). * We perform extensive experimental evaluations using the TPC-H benchmark to investigate the error and runtime of parameter estimation and sampling methods (SS 9). We also evaluate the scalability of our framework with respect to relation size, number of samples, and overlap size. ## 2. Problem Definition Let \(\mathcal{A}\) be the universe of attributes and \(\mathcal{A}_{i}\) be the attributes in join \(J_{i}\). We are given a set of joins \(S=\{J_{1},\dots,J_{n}\}\). A join \(J_{j}\) is defined as \(J_{j}=R_{j,1}\bowtie_{A_{j,1}}R_{j,2}\bowtie_{A_{j,2}}\cdots\bowtie_{A_{j,m_{j}-1}}R_{j,m_{j}}\), where \(R_{j,1},\cdots,R_{j,m_{j}}\) are base relations. Similar to relational algebra, we assume all joins have the same output schema after performing the join in terms of the number and name of attributes. Note that joins can still have different lengths and different relations. We also assume that join attributes are standardized to have the same names. We only mention attribute names when needed. In relational algebra, there are two types of unions: set union and disjoint union. The former eliminates duplicate tuples from the result of a union and the latter keeps the duplicates. The notion of unionability (Zhao et al., 2018) can be applied on base relations to align attributes such that joins incur the same schema. The problem of sampling over a union of joins is to return each tuple with probability \(1/|union(J_{1},\cdots,J_{n})|\), where union may be set or disjoint union. Returning just one sampled tuple is usually not enough, therefore, we would like to generate totally independent sampled tuples continuously until a certain desired sample size \(N\) is reached. We formulate the sampling set union and disjoint union problems as follows. Definition 1 (Sampling Disjoint Union of Joins): Given a set of joins \(S=\{J_{1},\ldots,J_{n}\}\), return \(N\) independent samples from \(V=J_{1}\uplus\ldots\uplus J_{n}\) such that each sampled tuple is returned with probability \(\frac{1}{|V|}=\frac{1}{|J_{1}|+\cdots+|J_{n}|}\). Sampling from the disjoint union is straightforward. Given the disjoint union \(V=J_{1}\uplus\ldots\uplus J_{n}\), we first select a join \(J_{j}\) with probability \(P(J_{j})=\frac{|J_{j}|}{|J_{1}|+\ldots+|J_{n}|}\), then, we select a random tuple from \(J_{j}\). This means the probability of each sampled tuple \(t\) is \(P(t)=\frac{|J_{j}|}{|V|}\cdot\frac{1}{|J_{j}|}=\frac{1}{|V|}\). We repeat the process until \(N\) sampled tuples are obtained. This algorithm always returns independent samples because a returned sample is always uniform regardless of the previous sampling iterations. Methods of sampling a tuple from a single join have long been a popular problem [10; 33; 37; 38]. We revisit random sampling over join in SS 3.2. The set union operation eliminates duplicate tuples from the result of the union.
As such, an i.i.d. sampling algorithm over the set union should return each tuple in the universe of the set union with probability equal to the inverse of the set union size. Definition 2 (Set Union of Joins Sampling): Given a set of joins \(S=\{J_{1},\ldots,J_{n}\}\), let \(\mathcal{U}\) be the discrete space of unique tuples in \(U=J_{1}\cup\ldots\cup J_{n}\). Return \(N\) independent samples from \(\mathcal{U}\), such that each sampled tuple is returned with probability \(\frac{1}{|J_{1}\cup\ldots\cup J_{n}|}\). ## 3. A Union Sampling Framework Let \(U\) be the universe of tuples in the set union of joins. We assume there are no duplicates in each join. Given the set union \(U=\bigcup_{j=1}^{n}J_{j}\), we want for each value \(u\in U\), \(P(t=u)=\frac{1}{|U|}\). **Example 3**.: Consider joins \(J_{1}\) and \(J_{2}\) that have the same output schema. Suppose \(t_{1}=(3,6,4)\in J_{1}\) and \(t_{2}=(3,6,4)\in J_{2}\). The value of each tuple \(t\), namely \(t.val\), can be obtained by concatenating its attribute values using a standard convention. Then, by the definition of a set, \(t_{1}\) and \(t_{2}\) refer to the same tuple, say \(u\), in the universe \(U=J_{1}\cup J_{2}\). We want \(P(u)\), the probability of selecting a tuple with value \(u\) from \(U\), to be \(\frac{1}{|U|}\). Tuples \(t_{1}\) and \(t_{2}\) are distributed in different joins. Hence, \(u\) is obtained if \(t_{1}\) or \(t_{2}\) is sampled from its corresponding join. That is, we want \(P(t=u)=P(t_{1})+P(t_{2})=\frac{1}{|U|}\). Note that we may have a sampling with replacement or we may get both \(t_{1}\) and \(t_{2}\) in the sample. Our framework guarantees that \(P(t=u)=P(t_{1}.val)=P(t_{2}.val)=\frac{1}{|U|}\), whether we choose to remove duplicates or not. At each sampling iteration, the framework performs two steps: join selection and join random sampling. The framework continuously samples tuples, with replacement, with \(1/|U|\) probability, until the desired sample size \(N\) is reached. A straightforward way is based on the union trick [15]. At each iteration, we iterate through all joins and select a join with the Bernoulli probability \(P(J_{j})=|J_{j}|/|U|\). This means multiple joins may be selected in each iteration. Upon selecting \(J_{j}\), we randomly sample a tuple \(t\) from \(J_{j}\) with replacement. Recall \(u=t.val\) denotes the value of tuple \(t\). We accept tuples with duplicate values \(u\) only if they are sampled from the same join; otherwise, we reject the tuples. This means a duplicate tuple \(t\) is retained only if it is sampled from the first join where \(u=t.val\) was observed. With this description, a tuple value \(u\in U\) is returned upon first selecting a join \(J_{j}\) that contains \(u\) with probability \(|J_{j}|/|U|\), then sampling \(J_{j}\) with probability \(1/|J_{j}|\). This guarantees that every value \(u\in U\) is returned with probability \(\frac{|J_{j}|}{|U|}\cdot\frac{1}{|J_{j}|}=\frac{1}{|U|}\). Despite its simplicity, this algorithm has a high rejection ratio for highly overlapping joins and may result in high latency. This is attributed to the utilization of a two-phase framework, which is essential for ensuring uniformity in sampling. Next, we describe a join selection algorithm with a more careful selection of joins. In SS 7, we propose a novel approach that leverages computation performed in the first phase to reduce latency in the second stage.
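The union trick just described is simple enough to state in a few lines. The following Python sketch is ours and only illustrates the control flow under stated assumptions: the sizes \(|J_{j}|\) and \(|U|\) are known, and sample_join(j) is an assumed black box that returns a uniform random tuple of \(J_{j}\) (e.g., an off-the-shelf join sampler).

```python
import random

# Sketch (ours) of the union-trick sampler described above.  `sizes[j]` is |J_j|,
# `union_size` is |U|, and `sample_join(j)` is assumed to return a uniform tuple of J_j.
def union_trick_sample(num_joins, sizes, union_size, sample_join, N):
    first_seen = {}   # tuple value -> index of the join it was first sampled from
    out = []
    while len(out) < N:
        for j in range(num_joins):
            if random.random() >= sizes[j] / union_size:   # Bernoulli selection of J_j
                continue
            t = sample_join(j)
            owner = first_seen.setdefault(t, j)
            if owner == j:            # keep duplicates only from the first-observed join
                out.append(t)
                if len(out) == N:
                    break
            # else: reject, the value "belongs" to another join
    return out
```

With this bookkeeping, each value \(u\) can only ever be emitted through one join, which is what yields the \(1/|U|\) guarantee; the price is the rejection rate discussed above.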
### Non-Bernoulli Join Selection The above technique keeps samples from an overlap area of joins only if they are sampled from exactly one predetermined join. Consider two joins \(J_{1}\) and \(J_{2}\) with overlapping data region \(B\) in Fig. 1(a). We select and keep any sample \(t_{1}\in J_{1}\). Later, upon selecting \(J_{2}\), if we sample a \(t_{2}\in B\), we reject \(t_{2}\). The trick to avoiding rejection is to designate \(J_{1}\) as the only join from which the overlap \(B\) is sampled. Therefore, we have \(P(J_{1})=\frac{|A+B|}{|A+B+C|}=\frac{|J_{1}|}{|U|}\), and \(P(J_{2})=\frac{|C|}{|A+B+C|}=\frac{|J_{2}|-|B|}{|U|}\). Our join selection is outlined in Algorithm 1. Prior to sampling, the algorithm needs to decide which overlapping region is restrictively sampled from which join. We call this division of joins a _cover_ (line 2 of Algorithm 1). A cover over joins \(S=\{J_{1},\cdots,J_{n}\}\), namely \(C=\{J_{1}^{\prime},\cdots,J_{n}^{\prime}\}\), is an ordering over \(S\) such that \(J_{i}^{\prime}=\{t\in J_{i}\mid t\notin\bigcup_{j<i}J_{j}^{\prime}\}\). In fact, a cover \(J_{i}^{\prime}\) of join \(J_{i}\) is a selection query over join \(J_{i}\). A cover of \(S\) can be created by starting from the first join and keeping or removing overlapping parts. Fig. 1(b) illustrates an example of a cover for three overlapping joins. Given a cover \(C\), to calculate the size of \(J_{i}^{\prime}\), we simply follow the inclusion-exclusion principle. Let \(O_{\Delta}=\bigcap_{J_{j}\in\Delta}J_{j}\) and \(S_{i}\) represent the set of joins that appear before \(J_{i}\) in the ordering offered by \(C\), then we have the following. \[|J_{i}^{\prime}|=|J_{i}|+\sum_{m=1}^{i-1}\sum_{\Delta\subseteq S_{i},|\Delta|=m}(-1)^{m}|O_{\Delta\cup\{J_{i}\}}|\] Based on this cover, each \(J_{i}\) is selected with \(P(J_{i})=\frac{|J_{i}^{\prime}|}{|U|}\). When sampling, we should always follow the cover we pre-defined, i.e., for any sample \(t\in J_{i}\), we should discard it if \(t\notin J_{i}^{\prime}\). However, if we do not have overlap information apriori, upon selecting \(J_{i}\) and sampling \(t\), it is not possible to verify whether \(t\) is in \(J_{i}^{\prime}\) or not. Thus, we face a non-trivial case when we sample \(t\in J_{i}\setminus J_{i}^{\prime}\). If we later sample \(t\) from \(J_{j}\) with \(J_{j}^{\prime}\cap J_{i}\neq\emptyset\), i.e., \(J_{j}^{\prime}\) covers the overlapping part with \(J_{i}\), we should do a critical operation, called _revision_. This means we remove the \(t\) obtained from \(J_{i}\) from the sample and re-sample \(J_{i}\), while keeping the \(t\) from \(J_{j}\). **Example 4**.: _Consider joins \(J_{1}\), \(J_{2}\), and \(J_{3}\) of Fig. 2b. A cover for these joins is highlighted with blue, red, and green colors. The algorithm selects \(J_{1}\), \(J_{2}\), and \(J_{3}\) with probability \(|J_{1}^{\prime}|/|U|\), \(|J_{2}^{\prime}|/|U|\), and \(|J_{3}^{\prime}|/|U|\), respectively. Suppose at some iteration we have selected \(J_{2}\) and sampled \(t\in J_{2}\setminus J_{2}^{\prime}\). Suppose now we select \(J_{1}\) and sample the same \(t\).
Because the cover tells us to sample \(J_{2}\) only from the \(J_{2}^{\prime}\) area, we remove the \(t\) previously attributed to \(J_{2}\) from the target set, accept the \(t\) sampled from \(J_{1}\), and assign it to \(J_{1}\) in the record._ **Theorem 1**.: _Given joins \(S=\{J_{1},\ldots,J_{n}\}\), Algorithm 1 returns each result tuple \(t\) with value \(u\) with probability \(\frac{1}{|J_{1}\cup\ldots\cup J_{n}|}\)._ Proof.: Intuitively, a cover defined by Algorithm 1 decides from which join exclusively a value in the overlap of a collection of joins is sampled. Recall \(U\) is the universe of the set union of tuples of joins, i.e., \(\{u\mid u\in\cup_{i}J_{i}\}\). Algorithm 1 uses a mapping strategy function \(f:U\to S\) that tells us to which \(J_{i}\) a specific \(u\) is assigned. Note that \(u\) could belong to multiple \(J_{i}\)'s, however, \(f\) refers to the unique \(J_{i}\) from which \(u\) can be sampled. Let a cover \(C\) of \(S\) be the quotient space of \(U\) over \(f\) and \(g:S\to C\) be a mapping function such that \(g(J_{i})=J_{i}^{\prime}\). Then, \(g\circ f\) will map each \(u\in U\) to a join in cover \(C\). For all \(u\), we denote \(|g(f(u))|\) to be \(|\{u^{\prime}\mid g(f(u^{\prime}))=g(f(u))\}|\). In other words, the probability of sampling a \(u\in U\) depends on the probability of selecting \(g(f(u))\) followed by sampling \(u\) from \(g(f(u))\). Therefore, we obtain the probability of \(P(t=u)\) as follows. \[P(t=u)=P(f(u))\cdot\frac{1}{|g(f(u))|}=\frac{|g(f(u))|}{|U|}\cdot\frac{1}{|g(f(u))|}=\frac{1}{|U|}\] Computing the probability distribution of line 2 of Algorithm 1 requires the knowledge of \(|J_{i}^{\prime}|\) as well as \(|U|\). In SS 4, we describe ways of estimating the overlap of \(k\) joins and \(|J_{i}^{\prime}|\). ### Join Sampling Revisited To sample a single join (line 7 of Algorithm 1), we consider the work by Zhao et al. (Zhao et al., 2017), which is a generic framework for sampling from any type of join. The framework defines a join data graph where each tuple in a relation is a node. Each tuple \(t\) is labeled with a weight defined as the upper bound for the number of tuples in the join result that \(t\) yields. The framework performs accept/reject sampling. Each tuple from a relation is sampled with some probability based on its weight and is rejected with some rate in terms of the weights to guarantee uniformity. We make some design choices to adopt the join sampling framework of Zhao et al. as a subroutine in our union sampling framework. First, for weight instantiation, we use three techniques: extended Olken's, exact, and Wander Join (Zhao et al., 2017), proposed by Zhao et al. (Zhao et al., 2017). Second, this framework requires index structures over base relations to know which tuples can be joined together. Instead, we use hash tables for relations to maintain tuples' joinability information. Third, one limitation of Zhao et al.'s framework is the assumption of having only key-foreign key joins between relations. Since in a generic join, some tuples may not have a joinable tuple in other relations, we release this assumption by modifying the Extended Olken's to set the weights (and hence probabilities) of those tuples to zero with an extra linear search in the hash tables. Finally, to obtain the accept/reject ratio, this framework allows us to plug in any of the join size upper-bound estimations. We also need to compute the size upper bound of joins in Algorithm 1. To do so, in what follows, we adopt parts of the algorithm proposed in Ngo et al.
(Ngo et al., 2017) and extend Olken's algorithm (Zhao et al., 2017) to calculate the upper bound on the size of joins of an arbitrary number of relations. Assume a join \(J=R_{1}\bowtie_{A_{1}}R_{2}\bowtie_{A_{2}}\cdots\bowtie_{A_{n}}\), \(R_{n}\). Let \(M_{A_{i}}(R_{i+1})\) be the maximum value frequency in attribute \(A_{i}\) of relation \(R_{i+1}\). Since each tuple in \(R_{2}\) with value \(v\) for \(A_{i}\) can be matched with maximum \(M_{A_{i}}(R_{i+1})\) tuples of \(R_{i+1}\) on \(A_{i}\), we have the following upper bound for the size of a join \(J\): \(|J|\leq|R_{1}|\cdot\prod_{i=1}^{n-1}M_{A_{i}}(R_{i+1})\). In our framework, we consider the above extension of Olken's algorithm for join size estimation in all algorithms. ### Cost Analysis Since the subroutine of sampling from a join in Algorithm 1 is based on the existing algorithms, for the cost analysis, we decouple the delay of random sampling over join from our algorithm and consider the total number of samples obtained from the join subroutine as our total cost. **Theorem 2**.: _Given joins \(S=\{J_{1},\ldots,J_{n}\}\), the expected total sampling cost of Algorithm 1 for returning \(N\) uniform and independent samples is \(N+N\log N\)._ Figure 2. (a) union operation, (b) cover for three joins, and (c) \(\mathcal{J}_{j}^{k}\) of four joins. Proof.: Given a cover \(C=\{J_{j}^{r}\ |\ j\in[1,n]\cap\mathbb{Z}\}\), Algorithm 1 samples each join \(J_{j}\) with probability \(|J_{j}^{r}|/|U|\). Let \(N_{j}\) be the number of tuples from \(J_{j}\) that are in the final result. Based on Algorithm 1, we know a tuple from \(J_{j}\) is in the final sample if it is obtained from \(J_{j}^{r}\). Therefore, we have \(N_{j}=\frac{|U_{j}^{r}|}{|U|}\cdot N\), in expectation. Let \(\psi_{j}\) be the number of tuples Algorithm 1 ever obtains from \(J_{j}\). A tuple may be a rejected, accepted, or revised sample, because the set of tuples from different joins may intersect. Based on the union bound, the number of iterations of Algorithm 1 is bounded by the sum of the number of tuples sampled from each join. Using this principle, we have the expected total number of iterations of \(\psi\leq\sum_{j=1}^{n}\psi_{j}\). Given \(N_{j}\) coupons, the coupon collector's problem provides a bound for the number of samples we expect we need to draw with replacement before having drawn each coupon at least once (Kang and Bong, 2015). This result allows us to obtain the expected value of \(\psi_{j}=N_{j}\log N_{j}\). Therefore, we have the following expected number of iterations. \[\psi\leq\sum_{j=1}^{n}N_{j}\log N_{j}=\sum_{j=1}^{n}N.\frac{|U_{j}^{r}|}{|U|} \log\left(N.\frac{|U_{j}^{r}|}{|U|}\right)\] Let \(\alpha_{j}=\frac{|U_{j}^{r}|}{|U|}\). We have the following. \[\psi\leq\sum_{j=1}^{n}\alpha_{j}.N\log(\alpha_{j}.N)= N\left(\sum_{j=1}^{n}\alpha_{j}\log\alpha_{j}+\sum_{j=1}^{n}\alpha_{j} \log N\right)\] From the definition of cover, we know \(\sum_{j=1}^{n}\frac{|U_{j}^{r}|}{|U|}=1\). Therefore, we have the following bound on the expected total time. \[\psi\leq N(\log(H(n))+\log N)\leq N+N\log N\] We remark that although our algorithm does not have a strict and deterministic guarantee on the delay between samples, our total time is on par with the \(\mathcal{O}(N\log N)\) time of the algorithm proposed by Carmeli et al., for the random enumeration of the result of the union of conjunctive queries, where \(N\) is the number of answers (Carmeli et al., 2017). ## 4. Size of set union of joins Executing full joins and computing set union is costly. 
We propose a novel way of computing the set union size by using the size of joins and the size of the overlap of joins. To do so, we first separate each join \(J_{j}\) into \(n\) disjoint parts, denoted as \(J_{j}=\bigcup_{k=1}^{n}\mathcal{R}_{j}^{k}\), where \(\mathcal{R}_{j}^{k}\) is the set of tuples of \(k\)-th overlap in \(J_{j}\), i.e., each tuple in \(\mathcal{R}_{j}^{k}\) belongs to \(J_{j}\) and appears in exactly \(k-1\) other joins. The base case \(\mathcal{R}_{j}^{1}\) includes the tuples in \(J_{j}\) that are the set complement of all overlaps. Fig. 2c represents the \(\mathcal{R}_{j}^{k}\) areas for a join \(J_{1}\). Since for each \(J_{j}\), \(\mathcal{R}_{j}^{k}\)'s are disjoint, we can define the size of the set union \(U\) as follows. \[|U|=\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{k}|\mathcal{R}_{j}^{k}| \tag{1}\] Note that \(\mathcal{R}_{j}^{k}\) is non-trivial information, which requires combining the overlap size of \(k\)-combinations of joins. There are two challenges for computing \(\mathcal{R}_{j}^{k}\). First, there is no relationship between the pairwise overlap information and higher order \(k\)-th overlap, \(\mathcal{R}_{j}^{k}(k>2)\). Second, computing a pairwise overlap size without a full join is more challenging than computing a single join size. Suppose we have a way of computing the overlap for any set of joins. More formally, given a collection \(\Delta\in S\) of joins, \(\mathcal{O}_{\Delta}\) denotes the overlap of joins in \(\Delta\). In SS 4 and 7, we describe various algorithms for overlap estimation of all join types (chain, cyclic, and acyclic). Now, we turn our attention to computing \(\mathcal{R}_{j}^{k}\) using \(\mathcal{O}_{\Delta}\). We describe the intuition of our solution with an example. **Example 5**.: _Consider the joins \(S=\{J_{1},\cdots,J_{4}\}\) of Fig. 2c. The areas \(\mathcal{A}_{1}^{k}\) for \(k\in[1,4]\) are color-coded. We would like to compute the size of \(\mathcal{A}_{1}^{2}\). The dotted, \(+\), and \(\times\) areas included all pairwise overlaps. Suppose we first compute the sum of the pairwise overlap size of joins with \(J_{1}\), i.e., \(\sum_{\Delta\in\mathbb{P}_{2}\setminus\Delta\in|\mathcal{O}_{\Delta}|}\), where \(\mathbb{P}_{2}\) is the collection of all subsets of size \(2\) of \(S\). However, to determine the area of the overlap of exactly one join with \(J_{1}\), \(\mathcal{A}_{1}^{2}\), we need to exclude all \(\mathcal{A}_{1}^{3}\) and \(\mathcal{A}_{1}^{4}\) areas. In fact, each subarea of \(\mathcal{A}_{1}^{3}\) counts twice in the above sum. For example, \(J_{1}\cap J_{2}\cap J_{3}\) is in both \(J_{1}\cap J_{2}\) and \(J_{1}\cap J_{3}\). Similarly, \(\mathcal{A}_{1}^{4}\) counts three times in the sum of \(\mathcal{O}_{\Delta}\)'s since it is included in \(J_{1}\cap J_{2}\cap J_{3}\), \(J_{1}\cap J_{2}\cap J_{4}\), and \(J_{1}\cap J_{3}\cap J_{4}\). To avoid over-counting, the \(\mathcal{R}_{j}^{k}\)'s are weighed by \(1/k\), in Eq. 1._ **Theorem 3**.: _Let \(S=\{J_{1},J_{2},\ldots J_{n}\}\) and \(\mathbb{P}_{k}\) be all subsets of size \(k\) of \(S\), then for any join path \(J_{j}\), and for any \(1\leq k\leq n\), we have_ \[|\mathcal{A}_{j}^{k}|=\sum_{\Delta\in\mathbb{P}_{k}\setminus J_{j}\in\Delta}| \mathcal{O}_{\Delta}|-(\sum_{r=k+1}^{n}\binom{r-1}{k-1}\cdot|\mathcal{A}_{j}^{ r}|).\] _For \(k=n\), we have \(|\mathcal{A}_{j}^{n}|=|\mathcal{O}_{S}|\). 
For \(k=1\), we have the following._ \[|\mathcal{A}_{j}^{1}|=\sum_{\Delta\in\mathbb{P}_{1}\setminus J_{j}\in\Delta}| \mathcal{O}_{\Delta}|-\sum_{r=2}^{n}\binom{r-1}{0}|\mathcal{A}_{j}^{k}|=|J_{j} |-\sum_{r=2}^{n}|\mathcal{A}_{j}^{r}|\] Proof.: When \(k=n\), \(\mathbb{P}_{n}\) is the set representing the universe \(S\) including \(J_{j}\). Therefore, it is trivial that \(|\mathcal{A}_{j}^{n}|=O_{S}\), which can be evaluated with \(\bigcap_{j\in S}J_{j}\). Then, for \(k\in[2,n-1]\cap\mathbb{Z}\), we calculate \(|\mathcal{A}_{j}^{k}|\) dynamically. Now, suppose we know \(|\mathcal{A}_{j}^{k+1}|\). Recall \(\mathcal{A}_{j}^{k}\) consists of all tuples in \(J_{j}\) that appear in exactly \(k-1\) other join paths. That is, tuples in \(J_{j}\) that are in some \(\Delta\in\mathbb{P}_{k}\) but are not in any higher order overlap \(\Delta^{r}\in\mathbb{P}_{r}\), where \(r\in[k+1,n]\). Therefore, we first add up all the \(k\)-th overlap for sets \(\Delta\in\mathbb{P}_{k}\), where \(J_{j}\in\Delta\). Since \(J_{j}\) is confirmed, we have \(\binom{n-1}{k-1}\) number of such sets \(\Delta\). Note that a tuple \(t\in\mathcal{A}_{j}^{k}\) may appear in multiple \(\Delta\in\mathbb{P}_{r},r\in[k+1,n]\). Therefore, to get the exact value of \(|\mathcal{A}_{j}^{k}|\), for each \(r\in[k+1,n]\), we need to count the number of \(\Delta\in\mathbb{P}_{r}\) where \(J_{j}\in\Delta\). Starting with \(r=k+1\), each such combination of \(\Delta\in\mathbb{P}_{k+1}\) contains \(J_{j}\), therefore, it appears once in remaining \(\binom{k-1}{k-1}\) number of \(\Delta^{r}\in\mathbb{P}_{k}\)'s. Hence, we need to deduct \((k-1)\cdot|\mathcal{A}_{j}^{k+1}|\) from the sum. Now for the general case \(r\), where \(k<r\leq n\), after \(J_{j}\) is confirmed, each combination of \(\Delta\in\mathbb{P}_{r}\) has its other \(k-1\) paths chosen in \(\binom{r-1}{k-1}\) number of \(\Delta^{r}\in\mathbb{P}_{k}\), so a total number of \(\binom{r-1}{k-1}\)\(|\mathcal{A}_{j}^{r}|\) needs to be deducted from the sum for each \(r\). Therefore, we can organize the formula of calculating \(|\mathcal{A}_{j}^{k}|\) as shown in the theorem. Using this theorem to compute \(|\mathcal{A}_{f}^{k}|\)'s for a given \(J_{j}\) and all \(k\in[1,n]\), we start by initializing \(|\mathcal{A}_{f}^{n}|\) with \(|\mathcal{O}_{S}|\) using the method proposed in SS 4. Then, \(|\mathcal{A}_{f}^{n-1}|\) requires evaluating \(|\mathcal{A}_{f}^{n}|\) that have been already computed as well as \(|\mathcal{O}_{\Delta}|\) for each subset of size \(n-1\) of \(S\). Again, SS 4 is used to compute a \(|\mathcal{O}_{\Delta}|\). In general, iterating from \(n-1\) to \(1\), each \(|\mathcal{A}_{f}^{k}|\) can be computed from \(|\mathcal{A}_{f}^{n}|\)'s, where \(r\in(k,n]\), that have been already evaluated and \(|\mathcal{O}_{\Delta}|\)'s that can be computed from our method for the pairwise join path overlap. Computing the size of a set union requires computing the overlap of all \(k\)-subsets of joins, which is exponential in the number of input joins. We remark that in practice the number of input joins is small. However, when \(S\) is large, if we compute \(|\mathcal{O}_{\Delta}|\)'s in the order of the bottom-up traversal of the powerset lattice of \(S\), we can speed up by reusing some of the computation. **Warm-up Phase:** Note that computing the exact values of \(k\)-overlaps and overlaps for an arbitrary number of joins and relations is computationally expensive or infeasible. Next, we present two instantiations of the framework for approximating these parameters. 
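Before turning to the two instantiations, the following sketch (ours) shows how Eq. 1 and the recursion of Theorem 3 convert a table of overlap sizes \(|\mathcal{O}_{\Delta}|\) into the union size; the overlap map is an assumed input that either of the estimation methods below would populate.

```python
from itertools import combinations
from math import comb

# Sketch (ours) of Eq. 1 combined with Theorem 3.  `overlap` maps a frozenset of
# join indices Delta to |O_Delta| (for singletons this is simply |J_j|).
def union_size(n, overlap):
    total = 0.0
    for j in range(n):
        others = [i for i in range(n) if i != j]
        A = {}                                   # A[k] = size of the k-th overlap of J_j
        for k in range(n, 0, -1):                # evaluate top-down: k = n, ..., 1
            s = sum(overlap[frozenset(d) | {j}] for d in combinations(others, k - 1))
            A[k] = s - sum(comb(r - 1, k - 1) * A[r] for r in range(k + 1, n + 1))
        total += sum(A[k] / k for k in range(1, n + 1))   # Eq. 1
    return total

# Tiny check on three overlapping "joins" represented as plain sets of tuples.
J = [{1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}]
overlap = {frozenset(d): len(set.intersection(*(J[i] for i in d)))
           for k in range(1, 4) for d in combinations(range(3), k)}
assert abs(union_size(3, overlap) - len(J[0] | J[1] | J[2])) < 1e-9   # union size is 7
```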
We consider two cases: centralized and decentralized (Gendelman et al., 2017). In a centralized setting, relations are accessible through direct access to data, such as relations within databases. We propose random-walk for this setting. In a decentralized setting, data is private or expensive to sample. Examples include data markets or large relations in databases. Our histogram-based method is suitable for this setting. Different instantiations of the framework only differ in how the union size bound, and join overlap bounds are computed during the warm-up phase. We remark that both methods guarantee uniformity. There is a tradeoff between efficiency and cost of estimation: tighter upper bounds are more costly to set up, but once in place, can generate samples more efficiently. On the other hand, looser upper bounds are easier to compute but lead to low sampling efficiency (due to potentially higher rejection rates). We propose a modified version of union sampling based on the random-walk method that does not require warm-up and strikes a better tradeoff between upper-bound computation and sampling efficiency. ## 5. Instantiation with Histograms Database management systems often maintain histograms as a special type of column statistic that provides more detailed information about the data distribution in a table column during query optimization. These histograms are useful for cardinality estimation, particularly if the data in a column is skewed. In this section, we present ways of estimating join overlap and union size using these histograms and even more minimalistic statistics such as maximum degrees of tuples in relations. Here, we propose a solution for the case of chain join, inspired by Olken's seminal work on join size estimation(Olken, 1999). In SS 8.2, we extend our framework to more generic cyclic and acyclic joins. ### Overlap of Equi-length Chain Joins We start with estimating the overlap of multiple chain joins. Suppose all joins consist of the same number of relations and there is a one-to-one mapping between relations of each pair of joins such that mapped relations have the same schema. Given a collection of joins \(S\) and a subset \(\Delta\subseteq S\), let \(O_{\Delta}=\cap_{J_{j}\in\Delta}J_{j}\) be the set of tuples that appear in all \(J_{j}\in\Delta\). Trivially, a loose upper bound for the overlap is \(\min\{|J_{j}|:J_{j}\in\Delta\}\). We first partition the joins on relations consistently. At each step, we estimate the overlap size of each sub-join dynamically from the overlap of smaller sub-joins by multiplying the overlap size of a smaller sub-joins by the minimum of the maximum degree of values join attributes. For example, for joins of three relations, we first evaluate the overlap of the first relations in all joins. Then, we evaluate the overlap of the first two relations in all joins by multiplying the overlap of the first relations by the minimum of the maximum degree of values of the first relations, and so on. More formally, let \(\mathcal{K}(i)\) be the upper bound of the number of overlapping tuples after the \(i\)-th join. Hence, \(|\mathcal{O}_{\Delta}|\leq\mathcal{K}(n-1)\). Let \(M_{A_{I}}(R_{j,i})\) be the maximum degree of values in the domain of a join attribute \(A_{I}\) of relation \(R_{j,i}\) of join \(J_{j}\) and let \(d_{A_{I}}(v,R_{j,i})\) be the degree of value \(v\) in the domain of \(A_{I}\). Note that the statistics of the degree of values are available from the histograms on join attributes. 
We can obtain an upper bound dynamically as \(\mathcal{K}(i)=\mathcal{K}(i-1)\cdot\min_{J_{j}\in\Delta}\{M_{A_{i}}(R_{j,i+1})\}\). Note that for \(\mathcal{K}_{1}\) we calculate the bounds based on values, i.e., \(\mathcal{K}(1)=\sum_{v\in\mathcal{C}}\min_{J_{j}\in\Delta}\{d_{A_{1}}(v,R_{j,1 })\cdot d_{A_{1}}(v,R_{j,2})\}\). So far, this bound requires the full histogram of the first relations in all joins and the maximum degree of values in the remaining relations. If the histograms are available for all join attributes in the relations, we can further refine the bound by replacing the term of the minimum of maximum degrees, \(M_{A_{i}}(R_{j,i+1})\), with the minimum of the average degree of values in the join attributes. ### Overlap of Chain Joins We now release this assumption to accommodate joins with arbitrary length and arbitrary relation schemas. Note that the joins themselves should still have the same schemas after joining. We introduce the _splitting method_ that aims to reorganize joins into joins on relations of the same size, so that the results of SS 5.1 can be applied. The splitting method derives new joins by breaking down relations into sub-relations, each sub-relation consisting of exactly two attributes. The derived joins have the same schema and are lossless, i.e., each generates the same data as the original join, and all contain the same number of relations. Moreover, for each relation in a derived join, there are corresponding relations in other joins. Since the derived joins satisfy the requirements of SS 5.1 and generate the same data, we can directly apply SS 5.1 to estimate the overlap size of the original joins. Although the input joins may not include relations with the same schemas, they definitely have corresponding attributes and the same schema after joining. As such, breaking all relations in sub-relations of two attributes and redefining joins incurs join with the same number of same-schema relations. Note that our splitting method is different than the normalization in the database theory which aims to decompose relations into sub-relations based on functional dependencies to avoid anomalies (Gendelman et al., 2017). _Split_ relations keep a record of their original sizes for the estimation steps. We call the join between two relations split from the same original relation _fake join_. The following theorem describes a generic way of bounding the overlap of chain joins. **Theorem 4**.: _Given a collection of split joins \(S\) and a subset \(\Delta\subset S\), let \(O_{\Delta}=\bigcap_{j_{j}\in\Delta}J_{j}\). Let \(M_{A_{l}}(R_{j,i})\) be the maximum degree of values in the domain of a join attribute \(A_{l}\) of relation \(R_{j,i}\) of join \(J_{j}\) and let \(d_{A_{l}}(v,R_{j,i})\) be the degree of value \(v\) in the domain of \(A_{l}\). We define the following._ \[M_{j,i}=\left\{\begin{array}{ll}M_{A_{l}}(R_{j,i+1})\;\;\text{if}\;\;R_{j,i} \bowtie R_{j,i+1}\\ \\ 1\;\;\text{if}\;\;R_{j,i}\bowtie^{\prime}R_{j,i+1}\end{array}\right.\] _Let \(\mathcal{K}(i)\) be the upper bound of the number of overlapping tuples after the \(i\)-th join and let \(d_{A_{l}}(v,R_{j,i})\) be the degree of value \(v\) in the domain of \(A_{l}\). 
We then obtain an upper bound for the overlap size of joins in \(\Delta\), \(|O_{\Delta}|\), dynamically as follows._ \[|\mathcal{O}_{\Delta}| \leq\mathcal{K}(n-1)=\mathcal{K}(n-2)\cdot\min_{J_{j}\in\Delta} \{M_{j,n}\}\] \[\mathcal{K}(1) =\sum_{v\in\mathcal{C}}\min_{J_{j}\in\Delta}\{d_{A_{1}}(v,R_{j, 1})\cdot d_{A_{1}}(v,R_{j,2})\}\] \[\mathcal{K}(i) =\mathcal{K}(i-1)\cdot\min_{J_{j}\in\Delta}\{M_{j,i}\}\] Proof.: The proof of this theorem follows from SS 5.1 and SS 5.2. We remark that Theorem 4 can become a biased estimator of join overlap if the data is skewed. Here, we present a solution with the least statistics available. We can extend the theorem, to become an unbiased estimator, in a straightforward way to use the histogram information of all join attributes and compute the expected value and upper bound of overlap. ## 6. Instantiation with random walks The techniques proposed in SS 4 perform join union size estimation in a direct manner. In this section, we consider an alternative and more accurate way of estimating join overlap size in an online manner. The idea is to update the join size and overlap size on the fly, during the warm-up phase, by obtaining tuples from join paths and reusing these tuples during the main sampling step. ### Join Size Estimation Revisited To solve the online aggregation problem over join, wander join proposes an algorithm by performing random walks over the underlying join data graph (Han and Yang, 2009). This solution can be applied to join size estimation by computing the COUNT operation over the join. A join data graph models the join relationships among the tuples as a graph, where nodes are tuples and there is an edge between two tuples if they can join. Using a join graph, we can easily obtain successfully joined tuples by performing random walks. The probability of a tuple sampled from a join can be computed on the fly using the join graph. Given a join \(J=R_{1}\bowtie R_{2}\bowtie\ldots\bowtie R_{m}\), the probability of a result tuple \(t=t_{1}\bowtie t_{2}\bowtie\ldots\bowtie t_{m}\) is computed as \(p(t)=\frac{1}{|R_{1}|}\cdot\frac{1}{|d_{2}(t_{1})|}\cdot\cdots\cdot\frac{1}{| d_{m}(t_{m-1})|}\), where \(d_{i}(t_{i-1})\) is the number of tuples in \(R_{i}\) than join with \(t_{i-1}\). **Example 6**.: _Consider the index graph of \(J\) in Fig. 3d. The probability of choosing \(a_{1}\) is \(\frac{1}{5}\). Then among the three joinable tuples with \(a_{1}\), the probability of selecting \(b_{2}\) is \(\frac{1}{2}\). Similarly, the probability of selecting \(c_{1}\) is \(\frac{1}{5}\). Therefore, the probability of obtaining tuple \(a_{1}\bowtie b_{2}\bowtie c_{1}\) is \(p(a_{1}\bowtie b_{2}\bowtie c_{1})=\frac{1}{5}\times\frac{1}{2}\times\frac{1}{5}\)._ Suppose we have obtained a sample \(S\) of size \(m\) from a join path \(J\). Following Horvitz-Thompson estimator (Horvitz and Thompson, 1995), the estimated join size of \(J\) based on sample \(S\), namely \(|J|_{S}\) can be evaluated as \(|J|_{S}=\sum_{t\in S}\frac{1}{p(t_{k})}\cdot\frac{1}{m}\)(Han and Yang, 2009). We can update this estimation in real-time as new join samples are obtained. Suppose a new tuple \(t_{0}\) is added to \(S\), even an update the join size estimation as follows. 
\[|J|_{S\cup t_{0}} =\frac{\sum_{t_{k}\in S}\frac{1}{p(t_{k})}+\frac{1}{p(t_{0})}}{(m+1)}=\frac{\sum_{t_{k}\in S}\frac{1}{p(t_{k})}}{m}+\frac{\frac{m}{p(t_{0})}-\sum_{t_{k}\in S}\frac{1}{p(t_{k})}}{(m+1)m}\] \[=|J|_{S}+\frac{1}{m+1}\left(\frac{1}{p(t_{0})}-|J|_{S}\right)\] We revisit the mean and variance of \(|J|\) later in the discussion of random walk overlap. Hence, a real-time approximate answer is returned with some confidence level, and the accuracy improves as the sample size grows larger. Extending from wander join, we have two methods to estimate the overlap sizes. First, we set a parameter \(\alpha\), which is the confidence level we want to achieve. There is a confidence level value \(z_{\alpha}\) corresponding to \(\alpha\). The half-width of the confidence interval is \(\frac{z_{\alpha}\cdot\sigma}{\sqrt{n}}\), where \(n\) is the sample size and \(\sigma\) is the standard deviation of the sample set. We terminate the sampling when the half-width becomes less than the threshold we defined.

### Overlap of Joins

We described an algorithm based on random walks for sampling a join and estimating a join size. Given a set \(\Delta\subseteq S\) of join paths, we would like to estimate the overlap of the joins in \(\Delta\), namely \(\mathcal{O}_{\Delta}\). Let \(S_{j}=\{t_{1},t_{2},\ldots,t_{m}\}\) denote a collection of sampled tuples from join \(J_{j}\in\Delta\). Let \(count(t)\) be the number of occurrences of tuple \(t\) in a set. We define \(S^{\prime}_{j}\) such that for each tuple \(t\) in \(S_{j}\), \(S^{\prime}_{j}\) contains exactly \(\frac{1}{p(t)}\) copies of \(t\), i.e., \(S^{\prime}_{j}=\{t\in S_{j}\mid count(t)=\frac{1}{p(t)}\}\). Thus, sample \(S^{\prime}_{j}\) preserves the distribution of \(J_{j}\). We assume uniformity over the overlap and non-overlap regions among join paths; that is, when we sample tuples and estimate join sizes by performing random walks, for any \(J_{j}\in\Delta\), we have \(\frac{|\mathcal{O}_{\Delta}|}{|J_{j}|}=\frac{|\bigcap_{j\in\Delta}S^{\prime}_{j}|}{|S^{\prime}_{j}|}\). Therefore, a join overlap size is estimated on the fly as follows. \[|\mathcal{O}_{\Delta}|=|\bigcap_{j\in\Delta}J_{j}|=|J_{j}|\cdot\frac{|\bigcap_{j\in\Delta}S^{\prime}_{j}|}{|S^{\prime}_{j}|} \tag{2}\] How do we obtain \(|\bigcap_{j\in\Delta}S^{\prime}_{j}|\)? We fix a \(J_{j}\in\Delta\) and continually sample from this single source, forming \(S_{j}\). In each round, if we accept the sample \(t\), then we check every \(J_{i}\in\Delta\), where \(i\neq j\), to see whether \(t\) is contained in \(J_{i}\). Since we already have the index for each \(J_{i}\) (stored in hash tables), this operation is cheap, as it requires only \((N-1)\times(M-1)\) key lookups, where \(N=|\Delta|\) and \(M\) is the number of tables in a join path. If \(t\) is in every \(J_{i}\), we include it in \(\bigcap_{j\in\Delta}S^{\prime}_{j}\). We can now plug this estimation into Theorem 3 to compute the union size of joins in \(\Delta\). Next, we compute the confidence interval for \(|\mathcal{O}_{\Delta}|\). The variance of \(|\bigcap_{j\in\Delta}S^{\prime}_{j}|/|S^{\prime}_{j}|\), denoted by \(\sigma^{2}_{j}\), can be computed by binomial sampling, with a variance of \(\hat{p_{j}}(1-\hat{p_{j}})\) and mean of \(\hat{p_{j}}\) (Han and Yang, 2009). Li et al.
showed that the mean and variance of \(|J_{j}|\), denoted by \(\hat{\phi}^{2}_{j}\), are \(T^{j}_{n}(u)\left(=\frac{1}{n}\sum_{i=1}^{n}f^{j}(i)\right)\) and \(T^{j}_{n,2}(u)\left(=\frac{1}{n-1}\sum_{i=1}^{n}(f^{j}(i)-T^{j}_{n}(f))^{2}\right)\), respectively (Han and Yang, 2009). Assuming these terms are independent, we have the variance of \(|O_{\Delta}|\) as follows. \[\sigma^{2}_{|O_{\Delta}|}=T^{j}_{n,2}(u)\cdot\hat{p_{j}}\cdot(1-\hat{p_{j}})+T^{j}_{n,2}(u)\cdot\hat{p_{j}}+T^{j}_{n}(u)\cdot\hat{p_{j}}\cdot(1-\hat{p_{j}})\] This gives us the following confidence interval for \(|O_{\Delta}|\) of Eq. 2. \[E=z\cdot\sqrt{\frac{1}{n}\sum_{J_{j}\in\Delta}\left(T^{j}_{n,2}(u)\cdot\hat{p_{j}}\cdot(1-\hat{p_{j}})+T^{j}_{n,2}(u)\cdot\hat{p_{j}}+T^{j}_{n}(u)\cdot\hat{p_{j}}\cdot(1-\hat{p_{j}})\right)} \tag{3}\] This means that to obtain a 90% confidence on the overlap estimation, the algorithm requires a sample size of \(\left(\frac{z\cdot\sigma_{|O_{\Delta}|}}{E}\right)^{2}\), on average. Note that our estimator for overlap, using random walks, is unbiased. We first guarantee uniformity by adding \(\frac{1}{p(t)}\) copies of tuple \(t\) to the collection \(S_{j}\). We know we have the following. \[\lim_{|S_{j}|\rightarrow\infty}\frac{|\bigcap_{j\in\Delta}S^{\prime}_{j}|}{|S^{\prime}_{j}|}=\frac{\lim_{|S_{j}|\rightarrow\infty}|\bigcap_{j\in\Delta}S^{\prime}_{j}|}{\lim_{|S_{j}|\rightarrow\infty}|S^{\prime}_{j}|}=\frac{|\bigcap_{j\in\Delta}J_{j}|}{|J_{j}|}\] Therefore, our result gets more and more accurate as \(|S_{j}|\) grows larger and equals the exact result when \(|S_{j}|=|J_{j}|\). As the accuracy of the overlap estimation approaches the true value, we also obtain a better estimation of the union size, which shows that our estimator improves the values used in our algorithms.

## 7. Online union sampling

The histogram-based method has almost zero setup cost but low sampling efficiency, while the random-walk method requires some sampling cost during the warm-up phase but yields better estimation and efficiency. To design a sampling algorithm with a minimal setup cost and high sampling efficiency, we introduce an online union sampling algorithm as illustrated in Algorithm 2. At a high level, join and union size estimation is performed in an online manner as the union of joins is being sampled. Algorithm 2 extends Algorithm 1 with two optimizations: sample reuse and backtracking with parameter update. It initializes join and union parameters using the histogram-based method and then continues with selecting joins and sampling joins using the random-walk method. At each iteration, obtained samples are used to further refine estimations using the join and union estimation proposed in SS 6.1. **Sample Reuse** (lines 8-10 of Algorithm 2) This makes up for the overhead of the random walk. Recall that the tuples sampled by random walks are not uniform; however, with an extra accept/reject step we can reuse them in the main sampling phase. For each join, we keep track of every tuple \(t\) and its probability \(p(t)\), computed during join sampling as described in SS 6.1. Suppose we have already sampled \(S=\{t_{1},t_{2},\ldots,t_{l}\},t_{i}\in J_{j}\), from \(J_{j}\). Recall that \(S\) may contain duplicates, i.e., there may exist \(i\neq i^{\prime}\) s.t. \(t_{i}=t_{i^{\prime}}\). Then, if we choose \(J_{j}\), we can first randomly choose a tuple \(t\) from \(t_{1},t_{2},\ldots,t_{l}\), but we only accept it with probability \(\frac{l}{p(t)\cdot|J_{j}|}\).
In this way, the algorithm guarantees that the reused \(t\) is sampled from \(J_{j}\) with probability \(p(t)\cdot\frac{1}{l}\cdot\frac{l}{p(t)\cdot|J_{j}|}=\frac{1}{|J_{j}|}\), which ensures uniformity of sampling over the union. Note that if we accept \(t\), we do not return \(t\) to the pool, i.e., it is a sampling-without-replacement process and \(l\) changes accordingly. Once we have used all the tuples we stored, the next time \(J_{j}\) is selected, we simply sample over the join using the techniques of SS 3.2. Note that the acceptance rate, namely \(R\), can be equal to or greater than \(1\). This means the algorithm may return more than one instance of \(t\) in a certain round, while still ensuring the uniformity condition. We define \(r_{i}\) as the probability that \(i\) instances of \(t\) are accepted in a certain round. That is, \(\sum_{i}r_{i}\cdot i=R\), where \(\sum_{i}r_{i}=1,\ 0\leq r_{i}\leq 1\). Then, we choose the number of instances, \(n\in\mathbb{N}^{+}\), by choosing one of the many valid solutions of this system.
```
0: Join paths \(\{J_{j},1\leq j\leq m\}\), tuple count \(N\), backtrack parameter \(\phi\), target confidence level \(\gamma\).
0: tuples \(\{t_{i},1\leq i\leq N\}\)
1: \(\{|J_{j}|,1\leq j\leq m\}\), \(|U|\leftarrow\mathit{warmup}(S)\), \(\{|J^{\prime}_{j}|\}\leftarrow\mathit{cover}(S)\)
2: \(\mathit{conf\_level}\gets 0\), \(T\leftarrow\{\}\) \(\triangleright\) result sample
3: \(P\leftarrow[j][\;]\) \(\triangleright\) record probability of selected tuples from each join path
4: \(orig\_join\leftarrow\{\}\) \(\triangleright\) record of original join of seen tuples
5: while \(n<N\) do
6:   select \(J_{j}\) with probability \(\frac{|J^{\prime}_{j}|}{|U|}\)
7:   if \(S_{j}\neq\emptyset\) then
8:     sample \(t\in S_{j}\), accept with \(\frac{l}{p(t)\cdot|J_{j}|}\), remove \(t\) from \(S_{j}\)
9:   if \(S_{j}=\emptyset\) or \(t\) from \(S_{j}\) is rejected then
10:    \(t\leftarrow\) a random sample from \(J_{j}\)
11:  if \(t\in orig\_join_{i}\) for any \(i<j\) then reject \(t\)
12:  else
13:    if \(t\in orig\_join_{i}\) for any \(i>j\) then \(\triangleright\) revision
14:      remove \(t\) from \(orig\_join_{i}\) and add \(t\) to \(orig\_join_{j}\)
15:      remove all \(t\)'s from \(T\) and delete \(P[i][t]\)'s
16:    if \(t\not\in orig\_join_{i}\) for all \(i\) then add \(t\) to \(orig\_join_{j}\)
17:    \(T\gets T\cup\{t\}\) and update \(P[j][t]\)
18:  if \(\sum_{j\in[m]}|P[j]|\bmod\phi=0\) and \(conf\_level<\gamma\) then
19:    \(\{|J_{j}|,1\leq j\leq m\},|U|,\)
20:    \(\mathit{conf\_level}\gets Update(T,P)\) \(\triangleright\) backtrack (SS 7)
21: return \(T\)
```
**Algorithm 2** Set Union Sampling with Reuse and Backtracking

**Backtracking with Parameter Update** In SS 4, we show that despite the small overhead of the histogram-based method and its usefulness, the histogram-based method may not be an unbiased estimator of our sampling parameters. Moreover, in SS 6, we proved that the random walk is an unbiased estimator whose parameter estimations converge to the true values after infinitely many samples. Algorithm 2 initializes the framework with the estimation of the histogram-based method and refines the parameters by applying random walks. The caveat is that, with this refinement strategy, although at each round the probability of sampled tuples is uniform and equal to \(1/|U|\), the uniformity of tuples sampled across rounds is not guaranteed, since the estimation of \(|U|\) changes from one round to another with more random walks.
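As a concrete illustration, the following is a minimal Python sketch of the reuse accept/reject step (lines 8-10 of Algorithm 2); it is our own sketch rather than the system's code, and it assumes the warm-up phase has produced, for the selected join \(J_{j}\), a pool of pre-sampled tuples with their probabilities \(p(t)\) and a current estimate of \(|J_{j}|\).
```
import random

def reuse_or_sample(pool, join_size_est, sample_from_join):
    """Sketch of the reuse step (Algorithm 2, lines 8-10), under our assumptions.

    pool             : list of (tuple, p) pairs collected by random walks during warm-up
    join_size_est    : current estimate of |J_j|
    sample_from_join : fallback sampler over J_j (Section 3.2), returns (tuple, p) or None
    """
    if pool:
        i = random.randrange(len(pool))
        t, p = pool[i]
        l = len(pool)
        # Accept a reused tuple with probability l / (p(t) * |J_j|), so that the overall
        # probability of returning t is p(t) * (1/l) * l/(p(t)*|J_j|) = 1/|J_j|.
        # For simplicity this sketch caps the rate at 1; the scheme described above
        # instead returns multiple instances of t when the acceptance rate R exceeds 1.
        if random.random() < min(1.0, l / (p * join_size_est)):
            pool.pop(i)  # sampling without replacement: l shrinks after each acceptance
            return t, p
    # Pool exhausted or the reused tuple was rejected: fall back to a fresh random walk.
    return sample_from_join()
```
In this sketch, `join_size_est` (and, analogously, the estimate of \(|U|\) used to select a join) is exactly the quantity that keeps changing as more random walks are performed, which is the source of the non-uniformity across rounds.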
To mitigate this non-uniformity, we introduce a backtracking trick, which is an accept/reject strategy applied to all tuples already sampled in previous rounds. Algorithm 2 initializes \(T\) to be the set of result samples and initializes a list \(P\) to store the probabilities \(p(t)\) of tuples obtained from a join, whether the tuple is accepted, rejected, or the result of a failed random walk. We also specify a parameter \(\phi\), which indicates how often we backtrack. During the sampling process, we record all \(p(t)\)'s regardless of \(t\) being a rejected or reused tuple or being the result of a failed random walk (in this case, \(p(t)=0\)). Every \(\phi\) iterations, i.e., every \(\phi\) recorded \(p(t)\)'s, we update the join, overlap, and union estimations following the random-walk method and then perform backtracking following Algorithm 1 to adjust the probability of previously sampled tuples based on the new estimation of \(|U|\). During backtracking, we iterate over all previously sampled tuples in the result and adjust their probabilities by rejecting tuple \(t\) with probability \(\frac{|J(t.val)|^{\prime}/|U|^{\prime}}{|J(t.val)|/|U|}\), where \(|J(t.val)|\) and \(|U|\) are the original values, and \(|J(t.val)|^{\prime}\) and \(|U|^{\prime}\) are the updated values. It is not hard to see that the backtracking algorithm guarantees that each tuple in the result is sampled with probability \(\frac{1}{|U|^{\prime}}\). We also keep track of the confidence level \(\gamma\) of the estimated sizes and stop backtracking when the accuracy is beyond a predefined threshold.

## 8. Other Types of Joins

In this section, we show how to generalize our sampling framework to acyclic and cyclic joins. The join subroutine of our algorithm relies on an existing algorithm. The work by Zhao et al. provides a way of random sampling over joins of all types: chain, acyclic, and cyclic (Zhao et al., 2017). The two discussed instantiations of our framework propose different ways of estimating join overlap size parameters. The random-walk method relies on samples obtained from joins for estimation and handles acyclic and cyclic joins in the subroutine of join sampling. For brevity, we do not repeat the algorithm of Zhao et al. and instead describe how we extend the histogram-based method to acyclic and cyclic joins.

### Acyclic Joins

We organize the relations in a join tree, where each node refers to a relation and each edge denotes a join. Figure 3c illustrates an example of a join tree. The basic idea in extending our sampling algorithm to acyclic joins is to transform all acyclic joins and chain joins in the union to the base case of equi-length chain joins and use the results of SS 5.1 to estimate join overlaps. Our solution involves first building a _standard template_ of joins. A template is a join tree structure to which the structure of every join can be converted. We formalize the standard template as a chain join that contains relations of two attributes. The reason we need the template is that the degree-based comparison, which is necessary for the size estimation of SS 5.1, can only be applied when relations have exactly the same structure. To rewrite an acyclic join as a base chain join, we first construct the equivalent join tree such that a breadth-first traversal, always starting from the left-most node in each level, gives us joins of the same schema for all trees. A chain join is indeed a join tree with one branch. Joins may result in different tree structures.
Therefore, we next need to choose a standard tree structure (template) before decomposing them into base chain joins. A good template is important in the estimation process. A bad template can lead us to the worst bound results of \(\min_{j\in[n]}|U_{j}|\). **Example 7**.: _Consider the join in Fig.3a. Suppose we choose the template of \((A,D)\bowtie(A,C)\bowtie(B,C)\bowtie(B,E)\bowtie(E,F)\). To obtain \((B,E)\), we need to estimate the size of \((A,B,C)\bowtie(C,D)\bowtie(D,E)\); to obtain \((E,F)\), we need to estimate the size of \((D,E)\bowtie(C,D)\bowtie(F,F)\). Since we also need to estimate the fake join size, these two estimations between relations lose lots of information. However, the template \((A,B)\bowtie(B,C)\bowtie(C,D)\bowtie(D,E)\bowtie(E,F)\) gives us a better bound as we only use the pre-estimation for relations once to obtain \((E,F)\)._ It is not hard to notice that if we want to preserve most of the structure of the original relations, we prefer templates that put attributes in their original relations. We formulate the problem of finding a standard template for a collection of chain and cyclic joins as the problem of splitting joins into two-attribute relations such that the total pairwise distance of attributes in the same relation, in the tree of the template, is minimized. #### 8.1.1. Pairwise attributes score Suppose all \(J\)'s in \(S\) result in tables with attributes \(\mathcal{D}\). For any pair of attributes \(A,A^{\prime}\in\mathcal{D}\), let \(Dist_{J}(A,A^{\prime})\) be the distance between node(relation)s of \(A\) and \(A^{\prime}\) in join tree for \(J_{j}\). Note that the distance between two attributes \(A\) and \(A^{\prime}\) is equivalent to the number of joins we need to perform to obtain \((A,A^{\prime})\) in a template. Then, we define the score between \(A\) and \(A^{\prime}\) as \(score(A,A^{\prime})=\sum_{j\in[n]}Dist_{J}(A,A^{\prime})\). Again consider Figure3a. We have \(score(A,B)=0+0+0=0\), which has the highest priority when we select a table for the standard. Moreover, \(score(A,F)=2+3+2=7\) represents that \(A\) and \(F\) are far from each other and have a small possibility to appear together in the original tables. Thus, pairs with a lower score have a higher possibility of originally being in the same table. The lower the score is, the higher the priority. We form all the pairs as a tree, where the root is an empty node and each path from the root to a leaf is an eligible path after eliminating the empty root node. For example, if the resulting table has schema \(\mathcal{D}=\{A,B,C\}\), and \((A,B)=0,(A,C)=3,(B,C)=6\), the tree will be formed as shown in Fig.8.1.1. We want the standard template to have the lowest score, so we can convert the problem to finding the minimum cost path which can be solved recursively. Figure 3. (a) Acyclic join, (b) Cyclic join, (c) Tree structure for overlap estimation of (b), (d) Skeleton join and residual joins for random sampling (d) Join data graph of \(J\) #### 8.1.2. Alternating score Another thing worth noticing is that split relations and joins without estimating sub-join size preserve most information, so we may give weights to the case with \(Dist_{j}(A,A^{\prime})=0\). We can view the score for this case as a hyper-parameter that can be tuned for finding the tightest bound. Given a standard template, we now introduce how acyclic and cyclic joins can be converted while preserving information for "fake join"s. Consider the tree structure acyclic join. 
Suppose the node for \(R_{i}\) has \(k\) children, \(R_{i_{1}},R_{i_{2}},\ldots,R_{i_{k}}\), and we have an extreme case of the template where each table \(R_{i_{j}}\) has one attribute that is paired with an attribute in \(R_{i}\). In this case, we perform a _fake join_ for each \(R^{\prime}_{i_{j}}=R_{i}\bowtie^{\prime}R_{i_{j}}\bowtie^{\prime}Childs(R_{i_{j}})\) and estimate \(|R^{\prime}_{i_{j}}|\) using the method in SS 7. In this step, we also record the estimated maximum degree of each attribute \(A\) in \(R^{\prime}_{i_{j}}\) as follows: \[M_{A}(R^{\prime}_{i_{j}})=\left\{\begin{array}{ll}M_{A}(R_{i})\cdot M_{A}(R_{i_{j}})&\text{if }A\text{ is a join attribute}\\ \max\{M_{A}(R_{i}),M_{A}(R_{i_{j}})\}&\text{otherwise}\end{array}\right.\] In this way, we can split \(R^{\prime}_{i_{j}}\) according to the standard template with information on both cardinality and maximum degrees. Moreover, we are able to estimate the overlap size accordingly. Note that we do not necessarily need to fake join all the child nodes with their parent for the transformation, as in real scenarios we select the children based on the schemas of the relations in the standard template.

### Cyclic Joins

In this section, we extend our sampling algorithm to cyclic queries. Following the method proposed in (Sang et al., 2017), we break all the cycles in the join hyper-graph by removing a subset of relations so that the join becomes a connected and acyclic join. The residual join, namely \(\mathcal{S}_{R}\), is the set of removed relations, and the skeleton join, namely \(\mathcal{S}_{M}\), is the set of relations in the main acyclic join. Fig. 3(c) shows the equivalent skeleton join tree and residual join for the cyclic join of Fig. 3(b). Let the attributes in \(\mathcal{S}_{R}\) be \(Attr(\mathcal{S}_{R})\), and the attributes in \(\mathcal{S}_{M}\) be \(Attr(\mathcal{S}_{M})\). We treat \(\mathcal{S}_{R}\) as a single relation in the new acyclic join. We can even materialize \(\mathcal{S}_{R}\) by performing the joins in \(\mathcal{S}_{R}\). Note that some attributes in \(Attr(\mathcal{S}_{R})\) from the residual \(\mathcal{S}_{R}\) may be joined with \(Attr(\mathcal{S}_{M})\). This means we have an acyclic join (the skeleton join) and a residual that can be joined with two or more relations in the skeleton. Now the maximum degree \(M(\mathcal{S}_{R})\) of any attribute in \(\mathcal{S}_{R}\) is defined as follows: \[M(\mathcal{S}_{R})=\max_{v_{i}\in A_{i}}|\{t:t\in\mathcal{S}_{R},\pi_{A_{i}}(t)=v_{i}\}|,\quad\forall A_{i}\in Attr(\mathcal{S}_{M})\cap Attr(\mathcal{S}_{R})\] Since we treat the residual as one relation, with the degree information we can estimate the join size and overlap size by breaking \(\mathcal{S}_{R}\) into the base chain join structure, as described in SS 8.1. Note that the choice of the set of relations to remove can have a significant influence on performance. We follow the methods used by Zhao et al. (Zhao et al., 2017) to decide where to break the cycle in practice.

### Selection Predicates

Our sampling algorithms can support selection predicates in two ways. The first alternative is to push down the predicates to the relations, i.e., we filter each relation with the predicates during preprocessing and work with the filtered relations during sampling. This paradigm works for both histogram-based and random-walk. Another alternative is to enforce the selection predicate during the sampling process.
This paradigm works with only random-walk, unless the histogram-based method has access to the selectivity degree of the predicate and can adjust the degree statistics. Since this paradigm adds an additional rejection factor, it is most appropriate for selection predicates that are not very selective. ## 9. Evaluation **Datasets:** We use three datasets consisting of different types of joins tailored from the TPC-H benchmark. Each query workload is to sample from the union of joins in a dataset. UQ1 consists of five chain joins, where each has five relations: nation, supplier, customer, orders, and lineitem; UQ2 consists of three chain joins which use: region, nation, supplier, partsup, and part, where we also add selection predicates following \(Q_{2}^{N}\cup Q_{2}^{P}\cup Q_{2}^{Q}\) in (Dong et al., 2016); and, UQ3 has one acyclic join and two chain joins. UQ3 is derived from relations: supplier, customer, and orders. We split them vertically and horizontally to get relations with different schemas. Therefore, working with UQ3 involves the application of the splitting method. To experiment with the scale of data, we use TPCH-DBGen to generate relations with various scales. For example, with TPC-H scale factor \(N\)-gb, and \(K\%\) scale ratio, UQ3 is a dataset of size \(K\%\cdot N\cdot 3\). For UQ2, we have the same data for three joins but have different constraints for selection predicates. Hence, UQ2 has a large overlap scale. We also vary the overlap scale \(p\%\) between joins of UQ1. When generating different queries, we keep \(p\%\) of the data the same in the original corresponding relations. This way, although we cannot ensure that the overlap ratio in queries is exactly \(p\%\), given unknown information between relations, we can guarantee that the overlap ratio between queries is proportional to the overlap scale. Note that we did not perform experiments on cyclic joins queries, particularly because transforming cyclic to acyclic joins and online sampling from cyclic join is done based on an existing work (Zhao et al., 2017). **Algorithms** We evaluate the histogram-based and a random-walk instantiations by plugging in techniques of SS 4 and SS 6, respectively, in Theorem 3. The join estimation of histogram-based can be instantiated by baselines EW (Exact Weight) (Zhao et al., 2017), which is the ground truth for weights by calculating the exact weight of each tuple in the join data graph, or EO (Extended Olken's) (Zhao et al., 2017), which we described in SS 3.2. The join estimation of the online technique uses our random-walk of SS 6. We also consider FullJoinUnion as the ground truth for our join size and union size estimations. This algorithm performs the full join and computes the union. Note that FullJoinUnion is extremely expensive on large datasets. Our experiments timed out on data sizes of more than 5GB (per relation). We do not evaluate DisjoinUnion since it is consistent with sampling over one join path as it has no extra delays. we do not evaluate the Bernoulli set union sampling since it is a slightly different variation of the Non-Bernoulli and has lower efficiency theoretically. **Implementation:** The framework is implemented in Python. Relations in joins are stored in hash relations with a linear search. Acyclic joins are implemented in a tree structure and acyclic joins are handled by recursion. 
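To make the preceding implementation notes concrete, the sketch below (our own, hypothetical code rather than the evaluated implementation) shows how a single wander-join-style random walk of SS 6.1 can be carried out over hash-indexed relations; the names `random_walk`, `indexes`, and `join_attrs` are ours, and each relation is assumed to be indexed by a Python dict keyed on its join attribute.
```
import random

def random_walk(relations, indexes, join_attrs):
    """One wander-join-style random walk over R_1 join R_2 join ... join R_m (cf. Section 6.1).

    relations  : list of relations; relations[0] is R_1 as a list of tuples (dicts)
    indexes    : indexes[i] maps a join value to the tuples of relations[i+1] matching it
    join_attrs : join_attrs[i] is the attribute on which relations[i] joins relations[i+1]
    Returns (joined tuple, p(t)) on success, or None if the walk hits a dead end.
    """
    t = random.choice(relations[0])
    p = 1.0 / len(relations[0])            # 1 / |R_1|
    result = dict(t)
    for i, attr in enumerate(join_attrs):
        candidates = indexes[i].get(t[attr], [])
        if not candidates:                 # failed walk: its probability is recorded as 0
            return None
        t = random.choice(candidates)
        p *= 1.0 / len(candidates)         # 1 / d_{i+1}(t_i), a degree lookup via the hash index
        result.update(t)
    return result, p
```
The same hash indexes support the constant-time membership checks of SS 6.2, where a tuple sampled from one join path is tested against the other paths in \(\Delta\).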
All experiments are conducted on a machine with 2 Intel(r) Xeon Gold 5218 @ 2.30GHz (64 cores), 512 GB DDR4 memory, a Samsung(r) SSD 983 DCT M.2 (2 TB), 4 GPUs - TU102 (GeForce RTX 2080 Ti). ### Join and Union Size Approximation #### 9.1.1. Error We evaluate the estimation error of the ratio \(|J_{i}|/|U|\) for each join in a query, because our algorithms rely on this ratio to define probability distributions over joins. For these experiments, we use UQ1 and UQ3 with 3GB scale raw data. After preprocessing, UQ1 is 9GB and UQ3 is 5.4GB.The overlap scale is set to 0.2. Fig. 3(a) and 3(b) show the ratio estimation error for UQ1 and UQ3, with respect to overlap scale, using histogram-based method. _For large overlap scales, the error tends to be small and stable. For smaller scales, the performance is unstable_. This is because when the overlap scale is small, small samples will have a large effect on the estimation performance. However, when we have a large scale of overlap, which is our use case, the randomness will be removed. Besides, we observe that the average error for UQ3, in Fig. 3(b), is better than UQ1, in Fig. 3(a). As we take an upper bound for every join, our histogram-based method gains higher accuracy on joins with a smaller length. Given that UQ3 is smaller both in length and numbers, this explains why the estimation is relatively more accurate for UQ3. #### 9.1.2. Runtime We report the runtime of our parameter estimation methods, in Fig. 3(c) and 3(d). First, histogram-based _is significantly faster than the brute-force full join_. Second, for UQ1, we observe that as the cost of full join increases with overlap scale, the time histogram-based method needs becomes less. This is because when the overlap scale is large and the overlapping structure is complex, it becomes harder for the full join to scan over data, but for our method, a higher overlap scale instead accelerates our method in finding the tuple with the maximum degree. Unlike the histogram-based technique, our random-walk technique collects sampling statistics during the warm-up phase. When evaluating the confidence level of the overlap size, we are actually evaluating the ratio that the overlap part takes in the join, i.e., \(\frac{|\bigcap_{j\in\mathcal{A}}S_{j}^{\prime}|}{|S_{j}^{\prime}|}\). In Eq. 2, we take \(|J_{j}|\) as an exact value to fulfill the assumption of independence. This is in fact equivalent to having the confidence level of \(|J_{j}|\) as 1 and confidence interval as 0, which is an approximation of the case given by Wande Join (Wande and Join, 2017). We terminate online sampling when the confidence level reaches 90% or we obtain 1,000 samples. Fig. 4(a) compares the performance of histogram-based with EO (Shi et al., 2017; Shi et al., 2017), as join size instantiation, with random-walk, in terms of the error of join to union size ratio estimation on UQ1. We used a data scale of 3GB for each query. First, random-walk _outperforms_ histogram-based; _in fact, random-walk is extremely accurate and stable and has an error close to zero for all joins_. This is because the nature of indexing will give us extra information about overlapping. We remark that the accurate estimation comes at the cost of sampling during the warm-up phase. We will discuss the empirical evaluation of the sampling technique that reuses these samples, shortly. Besides, _while the estimation error is quite robust across joins, the higher the overlap, the more accurate histogram-based becomes_. 
Since the accuracy of overlap size estimation heavily depends on the overlap size of samples we collect, the larger the actual overlap is, the easier we find overlap in samples, As we take the minimum in each step as the upper bound of overlap size, the bound gets tighter when overlap size approaches data size, which results in more accurate results in overlap size estimation. Nevertheless, though random-walk has better performance, histogram-based is relatively faster and can be applied to databases without index structures. ### Set Union Sampling #### 9.2.1. Scaling with Number of Samples For histogram-based, We use both EW and EO methods for weights initialization in sampling from a single join, and we only use EW for random-walk. First, Fig. 4(c), 4(d), and 4(e) show how SetUnion scale with number of samples. Overall, we can see that when using EW instantiation, histogram-based and random-walk have nearly no difference in performance. In other words, _the accuracy of the estimation bound has little impact on sampling efficiency_. However, for histogram-based, using EW results in a much slower situation than using EO on all three queries, since with exact weights calculated, we obtain a rejection rate of zero. #### 9.2.2. Runtime Breakdown Fig. 4(f), 4(g), and 4(h) shows the comparisons of time spent on parameter estimation(join size, overlap, and weights), producing accepted answers and on producing rejected answers. The reason for the decay comes from EO, as well as the fact that we need to reject duplicate tuples that are sampled from a Figure 4. The error of join to union size ratio estimation using histogram-based +EO on (a) UQ1 and (b) UQ3; runtime of union size estimation using histogram-based and FullJoin on (c) UQ1 and (d) UQ3. join different from what it is assigned to. From these plots, the most significant finding is that though using EO is much less efficient than using EW, it has better performance in the warm-up phase. Moreover, since it uses the upper bound of weights for sampling from a join, it has an extra rejection phase and needs to spend much more time on rejected answers than using EW. Besides, the time spent on accepted answers is similar for three combinations of instantiations for all queries. Moreover, _our SetUnion algorithm spends minor time rejecting duplicate tuples and has very high efficiency when using_ EW _for join sampling._ #### 9.2.3. Scaling with Relation Size Although we use \(scalef\,actor=5\) for all three queries, we will get different sizes of unions if we perform full joins due to different numbers of relations and different levels of overlaps. From our set out, the order of union size for three queries from large to small is UQ1, UQ3, UQ2. From both sets of plots, we notice that sampling time is in proportion to the resulting union size. What's more, when the expected union size is small, as for UQ2, EO has a relatively smaller gap with EW during sampling, and has an even better advantage in the warm-up phase. Moreover, Fig. (b)b reports sampling time for various data scales for UQ1. The first observation is that using EO for join size estimation makes both algorithms slower than using EW and overall EW scales better with the size of data since with exact weights the rejection rate for sampling from single join path is 0. Second, though the sampling time of both algorithms increases with the size of data, the scale has a much larger effect on EO than EW. 
As the size of each relation grows larger, a tuple has a higher rejection rate due to the growth of the number of tuples in the relation to be joined with. Finally, initialization in union size using either histogram-based or random-walk has little impact on efficiency, which is consistent with the conclusion we obtained earlier. ### Online Union Sampling with Sample Reuse In the next set of experiments, we evaluate the runtime of the random-walk sampling using the idea of reusing samples collected during warm-up. We compare random-walk with and without reuse on all three queries. Fig. (a)a shows sampling time with respect to sample size. First, we can clearly observe that we have much higher efficiency when we sample with reuse. When we sample from the pool of pre-sampled and joined tuples during the warm-up phase, we only do a fast check on rejection or acceptance and do not need to sample over each relation. Moreover, there is a slight change in slope on lines of sampling with reuse cases. When pre-sampled samples are all used, the performance of SetUnion will slowly converge to their original performance. One other interesting phenomenon is that the reuse of samples has a more apparent increase in performance when the expected union size is larger. For UQ1, there is a huge gap between with and without reuse; but for UQ2, the gap is much smaller. Fig. (b)b compares to time spent on successfully accepting one tuple in the regular sampling phase and in the reuse sampling phase. We use the ratio of total time spent Figure 5. (a) the error of join to union size ratio; (b) SetUnion time vs. data scale on UQ1; runtime vs. sample size on (c) UQ1 and (d) UQ2; (e) UQ3; time breakdown of (f) UQ1 (g) UQ2 and (h) UQ3. Figure 6. (a) time vs. sample size with and without reuse (b) time per sample spent in a regular phase vs. a reuse phase. on sampling and the number of successfully sampled tuples for each phase for comparison, and we can see that when we reuse pre-sampled tuples, we have much higher efficiency. This shows the huge improvement in efficiency brought by our online union sampling. ## 10. Related Work **Random Access to Query Results** The closest problem to ours is random access to the results of conjunctive queries. Bagan et al. show that the free-connex acyclic conjunctive queries can be evaluated using an enumeration algorithm with a constant delay between consecutive answers, at the cost of a linear-time preprocessing phase (Bagan et al., 2016). However, because this work does not guarantee the randomness of the intermediate answers, the produced result may have extreme bias, making it unsuitable for learning tasks. Recently, Carmeli et al. studied the problem of enumerating the answers of the union of acyclic conjunctive queries in a uniformly random order (Carmeli et al., 2017). The proposed algorithm requires full access to the database, i.e., the computation of the full joins as well as a linear pre-processing time in the size of the database. As such, this algorithm is not applicable to random sampling over open data, data markets, proprietary databases, or web databases where the access model is tuple-at-a-time access. Unlike the approach of Carmeli et al. which requires computing the exact join and overlap sizes, our framework presents sampling strategies and ways of approximating these parameters using simple statistics, such as degrees, in our direct method or a subset of random samples in our online method. 
**Random Sampling over Joins** The problem of random sampling over a single join path was posed in the 1990s (Acharya et al., 2009). Acharya et al. proposed a solution for good approximate answers using only random samples from the base relations, but accuracy still remained to be improved (Acharya et al., 2009). Joining random samples of joins produces a much smaller sample size than samples. Moreover, it is shown that join samples obtained do not satisfy the independence requirement (Han et al., 2009). To solve this, Olken proposed the idea of rejecting join of two samples with specific probabilities for two-table join (Zhao et al., 2010); Chaudhuri et al. proposed techniques that are applicable to linear joins but not to arbitrary joins (Acharya et al., 2009). Both methods require full information of the tables as well as the index structure. Chaudhuri et al. significantly improved the efficiency by proposing another strategy group sample algorithm that relies on only partial statistics (Acharya et al., 2009). However, all the above three methods only work for 2-table Joins. Ripple join returns dependent and uniform samples (Zhao et al., 2010). Wander join (Zhao et al., 2010) extended ripple join to return independent but non-uniform samples from the join. Recently, Zhao et al. proposed a framework that handles general multi-way joins and guarantees i.i.d (Zhao et al., 2010). This algorithm can be plugged in our framework for random sampling over a single join path. **Union of Sets and Queries** The union-of-sets problem has been studied in approximate counting literature (Zhao et al., 2010). The goal is to design a randomized algorithm that can output an approximation of the size of the union of sets efficiently. Karp et al. proposed a \((1+\epsilon)\)-randomized approximation algorithm for approximating the size of the union of sets with a linear running time. This algorithm requires the exact size of each set and a uniform random sample of each set. (Zhao et al., 2010). Bringmann and Friedrich later applied this algorithm in designing an algorithm for high dimensional geometric objects using uniform random sampling. They also proved that the problem is \(\sharp\)P-hard for high dimensional boxes (Bagnan et al., 2016). The computation of union of sets also has links to 0-th frequency moment estimation (Bagan et al., 2016). One line of work in this area is on DNF counting problem (Zhao et al., 2010), including designing hashing-based algorithms (Bagnan et al., 2016; Daskal and Kasten, 2016; Zhao et al., 2010; Zhao et al., 2010). Another popular line of work is on estimating the union of sets where each set arrives in a streaming fashion (Bagnan et al., 2016; Daskal and Kasten, 2016; Zhao et al., 2010; Zhao et al., 2010). ## 11. Conclusion This paper studies two novel problems: sampling over the union of joins and size approximation of the union of joins. A general union sampling framework is proposed that estimates join overlap and union parameters when (1) data statistics are available in DBMSs and (2) access to the data in relations is feasible. The framework extends to the union size of joins of arbitrary multi-way acyclic and cyclic. Interesting future work directions include analyzing the impact of data skew on approximations as well as integrating a union sampling operator into a database engine.
2305.12012
Aberration free synthetic aperture second harmonic generation holography
Second harmonic generation (SHG) microscopy is a valuable tool for optical microscopy. SHG microscopy is normally performed as a point scanning imaging method, which lacks phase information and is limited in spatial resolution by the spatial frequency support of the illumination optics. In addition, aberrations in the illumination are difficult to remove. We propose and demonstrate SHG holographic synthetic aperture holographic imaging in both the forward (transmission) and backward (epi) imaging geometries. By taking a set of holograms with varying incident angle plane wave illumination, the spatial frequency support is increased and the input and output pupil phase aberrations are estimated and corrected -- producing diffraction limited SHG imaging that combines the spatial frequency support of the input and output optics. The phase correction algorithm is computationally efficient and robust and can be applied to any set of measured field imaging data.
Gabe Murray, Jeff Field, Maxine Xiu, Yusef Farah, Lang Wang, Olivier Pinaud, Randy Bartels
2023-05-19T21:44:38Z
http://arxiv.org/abs/2305.12012v1
# Aberration free synthetic aperture second harmonic generation holography ###### Abstract Second harmonic generation (SHG) microscopy is a valuable tool for optical microscopy. SHG microscopy is normally performed as a point scanning imaging method, which lacks phase information and is limited in spatial resolution by the spatial frequency support of the illumination optics. In addition, aberrations in the illumination are difficult to remove. We propose and demonstrate SHG holographic synthetic aperture holographic imaging in both the forward (transmission) and backward (epi) imaging geometries. By taking a set of holograms with varying incident angle plane wave illumination, the spatial frequency support is increased and the input and output pupil phase aberrations are estimated and corrected - producing diffraction limited SHG imaging that combines the spatial frequency support of the input and output optics. The phase correction algorithm is computationally efficient and robust and can be applied to any set of measured field imaging data. + Footnote †: preprint: AIP/12-QED + Footnote †: preprint: AIP/12-QED ## I Introduction Imaging with second harmonic generated (SHG) light enables label free imaging of non-linear structures. This intrinsic contrast mechanism, which relies on the lack of inversion symmetry, allows selective imaging of particular features, while eliminating background. Leveraging this advantage, SHG microscopy is continuously growing as a valuable resource for the study of biomedical and material systems [1, 2, 3]. In biological tissues, light undergoes second harmonic scattering when interacting with non-centrosymmetric molecules that are ordered spatially so that coherent nonlinear second harmonic scattering from the tissues add constructively to produce a measurable SHG signal [4, 5, 6, 7, 8]. SHG has proven to be a valuable method for identifying a wide range of diseases [9, 10], including to quantify the alignment of collagen surrounding tumors to grade metastatic potential [11]. SHG microscopy has even been used for mapping cell lineage in embryos by tracking cell division using SHG generated by the mitotic spindle during mitosis [12]. SHG microscopy has found significant use in materials science [13] and investigating two-dimensional materials [14]. Standard SHG imaging is based on laser scanning microscopy, in which an incident laser beam at the fundamental wavelength is focused tightly into a sample. A portion of the SHG power is collected in either the forward- or backward-scattered direction at each focal point [15]. An SHG image is built from assigning the measured power to a location in a matrix corresponding the spatial location of the focused fundamental beam. Unfortunately, this leads to slow image formation, since each point in the image must be collected sequentially. The signal to noise ratio (SNR) also suffers because the signal is collected from each spatial point in an image only for the time that the laser beam dwells on each focal point. The SHG signal power is proportional to \(|\chi^{(2)}|^{2}\), where \(\chi^{(2)}\) is the nonlinear susceptibility responsible for SHG signal generation. Conventional SHG microscopy does not directly reveal the desired spatial map of \(\chi^{(2)}\), with only the magnitude of the susceptibility that depends both on the spatial distribution and sign of the susceptibility distribution within the focal volume of the focused fundamental beam. 
Complex image information, notably the sign of \(\chi^{(2)}\), which indicates the orientation of the SHG-active molecules, can be obtained by interferometric single-pixel detection SHG imaging [16, 17]. However, the lack of a stable reference phase from a repeated set of measurements prevents an improvement in the image SNR that would be possible with averaging the image fields, rather than the image intensity [18]. While such conventional nonlinear laser scanning microscopy benefits from the non-linear spatial filtering that helps with forming three-dimensional images and imaging within scattering media, optical aberrations degrade this imaging method. The SNR, image quality, and spatial resolution of SHG imaging are affected by these optical aberrations introduced by the imaging system itself and from specimen variations in the refractive index [19, 20, 21]. In SHG microscopy, the distortions introduced by the optics, particularly the objective lens, and the specimen broaden the size of the focused beam, worsening the ability of the microscope to image fine spatial features and reducing the signal level. Adaptive optics methods [20] have been applied to improve imaging with point-scanning nonlinear microscopy [22, 23], including wavefront shaping for polarization-resolved SHG imaging within tissues [24]. Imaging speed and SNR are significantly improved with widefield SHG holographic imaging [25, 26, 27, 28, 29, 30, 31]. Speed is increased with SHG holography for two reasons. The first is that wide-field images are recorded on a camera, so that each pixel benefits from signal being recorded for the entire imaging time. Thus, even for faster imaging, the SNR of the image is improved. Secondly, the hologram is formed from the interference of a signal and a reference beam, producing a heterodyne signal amplification that allows for optimization of the SHG imaging speed [30]. This amplification allows even very weak SHG signal fields to be detected at the shot noise limit. Ad ditionally, holography allows for extraction of the complex field, so that amplitude and phase information is available, and the nonlinear susceptibility can be extracted by solving the inverse scattering problem [31]. Widefield SHG imaging has been restricted to a transillumination geometry because generally SHG fields that are scattered in the forward direction are much stronger than the backward direction in biological tissues. Point scanning images that are collected in the backscattered direction consist of a combination of directly backscattered SHG radiation [1; 32] and forward-scattered SHG light that is re-directed in the backward direction so that it can be collected in a epi configured microscope [33]. The ratio of forward and backward scattered SHG power of ex-vivo tissues has proven useful as a biomarker for distinguishing healthy and cancerous tissues [9; 34]. While conventional laser scanning SHG microscopy can be deployed favorably in biological tissues that highly scatter fundamental and SHG light, widefield SHG imaging has been degraded by optical scattering, which is dominated by randomization of the phase of the SHG field [30]. Measuring widefield SHG holographic images in a transmission and epi configuration would be extremely valuable for imaging collagen and muscle in tissues in a minimally invasive manner. While point scanning SHG imaging can be performed in an epi direction, such a conventional approach suffers from very weak signals [32; 33], limiting practical use. 
Holographic widefield SHG in a epi configuration will enable improved detection of weak backscattered signals as a result of heterodyne amplification. Furthermore, imaging in a backscattered configuration would allow for direct optically-sectioned imaging because the low-coherence interferometry will gate only backscattered SHG light over an axial depth of the coherence length of the SHG light - exactly analogous to depth sectioning achieved with optical coherence tomography. In this Article, we demonstrate the first epi collected widefield SHG images leveraging the heterodyne signal enhancement provided by holographic measurements to mitigate the weak backscattered signal strength. Additionally, we exploit phase information to coherently superimpose measured fields obtained from a set of illumination angles to implement synthetic aperture coherent nonlinear holographic imaging for second harmonic generation (SHG) scattering from samples. In synthetic aperture holography, [35; 36; 37] complex spatial frequency information from multiple field measurements is combined to produce a net complex field image with spatial frequency support that is expanded up to a factor of two, improving imaging resolution. Aberrations, represented as a phase variation across the pupil, can severely distort the synthetic aperture image. [38] We introduce a robust and computationally efficient algorithm to estimate and correct the pupil phase distortions responsible for aberrations in SHG imaging. The acquired data contain sufficient redundancy to allow estimation of the imaging system aberrations directly from the recorded data. Redundancy in the field was used to identify conserved coherent field amplitudes to selectively suppress noise in the estimated image. When phase corrections are applied, we observe drastic improvements in SNR and image quality of the SHG images. Utilizing the linear properties of wave propagation and synthetic time reversal, the pupil phase distortions of both the input and imaging pupil planes can be compensated, thereby correcting system as well as sample induced aberrations. The result is a diffraction limited SHG image with a spatial frequency support twice that present in a single holographic SHG image, or four times the spatial frequency support of the fundamental field. Finally, we demonstrate synthetic aperture SHG holography on transmitted SHG fields in addition to the first back scattered SHG fields collected in the epi direction of the SHG holographic microscope. The experiments described here involve imaging a thin SHG-active sample when illuminated with a fundamental plane wave. The SHG scattered fields are captured in both the transmitted and epi directions as the input plane wave propagation direction is varied across the aperture of the condenser lens. Referring to Fig. 1, we see that microscope consists of a pair of matched objective lenses. The illumination for both the epi and transmission configurations thus pass through the same condenser objective lens with a pupil phase \(\phi_{1}(\mathbf{x}_{i})\) at the fundamental beam wavelength \(\lambda_{1}\). Plane wave illumination means that the fundamental beam passes through a small point in the input pupil plane located at \(\mathbf{x}_{i}\), which maps to an input spatial frequency of \(\mathbf{u}_{i}=(\lambda f_{c})^{-1}\mathbf{x}_{i}\) with wavenumber \(\|\mathbf{u}_{i}\|=1/\lambda\) and \(f_{c}\) denoting the condenser lens focal length. 
As SHG scattering is driven by the square of the fundamental illumination beam, the effective input pupil phase is \(\phi_{i}=2\,\phi_{1}\). These input aberrations are transmitted to the scattered field and distort the image. In the case of synthetic aperture holography, these distortions are replicated across the image field spatial frequency distribution, as illustrated in Fig. 1(d). The propagation of light being linear, we can describe the relationship of a given light field from one plane to another with a simple matrix operation (reflection or transmission matrix depending on the configuration). The choice of input and output planes, and thus the basis of this matrix, is chosen to be the input and output pupil planes, \(P_{i}\) and \(P_{o}\) shown in Fig. 1. A given input angle conveniently corresponds to a point, \(\mathbf{u}_{i}\), in the input pupil plane. At the output pupil plane, the input plane wave is scattered by the sample into many angles, each given by a point in the output pupil, \(\mathbf{u}_{o}\). This scattered field is proportional to the spatial frequency map of the second order susceptibility \(\chi^{(2)}(\mathbf{q})\) of the sample, where the object spatial frequency, \(\mathbf{q}\), will also be used to denote the scattering vector. The imaged SHG field can be described in the output spatial frequency plane with coordinates \(\mathbf{u}_{o}\). By invoking the assumption of a thin specimen and assuming that the fundamental field is not depleted appreciably in the nonlinear scattering process, we may write the scattered field in the output pupil plane as \[E_{\mathrm{SHG}}(\mathbf{u}_{o},\mathbf{u}_{i})=\int H(\mathbf{u}_{o}, \mathbf{r})\,\chi^{(2)}(\mathbf{r})\,G(\mathbf{r},\mathbf{u}_{i})\,d^{2} \mathbf{r} \tag{1}\] for a given input spatial frequency, \(\mathbf{u}_{i}\). The thin specimen is described by a two-dimensional second order nonlinear susceptibility distribution, \(\chi^{(2)}(\mathbf{r})\), that lies in the sample plane with coordinates \(\mathbf{r}\). Light is scattered at the second harmonic frequency of the incident fundamental beam at frequency \(\omega_{1}\), with a Green's function, \(G(\mathbf{r},\mathbf{u}_{i})\) describing the square of the fundamental field incident on the sample. This function maps input spatial frequencies for each point \(\mathbf{u}_{i}\) to the SHG driving term at the sample plane. The scattered field is collected by the objective and mapped from the sample plane \(\mathbf{r}\) to the output imaging pupil \(\mathbf{u}_{o}\) with the Green's function, \(H(\mathbf{u}_{o},\mathbf{r})\), for the SHG field at optical frequency \(\omega_{2}=2\omega_{1}\). This Green's function can be used to describe imaging of the forward-scattered field in a trans-SHG holographic microscope or to image the back-scattered field in an epi-SHG holographic microscope. Within an isoplanatic spatial imaging region, the imaging point spread function is spatially invariant, which allows the transfer function to be modelled with the pupil function, \(P(\mathbf{u})=|P(\mathbf{u})|\exp\left[i\,\phi(\mathbf{u})\right]\), where the spatial frequency support is \(|P(\mathbf{u})|\) and \(\phi(\mathbf{u})\) accounts for aberrations. In addition to aberrations, there are random phase shifts due to air currents and mechanical vibrations inherent in the measurement process which must also be accounted for in the synthetic image reconstruction. 
This perturbation adds another phase term for the input pupil function \(P_{i}(\mathbf{u}_{i})=|P_{i}(\mathbf{u}_{i})|\exp\left[i\,\phi(\mathbf{u}_{i} )\right]\exp\left[i\,\phi_{d}(\mathbf{u}_{i})\right]\), where \(\phi_{d}(\mathbf{u}_{i})\) is the experimental phase drift, with total phase \(\phi_{i}(\mathbf{u}_{i})=\phi_{i}(\mathbf{u}_{i})+\phi_{d}(\mathbf{u}_{i})\). As shown in Appendix A, for a thin specimen, the illumination and SHG fields propagate through free space, so that input and output Green functions read \(G(\mathbf{r},\mathbf{u}_{i})=P_{i}(\mathbf{u}_{i})\,e^{-i2\pi\mathbf{u}_{i}\cdot \mathbf{r}}\) and \(H(\mathbf{u}_{o},\mathbf{r})=P_{o}(\mathbf{u}_{o})\,e^{\pm i2\pi\mathbf{u}_{o} \cdot\mathbf{r}}\), respectively. Note that \(-\) corresponds to a transmission image and \(+\) corresponds to an epi image. Under the conditions outlined here, the SHG field for a given input frequency \(\mathbf{u}_{i}\), measured in the output pupil plane is given by \[E_{\text{SHG}}(\mathbf{u}_{o},\mathbf{u}_{i})=P_{o}(\mathbf{u}_{o})\,\hat{ \chi}^{(2)}(\mathbf{q})\,P_{i}(\mathbf{u}_{i}), \tag{2}\] where that scattering vector is given by \(\mathbf{q}=\pm\mathbf{u}_{o}+\mathbf{u}_{i}\). Here \(+\) and \(-\) appear in transmission and reflection, respectively. We have defined the spatial frequency spectrum of the second order optical susceptibility as \(\hat{\chi}^{(2)}(\mathbf{q})=\mathcal{F}\{\chi^{(2)}(\mathbf{r})\}\), where \(\mathcal{F}\{\cdot\}\) defines the Fourier transform operator as defined in Appendix A. A reflection or transmission matrix, for backscattered or transmitted SHG fields, respectively, is defined by sampling the continuous scattering operator in Eq. 2 over the discrete input and output spatial frequency coordinates. The reflection matrix defined for the epi imaging condition can be written as the product of three matrices, \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}=\mathbf{P}_{o}\,\hat{\chi}_{\mathbf{ q}}^{(2)}\,\mathbf{P}_{i}\). Figure 1: Conceptual diagram of SHG synthetic aperture holography represented in a transmission geometry, but which equally applies to a reflection geometry. a) The input fundamental field is focused to a point in the input pupil plane at the input spatial frequency coordinate \(\mathbf{u}_{i}\). When this illumination is at the origin of the input pupil plane coordinates, the fundamental illumination beam is a normally incident plane wave. The scattered field, analogous to a linear transillumination field, is collected by the output pupil and the complex signal field is recorded. b) A second input field example shows an SHG darkfield configuration in which the fundamental beam is incident on the sample at an angle determined by \(\mathbf{u}_{i}\). c) The input illumination angle is scanned across the input pupil to collect SHG scattered fields from a range of object spatial frequency distribution with each scattered spectra aligned to \(\hat{\chi}^{(2)}(\mathbf{q})\) showing an enhanced frequency support. d) The full spatial frequency spectrum of the object, \(\hat{\chi}^{(2)}(\mathbf{q})\), is estimated from the coherent sum of the recorded spectral field. Aberrations from the input, \(P_{i}(\mathbf{u}_{i})\), and output, \(P_{o}(\mathbf{u}_{o})\), pupils distort the estimated object spectrum and must be corrected to produce aberration-free images. 
The input and output pupil matrices are defined by the discrete form of the pupil functions, \(\mathbf{P}_{i}=\text{diag}\{P_{i}(\mathbf{u}_{i})\}\) and \(\mathbf{P}_{o}=\text{diag}\{P_{o}(\mathbf{u}_{o})\}\), respectively. The object susceptibility spectrum is a Toeplitz structure that is given by \(\hat{\chi}_{\mathbf{q}}^{(2)}=\hat{\chi}^{(2)}(\mathbf{q})\). In reflection, this matrix structure reads \(\hat{\chi}_{\mathbf{q}}^{(2)}=\hat{\chi}^{(2)}(-\mathbf{u}_{o}+\mathbf{u}_{i})\), whereas in transmission, the matrix takes the form \(\hat{\chi}_{\mathbf{q}}^{(2)}=\hat{\chi}^{(2)}(\mathbf{u}_{o}+\mathbf{u}_{i})\). This matrix can alternatively be constructed with \(\hat{\chi}_{\mathbf{q}}^{(2)}=\mathbf{F}\,\text{diag}\{\chi^{(2)}(\mathbf{r}) \}\,\mathbf{F}^{-1}\), where the susceptibility matrix has been flattened into a one-dimensional vector before being placed on the matrix diagonal. Here, \(\mathbf{F}\) and \(\mathbf{F}^{-1}\) are the discrete Fourier and inverse Fourier transforms operators, respectively. These reflection and transmission matrices map the input spatial frequency coordinate, \(\mathbf{u}_{i}\), to the output spatial frequency coordinate, \(\mathbf{u}_{o}\). Scattering from the object probes the object spatial frequency so that in the output pupil plane the scattered field is proportional to the complex spatial frequency distribution of the second order susceptibility of the sample, but is shifted according to the tilt of the input plane wave. Once the transmission or reflection matrix is constructed, we can obtain the synthetic SHG image field from a shifted form of the matrix. The synthetic SHG image field can be constructed by shifting the columns of the reflection matrix to line up the scattered fields, \(E_{\text{SHG}}(\mathbf{u}_{o},\mathbf{u}_{i})\), with respect to \(\hat{\chi}^{(2)}(\mathbf{q})\). This shifted operator reads \[D(\mathbf{q},\mathbf{u}_{i})=P_{o}(\pm\mathbf{q}\mp\mathbf{u}_{i})\,\hat{\chi} ^{(2)}(\mathbf{q})\,P_{i}(\mathbf{u}_{i}). \tag{3}\] The transmission form of the operator corresponds to replacing \(\mathbf{u}_{o}\rightarrow\mathbf{q}-\mathbf{u}_{i}\), whereas in reflection, the output spatial frequency coordinate is mapped with \(\mathbf{u}_{o}\rightarrow-\mathbf{q}+\mathbf{u}_{i}\). In matrix form, this is written as \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\), and the shifted matrix is obtained by shifting the columns of the reflection (or transmission) matrix \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\), as is illustrated in Fig. 3. The synthetic SHG field is obtained by integration over the input spatial frequencies \(E_{\text{SHG}}^{\text{s}}(\mathbf{q})=\int D(\mathbf{q},\mathbf{u}_{i})\,d \mathbf{u}_{i}\), which becomes a discrete sum over the input spatial frequency elements of the matrix \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\). This estimate of the object spectrum is sampled on the same grid that defines in output spatial frequency coordinates. Similarly, as illustrated in Fig. 2, a reversal synthetic aperture object spectrum can be formed in the input pupil coordinates by first taking the transpose of the reflection matrix \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\) (i.e., by swapping the input and output spaces) and then shifting the columns again in the same manner as discussed above. 
Applying the same column shift to the transposed reflection matrix produces the operator \[D(\mathbf{q},\mathbf{u}_{o})=P_{i}(\mathbf{u}_{o}\mp\mathbf{q})\,\hat{\chi}^{( 2)}(\mathbf{q})\,P_{o}(\mathbf{u}_{o}), \tag{4}\] which is written as \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\) in matrix form. The transmission form of the operator corresponds to replacing \(\mathbf{u}_{i}\rightarrow\mathbf{q}-\mathbf{u}_{o}\), whereas in reflection, the input spatial frequency coordinate is mapped with \(\mathbf{u}_{i}\rightarrow\mathbf{q}+\mathbf{u}_{o}\). The structure of these matrices is shown pictorially in Fig. 3. Optical aberrations, appearing in the form of phase aberrations in the input pupil, \(\phi_{i}(\mathbf{u}_{i})\), and the output pupil, \(\phi_{o}(\mathbf{u}_{o})\), lead to distortions in the synthesized image. These phase distortions can be estimated and corrected using redundancy in the reflection and transmission matrices. Previous work in linear scattering has demonstrated that correlations of the output spatial frequency spectrum between closely spaced input spatial frequency measurements provide a good estimate of the input pupil phase difference at the mean of the two input spatial frequency points [39; 40; 41]. Here, we present a straightforward and effective algorithm that estimates and corrects aberrations in the synthetic aperture holographic images by determining the input and output pupil phases. The estimation of the pupil phase utilizes the singular value decomposition (SVD) of the matrices \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\) and \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\). The formation of these matrices introduces strong correlations among the columns over a wide range. Consequently, the SVD is well-suited for this scenario as it identifies the eigenvectors of the correlation matrices [41]. The SVD is given by \(\mathbf{D}=\mathbf{U}\,\Sigma\,\mathbf{V}^{\dagger}=\sum_{j}\sigma_{j}\,\mathbf{ u}_{j}\,\mathbf{v}_{j}^{\dagger}\). The left singular vectors, \(\mathbf{v}_{j}\), are the columns of \(\mathbf{V}\) and are eigenvectors of the correlation matrix \(\mathbf{D}^{\dagger}\mathbf{D}\); similarly, \(\mathbf{u}_{j}\), the columns of \(\mathbf{U}\), are eigenvectors of the correlation matrix \(\mathbf{D}\mathbf{D}^{\dagger}\). These singular vectors are paired with the singular values, \(\sigma_{j}\), which are listed in decreasing order along the diagonal of \(\Sigma=\text{diag}\{\sigma_{j}\}\), with eigenvalues given by \((\sigma_{j})^{2}\). The matrices \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\) and \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\) are arranged such that the synthetic aperture spectrum is reconstructed (in either the forward or reversed direction) by simply summing the columns. Unfortunately, each field (column) is out of phase with the others according to both \(\mathbf{P}_{o}\) and \(\mathbf{P}_{i}\), as shown pictorially in Fig. 1. Consider two neighboring columns of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\), \(\mathbf{d}_{\mathbf{q},\mathbf{u}_{i}^{1}}=P_{o}(\pm\mathbf{q}\mp\mathbf{u}_{i}^{1})\,\hat{\chi}^{(2)}(\mathbf{q})\,P_{i}(\mathbf{u}_{i}^{1})\) and \(\mathbf{d}_{\mathbf{q},\mathbf{u}_{i}^{2}}=P_{o}(\pm\mathbf{q}\mp\mathbf{u}_{i}^{2})\,\hat{\chi}^{(2)}(\mathbf{q})\,P_{i}(\mathbf{u}_{i}^{2})\). If the difference in input angle between the two columns is sufficiently small such that \(P_{o}(\pm\mathbf{q}\mp\mathbf{u}_{i}^{1})\approx P_{o}(\pm\mathbf{q}\mp\mathbf{u}_{i}^{2})\), then the phase difference between the two columns is approximately just a piston phase shift set by \(P_{i}(\mathbf{u}_{i}^{1})\) and \(P_{i}(\mathbf{u}_{i}^{2})\). Figure 2: Conceptual diagram of SHG synthetic aperture holography. a) The input fundamental field is focused to a single point in the input pupil plane. This field produces plane wave illumination of the sample. The optical imaging system filters the SHG field spatial frequency support by the output pupil, \(P_{o}(\mathbf{u}_{o})\), which is applied to a portion of the SHG object spectrum centered on \(\mathbf{u}_{i}\). b) Each measured SHG spectrum is flattened into a vector and stacked into a matrix \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\). c) The conjugate transpose of \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\) behaves conceptually as a time-reversal experiment. This time-reversal matrix describes a scenario interpreted as an SHG field from the output pupil that backpropagates through the system to the input pupil plane. d) The time-reversal of the data can be realized by taking the conjugate transpose of \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\). 
This then nearly isolates the input and output pupils and allows the problem to be written as a simple matrix operation, \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\,\vec{s}(\mathbf{u}_{i})=E_{\text{SHG}}^{\text{s}}(\mathbf{q})\), which gives an explicit expression for the discrete summation form of the synthetic SHG field spectrum with a phase correction imparted by \(\vec{s}(\mathbf{u}_{i})\), so that the phase of each column is shifted to eliminate aberrations: \(\vec{s}(\mathbf{u}_{i})=e^{i\vec{\phi}_{c}(\mathbf{u}_{i})}\), with \(\phi_{c}\) being the phase correction. We would like to find \(\vec{s}(\mathbf{u}_{i})\) such that it maximizes the total intensity of \(E_{\text{SHG}}^{\text{s}}(\mathbf{q})\). When the total intensity is maximum, all the columns (fields) are in phase. This occurs when \(\vec{s}(\mathbf{u}_{i})=P_{i}(\mathbf{u}_{i})^{*}\), implying that \(\phi_{c}(\mathbf{u}_{i})=-\phi_{i}(\mathbf{u}_{i})\), thereby correcting the aberrations imparted by the input pupil. A comparison of the performance of the cross correlation approach and the SVD phase estimate is provided in Appendix D. To motivate this algorithm, we consider an infinitesimal scattering point on axis. Such a scatterer produces a uniform spatial frequency distribution \(\hat{\chi}^{(2)}(\mathbf{u})=1\). Consequently, the reflection matrix is rank one and formed by the outer product \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}=\mathbf{P}_{o}\,\mathbf{P}_{i}^{T}\), where the pupils are represented as vectors after flattening with suitable lexicographic ordering. For this simple case, the singular vector associated with the output space is the output pupil function, while the singular vector associated with the input space is the conjugate of the input pupil function; the input and output pupils can therefore be obtained directly from the SVD. Using the method of Lagrange multipliers it can be shown that the vector \(\vec{s}(\mathbf{u}_{i})\) that maximizes the total intensity of \(E_{\text{SHG}}^{\text{s}}(\mathbf{q})\) (subject to the constraint that \(\vec{s}(\mathbf{u}_{i})\) is a unit vector) is the left singular vector of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\) corresponding to the largest singular value. As shown in Appendix C, if \(\mathbf{v}_{1}\) is the dominant left singular vector of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\), then the optimal phase conjugate occurs for \(\vec{s}(\mathbf{u}_{i})=\mathbf{v}_{1}\). A minimal sketch of this SVD-based phase estimate is given below. 
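The following NumPy sketch illustrates this single-pass estimate of the input-pupil phase correction from the dominant singular vector; the placeholder matrix and the mapping onto NumPy's SVD convention are assumptions of the illustration.

```python
import numpy as np

def estimate_input_phase_correction(D):
    """Phase correction phi_c(u_i) from the dominant singular vector of D[q, u_i].

    The unit vector maximizing the total intensity of the synthetic spectrum
    D @ s is the dominant singular vector associated with the input space
    (see Appendix C); its phase is the estimated correction, ideally -phi_i(u_i).
    """
    _, _, Vh = np.linalg.svd(D, full_matrices=False)   # D = U @ diag(S) @ Vh
    v1 = Vh[0].conj()                                  # dominant eigenvector of D^H D
    return np.angle(v1)

# usage: apply the correction and coherently sum the columns
D = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)   # placeholder D[q, u_i]
phi_c = estimate_input_phase_correction(D)
E_shg_synth = D @ np.exp(1j * phi_c)
```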
The SVD algorithm gives an excellent phase correction for the input pupil even under very low SNR conditions, which are difficult to avoid when measuring backward generated SHG; see Appendix D. To find the phase correction for the output pupil, the same process is carried out after transforming \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\) to \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\); the dominant left singular vector of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\) is then the best estimate of the output pupil correction. Because the input and output pupils are only approximately separable using the shifted representations of the reflection matrix, the algorithm proceeds iteratively, with iteration index denoted by \(k\). At each iteration, the reflection matrix, \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}^{(k)}\), is updated with the input and output pupil phase corrections estimated in that iteration. In the initial iteration, the reflection matrix is initialized with the reflection matrix obtained from the data, \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}^{(0)}=\mathbf{R}_{\mathbf{u}_{o}, \mathbf{u}_{i}}\). The input pupil phase correction is estimated from the phase of the dominant left singular vector of the shifted reflection matrix \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\), \(\delta\tilde{\phi}_{c,i}^{(k)}=\angle\mathbf{v}_{1}\). The estimated phase correction is applied, and the matrix is then transformed from \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\) to \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\). The output pupil phase is then estimated similarly from the dominant left singular vector of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\), \(\delta\tilde{\phi}_{c,o}^{(k)}=-\angle\mathbf{v}_{1}\). Transforming \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\) back to \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\) after applying the output pupil phase correction provides the corrected reflection matrix \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}^{(k+1)}\), and this pair of steps is counted as one iteration. These operations are shown graphically in Fig. 3. The total phase correction is estimated from \(\tilde{\phi}_{c,i}=\sum_{k}\delta\tilde{\phi}_{c,i}^{(k)}\) and similarly, \(\tilde{\phi}_{c,o}=\sum_{k}\delta\tilde{\phi}_{c,o}^{(k)}\) for the input and output pupils, respectively. ## II Methods: The strategy outlined above was implemented experimentally after validation and testing of the algorithm through simulations. The experimental system allows for epi and transmission synthetic SHG holograms to be recorded. The SHG field is extracted from the set of holographic intensity patterns and used to build the reflection (and transmission) matrices. These data are then processed according to the algorithm discussed in the previous section to synthesize an enhanced SHG spectrum that is free from optical aberrations, producing aberration-free SHG field images in forward and backscattered configurations with resolution higher than the diffraction limit. Figure 3: Conceptual diagram of operations needed to find pupil phase corrections. a) Taking the constructed reflection matrix and aligning the output spectra (aligning the columns of \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\)) constructs the matrix \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\). 
The aberrated input pupil phase is estimated from the phase of the dominant left singular vector of SVD of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\), giving \(\phi_{i}(\mathbf{u}_{i})\approx\angle\mathbf{v}_{1}\). b) Similarly for the output pupil phase correction first the conjugate transpose of \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\) is taken, then the spectra are again aligned (aligning the columns of \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}^{\dagger}\)) to form \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{i}}\). The output pupil phase is estimated from the phase of the dominant left singular vector of SVD of \(\mathbf{D}_{\mathbf{q},\mathbf{u}_{o}}\), which reads \(\phi_{c}(\mathbf{u}_{o})\approx-\angle\mathbf{v}_{1}\). ### Experimental setup The experimental system, as shown in Fig. 4, is driven by a home built Yb:fiber-amplifier system that produces ultrafast laser pulses centered at a 1050-nm wavelength with a bandwidth that supports \(<35\)-fs transform-limited pulses. Power in the beam is split into signal and reference arms with a combination of a half waveplate and a polarizing beam splitter. The signal arm is sent through two galvanometric mirrors that are relay imaged to one another and finally relay imaged to the back focal plane of a focusing condenser lens. To avoid damage in the back focal plane of the lens, an aspheric lens (New Focus 5726) serves as the condenser. In the epi direction, the same lens is used to collect the backscattered SHG light. An identical aspheric lens is used in transmission. In both the epi and transmission arms, the SHG signal is isolated with a dichroic optical filter - rejecting the pump pulse. Meanwhile, the reference beam is frequency doubled by gently focusing the fundamental beam in a BBO crystal. The SHG reference beam is collimated and dichroic filters are used to isolate the reference beam. The reference beam is sent through a mechanical delay line and dispersion balancing optics. The signal SHG field is combined with the reference beam with a non-polarizing beam splitter. An image of the SHG field is formed with a tube lens in both the epi and transmission arms. Off-axis holographic images are captured with a Hammamatsu ORCA Quest C15550 in the epi arm and a Teledyne prime 95B in the transmission arm. ### Constructing the reflection matrix For synthetic aperture SHG holography, we record a sequence of \(M\) SHG holograms, from which we extract the SHG field in the output spatial image plane for a sequence of input illumination angles, denoted by \(\mathbf{u}_{i}\). Each of the \(M\) holograms are captured for a distinct point in the input pupil plane, \(\mathbf{u}_{i}\), which corresponds to a particular incident angle on the specimen. The pair of galvanometer scan mirrors are used to control the incident angle of the fundamental beam by relay imaging the surface of the second galvanometer mirror to the sample plane. The incident angle is controlled by setting the voltage on each galvanometer. A calibration of voltage to resulting plane wave tilt in spatial frequency units is implemented by finding the voltage applied to each galvanometric-mirror required to reach each edge of the pupil. This voltage then corresponds to a spatial frequency of \(\mathbf{u}_{i}=\mathrm{NA}/\lambda_{1}\), which outlines the condenser lens pupil boundary. The offset voltage that is needed to center the illumination path on the objective pupil plane is also determined. 
The pupil boundary voltages are then used to translate the control voltages to the input spatial frequency for each measured hologram. The knowledge of the input spatial frequency decoded from the control voltage applied to the galvanometer is used to compute the required shift to align each of the output SHG spectra. Holograms of the scattered SHG field are recorded for each incident fundamental illumination angle. The holograms are captured with a planar off-axis reference beam, so that the SHG field is recorded with a conventional holographic processing algorithm [25]. The recorded field is spatially cropped to limit the image field of view to a total number of pixels \(N\). Each cropped output SHG field is transformed to the output pupil plane, with coordinates \(\mathbf{u}_{o}\). These measured fields are flattened according to lexicographic order into a linear vector of length \(N\). Each SHG field (now represented as a vector) are stored in the columns of a transmission or reflection matrix for the transmission and epi SHG fields, respectively. The columns of the matrix are filled with the input spatial frequencies ordered in the same lexicographic order as the flattened output spatial frequency vectors. In this way, a column of the reflection matrix corresponds to a measured output field due to a certain input spatial frequency. A row corresponds to the detected scattered complex-valued SHG field of a certain output spatial frequency (pixel) according to the input spatial frequency. Remapping a row of \(\mathbf{R}\) to a 2D array in the correct ordering yields a spectrum corresponding to the reversal of a plane wave through the system. In other words, mimicking the process of sending a second harmonic plane wave from the output pupil plane to the input pupil plane. The resulting matrix is of size \(N\times M\), with columns mapping the output spatial frequency coordinates and the rows indicating the input spatial frequency for illumination. ## III Results: Specimens used for the experiments include an sparse field of \(\sim 80-100\)-nm diameter Bismuth ferrite, or BiFeO\({}_{3}\), (BFO) nanoparticles and a \(10\mu m\) thick section of sheep tendon. To eliminate the possibility of reflections coming back into the camera on the epi side, the samples are mounted on a glass slide without a cover slip and oriented so that the sample lies on the distal side of the glass relative to the condenser lens. Both samples were imaged in epi and transmission and configurations. In all cases, the SHG field reflection or transmission matrix is recorded for a set of input spatial frequencies, corresponding to a sweep of incident angle of the fundamental beam. These data are then processed to estimate and correct for the input and output pupil phases after which aberration-free synthetic SHG spatial frequency spectrum and images are obtained. The scattered SHG field from sub-wavelength particles is of similar power in the forward and backward directions. In contrast, for sheep tendon the scattered SHG power is reduced by at least an order of magnitude in the backward direction as compared to forward scattered SHG. As SHG scattering is already a weak process, the scattered SHG fields are relatively weak, and particularly weak in the backscattered direction from the sheep tendon samples. Thus, measuring the backward scattered SHG field is a challenge. Fortunately, SHG holography allows for the measurement of a weak field by leveraging the heterodyne enhancement from a strong reference beam [30]. 
This enhancement enabled us to record SHG widefield holograms in the epi direction. Coherent addition of the fields also aids in increasing the SHG scattered field strength. The synthetic summation of SHG scattered field spectra (over the illumination angles) enables the SHG signal to grow linearly with the number, \(M\), of coherent fields added together. While these strategies enable measurement of backward emitted SHG from the sample in a widefield imaging configuration, the signal is still quite low. We can take another step to further boost this signal by exploiting the phase information in hand. Normally, to increase a signal measured on a camera one could simply average many measurements or increase the exposure time on the camera. Unfortunately, in the case of holography, this approach fails rapidly because the signal of interest is retrieved by analyzing the fringe pattern produced on the camera by the interference of the signal and reference beams. This fringe pattern is extraordinarily sensitive to air currents, vibrations, and other perturbations to the accumulated phase in the non-common path regions of the reference and signal arms. The fringe visibility, and thus the signal, degrades when averaging several holograms or increasing the exposure time due to the fluctuation in the relative phase over the integration timescale. We have developed a simple strategy to mitigate these relative phase fluctuations, enabling a significant boost in the signal-to-noise ratio of the SHG holographic field measurement. This strategy again leverages coherent field summation. To implement the coherent sum boost, a sequence of holograms with short camera exposure times is taken for each incident angle. The SHG field is extracted using standard off-axis holographic processing [25]. These fields are cropped, flattened, and stacked into the columns of a matrix, \(\mathbf{A}\). Since these measurements are all taken at the same input angle, their spectra are already aligned; each SHG spectrum is nominally identical, except for the change in overall phase that arises as a result of the shot-to-shot changes in the relative signal-reference beam phase. Taking inspiration from the algorithm we developed for the aberration correction, we can again find the phase offsets between them using the SVD. We define a correction vector, \(\mathbf{c}=\mathbf{v}_{1}/|\mathbf{v}_{1}|\), from the dominant singular vector, \(\mathbf{v}_{1}\), of \(\mathbf{A}\). Further improvement in the SNR can be obtained by filtering out noise in \(\mathbf{A}\) with a truncated SVD. The coherent sum of the SHG spectra for each input angle is then given simply by \(\mathbf{A}\mathbf{c}^{*}\), which boosts the SHG signal field by the number of hologram measurements. To obtain an equivalent enhancement in signal by averaging the intensity on the camera would require perfectly stable fringes. With the approach outlined here, we are able to form a widefield image while also correcting aberrations even with an extraordinarily weak signal, highlighting the power of this technique. This coherent averaging process is repeated for each fundamental beam illumination angle, i.e., each input spatial frequency \(\mathbf{u}_{i}\), and the coherently summed SHG field for each angle is stacked into a matrix to build the reflection/transmission matrix \(\mathbf{R}_{\mathbf{u}_{o},\mathbf{u}_{i}}\). A minimal sketch of this coherent summation is given below. 
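A NumPy sketch of the coherent-sum boost for a single illumination angle follows; the synthetic test data, the optional truncation rank, and the handling of the SVD convention (which differs from the notation in the text by a conjugation) are assumptions of this illustration.

```python
import numpy as np

def coherent_sum_boost(A, rank=None):
    """Re-phase and coherently sum repeated SHG fields measured at one input angle.

    Columns of A are nominally identical fields that differ by a random overall
    (signal-reference) phase. The per-column phase is estimated from the dominant
    singular component of A; an optional truncated SVD denoises A first.
    Note: with NumPy's convention A = U @ diag(S) @ Vh, the row Vh[0] carries the
    per-column phases.
    """
    U, S, Vh = np.linalg.svd(A, full_matrices=False)
    c = Vh[0] / np.abs(Vh[0])                       # unit-modulus correction vector
    if rank is not None:                            # optional truncated-SVD denoising
        A = U[:, :rank] @ (S[:rank, None] * Vh[:rank])
    return A @ c.conj()                             # columns re-phased, then summed

# usage with synthetic data: one field measured 8 times with random piston phases
field = np.random.randn(256) + 1j * np.random.randn(256)
drift = np.exp(1j * 2 * np.pi * np.random.rand(8))
A = np.outer(field, drift)
boosted = coherent_sum_boost(A)     # amplitude ~8x the single-shot field
```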
Before and after images illustrating the correction of aberrations are shown in Fig. 5 for SHG-active BFO nanoparticles and Fig. 6 for thin sheep tendon slices. The input and output pupil phase estimates are also shown. Due to experimental phase drifts in the system (air currents, mechanical vibrations), each measurement is dephased from the others by a random offset phase. With no correction of the relative random phase fluctuation, the resulting synthetic aperture reconstruction has very low SNR (shown in Fig. 5 and Fig. 6). This experimental phase drift is corrected simultaneously with the input pupil phase correction. The resulting input pupil phase map is therefore the superposition of the optical aberrations and the phase drift. The BFO nanoparticles were deposited sparsely on the slide, but still in a high enough density to see constellations of BFO nanoparticles in the images; see Fig. 5. Fig. 5 a) shows the uncorrected synthetic aperture SHG image intensity when the input and output pupil phases are not corrected for the transmission image of BFO nanoparticles, and Fig. 5 d) shows the corresponding image obtained in the epi direction. The aberration-free images obtained by estimating and correcting for the input and output pupil phases are shown in Fig. 5 b) and Fig. 5 e) for the transmitted and backscattered SHG field, respectively. The uncorrected and aberration-corrected synthetic aperture reconstructions of thin sheep tendon samples in the epi and transmission directions are shown in Fig. 6. The results show a dramatic improvement in image quality after processing with the SVD-based aberration correction algorithm. While the estimation of SNR from an image is a difficult problem, the distribution of singular values has been shown to provide a robust strategy for SNR estimation [42]. The SNR values of the images are estimated based on this strategy, with results shown in the caption of Fig. 6. The complex aberration-free synthetic aperture SHG fields recovered from SHG synthetic aperture holographic imaging of a thin sheep tendon in transmission and reflection geometries of the same region are shown in Fig. 6. We display the estimated input and output pupil phases inset within the corrected intensity images Fig. 6 b) and Fig. 6 f). Note that these measurements were taken in independent data runs, so there is no correlation between the shot-to-shot random phase fluctuations of the input pupil phases. These results show extremely robust performance of synthetic aperture SHG holographic imaging. ## IV Discussion: We have adapted methods that were first developed to improve imaging distorted by optical aberrations and linear coherent scattering [43; 40; 41; 36] for application to nonlinear holographic imaging [29; 27; 28; 30; 31]. Nonlinear scattering is dominated by forward scattering due to phase-matching considerations [31]. As a result, the backward scattered field is quite weak, presenting a significant challenge for detection of the SHG field imaged in an epi configuration. Our new imaging strategy makes use of two coherent summations of the complex-valued SHG field that is recovered from off-axis SHG holography in the forward-scattered (transmission) and backward-scattered (epi) geometries. These coherent combining methods rely critically on the ability to estimate phase differences Figure 5: Experimental results for SHG synthetic aperture with aberration correction in the epi and transmission configurations for a field of BFO nanoparticles. a) uncorrected synthetic aperture SHG image intensity in transmission. b) corrected synthetic aperture SHG image intensity in transmission. c) estimated input and output pupil phase. 
Because each input angle is a separate measurement, there is a uniformly distributed random phase on top of the optical pupil phase of the illumination condenser optic. d) uncorrected synthetic aperture SHG image intensity in reflection. e) corrected synthetic aperture SHG image intensity in reflection. f) estimated output and input pupil phase in reflection. The estimated SNR from uncorrected to corrected for transmission is 5dB to 27dB respectively. Similarly for reflection the estimated SNR values go from 4dB to 37dB. between a set of measurements. The coherent summation can overcome the inherently weak SHG field strength to produce aberration-free complex-valued SHG field images with increased spatial frequency support, and thus improved spatial resolution. We have successfully been able to form an aberration corrected synthetic aperture image for the back-scattered as well as forward-scattered SHG from BFO nanoparticles and sheep tendon. The detected signal is boosted using coherent amplification of the field that occurs from heterodyne mixing between the signal field and a reference field in a holographic measurement [30]. The epi-SHG holographic images shown here provide the first complex-valued nonlinear backscattered optical field measurements. To ensure that there was no contamination from forward-scattered SHG radiation that is directed in a backward direction, we eliminated all material from the distal end of the sample to prevent Fresnel reflection of forward-scattered SHG fields into the epi SHG holographic imaging system. In addition, the SHG signal and reference fields are broad-bandwidth, with a cross-coherence length of \(\sim 8.84\mu m\). This epi SHG image carries the advantage of gating out any stray reflections and will ultimately enable three-dimensional imaging because the low-coherence interferometry provides optical sectioning of the backscattered SHG field. Even with the coherent heterodyne amplification provided by holographic imaging with a strong reference field, the SNR of the field extracted from epi-SHG holography was relatively low. One strategy to boost the SNR is to simply increase the integration time of the camera, if there is still some dynamic range left of the camera sensor. Unfortunately, this strategy is infeasible in our configuration because the signal and reference beams are not common path. The lack of common path propagation leads to random relative phase fluctuations over the camera integration time. These random fluctuations degrade the fringe visibility and thus the SNR of the extracted SHG field. To combat this SNR degradation, we implemented a new coherent summation strategy for the SHG field for a set of nominally identical SHG holograms. This algorithm is based on estimating the random relative phase variations of the set of the SHG field extracted from multiple holographic measurements. Implementation of this protocol provides a significant boost in the SHG field SNR. We note that a similar strategy has been adopted for linear holographic imaging through turbulent media [44]. Despite this boost in signal SNR, the SHG images are still quite degraded due to a combination of aberration phases in Figure 6: Experimental results for SHG synthetic aperture with aberration correction in the epi and transmission configurations for a \(10\mu m\) thick section of sheep tendon. Images a-d correspond to the epi configuration and e-h in transmission. Images a) and e) show the reconstruction before any corrections are applied. 
Images b) and f) show the intensity after correction with the input and output pupil phase corrections inset. Panels c) and g) show the phase of the synthetic aperture reconstructions after the corrections are applied. The reconstructed spectrum in images d) and h) have white dashed outlines showing the original spatial frequency support of the system shifted to different positions stitching together an expanded spatial frequency support containing more information. The dashed circles only show a few example positions, in reality there were 2601 measurements taken. For the synthetic aperture reconstruction in the transmission configuration (images e and f) the estimated SNR goes from 12.3dB to 35.8dB after correcting for experimental phase drift and aberrations. Similarly for the epi configuration (a and b) the SNR goes from 13.5dB to 34.8dB. SNR estimates were made by considering the singular values up to an optimal truncation point as signal and the rest noise. The optimal truncation point is determined using the convention established by Gavish and Donahoe [42]. produced by the input and output optics, as well as due to specimen-induced aberrations from propagation of the fundamental and second harmonic fields in the sample. The imaging distortion from aberrations is exacerbated by the fact that we use an aspheric lens for imaging. While such aspheres are generally avoided due to the presence of strong optical aberrations, we used these optics since there are no optics in the pupil plane of the lens. As this plane is inside of conventional optical objectives, these objectives are damaged when employing such a plane-wave illumination. Our synthetic aperture strategy, in which we accurately extract the input and output pupil phase, allows for the use of low quality, and less expensive, optics for imaging. To accurately estimate the input and output pupil phases from a set of data, we need to introduce some redundancy in the measurements. Such redundancy is implemented by capturing data over a range of input fundamental incident angles so that the set of SHG field measurements have partially overlapping spatial frequency support. This set of data contains sufficient redundancy (i.e., spatial frequency redundancy) to estimate the input and output pupil phases. These phases can be extracted with a cross-correlation algorithm that has been used for imaging phase correction in linear scattering based microscopy [39], however, we found that this cross-correlation algorithm performed poorly in the limit of low SNR. A simulation of the robustness of cross-correlation phase estimation as a function of data SNR is discussed in Appendix D. This same analysis shows that our new algorithm for input and output pupil phase and correction based on the SVD of suitably shifted reflection (or transmission) matrices performs extremely well in the presence of low SNR data. Furthermore, we show in Appendix D that the SVD-based algorithm discovers the optimal phase correction to produce aberration-free SHG field imaging even in the presence of high noise levels. In addition, the SVD algorithm is more computationally efficient than computing the cross correlations for phase estimation. On average it requires about a factor of two less iterations and in some cases finds the best phase corrections in a single iteration according to SNR and sharpness metrics. The application of our new aberration-free synthetic aperture imaging strategy to experimental measurements shows excellent performance. 
The combination of the coherent signal enhancement and large spatial frequency support allows for the un-distorted estimation of the spatial frequency spectrum of the second-order nonlinear optical susceptibility that gives rise to the coherent nonlinear SHG scattering. The corrections shown for sub-resolution nanoparticles exhibits excellent performance. High quality amplitude and phase images of thin sheep tendon slices illustrate the power of this new imaging modality. ## V Conclusions: We have presented the first epi SHG holographic images that were enabled by a combination of heterodyne-enhanced signal amplification that is able to boost the weak backscattered SHG signal field, along with a coherent summation strategy to boost the SNR of individual holographic field measurements. The fundamental illumination beam is configured as a plane wave where the incident angle is scanned. The full set of data from both transmission and epi SHG holograms are collected into a scattering matrix. These data exhibit overlap in the measured SHG spatial frequency distributions. This redundancy enabled the robust estimation and correction of the input and output pupil phase that leads to a distortion of the SHG hologram images. Once the scattering matrix is corrected, an aberration-free SHG image field spectrum is estimated with an expanded, synthetic aperture. Results are shown for synthetic aperture SHG holography in both the epi and transmission configurations. This demonstration of epi-collected widefield SHG holographic imaging opens a new path for minimally invasive imaging in scattering media with aberration-corrected SHG holography. ###### Acknowledgements. We are grateful to funding support from the Chan Zuckerberg Initiative's Phase I Deep Tissue imaging program. OP acknowledges support from NSF grant DMS-2006416. We would like to thank Tia Tedford, histo technician at the Orthopaedic and Bioengineering Research Lab at Colorado State University, for her expertise in sample preparation of the sheep tendon tissues that we imaged in this work. ## Data Availability Statement Data and processing scripts written in Matlab are available at [https://github.com/RandyBartelsCSU/SHGSyntheticApertureHologrpahy](https://github.com/RandyBartelsCSU/SHGSyntheticApertureHologrpahy). ## Appendix A Green's function for SHG excitation and detection Greens's functions used in the formulae for the SHG field forward and backward scattering configurations defined in Eq. 1 are derived. As is evident in Fig. 1 and in Fig. 4, the input illumination field Green's function is obtained from a map from the input spatial frequency plane coordinates, \(\mathbf{u}_{i}\), to the coordinates in the sample plane, \(\mathbf{r}\) and the output Green's function is a map from the sample plane coordinates to the output pupil plane spatial frequency coordinates, \(\mathbf{u}_{o}\). In both cases, the map from input coordinates to the sample plane coordinates and from the sample plane coordinates to the output plane coordinates are accomplished with a 2-f optical system. The relevant Green's functions are defined below using the notation in the classic optical textbook by Mertz. [45] Following this notation, we will use the wavenumber defined by \(\kappa_{j}=1/\lambda_{j}\), for a field at the optical wavelength \(\lambda_{j}\). ### Input Green's function The input fundamental field, with wavelength \(\lambda_{1}\), is focused within the input pupil spatial coordinates \(\mathbf{x}_{i}\). 
Suppose that this field is incident on the front focal plane of the illumination condenser lens with focal length \(f_{c}\) and is denoted by \(E_{i}(\mathbf{x}_{i})\). The fundamental field in the sample plane that is incident on the sample placed in the back focal plane is given by \[E_{1}(\mathbf{r})=-i\,\frac{\kappa_{1}}{f_{c}}\,\int P_{i}(\mathbf{x}_{i})\,E_{ i}(\mathbf{x}_{i})\,\exp\left(-i\,2\,\pi\,\frac{\kappa_{1}}{f_{c}}\,\mathbf{x}_{i} \cdot\mathbf{r}\right)\,d^{2}\mathbf{x}_{i}. \tag{10}\] The fundamental field at the sample plane excited a second-order dipole oscillation that drives scattering at the second harmonic frequency of \(\omega_{2}=2\,\omega_{1}\), which appears at a wavelength of \(\lambda_{2}=\lambda_{1}/2\). Synthetic aperture holographic imaging uses an illumination with a point focus in the input pupil \(P_{l}(\mathbf{x}_{i})\) at a spatial coordinate \(\mathbf{x}_{s}\) with a field that is approximated as a 2-D Dirac delta function, \(E_{i}(\mathbf{x}_{i})=\delta^{(2)}(\mathbf{x}_{i}-\mathbf{x}_{s})\). With this input field, we have a fundamental plane wave incident on the sample of \[E_{1}(\mathbf{r})=P_{l}(\mathbf{x}_{i})\,\exp\left(-i\,2\,\pi\,\frac{\kappa_{ 1}}{f_{c}}\,\mathbf{x}_{s}\cdot\mathbf{r}\right). \tag{11}\] The scattered SHG field is driven by the square of the incident fundamental field, \(E_{1}^{2}(\mathbf{r})\), from which we define the input Green's function \[G(\mathbf{r},\mathbf{u}_{i})=P_{l}(\mathbf{u}_{i})\,e^{-i\,2\,\pi\,\mathbf{u }_{i}\cdot\mathbf{r}}, \tag{12}\] where we have defined the effective input pupil spatial frequency \(\mathbf{u}_{i}=2\,\mathbf{x}_{s}/(\lambda_{1}f_{c})=\mathbf{x}_{s}/(\lambda_{ 2}\,f_{c})\) and we have assumed that the amplitude support of the input pupil is binary. Here we have suppressed scaling factors in favor of compact notation. ### Output Green's function The output SHG field, with wavelength \(\lambda_{2}\), is mapped from the sample plane to the output pupil plane with coordinates \(\mathbf{x}_{o}\) using an objective lens with focal length \(f_{o}\). The form of the Green's function for this mapping depends on whether we collect forward or backward scattered light. This output field is given by \[E_{2}(\mathbf{x}_{o})=P_{o}(\mathbf{x}_{o})\,\int\,\chi^{(2)}(\mathbf{r})\,G (\mathbf{r},\mathbf{u}_{i})\,\exp\left(\pm i\,2\,\pi\,\frac{\kappa_{2}}{f_{o}} \,\mathbf{r}\cdot\mathbf{x}_{o}\right)\,d^{2}\mathbf{r}, \tag{13}\] where again we have suppressed constants of proportionality for brevity. Here, the \(+\) denotes a reflected SHG field and \(-\) indicates a transmitted SHG field. Identifying the output pupil spatial frequency at the second harmonic optical frequency as \(\mathbf{u}_{o}=\mathbf{x}_{o}/(\lambda_{2}f_{o})\), then we define the output Green's function for the SHG field as \[H(\mathbf{u}_{o},\mathbf{r})=P_{o}(\mathbf{u}_{o})\,e^{\pm i\,2\,\pi\,\mathbf{ u}_{o}\cdot\mathbf{r}}. \tag{14}\] ## Appendix B Scattered SHG field operators To establish the scattering field operators, we apply Eq. 1 to the Green's function derived in the previous section. Using the explicit form of the Green's functions given in Appendix A, we compute the scattering matrix in transmission and reflection in continuous operator form. ### Scattering operator in transmission The set of scattered fields that are mapped to the output pupil, \(\mathbf{u}_{o}\), as a function of the input spatial frequency define the SHG transmission operator \(T(\mathbf{u}_{o},\mathbf{u}_{i})\). Inserting Eq. 12 and Eq. 14 into Eq. 
1 leads to the integral definition of the SHG scattering operator in transmission \[T(\mathbf{u}_{o},\mathbf{u}_{i})\equiv\int P_{l}(\mathbf{u}_{i})\,e^{-i\,2\, \pi\,\mathbf{u}_{i}\cdot\mathbf{r}}\chi^{(2)}(\mathbf{r})\,P_{o}(\mathbf{u}_ {o})\,e^{-i\,2\,\pi\,\mathbf{u}_{o}\cdot\mathbf{r}}\,d^{2}\mathbf{r}. \tag{15}\] Defining the scattering vector in transmission as \(\mathbf{q}=\mathbf{u}_{o}+\mathbf{u}_{i}\) allows for a compact representation of the scattering operator as \[T(\mathbf{u}_{o},\mathbf{u}_{i})=P_{o}(\mathbf{u}_{o})\,\hat{\chi}^{(2)}( \mathbf{q})\,P_{l}(\mathbf{u}_{i}), \tag{16}\] where the spatial frequency spectral distribution of the nonlinear susceptibility is \(\hat{\chi}^{(2)}(\mathbf{q})=\mathcal{F}\{\chi^{(2)}(\mathbf{r})\}(\mathbf{q})\). Here we have defined the Fourier transform as \[\mathcal{F}\{f(\mathbf{x})\}(\mathbf{u})=\int_{-\infty}^{\infty}f(\mathbf{x}) \,e^{-i\,2\,\pi\,\mathbf{u}\cdot\mathbf{x}}\,d^{2}\mathbf{x}.\] The corresponding inverse transform is given by \[\mathcal{F}^{-1}\{F(\mathbf{u})\}(\mathbf{x})=\int_{-\infty}^{\infty}F(\mathbf{ u})\,e^{i\,2\,\pi\,\mathbf{u}\cdot\mathbf{x}}\,d^{2}\mathbf{u}.\] ### Scattering operator in reflection Following a similar argument to that used in Appendix B.1, we define the reflection operator for the backscattered SHG field as \[R(\mathbf{u}_{o},\mathbf{u}_{i})\equiv\int P_{l}(\mathbf{u}_{i})\,e^{-i\,2\, \pi\,\mathbf{u}_{i}\cdot\mathbf{r}}\,\chi^{(2)}(\mathbf{r})\,P_{o}(\mathbf{u}_ {o})\,e^{+i\,2\,\pi\,\mathbf{u}_{o}\cdot\mathbf{r}}\,d^{2}\mathbf{r}. \tag{17}\] In the backscattering case as \(\mathbf{q}=-\mathbf{u}_{o}+\mathbf{u}_{i}\) allows for a compact representation of the scattering operator as \[R(\mathbf{u}_{o},\mathbf{u}_{i})=P_{o}(\mathbf{u}_{o})\,\hat{\chi}^{(2)}( \mathbf{q})\,P_{l}(\mathbf{u}_{i}), \tag{18}\] but with the backscattered form of the scattering vector. ## Appendix C Optimally of pupil phase estimation through the singular value decomposition The estimation and removal of the input and output pupil phases to produce and aberration-free synthetic aperture SHG spectrum can be viewed as a constrained optimization problem to produce an undistorted image. By using the method of Lagrange multipliers to find the optimal correction to the reflection and transmission matrices, we show that the dominant eigenvector of the shifted scattering matrix operators, \(\mathbf{D_{q,u_{i}}}\) and \(\mathbf{D_{q,u_{o}}}\), corresponds to the optimal correction. As shown above, since the structure of the matrices \(\mathbf{D_{q,u_{i}}}\) and \(\mathbf{D_{q,u_{o}}}\) approximately decouples the input and output pupils, the phase shifting problem can be written as a simple matrix operation: \(\mathbf{D_{q,u_{i}}}\vec{s}(\mathbf{u}_{i})=E^{\mathrm{s}}_{\mathrm{SHG}}( \mathbf{q})\) where \(E^{\mathrm{s}}_{\mathrm{SHG}}(\mathbf{q})\) is the reconstructed synthetic aperture spectrum and \(\vec{s}(\mathbf{u}_{i})\) is a unit vector that shifts the phase of each column: \(\vec{s}(\mathbf{u}_{i})=e^{i\vec{\phi}_{c}(\mathbf{u}_{i})}\), with \(\phi_{c}\) being the phase correction. We would like to find \(\vec{s}(\mathbf{u}_{i})\) such that it maximizes the total intensity of \(E^{\mathrm{s}}_{\mathrm{SHG}}(\mathbf{q})\) with \(\vec{s}(\mathbf{u}_{i})\) being a unit vector (\(\vec{s}^{\dagger}\vec{s}=1\)). When the total intensity is maximum this corresponds to when all the columns (fields) are in phase. 
This occurs when \(\vec{s}(\mathbf{u}_{i})=P_{1}(\mathbf{u}_{i})^{*}\), implying that \(\phi_{c}=-\phi_{i}(\mathbf{u}_{i})\), thereby correcting the aberrations imparted by the input pupil. The total intensity as a function of the vector \(\vec{s}(\mathbf{u}_{i})\) is: \[f(\vec{s})=[E^{\mathrm{s}}_{\mathrm{SHG}}(\mathbf{q})]^{\dagger}[E^{\mathrm{s }}_{\mathrm{SHG}}(\mathbf{q})]=\vec{s}^{\dagger}D^{\dagger}D\vec{s} \tag{10}\] The optimization problem can then be written as: \[\mathrm{maximize}f(\vec{s})\ s.t.\ \vec{s}^{\dagger}\vec{s}=1 \tag{11}\] Using Lagrange multipliers, the maximum or minimum of a function \(f\) is the solution to \(\nabla f=\lambda\nabla g\) where \(g\) is a constraint function, in this case \(g(\vec{s})=0=\vec{s}^{\dagger}\vec{s}-1\). Written in a different way the Lagrangian is \(\mathcal{L}=\vec{s}^{\dagger}D^{\dagger}D\vec{s}-\lambda(\vec{s}^{\dagger} \vec{s}-1)\), where \(\nabla\mathcal{L}=0\). Since the matrices and vectors here are complex valued some care is needed to properly calculate these derivatives using Wirtinger calculus. Conveniently, the expressions for the derivatives we need are in appendix A of this book [46]. Taking Wirtinger derivatives: \[\frac{\partial\mathcal{L}}{\partial\vec{s}}=0=(D^{\dagger}D)^{T}\vec{s}^{*}- \lambda\vec{s}^{*} \tag{12}\] Simplifying we get: \[(D^{\dagger}D)^{*}\vec{s}^{*}=\lambda\vec{s}^{*} \tag{13}\] Finally, taking the complex conjugate of both sides: \[(D^{\dagger}D)\vec{s}=\lambda^{*}\vec{s} \tag{14}\] which is an eigenvalue equation with \(\vec{s}\) being and eigenvector of \(D^{\dagger}D\) with eigenvalue \(\lambda^{*}\). Since \(D^{\dagger}D\) is a hermitian matrix it has real eigenvalues so \(\lambda^{*}=\lambda\). The eigenvectors of \(D^{\dagger}D\) are the left singular vectors of \(D\) with \(\lambda\) being the corresponding singular values. Therefore, the unit vector \(\vec{s}\) which maximizes the total intensity of the synthetic aperture image is the left singular vector of \(D\) corresponding to the largest singular value. When the total intensity is maximized, this corresponds to when each field is added coherently in phase with one another. ## Appendix D Comparison of the robustness of phase estimation algorithms The critical aspect of aberration-free synthetic aperture SHG holographic imaging is to robustly estimate the correct input and output pupil phase and use those to correct the data and estimate an undistorted SHG object spatial frequency spectrum. This becomes difficult when signal levels are low which certainly is the case for epi directed SHG from biological samples. Not only is the signal low, but exposure times must be kept short due to the instabilities of the interferometer. Finding and correcting for aberrations amounts to finding phase differences between scattered fields originating from similar input angles. These measurements contain phase information so the phase difference between two fields can be found by taking their cross-correlation. This works well when signal levels are high, but as SNR decreases the noise disrupts this measurement. The SVD approach has a distinct advantage as it takes the entire dataset into consideration at once instead of finding phase differences between two neighboring fields one at a time. 
To test the robustness of our new SVD-based algorithm, we have run simulations with varying noise levels and compared the fidelity of the pupil phase estimated with our new SVD algorithm against that of the cross-correlation algorithm used previously to great effect for linear scattering [39]. In the simulation, a reflection matrix is generated and a pupil phase distortion is then applied by assigning random weights to the first 30 elements of the Zernike basis. Varying levels of noise were then applied to each field so that the noise is uncorrelated from one field to another. The noise was added to the fields in the spatial domain with a uniformly distributed random amplitude and a uniformly distributed random phase from \(-\pi\) to \(\pi\). The noise level was changed by varying the amplitude. To quantify the error of the phase map reconstruction, the recovered pupil map is first transformed into the spatial domain by an inverse Fourier transform. Then each reconstruction is compared to the actual pupil map using a normalized mean squared error calculation: \(\text{NMSE}=\left\|x_{ref}-x\right\|/\left\|x_{ref}-\text{mean}(x_{ref})\right\|\). Figure 7: Performance of the SVD algorithm compared to the cross correlation algorithm for estimating the pupil phase under varying levels of SNR. The actual pupil phase is shown in the top left with a black border. At selected SNR levels the recovered pupil phase maps for each technique are shown. The result using the cross correlation method is shown with dashed red borders, and dashed blue borders are used for the SVD method.
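A minimal sketch of this comparison metric, using placeholder pupil maps and the inverse Fourier transform step described above:

```python
import numpy as np

def nmse(x, x_ref):
    """Normalized mean squared error of a reconstruction against the reference."""
    return np.linalg.norm(x_ref - x) / np.linalg.norm(x_ref - np.mean(x_ref))

# usage: compare recovered and true pupils after transforming to the spatial domain
pupil_true = np.exp(1j * np.random.randn(64, 64))            # placeholder phase maps
pupil_rec = pupil_true * np.exp(1j * 0.05 * np.random.randn(64, 64))
err = nmse(np.fft.ifft2(pupil_rec), np.fft.ifft2(pupil_true))
```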
2303.01486
Understanding plasticity in neural networks
Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems. Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems, but the mechanisms driving this phenomenon are still poorly understood. This paper conducts a systematic empirical analysis into plasticity loss, with the goal of understanding the phenomenon mechanistically in order to guide the future development of targeted solutions. We find that loss of plasticity is deeply connected to changes in the curvature of the loss landscape, but that it often occurs in the absence of saturated units. Based on this insight, we identify a number of parameterization and optimization design choices which enable networks to better preserve plasticity over the course of training. We validate the utility of these findings on larger-scale RL benchmarks in the Arcade Learning Environment.
Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, Will Dabney
2023-03-02T18:47:51Z
http://arxiv.org/abs/2303.01486v4
# Understanding plasticity in neural networks ###### Abstract Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems. Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems, but the mechanisms driving this phenomenon are still poorly understood. This paper conducts a systematic empirical analysis into plasticity loss, with the goal of understanding the phenomenon mechanistically in order to guide the future development of targeted solutions. We find that loss of plasticity is deeply connected to changes in the curvature of the loss landscape, but that it often occurs in the absence of saturated units or divergent gradient norms. Based on this insight, we identify a number of parameterization and optimization design choices which enable networks to better preserve plasticity over the course of training. We validate the utility of these findings in larger-scale learning problems from the Arcade Learning Environment. Machine Learning, Reinforcement Learning, Machine Learning, Machine Learning, Machine Learning, Machine Learning, Machine Learning, Machine Learning ## 1 Introduction It is a widely observed phenomenon that after training on a non-stationary objective, neural networks exhibit a reduced ability to solve new tasks (Lyle et al., 2021; Nikishin et al., 2022; Dohare et al., 2021). This loss of plasticity occurs most robustly when the relationship between inputs and prediction targets changes over time, and the network must learn to 'overwrite' its prior predictions (Lyle et al., 2021). While such scenarios are relatively rare in supervised learning, they are baked into the way that deep reinforcement learning (RL) agents are trained. Understanding how plasticity is lost, and whether this loss can be mitigated, is crucial if we wish to develop deep RL agents which can rise to the challenge of complex and constantly-changing environments. Existing methods to promote trainability act on a wide variety of potential mechanisms by which plasticity might be lost, including resetting of layers (Nikishin et al., 2022) and activation units (Dohare et al., 2021), and regularization of the features (Kumar et al., 2020; Lyle et al., 2021). While all of these works observe performance improvements, it is unlikely that they are all obtaining these improvements by the same mechanism. As a result, it is difficult to know how to improve on these interventions to further preserve plasticity. This paper seeks to identify the mechanisms by which plasticity loss occurs. We begin with an analysis of two interpretable case studies, illustrating the mechanisms by which both adaptive optimizers and naive gradient descent can drive the loss of plasticity. Prior works have conjectured, implicitly or explicitly, that a variety of network properties might cause plasticity loss: we present a falsification framework inspired by the study of causally robust predictors of generalization (Dziugaite et al., 2020), and leverage this framework to show that loss of plasticity cannot be uniquely attributed to any of these properties. While difficult to characterize explicitly, we provide evidence that the curvature of the loss landscape induced by new tasks on trained parameters is a crucial factor determining a network's plasticity, particularly in value-based reinforcement learning algorithms. 
We conclude by completing a broad empirical analysis of methods which aim to improve the ability of a network to navigate the loss landscape throughout training, ranging from architectural choices to regularization schemes. We find that architectural choices which have been conjectured to smooth out the loss landscape, such as categorical output representations and normalization layers, provide the greatest improvements to plasticity, while methods which perturb the parameters or provide other forms of regularization tend to see less benefit. To test the generality of these findings, we apply the best-performing intervention, layer normalization, to a standard DQN architecture and obtain significant improvements in performance on the Arcade Learning Environment benchmark. We conclude that controlling the loss landscape sharpness and optimizer stability present highly promising avenues to improve the robustness and usability of deep RL methods. ## 2 Background It has long been observed that training a network first on one task and then a second will result in reduced performance on the first task (French, 1999). This phenomenon, known as catastrophic forgetting, has been widely studied by many works (2017). This paper concerns itself with a different phenomenon: in certain situations, training a neural network on a series of distinct tasks can result in worse performance on later tasks than what would be obtained by training a randomly initialized network of the same architecture. ### Preliminaries **Temporal difference learning.** Plasticity loss naturally arises under non-stationarity; we will focus our analysis on temporal difference (TD) learning with neural networks, an setting known to induce significant non-stationarity. We assume the standard reinforcement learning problem of an agent interacting with an environment \(\mathcal{M}\), with observation space \(\mathcal{S}\), action space \(\mathcal{A}\), reward \(R\) and discount factor \(\gamma\), with the objective of maximizing cumulative reward. Networks trained via temporal difference learning receive as input sampled _transitions_ from an agent's interaction with the environment, of the form \(\tau_{t}=(s_{t-1},a_{t},r_{t},s_{t})\), where \(s_{t-1},s_{t}\in\mathcal{S}\), \(a_{t}\in\mathcal{A}\), and \(r_{t}=R(s_{t})\) some reward. We let \(\theta^{\prime}\) denote the _target parameters_; in practice, \(\theta^{\prime}\) is usually an outdated copy of \(\theta\) from a previous timestep, but other choices include setting it to be equal to the current parameters, or using a moving average of past values. The network \(f:\Theta\times\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is trained to minimize the temporal difference error \[\ell(\theta,\tau_{t})=\left[f(\theta,s_{t-1},a_{t})-\Box(r_{t}+\gamma f(\theta ^{\prime},s_{t},a^{\prime}))\right]^{2} \tag{1}\] where \(\Box\) denotes a stop-gradient, \(\gamma<1\) is the discount factor, and \(a^{\prime}\) is chosen based on the variant of TD learning used. Crucially, the regression target \(r_{t}+\gamma f(\theta^{\prime},s_{t},a^{\prime})\) depends on the parameters \(\theta^{\prime}\) and changes as learning progresses. This nonstationarity occurs even if the policy and input distribution are fixed, meaning that we can study the role of nonstationarity independent of the agent's exploration strategy. We will use the shorthand \(\ell(\theta)\) for the expectation of this loss over some input distribution. 
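As an illustration of the nonstationary regression target in Eq. (1), the following sketch computes the TD error and its gradient for a toy linear Q-function; the linear parameterization, the greedy (Q-learning) choice of \(a^{\prime}\), and all sizes are assumptions of the example rather than the architecture used in the paper.

```python
import numpy as np

def q_value(theta, s, a):
    return theta[a] @ s                              # toy linear Q-function

def td_loss_and_grad(theta, theta_target, batch, gamma=0.99):
    """TD error of Eq. (1); the bootstrapped target uses frozen parameters
    theta_target and is not differentiated (the stop-gradient)."""
    grad = np.zeros_like(theta)
    loss = 0.0
    n_actions = theta.shape[0]
    for (s_prev, a, r, s_next) in batch:
        target = r + gamma * max(q_value(theta_target, s_next, b)
                                 for b in range(n_actions))   # greedy a' (Q-learning)
        delta = q_value(theta, s_prev, a) - target            # TD error
        loss += delta ** 2
        grad[a] += 2.0 * delta * s_prev              # gradient through the online value only
    n = len(batch)
    return loss / n, grad / n

# usage with random transitions: 4 actions, 8-dimensional observations
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 8))
theta_target = theta.copy()                          # outdated copy of the parameters
batch = [(rng.normal(size=8), int(rng.integers(4)), float(rng.normal()),
          rng.normal(size=8)) for _ in range(32)]
loss, g = td_loss_and_grad(theta, theta_target, batch)
theta -= 0.01 * g                                    # one gradient step on the TD loss
```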
**Loss landscape analysis.** We will be particularly interested in the study of the structure of the loss landscape traversed by an optimization algorithm. We will leverage two principal quantities in this analysis: the Hessian of the network with respect to some loss function, and the gradient covariance. The Hessian of a network \(f\) at parameters \(\theta\) with respect to some loss \(\ell(\theta)\) is a matrix defined as \[H_{\ell}(\theta)=\nabla_{\theta}^{2}\ell(\theta)\in\mathbb{R}^{d\times d} \tag{2}\] where \(d=|\theta|\) is the number of parameters. Of particular relevance to optimization is the eigenspectrum of the Hessian \(\Lambda(H_{\ell}(\theta))=(\lambda_{1}\geq\cdots\geq\lambda_{d})\). The maximal eigenvalue, \(\lambda_{1}\), can be interpreted as measuring the sharpness of the loss landscape (Dinh et al., 2017), and the condition number \(\lambda_{1}/\lambda_{d}\) has significant implications for convergence of gradient descent optimization in deep neural networks (Gilmer et al., 2022). We will also take interest in the covariance structure of the gradients of different data points in the input distribution, a property relevant to both optimization and generalization (Fort et al., 2019; Lyle et al., 2022). We will estimate this covariance structure by sampling \(k\) training points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\), and computing the matrix \(C_{k}\in\mathbb{R}^{k\times k}\) defined entrywise as \[C_{k}[i,j]=\frac{\langle\nabla_{\theta}\ell(\theta,\mathbf{x}_{i}),\nabla_{ \theta}\ell(\theta,\mathbf{x}_{j})\rangle}{\|\nabla_{\theta}\ell(\theta, \mathbf{x}_{i})\|\|\nabla_{\theta}\ell(\theta,\mathbf{x}_{j})\|}. \tag{3}\] If the off-diagonal entries of \(C_{k}\) contain many negative values, this indicates interference between inputs, wherein the network cannot reduce its loss on one region without increasing its loss on another. If the matrix \(C_{k}\) exhibits low rank (which, given a suitable ordering \(\sigma\) of the data points \(\mathbf{x}_{\sigma(1)},\ldots,bx_{\sigma(k)}\) will yield a block structure) then the gradients are degenerate and largely colinear, which can indicate either generalization when their dot product is positive, or interference when their dot product is negative. ### Defining plasticity The study of plasticity has concerned neuroscience for several decades (Mermillod et al., 2013; Abbott and Nelson, 2000), but has only recently emerged as a topic of interest in deep learning (Berariu et al., 2021; Ash and Adams, 2020; **7**). Classical notions of complexity from the computational learning theory literature (Vapnik, 1968; Bartlett and Mendelson, 2002) evaluate whether a hypothesis class contains functions that capture arbitrary patterns, but are agnostic to the ability of a particular search algorithm, such as gradient descent, to find them. A billion-parameter neural network architecture might have the _capacity_ to represent a rich class of functions, but if all of its activation units are saturated then it cannot be trained by gradient descent to realize this capacity. Plasticity, broadly construed, refers to a network's ability to learn new things. Learning can be quantified as either the reduction in the model's training loss after optimization, or as the performance of the model on held-out test points. 
Studies of plasticity in both supervised and reinforcement learning have observed reduced generalization performance as a result of overfitting to limited data early in training (Ash and Adams, 2020; Berariu et al., 2021; Igl et al., 2021). However, reinforcement learning tasks often lack a natural notion of a test set as the data gathered by the agent is not generally independent and identically distributed, and many works have identified impaired ability to even reduce the learning objective on the training distribution (Dohare et al., 2021; Lyle et al., 2021; Nikishin et al., 2022). This work will leverage the formulation of Lyle et al. (2021), who define plasticity as the ability of a network to update its predictions in response to a wide array of possible learning signals on the input distribution it has been trained on. This formulation is applicable to learning problems which do not admit a straightforward train-test split, as is the case in many deep RL environments. Concretely, we consider an optimization algorithm \(\mathcal{O}:(\theta,\ell)\mapsto\theta^{*}\) which takes initial parameters \(\theta\in\Theta\) and some objective function \(\ell:\Theta\rightarrow\mathbb{R}\), and outputs a new set of parameters \(\theta^{*}\). The parameters \(\theta^{*}\) need not be an optimum: \(\mathcal{O}\) could, for example, run gradient descent for five steps. In order to measure the flexibility with which a network can update its predictions under this optimization algorithm, we consider a distribution over a set of loss functions \(\mathcal{L}\) each defined by some learning objective. For example, consider a distribution over regression losses \[\ell_{f,\mathbf{X}}(\theta)=\mathbb{E}_{\mathbf{x}\sim\mathbf{X}}[(f(\theta,\mathbf{x} )-g_{\omega}(\mathbf{x}))^{2}] \tag{4}\] where \(g_{\omega}\) is induced by a random initialization \(\omega\) of a neural network. In order to match the intuition that more adaptable networks should have greater plasticity, we set a baseline value \(b\) to be the loss obtained by some baseline function (e.g. if \(\ell\) is a regression loss on some set of targets, we set \(b\) to be the variance of the targets), and then define plasticity to be the difference between this baseline and the expectation of the final loss obtained by the optimization process when started from an initial parameter value \(\theta_{t}\) on a sampled loss function \(\ell\). \[\mathcal{P}(\theta_{t})=b-\mathbb{E}_{\ell\sim\mathcal{L}}[\ell(\theta_{t}^{* })]\text{ where }\theta_{t}^{*}=\mathcal{O}(\theta_{t},\ell) \tag{5}\] We then define the loss of plasticity over the course of a trajectory \((\theta_{t})_{t=0}^{N}\) as the difference \(\mathcal{P}(\theta_{t})-\mathcal{P}(\theta_{0})\). We note that this definition of plasticity loss is independent of the value of the baseline \(b\), i.e. the difficulty of the probe task for the network, allowing us to measure the relative change in performance of checkpoints taken from a training trajectory. ## 3 Methodology and motivating questions The following sections will present a series of experiments which tease apart different causal pathways by which plasticity loss occurs and evaluate the predictive power of a range of hypotheses concerning the root causes thereof. We now outline the experimental methodology and research questions underpinning this investigation.
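A minimal sketch of how the plasticity estimate in Eq. (5) can be computed in practice is shown below; `sample_probe_loss` and `optimize` are placeholders for the probe-task sampler and the optimization algorithm \(\mathcal{O}\) described in the next subsection, not names used elsewhere in this paper.

```python
import numpy as np

def estimate_plasticity(theta_t, sample_probe_loss, optimize, baseline, n_probes=10):
    # Monte-Carlo estimate of Eq. (5): run the optimizer O from theta_t on several
    # sampled probe losses and subtract the mean final loss from the baseline b.
    final_losses = []
    for _ in range(n_probes):
        loss_fn = sample_probe_loss()                    # draw ell ~ L
        theta_star = optimize(theta_t.copy(), loss_fn)   # e.g. 2,000 optimizer steps
        final_losses.append(loss_fn(theta_star))
    return baseline - float(np.mean(final_losses))

# Dummy usage with stand-in components:
plasticity = estimate_plasticity(
    theta_t=np.zeros(4),
    sample_probe_loss=lambda: (lambda th: float(np.sum(th ** 2))),
    optimize=lambda th, loss: th * 0.5,
    baseline=1.0,
)
```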
### Measuring plasticity In order to determine whether a candidate intervention preserves plasticity, we must first set out a consistent standard by which we will measure plasticity. Given a suitably generic class of target functions inducing the losses \(\ell\), equation 4 characterizes the adaptability of the network to arbitrary new learning signals from the unknown set of possible future tasks. We therefore construct a set of regression targets to sample over, corresponding to a uniform prior over possible future update directions. A different distribution over future target functions might give different numerical results; however, we believe that a uniform distribution captures a more universal notion of plasticity. In our empirical evaluations, we will set \(\mathbf{X}\) to be the set of transitions gathered by an RL agent and stored in some replay buffer, and \(f\) to be the neural network architecture. Given some offset \(a\in\mathbb{R}\), we will apply the transformation \(g(\mathbf{x})=a+\sin(10^{5}f(\mathbf{x};\omega_{0}))\), with \(\omega_{0}\) sampled from the same distribution as \(\theta_{0}\), to construct a challenging prediction objective which measures the ability of the network to perturb its predictions in random directions sampled effectively uniformly over the input space. Because the mean prediction output by a deep RL network tends to evolve away from zero over time as the policy improves and the reward propagates through the value function, we will set \(a\) to be equal to the network's mean prediction in order not to bias the objective in favour of random initializations, which have mean much closer to zero. The optimizer \(\mathcal{O}\) will be identical to that used by the network on its primary learning objective, and we found running this optimizer for a budget of two thousand steps enabled reasonably efficient iteration time while also providing enough opportunity for most random initializations to solve the task. ### Environments The experimental framework we consider, which will be revisited in each of the following sections, is as follows. We construct a simple MDP analogue of image classification, i.e. the underlying transition dynamics are defined over a set of ten states and ten actions, and the reward and transition dynamics depend on whether or not the action taken by the agent is equal to the state's latent label. We construct three variants of a block MDP whose state space can be given by the discrete set \(\{0,\dots,9\}\) and whose observation space is given by either the CIFAR-10 or MNIST dataset. **True-label:** each state \(s\) of the MDP produces an observation from that class in the underlying classification dataset. Given action \(a\), the reward is the indicator function \(\delta_{a=s}\). The MDP then randomly transitions to a new state. **Random-label:** follows the same dynamics as the previous environment, but each image is assigned a random label in \(\{0\dots 9\}\), and the observation from an MDP state \(i\) is sampled from images with (randomized) label \(i\). **Sparse-reward:** exhibits the same observation mapping as _true-label_. The reward is equal to \(\delta_{a=s=9}\). The MDP transitions to a random state if \(a\neq s\) and otherwise to \(s+1\).
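A minimal sketch of the probe-target construction from Section 3.1 is given below; `f`, `omega_0`, and the linear stand-in network are illustrative placeholders, not components used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_probe_targets(f, omega_0, X, mean_prediction):
    # g(x) = a + sin(1e5 * f(x; omega_0)): a high-frequency transformation of a
    # freshly initialized network's outputs, offset by the current network's mean
    # prediction a so the probe task does not favour random initializations.
    return mean_prediction + np.sin(1e5 * f(omega_0, X))

def probe_loss(theta, f, X, targets):
    # Regression loss of Eq. (4) against the sampled probe targets.
    return float(np.mean((f(theta, X) - targets) ** 2))

# Dummy usage with a linear stand-in for the network:
f = lambda params, X: X @ params
X = rng.normal(size=(32, 8))
targets = make_probe_targets(f, rng.normal(size=8), X, mean_prediction=0.1)
print(probe_loss(rng.normal(size=8), f, X, targets))
```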
We design these environments to satisfy two principal desiderata: first, that they present visually interesting prediction challenges with varying degrees of reward smoothness and density, and second that they allow us to isolate non-stationarity due to policy and target network updates independent of a change in the state visitation distribution. In the true-label and random-label variants, the transition dynamics do not depend on the agent's action, whereas in the sparse environment the policy influences the state visitation distribution. The different reward functions allow us to compare tasks which are aligned with the network's inductive bias (in the true-label task) and those which are not (the random-label task). ### Outline of experiments The experiments presented in Sections 4 and 5 aim to answer a fundamental question: **what happens when neural networks lose plasticity?** Section 4 constructs two experimental settings which illuminate phenomena driving two very different forms of plasticity loss. The first presents a non-stationary learning problem that induces extreme forms of instability in adaptive optimizers, leading to plasticity loss. The second identifies a bias in the dynamics of gradient descent which leads to a progressive sharpening of the loss landscape of not just the current task, but also new tasks, by contrasting a gradient descent trajectory with a random walk in parameter space. Section 5 asks **what properties _cause_ plasticity loss?** Disentangling cause from correlation is a notoriously difficult problem throughout science and economics. We put a number of quantities which have been conjectured to drive plasticity loss to the test, evaluating the consistency of quantities such as weight norm, feature rank, and the number of dead units in the network across the range of tasks outlined in Section 3.2. Crucial to this investigation is the notion of a causally robust predictor: a quantity which causally influences plasticity should exhibit consistency across a variety of experimental settings, such as different learning environments or network architectures. We follow up the largely negative results of these experiments with a qualitative analysis of learning curves on the probe tasks described in Section 3.1 that emphasizes the critical role of the loss landscape in plasticity. Section 6.2 addresses the question: **how can we mitigate plasticity loss?** It evaluates the effectiveness of a range of interventions on the network architecture and on the optimization protocol, focusing on methods known to increase the smoothness of the loss landscape, applying the same evaluation protocol as described in this section in order to measure plasticity across our classification MDP testbed. ## 4 Two simple studies on plasticity We begin with some interpretable examples of learning problems where plasticity loss occurs. The first example illustrates how the design of optimizers can interact with nonstationarity to produce instabilities that drive plasticity loss, and the second explores how the dynamics of gradient-based optimizers might affect more subtle properties of the loss landscape. ### Optimizer instability and non-stationarity The robustness of existing optimizers across a wide range of datasets and network architectures has played a key role in the widespread adoption of deep learning methods.
For example, the Adam optimizer (Kingma & Ba, 2015) with a learning rate of \(10^{-3}\) will often yield reasonable initial results on a range of network architectures from which the practitioner can iterate. However, when the assumptions on stationarity underlying the design of this optimizer no longer hold, the optimization process can experience catastrophic divergence, killing off most of the network's ReLU units. We can see an example of this in a simple non-stationary task in Figure 1. A two-hidden-layer fully-connected neural network is trained to memorize random labels of MNIST images (full details provided in Appendix A.1). After a fixed training budget, the labels are re-randomized, and the network continues training from its current parameters. This process quickly leads a default Adam optimizer to diverge, saturating most of its ReLU units and resulting in trivial performance on the task that a freshly initialized network could solve perfectly. Figure 1: Abrupt task changes can drive instability in optimizers which depend on second-order moment estimates for adaptive learning rate scaling. Setting these estimators to be more robust to small gradient norms and to update moment estimates more quickly mitigates this issue. The mechanism of this phenomenon emerges when we consider the update rule for Adam, which tracks a second-order estimate \(\hat{v}_{t}\) along with a first-order moment estimate \(\hat{m}_{t}\) of the gradient via an exponential moving average \[u_{t}=\alpha\frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}+\bar{\epsilon}}+\epsilon}. \tag{6}\] Gradients tend to have norm proportional to the training loss. When the loss changes suddenly, as is the case when the perfectly-memorized MNIST labels are re-randomized (or when the target network is updated in an RL agent), \(\hat{m}_{t}\) and \(\hat{v}_{t}\) will no longer be accurate estimates of their moment distributions. Under the default hyperparameters for deep supervised learning, \(\hat{m}_{t}\) is updated more aggressively than \(\hat{v}_{t}\), and so the updates immediately after a task change will scale as a large number divided by a much smaller number, contributing to the instability we observe in Figure 1. In this instance, the solution is simple: we simply increase \(\epsilon\) and set a more aggressive decay rate for the second-moment estimate, and the network avoids catastrophic instability. Intriguingly, a large value of \(\epsilon\) is frequently used in deep RL algorithms such as DQN (Mnih et al., 2015) relative to the default provided by optimization libraries, suggesting that the community has implicitly converged towards optimizer hyperparameters which promote stability under nonstationarity. ### Loss landscape evolution under non-stationarity Even when optimization is sufficiently stable to avoid saturated units, prior work still observes reductions in network plasticity (Lyle et al., 2021). The causes of this phenomenon are more difficult to tease apart; neural network initializations have been tuned over several decades to maximize trainability, and many properties of the network change during optimization which could be driving the loss of plasticity. A natural question we can ask in RL is whether the optimization dynamics followed by a network bias the parameters to become less trainable, or whether the loss of plasticity is a natural consequence of any perturbation away from a carefully chosen initialization distribution. 
We frame this question as a controlled experiment, in which we compare the evolution of two coupled updating procedures: one follows gradient-based optimization on a non-stationary objective (full details in Appendix A.1); the second follows a random walk, where we add a Gaussian perturbation to the parameters with norm equal to the size of the gradient-based optimizer update. Both trajectories start from the same set of randomly initialized parameters and apply updates of equal norm; the only difference is the direction each step takes. We evaluate how the structure of the local loss landscape with respect to a probe task evolves in both networks by comparing the Hessian eigenvalue distribution, and by comparing the covariance structure \(C_{k}\) of gradients on sampled inputs, with \(k=512\) equal to the batch size used for training. We compute the Hessian matrix for a regression loss towards a perturbation \(\epsilon\sim\mathcal{N}(0,1)\) of the network's current output, i.e. \(\ell(\theta)=[f_{\theta}(\mathbf{X})-\Box(f_{\theta}(\mathbf{X})+\epsilon)]^{2}\) where \(\Box\) indicates a stop-gradient, to obtain a proxy for how easily the network can update its predictions in arbitrary directions; we do not evaluate the Hessian or gradient structure of the primary learning objective as these will trivially differ between the trajectories. We observe that the spectral norm of the Hessian of both processes increases over time; however, the outliers of the spectrum grow significantly faster in the network trained with gradient descent. Additionally, the network trained with gradient descent begins to exhibit negative interference between gradients, a phenomenon not observed in the Brownian motion. In other words, the inductive bias induced by gradient descent can push the parameters towards regions of the parameter space where the local loss landscape is less friendly to optimization towards arbitrary new objectives than what would be obtained by blindly perturbing randomly initialized parameters. Figure 2: Evolution of the gradient and Hessian under gradient-based optimization compared to random perturbation of the parameters. Top: the density of the spectrum of the Hessian over different values of \(\lambda\) exhibits a larger outlier peak after gradient descent. Bottom: gradient descent induces more gradient interference between inputs and greater curvature of the loss landscape. ## 5 Explaining plasticity loss While in some instances it is straightforward to deduce the cause of plasticity loss, most learning problems induce complex learning dynamics that make it difficult to determine root causes. This section will show that a number of plausible explanations of plasticity loss, including the rank of the network's features, the number of saturated units, the norm of its parameters, and the rank of the weight matrices, do not identify robust causal relationships. We provide some evidence supporting the hypothesis that plasticity loss arises due to changes in the network's loss landscape, and conclude with a discussion of the potential trade-offs that must be faced between preserving a trainable gradient structure and accurately predicting a value function. ### Experimental setting We train a set of DQN agents on each environment-observation space combination in the classification MDP set described in Section 3.2, and evaluate the ability of each network to fit a randomly generated set of target functions as described in Section 2.2 after a fixed number of training steps.
In the experiments shown here, we run the DQN agents with a target network update period of 1,000 steps; as mentioned previously, this is the principal source of non-stationarity in the true-label and random-label tasks. Every 5000 steps, we pause training, and from a copy of the current parameters \(\theta_{t}\) we train the network on a set of new regression problems to probe its plasticity. We log the loss at the end of 2,000 steps of optimization, sampling 10 different random functions, then resume training of the RL task from the saved parameters \(\theta_{t}\). We consider two network architectures: a fully-connected network (MLP) and a convolutional network architecture (CNN). Full details of the environments are included in Appendix A.2. ### Falsification of prior hypotheses Prior work has proposed a number of plausible explanations of why neural networks may exhibit reduced ability to fit new targets over time. Increased weight norm (Nikishin et al., 2022), low rank of the features or weights (Kumar et al., 2020; Gulcehre et al., 2022), and inactive features (Lyle et al., 2021; Dohare et al., 2021) have all been discussed as plausible mechanisms by which plasticity loss may occur. However, the explanatory power of these hypotheses has not been rigorously tested. While a correlation between a particular variable and plasticity loss can be useful for diagnosis, only a causal relationship indicates that intervening on that variable will necessarily increase plasticity. This section will seek to answer whether the above candidate explanations capture causal pathways. Our analysis is based on a simple premise: that for a quantity to exhibit explanatory power over plasticity loss, it should exhibit a consistent correlation across different experimental interventions (Buhlmann, 2020). If, for example, parameter norm is positively correlated with plasticity in one observation space and negatively correlated in another, then it can be ruled out as a causal factor in plasticity loss. To construct this experiment, we train 128 DQN agents under a range of tasks, observation spaces, optimizers, and seeds. Over the course of training, we log several statistics of the parameters and activations, along with the plasticity of the parameters at each logging iteration. In Figure 3, we show scatterplots illustrating the relationship between plasticity and each statistic, where each point in the scatterplot corresponds to a single training run. We see that for each of four quantities, there exists a learning problem where the quantity positively correlates with plasticity, and one in which it exhibits a negative correlation. In many learning problems the correlation between plasticity loss and the quantity of interest is nonexistent. In all cases we note that the correlation with plasticity is already quite weak; even so, the ability to reverse the sign of this correlation is a further mark against the utility of these simple statistics as causal explanations of plasticity. For example, we see a positive correlation between weight norm and plasticity loss in environments which use CIFAR-10 observations, but a slight negative correlation in environments which sample observations from MNIST. A similar reversal happens with respect to feature rank across environments. ### Loss landscape evolution during training If the simple statistics we have considered thus far lack explanatory power, how should we characterize plasticity loss? 
One open question is whether the reduced ability to fit arbitrary new targets arises because the optimization process gets caught in local optima, or whether it arises due to overall slow or inconsistent optimization progress. To answer this question, we turn our attention towards the learning curves obtained by networks when we ask them to fit new target functions. We study these learning curves primarily because they convey precisely the ease or difficulty of navigating the loss landscape. In particular, the learning curve tells us whether optimization is getting trapped in bad minima (in which case the learning curve would hit an early plateau at a large loss value), or whether the network has greater difficulty reducing the loss enough to find a minimum in the first place (corresponding to a flatter slope). We show in Figure 4 the learning curves obtained by an optimization trajectory from parameters \(\theta_{t}\) on the probe task from different timesteps \(t\) of training on the RL task. We see that parameters from early training checkpoints quickly attain low losses, but that the slopes of these learning curves are monotonically increasing with the parameter age \(t\). Of particular note is the increasing variance of the curves: in the full-batch case, this non-monotonicity is associated with increasing loss landscape sharpness (Cohen et al., 2021). In the mini-batch optimization setting, we observed both increasing interference between minibatches as well as non-monotonicity in the loss even on the minibatch on which the gradient was computed. In short, we see that it is the increasing difficulty of navigating the loss landscape, rather than poor local minima, that appears to drive plasticity loss. ## 6 Solutions Thus far, we have demonstrated that neural networks can lose plasticity even in a task as simple as classifying MNIST digits, assuming that a degree of non-stationarity is introduced into the optimization dynamics. We now turn our attention to means of reducing or reversing this loss of plasticity. Section 6.1 will evaluate whether scaling alone can eliminate plasticity loss. Section 6.2 will evaluate the effects of a variety of interventions on plasticity. We test the applicability of these findings to larger scale tasks in Section 6.3. ### The role of scaling on plasticity Before considering sophisticated methods to address plasticity loss, we must first answer the question of whether this is simply a disease of small networks. In the context of the impressive successes of large models and the resultant 'scaling laws' phenomenon (Kaplan et al., 2020), it is entirely plausible that plasticity loss, like many other challenges, vanishes in the limit of infinite computation. We find that while plasticity loss is easiest to induce in extreme forms in small networks, scaling a CNN to the limit of a single GPU's memory is insufficient to eliminate plasticity loss even in the simple classification tasks described in the previous section. We visualize the relationship between network width and plasticity loss in Figure 5. These observations suggest that plasticity loss is unlikely to be the limiting factor for sufficiently large networks on sufficiently simple tasks. However, for tasks which do not align with the inductive bias of the network (as in the MLPs trained on CIFAR-10), or for which the network is not sufficiently expressive (as is the case for the small networks of any architecture), we see a reduction in the ability to fit new targets over time.
Because we typically cannot guarantee a priori that a learning problem will fall in the first category, we therefore turn our attention to other design choices which might further insure networks against plasticity loss. Figure 4: Plasticity loss corresponds to slower training progress, rather than higher plateaus, in the networks studied in this paper. We plot learning curves on a new target fitting task starting from network checkpoints at different points in training. This figure illustrates a CNN trained on the true-label MDP described in Section 3.2 with a CIFAR-10 observation space. Figure 5: We observe a consistent decline in plasticity loss across different target update frequencies as a result of scaling in several architecture-dataset combinations; however, even when scaling the architecture to the point where it no longer fits on a single GPU, we are still unable to completely eliminate plasticity loss on these simple classification-inspired problems. Figure 3: Results of our experimental falsification design: for any variable we consider, it is possible to construct a set of learning problems in which the variable exhibits either a positive or a negative correlation with plasticity. For example, weight norm and weight rank exhibit differing correlation signs depending on the observation space, while feature rank and sparsity depend on the reward structure of the environment. ### Interventions in toy problems In this section we evaluate the effect of a variety of interventions on plasticity loss. We evaluate interventions on the same task used in Section 5.1, training for 100 iterations of 1000 steps. We consider four architectures: a multi-layer perceptron (MLP), a convolutional neural network (CNN) without skip connections, a ResNet-18 (He et al., 2016), and a small transformer based on the Vision Transformer (ViT) architecture (Dosovitskiy et al., 2020). We consider the following interventions: **resetting** the last layer of the network at each target network update, a simplified variant of the scheme proposed by Nikishin et al. (2022); **resetting the network optimizer state** at each target network update; adding **layer normalization** (Ba et al., 2016) after each convolutional and fully-connected layer of the CNN and the MLP; performing **Shrink and Perturb** (Ash and Adams, 2020): multiplying the network weights by a small scalar and adding a perturbation equal to the weights of a randomly initialized network; leveraging a **two-hot** encoding, which presents a distributional formulation of scalar regression wherein the network outputs a categorical probability distribution over fixed support and minimizes a cross-entropy loss with respect to an encoding of a regression target which distributes mass across two adjacent bins of the support; **spectral normalization** of the initial linear layer of the CNN and the MLP (Gogianu et al., 2021); and **weight decay,** setting the \(\ell_{2}\) penalty coefficient to \(10^{-5}\) (a minimal sketch of the Shrink and Perturb update appears below).
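As referenced above, the following is a minimal sketch of the Shrink and Perturb update (Ash and Adams, 2020) for a list of weight arrays; the scalar values are illustrative rather than the settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def shrink_and_perturb(params, init_fn, shrink=0.8, noise_scale=0.01):
    # Multiply the current weights by a small scalar and add a perturbation
    # drawn from the initialization distribution.
    return [shrink * w + noise_scale * init_fn(w.shape) for w in params]

params = [rng.normal(size=(64, 64)), rng.normal(size=(64, 10))]
params = shrink_and_perturb(params, lambda shape: rng.normal(size=shape))
```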
These methods were chosen to be representative samples of a number of approaches to mitigating plasticity loss: resetting the optimizer state and last layer temporarily remove a source of poor conditioning from the optimization process; layer normalization and residual connections tend to make networks more robust to optimizer choices; weight decay and spectral normalization both regularize the parameters of the network in different ways; shrink and perturb applies a perturbation to the current parameters without significantly changing the decision boundary (though we note that for regression tasks this will still influence the scale of the network outputs, and so may not be suitable). We visualize our key takeaways in Figure 6, which compares plasticity loss after 100 iterations of training on each of the architecture-intervention combinations. Overall, explicitly constructing a network parameterization which smooths out the loss landscape is the most effective means of preserving plasticity of all approaches we have considered, and has a greater effect on plasticity than resetting the final layer of the network. We visualize some learning curves of networks with and without layer normalization in Figure 17 in the supplementary material. We note that while the two-hot encoding does demonstrate significant reductions in plasticity loss, it does so at the cost of stability of the learned policy in several instances we considered. Additionally, this intervention required significantly different optimizer hyperparameters from the regression parameterization, suggesting that while it can be a powerful tool to stabilize optimization, it might not be suitable as a plug-in solution to mitigate plasticity loss in an existing protocol. ### Application to larger benchmarks We now evaluate whether the benefits of layer normalization on plasticity in toy classification tasks translate to larger-scale benchmarks. We use the standard implementation of double DQN (Van Hasselt et al., 2016) provided by Quan and Ostrovski (2020), and evaluate three seeds on each of the 57 games in the Arcade Learning Environment benchmark (Bellemare et al., 2013). We use the RMSProp optimizer, \(\epsilon\)-greedy exploration, and frame stacking (Mnih et al., 2015). Full implementation details can be found in Appendix A.3. The only difference between the baseline implementation and our modification is the incorporation of layer normalization after each hidden layer in the network. We see in Figure 7 that the introduction of layer normalization robustly improves performance across the benchmark. We emphasize that we did not perform any optimizer or other hyperparameter tuning. While this improvement cannot be definitively attributed to a reduction in plasticity loss from the evidence provided, it points towards the regularization of the optimization landscape as a fruitful direction towards more robust RL agents. We further observe that many of the environments where layer normalization offers a significant boost to performance are those where the gradient covariance structure of the default architecture is degenerate or where the Hessian is ill-conditioned, and the LN networks which obtain performance improvements tend to have correspondingly better behaved gradient covariance. We provide a hint into this phenomenon in Figure 7, and defer the complete evaluation over all 57 games to Appendix B.3. Figure 6: Effect of architectural and optimization interventions on plasticity loss. Colour indicates change in loss on challenge targets between the initial and final epoch of training on the RL task. Darker shading indicates less plasticity loss.
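The modification itself is small; the following is a minimal sketch (numpy-style, assuming a plain fully-connected network) of inserting a normalization step after each hidden layer. Whether the normalization is applied before or after the nonlinearity is an implementation detail not specified here.

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    # Normalize each feature vector to zero mean and unit variance.
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def mlp_forward(params, x, use_layer_norm=True):
    # The only change from the baseline forward pass is the normalization
    # applied after each hidden layer (here, before the ReLU nonlinearity).
    h = x
    for W, b in params[:-1]:
        h = h @ W + b
        if use_layer_norm:
            h = layer_norm(h)
        h = np.maximum(h, 0.0)
    W, b = params[-1]
    return h @ W + b

rng = np.random.default_rng(0)
params = [(rng.normal(size=(16, 32)) * 0.1, np.zeros(32)),
          (rng.normal(size=(32, 10)) * 0.1, np.zeros(10))]
q_values = mlp_forward(params, rng.normal(size=(4, 16)))
print(q_values.shape)   # (4, 10)
```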
## 7 Related Work **Trainability:** the problem of finding suitable initializations for neural networks to enable training has a long history (Glorot and Bengio, 2010; He et al., 2015; Sutskever et al., 2013). Without careful initialization and architecture design, it is common to run into the issue that gradients will either explode or vanish as the depth of the network grows (Yang and Schoenholz, 2017). ResNets (He et al., 2016) in particular are known to resolve many of these pathologies by biasing each layer's mapping towards the identity function, leading to better-behaved gradients (Balduzzi et al., 2017). Mean-field analysis (Yang and Schoenholz, 2017; Schoenholz et al., 2017; Yang et al., 2019), information propagation (Poole et al., 2016), and deep kernel shaping (Zhang et al., 2021b; Martens et al., 2021) have all been applied to study trainability in neural networks. A wealth of prior work additionally studies the role of loss landscape smoothness in generalization and performance (Li et al., 2018; Santurkar et al., 2018; Ghorbani et al., 2019; Park and Kim, 2022). Other works highlight the chaotic behaviour of early training periods (Jastrzebski et al., 2020), in particular the 'edge of stability' phenomenon (Cohen et al., 2021) and the 'catapult mechanism' (Lewkowycz et al., 2020), and relate closely to the observations grounding 'linear mode connectivity' (Frankle et al., 2020) to explain generalization and trainability in deep neural networks; however, these approaches all focus on supervised learning with a stationary objective. **Resetting + continual learning:** a separate line of work studies resetting and plasticity in continual learning (Zhang et al., 2021a; Berariu et al., 2021; Hadsell et al., 2020; Rolnick et al., 2019). Class-incremental learning (Ostapenko et al., 2019) differs from our setting because the input distribution changes, not the functional relationship, and Tangarasa et al. (2020) propose a modified Hebbian learning rule. Studies of plasticity in task-shift continual learning usually focus on the ability to learn under new input distributions (Rolnick et al., 2019), rather than new targets. Most related to our study is the identification of the _loss_ of plasticity as a potentially limiting factor in deep reinforcement learning (Lyle et al., 2021; Dohare et al., 2021). This study can be motivated by the rich literature studying the effect of resetting and distillation on performance (Fedus et al., 2020; Nikishin et al., 2022; Igl et al., 2021; Schmitt et al., 2018). ## 8 Conclusions The findings of this paper highlight a divide between the study of curriculum learning and foundation models, which identify suitable early training objectives to accelerate learning and improve generalization on later tasks, and the phenomena we have identified concerning the loss of plasticity in non-stationary prediction problems. However, as reinforcement learning algorithms scale up to more complex tasks, the divide between these regimes shrinks. While it is possible that in many settings, plasticity loss is not a limiting factor in network performance and so need not be a concern for many of the relatively small environments used to benchmark algorithms today, we conjecture that as the complexity of the tasks to which we apply RL grows, so will the importance of preserving plasticity.
The findings of this paper point towards stabilizing the loss landscape as a crucial step towards promoting plasticity. This approach is likely to have many ancillary benefits: a smoother loss landscape is both easier to optimize and tends to exhibit better generalization, and it is an exciting direction for future work to better disentangle the complementary roles of memorization and generalization in plasticity. Figure 7: Layer normalization improves performance and changes the gradient covariance structure in DDQN agents. Top: Human-normalized improvement score (Wang et al., 2016) of adding layer normalization over the default double DQN agent. Bottom: Gradient covariance matrices for Freeway (left) and Kangaroo (right). In environments where layer normalization significantly improves performance, it also induces weaker gradient correlation.
2303.07949
Bordering of Symmetric Matrices and an Application to the Minimum Number of Distinct Eigenvalues for the Join of Graphs
An important facet of the inverse eigenvalue problem for graphs is to determine the minimum number of distinct eigenvalues of a particular graph. We resolve this question for the join of a connected graph with a path. We then focus on bordering a matrix and attempt to control the change in the number of distinct eigenvalues induced by this operation. By applying bordering techniques to the join of graphs, we obtain numerous results on the nature of the minimum number of distinct eigenvalues as vertices are joined to a fixed graph.
Aida Abiad, Shaun M. Fallat, Mark Kempton, Rupert H. Levene, Polona Oblak, Helena Šmigoc, Michael Tait, Kevin Vander Meulen
2023-03-14T14:47:03Z
http://arxiv.org/abs/2303.07949v2
Bordering of Symmetric Matrices and an Application to the Minimum Number of Distinct Eigenvalues for the Join of Graphs ###### Abstract An important facet of the inverse eigenvalue problem for graphs is to determine the minimum number of distinct eigenvalues of a particular graph. We resolve this question for the join of a connected graph with a path. We then focus on bordering a matrix and attempt to control the change in the number of distinct eigenvalues induced by this operation. By applying bordering techniques to the join of graphs, we obtain numerous results on the nature of the minimum number of distinct eigenvalues as vertices are joined to a fixed graph. **Keywords:** inverse eigenvalue problem, minimum number of distinct eigenvalues, borderings, joins of graphs, paths, cycles, hypercubes. **AMS subject classification:** 05C50, 15A18. ## 1 Introduction Given a simple graph \(G\) on \(|G|=n\) vertices, let \(S(G)\) denote the set of all \(n\times n\) real symmetric matrices \(A=\left(a_{ij}\right)\) such that, for \(i\neq j\), \(a_{ij}\neq 0\) if and only if \(i\) and \(j\) are adjacent in \(G\). There are no restrictions on the main diagonal entries of \(A\). The inverse eigenvalue problem for \(G\) asks which possible multi-sets of eigenvalues (spectra) occur in the class \(S(G)\). This is a very difficult problem for most graphs (which generally remains open, except for some sporadic graphs, including, for example, paths, cycles, complete graphs and some basic families of trees). Considerable work on this important problem has occurred over the past several decades (see the recent book [11]). Our work generally pertains to multiplicity lists associated to the spectra of matrices in \(S(G)\). Suppose \(A\) is an \(n\times n\) real symmetric matrix and \(\lambda\) is an eigenvalue of \(A\), that is \(\lambda\in\sigma(A)\), where \(\sigma(A)\) denotes the collection of eigenvalues (spectrum) of the matrix \(A\). We let \(m_{A}(\lambda)\) denote the multiplicity of \(\lambda\) in \(\sigma(A)\); if a scalar \(\lambda\) is not an eigenvalue of a matrix \(A\) then we define \(m_{A}(\lambda)=0\). Perhaps one of the most important results on the eigenvalues of real symmetric matrices is Cauchy's interlacing inequalities, from which it immediately follows that if \(A\) is an \(n\times n\) principal submatrix of an \((n+1)\times(n+1)\) real symmetric matrix \(B\), then \(|m_{A}(\lambda)-m_{B}(\lambda)|\leq 1\) for any scalar \(\lambda\). Another way to view the principal submatrix \(A\) of \(B\) is to consider that \(B\) was obtained from \(A\) by bordering \(A\) with one row and column, and since the spectrum is invariant under permutation similarity, we might as well assume that the new row and column added to \(A\) are the first row and column of \(B\). More generally, given a symmetric \(n\times n\) matrix \(A\) and \(r\geq 1\), an \(r\)_-bordering_ of \(A\) is any symmetric \((n+r)\times(n+r)\) matrix \(B\) which contains \(A\) as a trailing principal \(n\times n\) submatrix (that is, \(A\) lies in rows and columns indexed by \(\{r+1,r+2,\ldots,n+r\}\) of \(B\)), and it follows that \(|m_{A}(\lambda)-m_{B}(\lambda)|\leq r\). For brevity, we will also let \(A[S]\) denote the principal submatrix of \(A\) lying in rows and columns indexed by \(S\subseteq\{1,2,\ldots,n\}\). 
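As a quick numerical illustration of the multiplicity bound \(|m_{A}(\lambda)-m_{B}(\lambda)|\leq r\) for \(r=1\) (a sketch added purely for illustration, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric matrix with eigenvalue 1 of multiplicity 3, and a random 1-bordering.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
A = Q @ np.diag([1.0, 1.0, 1.0, 2.0, 3.0]) @ Q.T
row = rng.normal(size=5)
B = np.block([[rng.normal(size=(1, 1)), row[None, :]],
              [row[:, None], A]])

def mult(M, lam, tol=1e-8):
    return int(np.sum(np.abs(np.linalg.eigvalsh(M) - lam) < tol))

for lam in (1.0, 2.0, 3.0):
    assert abs(mult(A, lam) - mult(B, lam)) <= 1   # Cauchy interlacing
print(mult(A, 1.0), mult(B, 1.0))                  # typically 3 and 2
```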
We define the _maximum multiplicity_ of a symmetric matrix \(A\) to be \[M(A)=\max\{m_{A}(\lambda):\lambda\in\sigma(A)\},\] and the _maximum multiplicity of a graph_ \(G\) is \[M(G)=\max\{M(A):A\in S(G)\}.\] Let \(\mathbf{m}=(m_{1},\ldots,m_{k})\in\mathbb{N}_{0}^{k}\) be a sequence of \(k\) nonnegative integers and \(q(\mathbf{m})=|\{i\colon m_{i}>0\}|\). We say \(\mathbf{m}\) is an ordered multiplicity list for a symmetric matrix \(A\), if \(A\) possesses \(q(\mathbf{m})\) distinct eigenvalues \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{q(\mathbf{m})}\) and \(m_{A}(\lambda_{i})=m_{j_{i}}\) for \(i=1,2,\ldots,q(\mathbf{m})\), where \(1\leq j_{1}<j_{2}<\cdots<j_{q(\mathbf{m})}\leq k\) are the \(q(\mathbf{m})\) indices \(j\) with \(m_{j}>0\). In this case we write \(\mathbf{m}=\mathbf{m}(A)\). For any matrix \(A\), we write \(q(A)=k\) if \(A\) has \(k\) distinct eigenvalues. For a given graph \(G\), we define \[q(G)=\min\{q(A):A\in S(G)\}.\] It is easy to observe that for any graph \(G\) we have \(q(G)\geq\frac{|G|}{M(G)}\). In this paper our goal is to investigate the behaviour of \(q(\cdot)\) upon appending vertices to a fixed graph \(G\). Here, when a vertex is appended, all possible edges between the existing vertices and the new vertex are inserted. We let \(K_{n}\) (\(n\geq 1\)), \(P_{n}\) (\(n\geq 1\)), \(C_{n}\) (\(n\geq 3\)) denote the complete graph, the path, and the cycle on \(n\) vertices. If \(G\) and \(H\) are two graphs, then the _join of \(G\) and \(H\)_, denoted by \(G\lor H\), is the graph obtained from the union of \(G\) and \(H\) by adding all edges with one endpoint in \(G\) and one endpoint in \(H\). Hence, our goal in this paper is to investigate the behaviour of \(q(G\lor H)\) for various graphs \(G\) and \(H\). Given a graph \(G\), let \(V(G)\) denote its vertex set. For \(v\in V(G)\), we define \(\operatorname{jdup}(G,v)\) to be the supergraph of \(G\) obtained from \(G\) by duplicating \(v\), with an edge connecting \(v\) to its duplicate. That is, \(V(\operatorname{jdup}(G,v))=V(G)\cup\{w\}\), where \(w\not\in V(G)\), and \(\{v,w\}\in E(\operatorname{jdup}(G,v))\), and \(w\) has the same neighbours as \(v\) in \(\operatorname{jdup}(G,v)\). As observed in [2, Theorem 3] and [14, Lemma 2.9], \[q(\operatorname{jdup}(G,v))\leq q(G). \tag{1}\] Since \(K_{n+1}\lor H=\operatorname{jdup}(K_{n}\lor H,v)\) for any vertex \(v\in K_{n}\), we see that \(q(K_{n}\lor H)\) is monotone decreasing in \(n\). One of the first examples considered along these lines was the case of determining \(q(K_{1}\lor P_{n})\). In [5, Example 4.5] it was shown that \(q(K_{1}\lor P_{n})=\lceil\frac{n+1}{2}\rceil\) for \(n\geq 2\). We note here that the lower bound on \(q(K_{1}\lor P_{n})\) follows from Cauchy's interlacing inequalities since \(q(P_{n})=n\). Another important example is the star, or \(S_{n}=K_{1}\lor E_{n-1}\), where \(E_{k}\) represents the empty graph on \(k\) vertices. It is straightforward to show that \(M(S_{n})=n-2\) and that \(q(S_{n})=3\). We remark that the star has played a key role in the inverse eigenvalue problem for graphs (mostly in the case of trees), and in many ways was a critical tool used in [6] to establish a converse to Cauchy's interlacing inequalities. This technique has been extended and adapted to broaden the scope of which spectra can be realized by a graph that contains a dominating vertex (see, for example, [3, 13, 15]).
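As a small numerical illustration of the star example (added for illustration only): the adjacency matrix of \(S_{n}\) already attains the minimum of three distinct eigenvalues.

```python
import numpy as np

# Adjacency matrix of the star S_6 = K_1 v E_5: it lies in S(S_6) and has
# eigenvalues -sqrt(5), 0 (multiplicity 4 = n-2), and sqrt(5), so q(S_6) = 3 is attained.
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
print(np.round(np.linalg.eigvalsh(A), 6))
```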
Merging the concepts of bordering a particular matrix and joining a vertex to a given graph, we are interested in determining the minimum number of distinct eigenvalues of a graph joined by a sequence of vertices, and we develop techniques, based in part on the nature of ordered multiplicity lists and eigenvectors, to aid this computation. We begin, in Section 2, with the necessary background and present a general upper bound (Theorem 2.2) on \(q(G\lor H)\) for connected graphs \(G\) and \(H\), which reduces to a simple exact formula in the case \(G=P_{n}\). In Section 3 we investigate the borderings of a given symmetric matrix. Theorem 3.1 describes in detail how a \(1\)-bordering can change the spectrum of a symmetric matrix, and in Proposition 3.4 we find a necessary and sufficient condition for the existence of an \(r\)-bordering of a symmetric matrix with a given value of \(q\). In Section 4 we make several observations on the patterns of such bordered matrices, and we apply them to estimate \(q(K_{n}\lor H)\) when \(H\) is either a hypercube or a cycle. Finally, in Section 5, we pay particular attention to some possible limitations of our methods (Corollary 5.2). ## 2 General graphs and paths It is known that if \(G\) and \(H\) are two connected graphs and \(|G|=|H|\), then \(q(G\lor H)=2\) (see [10]). This result was extended in [15, 16] where it was shown that \(q(G\lor H)=2\) if \(G\) and \(H\) are connected graphs with \(\left|\left|G\right|-\left|H\right|\right|\leq 2\). Moreover, for trees \(T_{1}\) and \(T_{2}\) we have \(q(T_{1}\lor T_{2})=2\) if and only if \(\left|\left|T_{1}\right|-\left|T_{2}\right|\right|\leq 2\), so in this case the result is sharp. An important notion used in [15] is generic realizability. Recall that a matrix (vector) is said to be _nowhere zero_ if none of its entries is zero. Suppose \(G\) is a graph with \(|G|=n\) vertices and \(\sigma\) is a collection of realizable eigenvalues in \(S(G)\) (with multiplicities), i.e., \(\sigma=\sigma(A)\) for some \(A\in S(G)\). The collection \(\sigma\) is said to be _generically realizable_ in \(S(G)\) if, for any finite set \(\mathcal{Y}\) of nonzero vectors in \(\mathbb{R}^{n}\), there is an orthogonal matrix \(U\) such that \(Uy\) is nowhere zero for all \(y\in\mathcal{Y}\), and \(UDU^{T}\in S(G)\), where \(D\) is a diagonal matrix with eigenvalues equal to \(\sigma\) (see [15] for more details). We will use the following result. **Theorem 2.1**.: _[_15_, Theorem 2.5]_ _Suppose \(G\) is a connected graph. Then any \(\sigma\) with \(|G|\) distinct elements is generically realizable in \(S(G)\)._ Theorem 2.1 allows us to construct matrices in \(S(G\lor H)\) with some desired spectral properties, using matrices \(A\in S(G)\) and \(B\in S(H)\) with distinct eigenvalues. In particular, in the next result we explore this idea of constructing matrices in \(S(G\lor H)\) with a bounded number of distinct eigenvalues. **Theorem 2.2**.: _Suppose \(G\) and \(H\) are two connected graphs. If \(k\) is a positive integer and \(|G|\leq|H|\leq k|G|+k+1\), then_ \[q(G\lor H)\leq k+1.\] _In particular, for any connected graphs \(G\) and \(H\) with \(\max\{|G|,|H|\}\neq 1\) we have:_ \[q(G\lor H)\leq\left\lceil\frac{|G|+|H|}{\min\{|G|,|H|\}+1}\right\rceil.\] Proof.: Let \(|G|=n\), \(|H|=m\), where \(n\leq m\leq kn+k+1\). In this proof we will construct a matrix in \(S(G\lor H)\) with distinct eigenvalues contained in \(\mathcal{S}:=\{\lambda_{j}\}_{j=1}^{k+1}\) for any chosen set \(\mathcal{S}\) of \(k+1\) distinct numbers.
To this end, choose real numbers \(\lambda_{1}<\cdots<\lambda_{k+1}\), and integers \(k_{i}\) with \(1\leq k_{i}\leq k\) for \(i=1,\ldots,n\), satisfying: \[0\leq k^{\prime}:=m-\sum_{i=1}^{n}k_{i}\leq k+1.\] Now select \(n\) sets of real numbers \(\mathcal{M}_{i}:=\{\mu_{i,1},\ldots,\mu_{i,k_{i}}\}\), \(i=1,\ldots,n\), where we assume \[\mu_{i,1}<\cdots<\mu_{i,k_{i}}.\] Furthermore, we assume that \(\mathcal{M}_{i}\) strictly interlaces \(\{\lambda_{1},\ldots,\lambda_{k_{i}+1}\}\), that the numbers \(\mu_{i,j}\) for \(j=1,\ldots,k_{i}\) and \(i=1,\ldots,n\) are all distinct, and finally we demand that numbers \(a_{i}:=\left(\sum_{j=1}^{k_{i}+1}\lambda_{j}\right)-\left(\sum_{j=1}^{k_{i}} \mu_{i,j}\right)\), \(i=1,\ldots,n\), are all distinct. Writing \(\operatorname{diag}(x_{1},\ldots,x_{m})\) for the diagonal matrix with main diagonal \((x_{1},\ldots,x_{m})\), we define \[\Lambda :=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{k^{\prime}}),\] \[D_{i} :=\operatorname{diag}(\mu_{i,1},\ldots,\mu_{i,k_{i}}),\;i=1, \ldots,n,\] \[D_{a} :=\operatorname{diag}(a_{1},\ldots,a_{n}),\] \[D_{\mu} :=D_{1}\oplus\cdots\oplus D_{n}.\] By [9, Theorem 4.2] and our strict eigenvalue interlacing requirement, for \(i=1,\ldots,n\) there exist matrices: \[M_{i}:=\begin{pmatrix}a_{i}&\mathbf{b}_{i}^{T}\\ \mathbf{b}_{i}&D_{i}\end{pmatrix}\] with eigenvalues \(\lambda_{1},\ldots,\lambda_{k_{i}+1}\), where \(\mathbf{b}_{i}\) is a nowhere zero vector. Clearly, the distinct eigenvalues of \(M:=M_{1}\oplus\cdots\oplus M_{n}\oplus\Lambda\) are contained in \(\{\lambda_{1},\ldots,\lambda_{k+1}\}\), and in particular, \(q(M)\leq k+1\). The same is true for the matrix: \[M^{\prime}:=\begin{pmatrix}D_{a}&B^{T}&0\\ B&D_{\mu}&0\\ 0&0&\Lambda\end{pmatrix},\] where \(B=\bigoplus_{i=1}^{n}\mathbf{b}_{i}\), since \(M^{\prime}\) is permutationally similar to \(M\). Observe that the \(n\times n\) matrix \(D_{a}\), and the \(m\times m\) matrix \(D_{\mu}\oplus\Lambda\) are both diagonal matrices with distinct eigenvalues. By Theorem 2.1, their spectra are generically realizable for \(G\) and \(H\), respectively. In particular, there exist orthogonal matrices \(U\) and \(V\) so that \(UD_{a}U^{T}\in S(G)\), \(V(D_{\mu}\oplus\Lambda)V^{T}\in S(H)\), and \(U\left(B^{T}\quad 0\right)V^{T}\) is nowhere zero. Now \[(U\oplus V)M^{\prime}(U^{T}\oplus V^{T})=\begin{pmatrix}UD_{a}U^{T}&U\left(B ^{T}\quad 0\right)V^{T}\\ V\begin{pmatrix}B\\ 0\end{pmatrix}U^{T}&V(D_{\mu}\oplus\Lambda)V^{T}\end{pmatrix}\in S(G\lor H),\] so \(q(G\lor H)\leq k+1\) as required. The upper bound of Theorem 2.2 is sharp when \(H\) is a path, as shown below. **Corollary 2.3**.: _If \(m>1\) and \(G\) is a connected graph with \(|G|=n\leq m\), then_ \[q(G\lor P_{m})=\left\lceil\frac{n+m}{n+1}\right\rceil.\] Proof.: Let \(X\) be a matrix in \(S(G\lor P_{m})\). Since \(X\) has an \(m\times m\) principal submatrix corresponding to \(P_{m}\), this submatrix must have distinct eigenvalues. By eigenvalue interlacing, the matrix \(X\) can have maximum eigenvalue multiplicity at most \(n+1\). Hence \[q(X)\geq\left\lceil\frac{|G\lor P_{m}|}{M(X)}\right\rceil\geq\left\lceil\frac{n +m}{n+1}\right\rceil.\] The opposite inequality was established in Theorem 2.2. 
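The matrices \(M_{i}\) invoked via [9, Theorem 4.2] in the proof of Theorem 2.2 can be written down explicitly. The following is a small numerical sketch, added for illustration only, of one such construction: an arrowhead matrix with prescribed diagonal \(D_{0}\) and prescribed, strictly interlacing spectrum.

```python
import numpy as np

def arrow_with_spectrum(mu, lam):
    # Build [[a, b^T], [b, diag(mu)]] with spectrum lam, assuming the diagonal
    # values mu are distinct and strictly interlace the k+1 target values lam.
    mu = np.asarray(mu, dtype=float)
    lam = np.asarray(lam, dtype=float)
    a = lam.sum() - mu.sum()
    b = np.empty_like(mu)
    for i, m in enumerate(mu):
        # b_i^2 = -prod_j(lam_j - mu_i) / prod_{j != i}(mu_j - mu_i) > 0 under strict interlacing
        b[i] = np.sqrt(-np.prod(lam - m) / np.prod(np.delete(mu, i) - m))
    k = len(mu)
    B = np.zeros((k + 1, k + 1))
    B[0, 0] = a
    B[0, 1:] = B[1:, 0] = b
    B[1:, 1:] = np.diag(mu)
    return B

# mu strictly interlaces lam: 0 < 0.5 < 1 < 1.5 < 2
B = arrow_with_spectrum(mu=[0.5, 1.5], lam=[0.0, 1.0, 2.0])
print(np.round(np.linalg.eigvalsh(B), 10))   # approximately [0, 1, 2]
```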
**Remark 2.4**.: _In the case \(G=P_{n}\) where \(2\leq n\leq m\), the formula of Corollary 2.3 improves on the upper bound \(q(P_{n}\lor P_{m})\leq\lceil\frac{n+m}{2}\rceil\) which follows from [4, Corollary 49], since \(P_{n}\cup P_{m}\) contains a Hamiltonian path._ We conclude this section with a theorem which resolves a question from [15, Remark 3.13]. **Corollary 2.5**.: _If \(m,n\geq 2\), then_ \[q(K_{n}\lor P_{m})=\left\lceil\frac{n+m}{n+1}\right\rceil.\] Proof.: For \(n\leq m\), this is a special case of Corollary 2.3. For \(n\geq m\), note that \(\lceil\frac{m+n}{n+1}\rceil=2\). We know from [10] that \(q(K_{n}\lor P_{n})=2\), and for \(n>m\), it follows that \(q(K_{n}\lor P_{m})=2\) by applying the notion of join duplication (jdup) and the inequality presented in (1). ## 3 Bordering Recall from the introduction that an \(r\)_-bordering_ of a symmetric \(n\times n\) matrix \(A\) is any symmetric \((n+r)\times(n+r)\) matrix \(B\) which contains \(A\) as its \(n\times n\) trailing principal submatrix of \(B\). Building upon the classical results derived from Cauchy's interlacing inequalities that characterize all possible eigenvalues of a \(1\)-bordering of \(A\), we aim to understand the fewest number of distinct eigenvalues possible for an \(r\)-bordering of \(A\). First we have a look at \(1\)-borderings, noting that any \(r\)-bordering of \(A\) can be obtained by repeated \(1\)-bordering. **Theorem 3.1**.: _Let \(A\) be an \(n\times n\) symmetric matrix and \(A^{\prime}\) a \(1\)-bordering of \(A\). The following statements are equivalent:_ 1. \(\mathcal{N}\) _is the set of distinct eigenvalues_ \(\lambda\) _of_ \(A^{\prime}\) _that satisfy_ \(m_{A^{\prime}}(\lambda)=m_{A}(\lambda)+1\)_, and_ \(\mathcal{R}_{0}\) _is the set of distinct eigenvalues_ \(\lambda\) _of_ \(A\) _that satisfy_ \(m_{A^{\prime}}(\lambda)=m_{A}(\lambda)-1\)_._ 2. \(A^{\prime}=\left(\begin{array}{cc}\alpha&\mathbf{b}^{T}U_{0}^{T}\\ U_{0}\mathbf{b}&A\end{array}\right)\) _where_ \(k:=|\mathcal{R}_{0}|\)_,_ \(U_{0}\) _is an_ \(n\times k\) _matrix with_ \(U_{0}^{T}U_{0}=I_{k}\)_, and_ \(U_{0}^{T}AU_{0}\) _is a_ \(k\times k\) _diagonal matrix_ \(D_{0}\) _with distinct eigenvalues equal to_ \(\mathcal{R}_{0}\)_. Further,_ \(\mathbf{b}\in\mathbb{R}^{k}\) _is a nowhere zero vector so that the matrix_ \[B=\left(\begin{array}{cc}\alpha&\mathbf{b}^{T}\\ \mathbf{b}&D_{0}\end{array}\right)\] _has eigenvalues_ \(\mathcal{N}\)_._ _If the above hold, then \(A^{\prime}\) is similar to a matrix of the form \(D_{\mathcal{N}}\oplus D_{1}\) for some diagonal matrix \(D_{1}\) via an orthogonal similarity using_ \[W=\left(\begin{array}{cc}\mathbf{v}_{0}^{T}&0\\ U_{0}V_{0}&U_{1}\end{array}\right), \tag{2}\] _where \(V=\left(\begin{array}{cc}\mathbf{v}_{0}^{T}\\ V_{0}\end{array}\right)\in\mathbb{R}^{|\mathcal{N}|\times|\mathcal{N}|}\) is an orthogonal matrix that satisfies \(V^{T}BV=D_{\mathcal{N}}\), and \(U=\left(\begin{array}{cc}U_{0}&U_{1}\end{array}\right)\) is an orthogonal matrix that satisfies \(U^{T}AU=D_{0}\oplus D_{1}\)._ Proof.: \((1\Rightarrow 2)\) Let \(\lambda_{1},\ldots,\lambda_{q}\) be the distinct eigenvalues of \(A\) with multiplicities \(m_{i}:=m_{A}(\lambda_{i})\), \(i=1,\ldots,q\), and let \(U^{\prime}\) be an orthogonal matrix that diagonalizes \(A\), that is, \(U^{\prime T}AU^{\prime}=\oplus_{j=1}^{q}\lambda_{j}I_{m_{j}}\). 
Then for some \(\alpha\in\mathbb{R}\) and \(\mathbf{a}\in\mathbb{R}^{n}\), we have \[A_{1}^{\prime}:=(1\oplus U^{\prime T})A^{\prime}(1\oplus U^{\prime})=\left( \begin{array}{cc}\alpha&\mathbf{a}^{T}\\ \mathbf{a}&\oplus_{j=1}^{q}\lambda_{j}I_{m_{j}}\end{array}\right).\] Write \(\mathbf{a}^{T}=\left(\begin{array}{cc}\mathbf{a}_{1}^{T}&\mathbf{a}_{2}^{T} &\cdots&\mathbf{a}_{q}^{T}\end{array}\right)\), where \(\mathbf{a}_{i}\in\mathbb{R}^{m_{i}}\). Choose orthogonal matrices \(Z_{i}\in\mathbb{R}^{m_{i}\times m_{i}}\) that satisfy \(Z_{i}\mathbf{a}_{i}=b_{i}\mathbf{e}_{1}\), where \(b_{i}\in\mathbb{R}\) and \(\mathbf{e}_{1}\) denotes the basic unit vector in \(\mathbb{R}^{m_{i}}\) whose first element is equal to \(1\). Note that \(b_{i}\neq 0\) if and only if \(\lambda_{i}\in\mathcal{R}_{0}\). Applying the orthogonal similarity \(1\oplus(\oplus_{i=1}^{q}Z_{i})\) to \(A_{1}^{\prime}\), followed by a permutation similarity \(1\oplus P\), we see that \(A^{\prime}\) is orthogonally similar to \(B\oplus D_{1}\), where \(D_{1}\) is a diagonal \((n-k)\times(n-k)\) matrix, \[B=\left(\begin{array}{cc}\alpha&\mathbf{b}^{T}\\ \mathbf{b}&D_{0}\end{array}\right)\] and \(\mathbf{b}\in\mathbb{R}^{k}\) is a nowhere zero vector. In summary, \(U:=U^{\prime}(\oplus_{i=1}^{q}Z_{i})P\) satisfies \(U^{T}AU=D_{0}\oplus D_{1}\) and \((1\oplus U)^{T}A^{\prime}(1\oplus U)=B\oplus D_{1}\). Writing \(U=\left(\begin{array}{cc}U_{0}&U_{1}\end{array}\right)\) where \(U_{0}\in\mathbb{R}^{n\times k}\) and computing \(A^{\prime}=(1\oplus U)(B\oplus D_{1})(1\oplus U^{T})\) gives the form for \(A^{\prime}\) as in item 2. \((2\Rightarrow 1)\) Let \(A^{\prime}\) and \(U_{0}\) be as in item 2, and \(U_{1}\in\mathbb{R}^{n\times(n-k)}\) be such that \(U:=\left(\begin{array}{cc}U_{0}&U_{1}\end{array}\right)\) is orthogonal and \(U^{T}AU\) is a diagonal matrix \(D_{0}\oplus D_{1}\). From \[(1\oplus U^{T})A^{\prime}(1\oplus U)=B\oplus D_{1}\] we conclude that \(A^{\prime}\) has eigenvalues as stated in item 1. To prove the final claim we note that: \[W:=(1\oplus U)(V\oplus I_{n-k})=\left(\begin{array}{cc}\mathbf{v}_{0}^{T}&0 \\ U_{0}V_{0}&U_{1}\end{array}\right)\] and \(W^{T}A^{\prime}W=(V^{T}\oplus I)(B\oplus D_{1})(V\oplus I)=D_{\mathcal{N}} \oplus D_{1}\), as claimed. Theorem 3.1 provides a construction of a \(1\)-bordering of a symmetric matrix, subject to quite general eigenvalue constraints. Our first application of this theorem produces a known result [12, Thm. 4.3.10]. We include it here mostly to establish notation that we will depend on in the rest of this section. **Corollary 3.2**.: _Let \(A\) be an \(n\times n\) symmetric matrix, \(\mathcal{R}\) the set of distinct eigenvalues of \(A\), and \(\mathcal{R}_{0}\subseteq\mathcal{R}\). If \(\mathcal{N}\) is any set of \(|\mathcal{R}_{0}|+1\) distinct real numbers which strictly interlaces \(\mathcal{R}_{0}\), then there is a \(1\)-bordering \(A^{\prime}\) of \(A\) so that for \(\lambda\in\mathbb{R}\),_ \[m_{A^{\prime}}(\lambda)=\begin{cases}m_{A}(\lambda)-1&\text{if }\lambda\in \mathcal{R}_{0},\\ m_{A}(\lambda)+1&\text{if }\lambda\in\mathcal{N},\\ m_{A}(\lambda)&\text{otherwise},\end{cases}\] _where \(m_{A^{\prime}}(\lambda)=0\) means that \(\lambda\) is not an eigenvalue of \(A^{\prime}\)._ Proof.: Let \(D_{0}\) be a diagonal matrix with distinct diagonal elements equal to elements in \(\mathcal{R}_{0}\).
By [6], since \(\mathcal{N}\) strictly interlaces \(\mathcal{R}_{0}\), there exist \(a\in\mathbb{R}\) and a (nowhere zero) vector \(\mathbf{b}\in\mathbb{R}^{|\mathcal{R}_{0}|}\) so that the matrix \[B=\left(\begin{array}{cc}a&\mathbf{b}^{T}\\ \mathbf{b}&D_{0}\end{array}\right)\] has the set of eigenvalues equal to \(\mathcal{N}\). The result now follows from Theorem 3.1. Starting with the eigenvalues of \(A\), we will reduce the number of distinct eigenvalues of an \(r\)-bordering of \(A\) by removing all eigenvalues from different intervals. Along these lines, we let \(m_{A}(\alpha,\beta)\) denote the sum of multiplicities of all eigenvalues \(\lambda\) of \(A\) that are contained in the open interval \((\alpha,\beta)\), where \(\alpha,\beta\in\mathbb{R}\cup\{-\infty,\infty\}\) with \(\alpha<\beta\). The following straightforward consequence of eigenvalue interlacing produces a lower bound on \(r\) for an \(r\)-bordering to have no eigenvalues in a given interval. **Lemma 3.3**.: _If \(M\) is an \(r\)-bordering of a symmetric matrix \(A\) and \(\alpha,\beta\in\mathbb{R}\cup\{-\infty,\infty\}\) with \(\alpha<\beta\), then_ \[|m_{A}(\alpha,\beta)-m_{M}(\alpha,\beta)|\leq r.\] Proof.: The eigenvalues of \(A\) and any \(1\)-bordering of \(A\) must interlace by Cauchy's interlacing inequalities, which establishes the case \(r=1\). In general, \(M\) is obtained by \(r\) successive \(1\)-borderings of \(A\), and the statement follows immediately. Let \(\mathbf{m}=(m_{1},\ldots,m_{k})\in\mathbb{N}_{0}^{k}\) be an ordered multiplicity list of a symmetric matrix. For \(t\geq 2\), we define \[C(\mathbf{m},t)=\min_{1=p_{1}\leq p_{2}\leq\cdots\leq p_{t}=k}\left(\max_{1 \leq i\leq t-1}\sum_{j=p_{i}+1}^{p_{i+1}-1}m_{j}\right).\] In other words, \(C(\mathbf{m},t)\) is the solution to the problem of minimizing the largest "gap multiplicity" of \(\mathbf{m}\), over the gaps given by the various choices of \(t\) "gap boundaries" \(P=\{1=p_{1}\leq p_{2}\leq\cdots\leq p_{t}=k\}\). Note that \(q(\mathbf{m})\leq t\) if and only if \(C(\mathbf{m},t)=0\), so we can view \(C(\mathbf{m},t)\) as a measure of how far the multiplicity list \(\mathbf{m}\) is from having \(q(\mathbf{m})=t\). This will be made more precise in the next proposition. **Proposition 3.4**.: _Let \(A\) be a symmetric matrix with \(k\geq 2\) distinct eigenvalues and ordered multiplicity list \(\mathbf{m}=(m_{1},\ldots,m_{k})\in\mathbb{N}_{0}^{k}\), and let \(2\leq t\leq k\). For \(r\in\mathbb{N}_{0}\) the following statements are equivalent:_ 1. _there is an_ \(r\)_-bordering_ \(M\) _of_ \(A\) _with_ \(q(M)\leq t\)_;_ 2. \(C(\mathbf{m},t)\leq r\)_._ Proof.: \((1\Rightarrow 2)\) Let \(\lambda_{1}<\cdots<\lambda_{k}\) be the distinct eigenvalues of \(A\), with \(m_{A}(\lambda_{i})=m_{i}\) for \(i=1,\ldots,k\). Let \(\mu_{1}<\cdots<\mu_{\tau}\) be the distinct eigenvalues of an \(r\)-bordering \(M\) of \(A\). By eigenvalue interlacing, we have \(\lambda_{j}\in[\mu_{1},\mu_{\tau}]\) for every \(j\). Hence, there is a unique \(i_{0}\) with \(\lambda_{1}\in[\mu_{i_{0}},\mu_{i_{0}+1})=[\nu_{1},\nu_{2})\), where \(\nu_{i}:=\mu_{i_{0}-1+i}\). For \(1\leq i\leq\tau-i_{0}\), define \[p_{i}:=\min\{j\colon 1\leq j\leq k,\lambda_{j}\in[\nu_{i},\nu_{i+1})\}\] and let \(p_{i}:=k\) for \(i>\tau-i_{0}\). Then \(1=p_{1}\leq p_{2}\leq\cdots\leq p_{t}=k\). 
Moreover, if \(p_{i}<j<p_{i+1}\), then \(\lambda_{j}\in(\nu_{i},\nu_{i+1})\), so \[\sum_{j:p_{i}<j<p_{i+1}}m_{j}=\sum_{j:p_{i}<j<p_{i+1}}m_{A}(\lambda_{j})\leq m_{A}(\nu_{i},\nu_{i+1})\leq r,\] where the final inequality follows from Lemma 3.3, since \(m_{M}(\nu_{i},\nu_{i+1})=0\). Hence, \[C(\mathbf{m},t)\leq\max_{1\leq i\leq t-1}\sum_{j:p_{i}<j<p_{i+1}}m_{j}\leq r,\] as required. \((2\Rightarrow 1)\) If \(C(\mathbf{m},t)=0\), then \(q(A)\leq t\) and we can take \(r=0\). From now on we assume \(C(\mathbf{m},t)>0\). Since \(r\geq C(\mathbf{m},t)\), there exist \(p_{1}=1<p_{2}<\cdots<p_{\tau}=k\) where \(\tau\leq t\) so that \[m_{A}(\lambda_{p_{i}},\lambda_{p_{i+1}})=\sum_{j:p_{i}<j<p_{i+1}}m_{j}\leq r,\quad 1\leq i<\tau.\] It suffices to find a \(1\)-bordering \(M_{1}\) of \(A\) so that \(\sigma(M_{1})\subseteq[\lambda_{1},\lambda_{k}]\) and \[m_{M_{1}}(\lambda_{p_{i}},\lambda_{p_{i+1}})\leq\max\{r-1,0\},\quad 1\leq i<\tau,\] since we can then continue inductively to find \(A=M_{0},M_{1},\ldots,M_{r}=:M\), where \(M_{\ell+1}\) is a \(1\)-bordering of \(M_{\ell}\), so that \(m_{M_{r}}(\lambda_{p_{i}},\lambda_{p_{i+1}})=0\) for \(1\leq i<\tau\) and every eigenvalue of \(M_{r}\) is in \([\lambda_{1},\lambda_{k}]\), hence \(M_{r}\) has only the \(\tau\) distinct eigenvalues \(\{\lambda_{p_{1}},\ldots,\lambda_{p_{\tau}}\}\). To show that such a matrix \(M_{1}\) exists, first enumerate the open intervals \(L_{i}:=(\lambda_{p_{i}},\lambda_{p_{i+1}})\) which contain at least one eigenvalue of \(A\) as \(L_{i_{1}},\ldots,L_{i_{s}}\), where \(1\leq i_{1}<\cdots<i_{s}<\tau\), and choose \(\mu_{j}\in\sigma(A)\cap L_{i_{j}}\) for \(1\leq j\leq s\). (The assumption \(C(\mathbf{m},t)>0\) guarantees that at least one such interval exists.) Let \(\mathcal{R}_{0}=\{\mu_{1},\ldots,\mu_{s}\}\), and choose any set \(\mathcal{N}\subseteq\{\lambda_{p_{1}},\ldots,\lambda_{p_{\tau}}\}\) of size \(s+1\) which strictly interlaces \(\mathcal{R}_{0}\). The matrix constructed in Corollary 3.2 then has the desired properties. Given an \(n\times n\) matrix \(A\) with \(\sigma(A)=\{\lambda_{1}^{(m_{1})},\ldots,\lambda_{k}^{(m_{k})}\}\), \(\sum_{i=1}^{k}m_{i}=n\), the general procedure to find an \(r\)-bordering matrix \(M\) of \(A\) with \(q(M)\leq t\) is as follows: 1. Choose an integer \(t\), \(2\leq t\leq q(A)\). Define \(M_{0}:=A\), \(r:=C(\mathbf{m},t)\). 2. For \(\ell=1,\ldots,r\), use Corollary 3.2 to construct an \((n+\ell)\times(n+\ell)\) matrix \(M_{\ell}\) such that \[C(\mathbf{m}(M_{\ell}),t)=C(\mathbf{m}(M_{\ell-1}),t)-1.\] Note that we may have some freedom in how we choose the sets \(\mathcal{R}_{0}\) and \(\mathcal{N}\) in each step. 3. The resulting \((n+r)\times(n+r)\) matrix \(M:=M_{r}\) has \(q(M)\leq t\). **Algorithm 3.1** Find an \(r\)-bordering matrix \(M\) of \(A\) with \(q(M)\leq t\) ## 4 Joins with complete graphs In this section we consider the join of two graphs and develop a technique for determining, under certain conditions, the minimum number of distinct eigenvalues for the join of a graph with a complete graph. ### Patterns and eigenvectors If we want a \(1\)-bordering of the matrix \(A\in S(G)\) to produce a matrix \(A^{\prime}\in S(K_{1}\lor G)\), then we need \(U_{0}\mathbf{b}\) to have no zero entries in Theorem 3.1 above. This will happen for most choices of \(\mathbf{b}\), unless \(U_{0}\) contains a zero row, or equivalently, unless eigenvectors corresponding to the eigenvalues in \(\mathcal{R}_{0}\) all have a zero entry in the same position.
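The following NumPy sketch (our own illustration, not code from the source) makes this concrete: it carries out the \(1\)-bordering of Theorem 3.1 and Corollary 3.2 for a toy matrix, using the classical formulas for the bordered-diagonal inverse eigenvalue problem to build \(B\), and then checks both the predicted spectrum of \(A^{\prime}\) and whether the new row \(U_{0}\mathbf{b}\) is nowhere zero, which is the condition for \(A^{\prime}\in S(K_{1}\lor G(A))\). The toy spectrum, variable names, and tolerance are our own choices.

```python
import numpy as np

def one_bordering(A, R0, N):
    """Minimal sketch of the 1-bordering from Theorem 3.1 / Corollary 3.2:
    each eigenvalue in R0 loses one multiplicity, each value in N (which
    must strictly interlace R0, with |N| = |R0| + 1) gains one."""
    lam, U = np.linalg.eigh(A)
    # pick one unit eigenvector of A per eigenvalue in R0 (columns of U0)
    U0 = np.column_stack([U[:, np.argmin(np.abs(lam - r))] for r in R0])
    # bordered-diagonal inverse eigenvalue problem for
    # B = [[a, b^T], [b, diag(R0)]] with spectrum N (trace and weight formulas)
    a = sum(N) - sum(R0)
    b = np.array([np.sqrt(-np.prod([r - nu for nu in N]) /
                          np.prod([r - s for s in R0 if s != r]))
                  for r in R0])
    col = U0 @ b                      # first column of the bordering
    Ap = np.block([[np.array([[a]]), col[None, :]],
                   [col[:, None], A]])
    return Ap, col

# toy example: A has spectrum {1, 2, 2, 4}; remove one copy of 2 and
# add the strictly interlacing pair N = {1.5, 3}
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, 2.0, 4.0]) @ Q.T
Ap, col = one_bordering(A, R0=[2.0], N=[1.5, 3.0])
print(np.round(np.linalg.eigvalsh(Ap), 6))          # expected: 1, 1.5, 2, 3, 4
print("A' fits S(K_1 v G(A)):", np.all(np.abs(col) > 1e-12))
```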
The next results considers the case \(|\mathcal{R}_{0}|=1\). We call an eigenvalue of a symmetric matrix _extreme_ if it is the smallest or the largest eigenvalue of that matrix. **Corollary 4.1**.: _Suppose \(G\) is a non-empty graph and there exists an \(A\in S(G)\) with a nowhere zero eigenvector associated with some eigenvalue \(\lambda\) of \(A\). Then there exists a \(1\)-bordering \(A^{\prime}\) of \(A\) in \(S(K_{1}\lor G)\) so that:_ * \(q(A^{\prime})=q(A)+1\) _if_ \(\lambda\) _is an extreme eigenvalue,_ * \(q(A^{\prime})=q(A)\) _if_ \(\lambda\) _is not an extreme eigenvalue,_ * \(q(A^{\prime})=q(A)-1\) _if_ \(\lambda\) _is simple and not an extreme eigenvalue._ Proof.: In Theorem 3.1 we choose \(\mathcal{R}_{0}=\{\lambda\}\), \(U_{0}\in\mathbb{R}^{n\times 1}=\mathbb{R}^{n}\) a nowhere zero eigenvector of \(A\) with eigenvalue \(\lambda\), and \(S\) with eigenvalues \(\mu_{1}\), \(\mu_{2}\), satisfying \(\mu_{1}<\lambda<\mu_{2}\), so that either \(\mu_{1}\) or \(\mu_{2}\) agrees with an eigenvalue of \(A\), if \(\lambda\) is an extreme eigenvalue, and so that both \(\mu_{1}\) and \(\mu_{2}\) are eigenvalues of \(A\), if \(\lambda\) is not an extreme eigenvalue of \(A\). Since \(U_{0}\) is a single column with no zero entries we get \(A^{\prime}\in S(K_{1}\lor G)\), and since the spectrum of \(A^{\prime}\) can be obtained for the spectrum of \(A\) by removing one multiple of \(\lambda\) and increasing the multiplicity of \(\mu_{1}\) and \(\mu_{2}\) by \(1\), the result follows. In Theorem 3.1 we have seen that after \(1\)-bordering, some eigenvectors will necessarily have a zero entry, and this has an interesting consequence for the patterns of \(2\)-borderings. **Corollary 4.2**.: _Let \(A\) be a symmetric matrix, \(A^{\prime}\) a \(1\)-bordering of \(A\), and \(A^{\prime\prime}\) a \(1\)-bordering of \(A^{\prime}\). If \((A^{\prime\prime})_{1,2}\neq 0\), then there is an eigenvalue \(\lambda\) of \(A^{\prime}\) so that \(m_{A^{\prime\prime}}(\lambda)=m_{A^{\prime}}(\lambda)-1=m_{A}(\lambda)\)._ Proof.: Adopting the notation and definitions from Theorem 3.1, we observe that the columns of the matrices \[W_{\mathcal{N}}=\left(\begin{array}{c}\mathbf{v}_{0}^{T}\\ U_{0}V_{0}\end{array}\right)\quad\text{and}\quad W_{1}=\left(\begin{array}{c} 0\\ U_{1}\end{array}\right)\] are eigenvectors of \(A^{\prime}\) corresponding to the eigenvalues of \(D_{\mathcal{N}}\) and \(D_{1}\), respectively. If \(\lambda\) is an eigenvalue of \(A^{\prime}\) which is not in \(\mathcal{N}\), then the \(\lambda\)-eigenspace of \(A^{\prime}\) is contained in the column space of \(W_{1}\), so every vector in this eigenspace has first entry equal to zero. It follows that any eigenvector of \(A^{\prime}\) with nonzero first entry must have its corresponding eigenvalue \(\lambda\) in \(\mathcal{N}\). Consider now the \(1\)-bordering \(A^{\prime\prime}\) of \(A^{\prime}\). Let us define \(\mathcal{R}^{\prime}_{0}\), \(D^{\prime}_{0}\), \(U^{\prime}_{0}\) and \(\mathbf{b}^{\prime}\) for this \(1\)-bordering, analogously as was done above for the \(1\)-bordering \(A^{\prime}\) of \(A\). If \((A^{\prime\prime})_{1,2}\neq 0\), then \((U^{\prime}_{0}\mathbf{b}^{\prime})_{1}\neq 0\) by the above, so the first row of \(U^{\prime}_{0}\) cannot be a zero row. Since \({U^{\prime}_{0}}^{T}A^{\prime}U^{\prime}_{0}=D^{\prime}_{0}\), this implies that there is some eigenvector of \(A^{\prime}\), with eigenvalue \(\lambda\in\mathcal{R}^{\prime}_{0}\), which has a nonzero first entry. 
Hence, by the previous paragraph, \(\lambda\in\mathcal{N}\cap\mathcal{R}^{\prime}_{0}\), and thus \(m_{A^{\prime\prime}}(\lambda)=m_{A^{\prime}}(\lambda)-1=m_{A}(\lambda)\). **Remark 4.3**.: _Suppose \(r\geq 2\) and \(A_{0},A_{1},\ldots,A_{r}\) are successive \(1\)-borderings of a matrix \(A_{0}\in S(G)\). If \(A_{r}\in S(K_{r}\lor G)\), then by Corollary 4.2, it is necessarily the case that for \(0\leq s\leq r-2\), there is a real number \(\lambda_{s}\) so that \(m_{A_{s+2}}(\lambda_{s})=m_{A_{s+1}}(\lambda_{s})-1=m_{A_{s}}(\lambda_{s})\)._ In the following example we illustrate how Algorithm 3.1 may be used to border a matrix achieving a small \(q\) value in \(3\)-bordering in different ways. We also identify cases when Remark 4.3 implies that the resulting \(3\)-bordering cannot be in \(S(K_{3}\lor G)\). **Example 4.4**.: _Let \(A\) be a \(9\times 9\) symmetric matrix with ordered multiplicity list \(\mathbf{m}=(1,3,3,1,1)\) and spectrum \(\{1,2^{(3)},3^{(3)},4,5\}\). Then \(C(\mathbf{m},3)=3\) and Table 1 shows the eigenvalues we can obtain in its \(3\)-bordering with three distinct eigenvalues._ _We note that the construction in the proof of Proposition 3.4 produces only the spectrum \(\{1^{(4)},3^{(6)},5^{(2)}\}\), which we obtain after \(1\)-bordering with spectrum \(\{1^{(2)},2^{(2)},3^{(4)},4,5\}\) and \(2\)-bordering with spectrum \(\{1^{(3)},2,3^{(5)},4,5\}\). This example shows that there may be several options of choosing appropriate \(\mathcal{N}\) and \(\mathcal{R}_{0}\) sets in each step as we develop an \(r\)-bordering with the desired number of distinct eigenvalues._ _In all three situations (corresponding to three columns of Table 1), if \(A\in S(G)\), by appropriately choosing the free parameters, it is possible to satisfy the necessary conditions of Remark 4.3 for the \(3\)-bordering of \(A\) to be in \(S(K_{3}\lor G)\). However, if, for example, we choose \(\lambda=4\) or \(\lambda^{\prime}=\lambda\) in the last column, then the conditions of the remark do not hold._ ### Hypercubes In this section we explore the minimum number of distinct eigenvalues for joins of complete graphs with a hypercube graph. Recall that for \(t\geq 1\), the vertices of the hypercube graph \(Q_{t}\) are the \(2^{t}\) binary strings of length \(t\), and its edges are the pairs of vertices with Hamming distance one. It was shown in [1, Corollary 6.9] that if \(Q_{t}\) is the hypercube graph with \(t\geq 2\), then \(q(Q_{t})=2\). In the following, we use the matrix construction from [1] to demonstrate that \(Q_{t}\) has a realization \(A\) having \(q(A)=2\) and a nowhere zero eigenvector. **Theorem 4.5**.: _For any two positive integers \(s\) and \(t\),_ \[q(K_{s}\lor Q_{t})\leq 3.\] _Moreover, if \(s\leq t\), then_ \[q(K_{s}\lor Q_{2t+2})=3.\] Proof.: We will demonstrate that \(Q_{t}\) has a realization \(A\) having \(q(A)=2\) and a nowhere zero eigenvector. Corollary 4.1 will then imply that \(q(K_{1}\lor Q_{t})\leq 3\) and so the result follows from the inequality (1). As observed in [1], for any nonzero \(\alpha\) and \(\beta\) with \(\alpha^{2}+\beta^{2}=1\), \(Q_{t}\) has a realization \[B=\left(\begin{array}{cc}\alpha A&\beta I\\ \beta I&-\alpha A\end{array}\right)\] such that \(A^{2}=I\) and \(q(B)=2\). The vector \[\left(\begin{matrix}(I+\alpha A)\mathbf{1}\\ \beta\mathbf{1}\end{matrix}\right)\] with \(\mathbf{1}\) representing the all ones vector, will be a nowhere zero eigenvector of \(B\) with eigenvalue \(1\) for any \(\alpha\) sufficiently small. 
The second part of the statement is a generalization of [5, Proposition 5.1]. It uses [5, Theorem 1.9], which is a small correction of [1, Theorem 4.4]. For \(i=1,2,\ldots,t+1\), consider the vertices of the hypercube \(Q_{t}\) given by the binary strings \(v_{i}=00\cdots 01100\cdots 0\), with the two ones in positions \(2i-1\) and \(2i\). Then \(\{v_{1},\ldots,v_{t+1}\}\) is a set of \(t+1\) independent vertices in \(K_{s}\lor Q_{2t+2}\), and \(N(v_{i})\cap N(v_{j})=V(K_{s})\) for \(i\neq j\). Therefore \[\left|\bigcup_{i\neq j}N(v_{i})\cap N(v_{j})\right|=s<t+1,\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(A\) & \multicolumn{3}{|c|}{\(\{1,2^{(3)},3^{(3)},4,5\}\)} \\ \hline 1-bordering & \(\{1^{(2)},2^{(2)},3^{(4)},5,\mu\}\) & \(\{1^{(2)},2^{(2)},3^{(3)},\lambda,\mu,\nu\}\) & \(\{1^{(2)},2^{(2)},3^{(3)},\lambda,5^{(2)}\}\) \\ \hline 2-bordering & \(\{1^{(3)},2,3^{(5)},\mu^{\prime},\mu^{\prime\prime}\}\) & \(\{1^{(3)},2,3^{(4)},\rho,\mu^{(2)}\}\) & \(\{1^{(3)},2,3^{(3)},\lambda^{\prime},5^{(3)}\}\) \\ \hline 3-bordering & \(\{1^{(4)},3^{(6)},\mu^{\prime\prime(2)}\}\) & \(\{1^{(4)},3^{(5)},\mu^{(3)}\}\) & \(\{1^{(4)},3^{(4)},5^{(4)}\}\) \\ \hline \end{tabular} \end{table} Table 1: Red eigenvalues are the ones that are forced to have reduced multiplicity in the next bordering, the blue ones satisfy the conclusion of Corollary 4.2 for the \(2\)-bordering of \(A\), and the green ones satisfy the same condition when we consider instead the \(3\)-bordering of \(A\). Moreover, \(\lambda,\lambda^{\prime}\in[3,4]\), \(\nu\in[4,5]\), \(\mu,\mu^{\prime},\mu^{\prime\prime}\geq 5\) and \(\rho\in(3,\mu)\), are arbitrary. hence \(q(K_{s}\lor Q_{2t+2})\geq 3\) by [1, Theorem 4.4]. By Theorem 2.2, if \(s\) is chosen sufficiently large, then \(q(K_{s}\lor Q_{t})=2\). Thus, in light of Theorem 4.5, and the fact that \(q(K_{s}\lor Q_{t})\) is a non-increasing function of \(s\) as per Equation (1), it is natural to ask the following question: What is the minimum \(s\) for which \(q(K_{s}\lor Q_{t})=2\)? ### Cycles Given \(A\in S(H)\) and a graph \(G\), let \(S(G\lor A)\) be the set of all matrices \(X\in S(G\lor H)\) so that \(X[H]=A\), and let \(q(G\lor A)\) be the minimum \(q(X)\) over all such matrices \(X\). Note that \(q(G\lor A)\geq q(G\lor H)\). Suppose \(A\) has ordered multiplicity list \(\mathbf{m}=\mathbf{m}(A)\). Given a number \(t\geq 2\) and a graph \(G\), we want to determine whether or not \(q(G\lor A)\leq t\). By Proposition 3.4, a necessary condition is that \[C(\mathbf{m},t)\leq|G|.\] In Section 5 we will show that this condition is not sufficient in general, since it may happen that none of the \(|G|\)-borderings guaranteed by Proposition 3.4 has the correct graph, \(G\lor H\), where for any \(n\times n\) symmetric matrix \(A=(a_{ij})\), \(H=G(A)\) is defined as the graph on \(n\) vertices with edges \(\{i,j\}\) whenever \(i\neq j\) and \(a_{ij}\neq 0\). In fact, it is not generally sufficient even in the case that \(G\) is a complete graph. Despite this, we provide examples when the procedure from Section 3 is applied successfully. Note that the necessary condition above may be written as \[q(G\lor A)\geq\min\{t\geq 2:C(\mathbf{m}(A),t)\leq|G|\}. \tag{3}\] Turning to cycles, it is known by [15, Theorem 3.4] that \(q(K_{2k-2}\lor C_{2k})=2\). Next, we use the following result on the inverse eigenvalue problem for cycles to determine the minimum number of eigenvalues allowed for joins of complete graphs with even cycles. **Proposition 4.6**.: _(IEPG for cycles [8]). 
Nonincreasing real numbers \(\lambda_{1}\geq\cdots\geq\lambda_{n}\) are the eigenvalues of \(A\in S(C_{n})\) if and only if either_ \[\lambda_{1}\geq\lambda_{2}>\lambda_{3}\geq\lambda_{4}>\lambda_{5}\geq\cdots\] _or_ \[\lambda_{1}>\lambda_{2}\geq\lambda_{3}>\lambda_{4}\geq\lambda_{5}>\cdots.\] _Hence, if \(k\geq 2\), \(q(C_{2k})=k\) and \(M(C_{2k})=2\)._ Observe that if \(\lambda\) is a multiple eigenvalue of \(A\in S(C_{n})\), then the multiplicity of \(\lambda\) is two and there exists a nowhere zero eigenvector for \(\lambda\) associated with \(A\). If the latter did not hold then every eigenvector \(\mathbf{x}\) for \(\lambda\) would satisfy \(\mathbf{x}_{i}=0\) for some \(i=1,2,\ldots,n\). In this case \(\lambda\) is a multiple eigenvalue for the principal submatrix of \(A\) obtained by deleting row and column \(i\). However, this submatrix lies in \(S(P_{n-1})\), and can only possess simple eigenvalues. **Theorem 4.7**.: _If \(k\geq 2\) then \(q(K_{1}\lor C_{2k})=k\)._ Proof.: To obtain the upper bound \(q(K_{1}\lor C_{2k})\leq k\), use Proposition 4.6 to choose a matrix \(A\in S(C_{2k})\) with multiplicity list \((2,2,\ldots,2)\), choose a non-extreme eigenvalue of \(A\) and a nowhere zero eigenvector and apply Corollary 4.1. To show the lower bound, assume that \(M\in S(K_{1}\lor C_{2k})\) has eigenvalues \(\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_{2k+1}\) and that \(A\) is the submatrix corresponding to \(C_{2k}\) and has eigenvalues \(\lambda_{1}\leq\cdots\leq\lambda_{2k}\). By Proposition 4.6, we have that the maximum multiplicity of an eigenvalue \(\lambda_{i}\) is \(2\) and furthermore, if there are eigenvalues \(\lambda_{i}\) and \(\lambda_{j}\) with multiplicity \(2\) then \(m_{A}(\lambda_{i},\lambda_{j})\) must be even. By eigenvalue interlacing we have that the maximum multiplicity of any eigenvalue of is 3. We claim that if \(\mu_{i}\) and \(\mu_{j}\) each have multiplicity 3, then there must be an eigenvalue of multiplicity 1 between them, and the lower bound follows once we show this. By way of contradiction, assume that there is some pair of eigenvalues with multiplicity 3 and \(j\) distinct eigenvalues between them, each with multiplicity 2 (with the possibility that \(j\) is 0). That is, we have \[\mu_{i}=\mu_{i+1}=\mu_{i+2}<\cdots<\mu_{i+2+2j+1}=\mu_{i+2+2j+2}=\mu_{i+2+2j+3}.\] From eigenvalue interlacing we must have \(\lambda_{i}=\lambda_{i+1}\) and \(\lambda_{i+2j+3}=\lambda_{i+2j+4}\). Hence it follows that both \(\lambda_{i+1}\) and \(\lambda_{i+2j+3}\) have multiplicity 2 and \(m_{A}(\lambda_{i+1},\lambda_{i+2j+3})=2j+1\) is odd, a contradiction. As seen in Example 4.4, we have to be careful about the choice of 1-bordering of \(A\) in order to assure that the 2-bordering of \(A\) is of the desired pattern. For example, if an eigenvalue \(\lambda\) of \(A\in S(C_{6})\) has multiplicity 2 and multiplicity 1 for a 1-bordering \(A^{\prime}\) of \(A\), then eigenvectors of \(\lambda\) for \(A^{\prime}\) will not be nowhere zero. This shows that we cannot start with a matrix \(A\in S(C_{6})\) with multiplicity list \((2,2,2)\) and produce a matrix \(A^{\prime\prime}\) in \(S(K_{2}\lor C_{6})\) with \(q(A^{\prime\prime})=2\) by 2-bordering of \(A\). In the next example we show that starting with a matrix \(A\in S(C_{6})\) with a different multiplicity list, \(q(A^{\prime\prime})=2\) can still be reached for \(A^{\prime\prime}\in S(K_{2}\lor C_{6})\). 
**Example 4.8**.: _Let_ \[A=\left(\begin{array}{cccccc}1&1&0&0&0&-1\\ 1&-1&1&0&0&0\\ 0&1&1&1&0&0\\ 0&0&1&-1&1&0\\ 0&0&0&1&1&1\\ -1&0&0&0&1&-1\end{array}\right)\in S(C_{6}),\quad U_{0}=\frac{1}{\sqrt{3}} \left(\begin{array}{cccc}1&0\\ 0&1\\ -1&0\\ 0&-1\\ 1&0\\ 0&1\end{array}\right).\] _Then \(\sigma(A)=\{(-2)^{(2)},-1,1,2^{(2)}\}\) and \(U_{0}^{T}AU_{0}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)\). Let \(\mathcal{R}_{0}=\{-1,1\}\), choose any \(t\in(-1,1)\) and let \(\mathcal{N}=\{-2,t,2\}\). Following Corollary 3.2,_ \[B=\left(\begin{array}{cccc}t&\sqrt{3}u&\sqrt{3}v\\ \sqrt{3}u&1&0\\ \sqrt{3}v&0&-1\end{array}\right)\] _where_ \[u=\sqrt{(1-t)/2}\quad\text{and}\quad v=\sqrt{(1+t)/2},\] _and_ \[A^{\prime}=\left(\begin{array}{cccccc}t&u&v&-u&-v&u&v\\ u&1&1&0&0&0&-1\\ v&1&-1&1&0&0&0\\ -u&0&1&1&1&0&0\\ -v&0&0&1&-1&1&0\\ u&0&0&0&1&1&1\\ v&-1&0&0&0&1&-1\end{array}\right)\in S(K_{1}\lor C_{6})\] _with \(\sigma(A^{\prime})=\{(-2)^{(3)},t,2^{(2)}\}\)._ _Repeating the construction, choosing \(\mathcal{R}^{\prime}_{0}=\{t\}\) and \(\mathcal{N}^{\prime}=\{-2,2\}\), we obtain_ \[A^{\prime\prime}=\left(\begin{array}{cccccccc}-t&\sqrt{1-t^{2}}&-v&u&v&-u&-v&u \\ \sqrt{1-t^{2}}&t&u&v&-u&-v&u&v\\ -v&u&1&1&0&0&0&-1\\ u&v&1&-1&1&0&0&0\\ v&-u&0&1&1&1&0&0\\ -u&-v&0&0&1&-1&1&0\\ -v&u&0&0&0&1&1&1\\ u&v&-1&0&0&0&1&-1\end{array}\right)\in S(K_{2}\lor C_{6})\] _with \(\sigma(A^{\prime\prime})=\{(-2)^{(4)},2^{(4)}\}\) and thus \(q(K_{2}\lor C_{6})=2\)._ **Example 4.9**.: _Using the Jacobi-Ferguson algorithm [7], we can construct numerical matrices \(A\in S(C_{10})\) with spectrum \(\{(-6)^{(2)},-4,-2,0^{(2)},4,2,6^{(2)}\}\), and hence find numerical matrices \(A^{\prime\prime}\in S(K_{2}\lor C_{10})\) with spectrum \(\{(-6)^{(4)},0^{(4)},6^{(4)}\}\) and thus \(q(K_{2}\lor C_{10})=3\). One such numerical matrix is:_ \[A^{\prime\prime}=\left(\begin{array}{cccccccc}0&-1.9720&-0.11321&-0.40399&-2.4521&-1.3819&0.001884&0.00028437&-0.003489&2.1264&-2.9646&1.3043\\ -1.9720&0&-2.2195&-2.0495&2.2752&-0.83772&0.008390&0.005574&-0.01851&-1.9731&- 1.9791&1.6644\\ -0.11322&-2.2195&8.3468&0&0&0&0&0&0&0&0\\ -2.45391&-2.2725&3.2468&3.675&1.5399&0&0&0&0&0&0\\ -1.3819&-0.8372&0.3&0.675&1.5399&0&0&0&0&0&0\\ 0.0068184&0.00390&0&0&0&0.010306&0&4.5891&0&0&0&0\\ 0.00028437&0.00576&0&0&0&0&0.010306&0.54891&0&2.4227&0&0&0\\ -0.006489&-0.015811&0&0&0&0&0&0.24227&0.013409&0&0\\ -2.1264&-1.9731&0&0&0&0&0&0&0.013409&2.0999&2.0999\\ 1.3043&1.6944&3.6901&0&0&0&0&0&0&0&-2.7171&-2.9171\end{array}\right)\] ## 5 Limitations of Algorithm 3.1 for graph joins In this section we show that the condition \(C(\mathbf{m}(A),t)\leq r\) from Proposition 3.4 is not generally sufficient in the case \(G=K_{r}\) for the existence of a matrix with at most \(t\) eigenvalues in \(S(G\lor A)\). **Proposition 5.1**.: _Suppose \(t\geq 2\) and \(A_{1},\ldots,A_{k}\) are successive \(1\)-borderings of a symmetric matrix \(A=A_{0}\), and \(\mathbf{m}(A)=(m_{1},k,m_{2},k,\ldots,k,m_{t})\) where \(m_{j}\geq k\geq t\) for each \(j\), and \(q(A_{k})=t\). Then_ \[\mathbf{m}(A_{j})=(m_{1}+j,k-j,m_{2}+j,k-j,\ldots,k-j,m_{t}+j),\quad j=0,1, \ldots,k.\] Proof.: Let \(\mu_{1}<\cdots<\mu_{2t-1}\) be the distinct eigenvalues of \(A_{0}\) and \(\lambda_{1}<\cdots<\lambda_{t}\) the distinct eigenvalues of \(A_{k}\). By eigenvalue interlacing, every eigenvalue of \(A_{j}\) is in the closed interval \([\lambda_{1},\lambda_{t}]\). 
Moreover, by Lemma 3.3, for \(i=1,\ldots,t-1\) and \(j=0,\ldots,k\), we have \[m_{A_{j}}(\lambda_{i},\lambda_{i+1})\geq m_{A_{0}}(\lambda_{i},\lambda_{i+1})-j.\] In particular, \(0=m_{A_{k}}(\lambda_{i},\lambda_{i+1})\geq m_{A_{0}}(\lambda_{i},\lambda_{i+1} )-k\), so \[m_{A_{0}}(\lambda_{i},\lambda_{i+1})\leq k.\] Let \(S=\{\mu_{1},\ldots,\mu_{2t-1}\}\setminus\{\lambda_{1},\ldots,\lambda_{t}\}\). Then \(|S|\geq 2t-1-t=t-1\), and each eigenvalue in \(S\) has multiplicity at least \(k\) in \(A_{0}\) by hypothesis, so \[k(t-1)\leq k|S|\leq\sum_{i=1}^{t-1}m_{A_{0}}(\lambda_{i},\lambda_{i+1})\leq k( t-1).\] Hence, \(|S|=t-1\), so \(\{\lambda_{1},\ldots,\lambda_{t}\}\subseteq\{\mu_{1},\ldots,\mu_{2t-1}\}\). Since \(\mu_{1},\mu_{2t-1}\in[\lambda_{1},\lambda_{t}]\), this forces \(\mu_{1}=\lambda_{1}\) and \(\mu_{2t-1}=\lambda_{t}\). If \(\lambda_{i}=\mu_{j}\) and \(\lambda_{i+1}=\mu_{l}\) where \(l>j+2\), then \(m_{A_{0}}(\lambda_{i},\lambda_{i+1})\geq 2k\), a contradiction. It follows that \(\lambda_{i}=\mu_{2i-1}\) for \(1\leq i\leq t\). Hence, \(m_{A_{0}}(\lambda_{i},\lambda_{i+1})=k\) for each \(i\), and the bound we observed above becomes \[k-j\leq m_{A_{j}}(\lambda_{i},\lambda_{i+1}).\] Since \(A_{k}\) is a \((k-j)\)-bordering of \(A_{j}\), by Lemma 3.3 we also have \[m_{A_{j}}(\lambda_{i},\lambda_{i+1})\leq m_{A_{k}}(\lambda_{i},\lambda_{i+1})+ k-j=k-j,\] so \(m_{A_{j}}(\lambda_{i},\lambda_{i+1})=k-j\). Moreover, by eigenvalue interlacing, \(k-j\leq m_{A_{j}}(\mu_{2i})\leq m_{A_{j}}(\lambda_{i},\lambda_{i+1})=k-j\), so we have equality. Hence, the multiplicity of \(\mu_{2i}\) as an eigenvalue of \(A_{j}\) is \(k-j\), and no other real number in \((\lambda_{i},\lambda_{i+1})\) is an eigenvalue of \(A_{j}\). It follows that every eigenvalue of \(A_{j}\) other than \(\mu_{2},\ldots,\mu_{2t}\) is in the set \(\{\lambda_{1},\ldots,\lambda_{t}\}\). The total multiplicity of these eigenvalues in \(A_{j}\) is \(\sum_{i=1}^{t}(m_{i}+j)\), and by eigenvalue interlacing, the multiplicity of \(\lambda_{i}=\mu_{2i-1}\) is bounded above by \(m_{i}+j\), so this must be precisely its multiplicity. **Corollary 5.2**.: _If \(A\) is a symmetric matrix with \(\mathbf{m}(A)=(m_{1},k,m_{2},k,\ldots,k,m_{t})\) where \(m_{i}\geq k\geq t\geq 2\) for each \(i\), then \(C(\mathbf{m}(A),t)=k\) yet \(q(G\lor A)>t\) for all non-empty graphs \(G\) with \(|G|=k\). Hence, the inequality (3) is strict in this case._ Proof.: We have \(C(\mathbf{m}(A),t)=k\), so \(q(B)\geq t\) for all \(k\)-borderings \(B\) of \(A\) by Proposition 3.4. Consider a sequence of successive \(1\)-borderings taking us from \(A\) to some \(k\)-bordering \(B\) with \(q(B)=t\). By Proposition 5.1, the successive eigenvalue multiplicities of any given \(\lambda\in\mathbb{R}\) in this sequence of matrices is monotone. Hence, by Corollary 4.2, the superdiagonal of the leading principal \(k\times k\) submatrix of \(B\) is zero. We can repeat this argument after first permuting the rows and columns of this principal submatrix to see that all its off-diagonal entries are zero, so it has an empty graph. This shows a limitation of Algorithm 3.1. However, we show in the following proposition that this limitation is quite specific, and that if the multiplicity list is perturbed only slightly we may have success using this procedure. 
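Before turning to that proposition, here is a concrete instance of the obstruction (a worked example of ours, not taken from the source). Take \(t=k=2\) and a \(6\times 6\) symmetric matrix \(A\) with spectrum \(\lambda_{1}^{(2)}<\lambda_{2}^{(2)}<\lambda_{3}^{(2)}\), i.e. \(\mathbf{m}(A)=(2,2,2)\), the multiplicity list discussed above for \(S(C_{6})\). Then
\[
C\big((2,2,2),\,2\big)=\sum_{1<j<3}m_{j}=m_{2}=2,
\]
so Proposition 3.4 guarantees a \(2\)-bordering \(B\) of \(A\) with \(q(B)=2\); but Corollary 5.2 (with \(m_{1}=k=m_{2}=2\) and \(t=2\)) forces the two bordered vertices to be non-adjacent in \(G(B)\), so \(q(K_{2}\lor A)>2\) even though the necessary condition \(C(\mathbf{m}(A),2)\leq 2\) holds, and the inequality (3) is strict.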
**Proposition 5.3**.: _Suppose \(t\geq 2\) and \(A\) is a symmetric matrix with eigenvalues_ \[\lambda_{1}^{(m_{1})}<\beta<\gamma<\lambda_{2}^{(m_{2})}<\mu_{2}^{(2)}<\lambda _{3}^{(m_{3})}<\mu_{3}^{(2)}<\cdots<\mu_{t-1}^{(2)}<\lambda_{t}^{(m_{t})}.\] _If \(A\) has an eigenbasis such that for each vertex \(u\) there is at least one eigenvector corresponding to an eigenvalue in \(\{\mu_{i}\}\) which is nonzero in the entry corresponding to \(u\), then there exists a matrix \(B\in S(K_{2}\lor A)\) such that \(B\) has eigenvalues \(\lambda_{1}^{(m_{1}+2)},\ldots,\lambda_{t}^{(m_{t}+2)}\). In particular, \(q(K_{2}\lor G(A))\leq t\)._ Proof.: By [6] we know that there are \(1\)-borderings \(C_{\beta}\) and \(C_{\gamma}\) of the matrices \(\operatorname{diag}(\beta,\mu_{2},\ldots,\mu_{t})\) and \(\operatorname{diag}(\gamma,\mu_{2},\ldots,\mu_{t})\) respectively which each have eigenvalues \(\{\lambda_{1},\ldots,\lambda_{t}\}\). Furthermore, we know that these borderings can have no zeros in the first row or column, and by computing traces we see that the \((1,1)\) entries are \(k-\beta\) and \(k-\gamma\) respectively. Let the first row of \(C_{\beta}\) have entries \(k-\beta,b_{1},\ldots,b_{t-1}\) and the first row of \(C_{\gamma}\) have entries \(k-\gamma,c_{1},\ldots,c_{t-1}\). Define \(B_{0}=[v_{\beta},v_{\gamma}]\) where \(v_{\beta}=(k-\beta,0,b_{1},0,b_{2},0,\ldots)^{T}\), and \(v_{\gamma}=(0,k-\gamma,0,c_{1},0,c_{2},\ldots)^{T}\). That is, we are making vectors with the first rows of the borderings in the even or odd positions. Now define matrices \(D_{1}=\operatorname{diag}(k-\beta,k-\gamma)\), \(D_{2}=\operatorname{diag}(\beta,\gamma,\mu_{2}^{(2)},\ldots,\mu_{t-1}^{(2)})\) and \(D_{0}=\text{diag}(\lambda_{1}^{(m_{1})},\dots,\lambda_{t}^{(m_{t})})\), and finally define \[M=\begin{pmatrix}k-\beta&&b_{1}&&b_{2}&&\cdots&b_{t-1}&\\ &k-\gamma&&c_{1}&&c_{2}&\cdots&&c_{t-1}\\ b_{1}&&\beta&&&&&&&\\ &c_{1}&&\gamma&&&&&&&\\ b_{2}&&&&\mu_{2}&&&&\\ &&c_{2}&&&&\mu_{2}&&&&\\ &&&&&&&\ddots&&\\ b_{t-1}&&&&&&&\mu_{t-1}&\\ &&c_{t-1}&&&&&&&\mu_{t-1}\end{pmatrix}\oplus D_{0}\] where the blank entries denotes \(0\)s. Since \(M\) is permutationally similar to the block diagonal matrix with blocks \(C_{\beta}\), \(C_{\gamma}\), and \(D_{0}\), the eigenvalues of \(M\) are \(\lambda_{1}^{(m_{1}+2)},\dots,\lambda_{t}^{(m_{t}+2)}\). By the assumption, we may choose \(V\) to be a matrix which diagonalizes the matrix \(A\) such that for any row \(u\), there is a column \(j\) corresponding to an eigenvector of some \(\mu_{\ell}\) such that \(V_{uj}\neq 0\). Without loss of generality assume that \[V^{T}AV=\text{diag}(\beta,\gamma,\mu_{2}^{(2)},\dots,\mu_{t-1}^{(2)},\lambda_{ 1}^{(m_{1})},\dots,\lambda_{t}^{(m_{t})})=D_{2}\oplus D_{0}.\] Define \(W^{\prime}=I_{2}\oplus W_{2}\oplus\dots\oplus W_{t-1}\) where the \(W_{i}\) are any orthogonal \(2\times 2\) matrices, and define \(W=W^{\prime}\oplus I_{m_{1}+\dots+m_{t}}\). Then, as \(W^{\prime}\) commutes with \(D_{2}\), we have that \[W^{T}V^{T}AVW=V^{T}AV=D_{2}\oplus D_{0}.\] Let \(V^{\prime}\) be the first \(2t-2\) columns of \(V\), so that it has columns that are the eigenvectors corresponding to the eigenvalues of \(D_{2}\). Notate these columns by \(v_{1},\dots,v_{2t-2}\). Let \(U\) be any orthogonal \(2\times 2\) matrix. 
Then \[(U\oplus VW)M(U^{T}\oplus W^{T}V^{T}) =(U\oplus VW)\begin{pmatrix}D_{1}&B_{0}\\ B_{0}^{T}&D_{2}\oplus D_{0}\end{pmatrix}(U^{T}\oplus W^{T}V^{T})\] \[=\begin{pmatrix}UD_{1}U^{T}&UB_{0}W^{T}V^{T}\\ VWB_{0}^{T}U^{T}&A\end{pmatrix}.\] This matrix is in \(S(K_{2}\lor G(A))\) if \(UD_{1}U^{T}\) has nonzero off-diagonal entries and the following matrix has no zero entry: \[UB_{0}W^{T}V^{T}=UB_{0}^{\prime}(W^{\prime})^{T}(V^{\prime})^{T},\quad\text{ where}\quad B_{0}^{\prime}=\begin{pmatrix}b_{1}&&b_{2}&&\cdots&b_{t-1}&\\ &c_{1}&&c_{2}&\cdots&&c_{t-1}\end{pmatrix}.\] Let \(\theta_{2},\dots,\theta_{t-1}\) be uniformly and independently chosen angles and let \(W_{i}\) be the \(2\times 2\) rotation matrix by angle \(\theta_{i}\). Then the \(ij\)'th entry of \(B_{0}^{\prime}(W^{\prime})^{T}(V^{\prime})^{T}\) is \[b_{1}v_{1}(j)+\sum_{k=2}^{t-1}b_{k}v_{2k-1}(j)\cos\theta_{k}-b_{k}v_{2k}(j)\sin \theta_{k}\] if \(i=1\) and \[c_{1}v_{2}(j)+\sum_{k=2}^{t-1}c_{k}v_{2k-1}(j)\sin\theta_{k}+c_{k}v_{2k}(j)\cos \theta_{k}\] if \(i=2\). Since the \(b_{i}\) and \(c_{i}\) are nonzero, and by the choice of \(V\) there is at least one \(u\) with \(3\leq u\leq 2t-2\) with \(v_{u}(j)\neq 0\), we have that the \(ij\)'th entry of \(B_{0}^{\prime}(W^{\prime})^{T}(V^{\prime})^{T}\) is nonzero with probability \(1\). So we can choose \(W^{\prime}\) for which \(B_{0}^{\prime}(W^{\prime})^{T}(V^{\prime})^{T}\) has no zero entries. Moreover, since \(\beta\neq\gamma\), \(D_{1}\) is not a zero matrix. It is now easy to choose \(U\) such that \(UD_{1}U^{T}\) and \(UB_{0}W^{T}V^{T}\) have all nonzero entries. In this paper we continued the study of the behaviour of \(q(G\lor H)\). For a general graph \(H\), we obtained results for the case when \(G\) is either a path or a complete graph, and we explored the potential impact of eigenvector patterns on \(q(G\lor H)\), for various families of graphs \(H\). ### Acknowledgements This project started and was made possible by the online research community _Inverse eigenvalue problems for graphs_, which is sponsored by the American Institute of Mathematics with support from the US National Science Foundation. The authors thank AIM and the research community organizers for their support.
2304.05121
APISENS- Sentiment Scoring Tool for APIs with Crowd-Knowledge
Utilizing pre-existing software artifacts, such as libraries and Application Programming Interfaces (APIs), is crucial for software development efficiency. However, the abundance of artifacts that provide similar functionality can lead to confusion among developers, resulting in a challenge for proper selection and implementation. Through our preliminary investigation, we found that utilizing the collective knowledge of a crowd can greatly assist developers in acquiring a thorough and complete understanding of the complexities involved in the software development process. Especially as emotions are an inseparable part of human nature, it influences developers' activities. In this regard, we attempt to build a tool that can retrieve sentiment information for software APIs so that developers can determine APIs to utilize for their tasks. We employ the dataset from the most popular platforms (i.e., Twitter and YouTube) to build our research prototype. The source code, tool, and demo video are available on GitHub at \url{https://github.com/FalconLK/APISens}.
Kisub Kim, Ferdian Thung, Ting Zhang, Ivana Clairine Irsan, Ratnadira Widyasari, Zhou Yang, David Lo
2023-04-11T10:26:12Z
http://arxiv.org/abs/2304.05121v1
# APISens- Sentiment Scoring Tool for APIs with Crowd-Knowledge ###### Abstract Utilizing pre-existing software artifacts, such as libraries and Application Programming Interfaces (APIs), is crucial for software development efficiency. However, the abundance of artifacts that provide similar functionality can lead to confusion among developers, resulting in a challenge for proper selection and implementation. Through our preliminary investigation, we found that utilizing the collective knowledge of a crowd can greatly assist developers in acquiring a thorough and complete understanding of the complexities involved in the software development process. Especially as emotions are an inseparable part of human nature, it influences developers' activities. In this regard, we attempt to build a tool that can retrieve sentiment information for software APIs so that developers can determine APIs to utilize for their tasks. We employ the dataset from the most popular platforms (i.e., Twitter and YouTube) to build our research prototype. The source code, tool, and demo video are available on GitHub at [https://github.com/FalconLK/APISens](https://github.com/FalconLK/APISens). API Sentiment Analysis, API Comprehension, Pre-trained Model ## I Introduction Over the past decades, we have encountered the rapid growth of open-source software (OSS). This phenomenon naturally drives more and more reuse of software artifacts (e.g., libraries or frameworks). As the community has tremendous demand, artifact recommendation systems have received extensive attention. For example, developers often search for existing source code artifacts [1, 2] to obtain the functions they need to implement or to maintain the software [3]. Therefore, many automated artifact recommendation systems have been proposed [4, 5, 6, 7, 8, 9] to help developers select artifacts efficiently. The reuse of devised software artifacts causes another problem due to there being a large number of artifacts that implement the same or similar functionalities, which may confuse developers. For example, two APIs, getData and retrieveInformation, are capable of processing the same functionality, which is "requesting and obtaining the necessary data". As a more complex example, there exists decrypt and unscramble whose names are totally different while they are both for "decoding/cracking something to get the initial state". Among the multiple options, developers need to choose the most appropriate ones. A study [10] suggests that software reuse mainly occurs due to a lack of base knowledge of the corresponding artifacts. This indicates that developers reuse artifacts because they do not know what they need to know to select the most suitable one. They also revealed that re-implementation happens when the artifacts are required to be further enhanced, the dependencies are too complicated, or they are deprecated. Moreover, many developers are actively discussing choosing a suitable artifact in the software development community1 and their websites23. 
Footnote 1: [https://stackoverflow.com/questions/488348/what-are-your-criteria-for-choosing-a-framework-or-library](https://stackoverflow.com/questions/488348/what-are-your-criteria-for-choosing-a-framework-or-library) Footnote 2: [https://www.lagasoft.com/blog/8-tips-for-choosing-the-right-library](https://www.lagasoft.com/blog/8-tips-for-choosing-the-right-library) Footnote 3: [https://www.theserverside.com/tip/7-tips-to-choose-the-right-Java-library](https://www.theserverside.com/tip/7-tips-to-choose-the-right-Java-library) To support developers with a comprehensive understanding when they select a software artifact, crowd-knowledge is known to be helpful [11, 12]. Especially as emotions are an inseparable part of human nature, it influences developers' activities as well [13]. Several studies in the software engineering community discovered that developers express sentiments on libraries [10], APIs [14], commit messages [15], project artifacts [16], etc. Yet, most of the existing recommendation systems [4, 5, 6, 17, 18] retrieve multiple artifacts without considering their quality in terms of developer sentiments while only some approaches [5, 19] considered discussion data from Stack Overflow4. These observations motivated us to build a tool named APIsens, a software API sentiment scoring tool with crowd-knowledge (i.e., online developer sentiments) from diverse resources. APISens retrieves the sentiment scores for APIs such that the user can recognize its popularity, level of awareness, or how the public reacts to it. To construct the tool, we employ a pre-trained Transformer (i.e., a deep neural network architecture based on the attention mechanism) model, BERT (Bidirectional Encoder Representations from Transformers) [20]. Its effectiveness in sentiment analysis has been proved by an empirical study [12]. APISens consists of two models that are based on BERT-base. The first model is a recognition model, which distinguishes whether the input discussion is related to a software artifact context instead of a normal context (e.g., decrypt and unscramble may be used for a normal context anywhere). We collect broad discussion text data from diverse online resources (i.e., Twitter and YouTube) and filter with the first model. Please note that the discussion with the normal context is the noise for training the API sentiment analysis model. Furthermore, we believe that the recognition model, which can distinguish discussions related to software artifacts, is able to extract API-related ones when it meets the discussion that contains API-related content. We call the results of the first model API-related discussions. The second model infers a score for the input API. It is fine-tuned with API-related discussions to retrieve the score for an API by analyzing its sentiments. In a nutshell, APISens takes an API name as text and retrieves a comprehensive sentiment normalized score for such an API. In summary, this paper contributes the following: * Diversity of the resources (i.e., Twitter and YouTube) for fine-tuning the BERT model to infer the sentiment score that supports users with comprehensive API scores. * Construction of a tool that can provide multiple scores including sentiment and popularity for software APIs such that developers can benefit from determining more appropriate APIs for their tasks. 
## II Related Work ### _Sentiment Analysis for Software Engineering_ Sentiment analysis in software engineering is a computational study of various viewpoints of developers on diverse software artifacts. Most of them consider sentiment analysis as a polarity classification. Given a text, the goal of the study is to predict its sentiment orientation among _positive_, _neutral_, or _negative_. We introduce representative approaches to the topic. Stanford CoreNLP [21] is designed for single-sentence classification; it is trained with the Recursive Neural Tensor Network [22] on the Stanford Sentiment Treebank [21]. SentiStrength [23] is a lexicon-based approach with several dictionaries, including both formal and informal terms, while each term is labeled with a sentiment strength. SentiStrength-SE [24] contains a domain-specific dictionary that is constructed based on an in-depth investigation of the results from the previous approach. SentiCR [25] is designed for code review comments. The results show that Gradient Boosting Tree [26] is the most suitable model for their data. Senti4SD [27] is a supervised model that utilizes three different features based on (1) generic sentiment lexicons [23]; (2) keywords (number of occurrences); (3) word representation in a Distributional Semantic Model (DSM) specifically trained on Stack Overflow data. Recently, a study by Zhang et al. [12] was conducted to assess the performance of these sentiment analysis tools and large pre-trained models that are frequently leveraged within the software engineering domain. ### _Deep learning based API Recommendation_ Recent API recommendation approaches utilize neural networks to learn and identify patterns in API usage and documentation. These systems can be trained on datasets consisting of past API usage and documentation, allowing them to make informed recommendations for future API selection. DeepAPI [6] is the first introduced deep learning model to API recommendation, achieving end-to-end API sequence generation. This idea considered the API recommendation as a machine translation problem with a Recurrent Neural Network (RNN) EncoderDecoder model to encode a query into a context vector. Then, it recommends an API sequence based on the context vector for the query. API2Vec [28] utilizes unsupervised learning and word embedding techniques to create embeddings and recommend APIs based on their semantic similarity to a given context. Huang et al. [19] unveiled BIKER by filtering prospective APIs according to their alignment with Stack Overflow posts by leveraging the bag-of-word embedding to optimize the selection of APIs. CLEAR [5] also leverages Stack Overflow posts and BERT sentence embedding to preserve the semantic information in queries and posts. Moreover, it employs contrastive learning [29] to distinguish the queries which are semantically dissimilar but lexically similar. Although BIKER and CLEAR took into account the crowd-knowledge in a way with Stack Overflow posts, they only adopted related terms with the target APIs. Therefore, they still missed considering developer sentiments. ## III APISens in Detail APISens takes an API name and retrieves two types of sentiment scores; one from discussion and the other from videos. As Figure 1 illustrates, the tool consists of the following three components: (1) API Recognizer, (2) Discussion Analyzer, and (3) Video Analyzer. 
Assuming the data from online platforms are crawled with the corresponding APIs (e.g., GitHub REST API5), APISens must take a simple form of API name as a query. Given an API name as a query, APISens's pre-trained API Recognizer identifies the API-related discussions (i.e., tweets in this tool). Once the relevant discussions are detected, Discussion Analyzer takes them as input to retrieve the sentiment scores. Video Analyzer concurrently analyzes sentiments from the videos corresponding to the target API with various aspects (i.e., the number of 'Likes', 'Comments', and 'Views'). Finally, the user interface shows the results. Figure 1 illustrates the steps that are unfolded in the working of the score retrieval. ### _Datasets (Crowd-knowledge)_ We collect datasets from two big platforms (i.e., Twitter and YouTube). Twitter is one of the biggest social network platforms, and a sentiment analysis tool can benefit in many aspects. It has a large user base, including developers, which can provide a broad overview of sentiment about software APIs, while its large and diverse discussions can ensure that the tool is able to handle a wide range of sentiment. The data from Twitter is constantly being updated, which is useful for practitioners to be up-to-date. YouTube is the largest video-sharing platform with a wide range of content across many different topics, making it a valuable data source for sentiment analysis. Many people rely on YouTube to get information and opinions about products, services, and even software APIs. In order to obtain API datasets from both Twitter and YouTube, we concurrently collect data from the platforms. As tweets may be written in various languages and may contain various forms of noise, such as retweets, mentions of other users, URLs, and emojis, the data is preprocessed prior to collection. The preprocessing step converts emojis to their corresponding textual representation and filters out APIs and corresponding tweets by calculating the average number of tweets per API (i.e., APIs that are related to less than the average are omitted). This filtering helps to prevent partial generalization, as the inclusion of tweets from APIs with only a single tweet could lead to bias. Then, it removes duplicate tweets and those that are written in non-English using a Python library (i.e., langdetect). The detection is possible by taking the language with the highest probability. The video dataset contains the title, description, the number of 'Likes', 'Comments', and 'Views'. Some videos may not contain the statistics of the information for sentiment analysis, and they are the further candidates to be eliminated. We initially target 1409 JDK APIs for our research prototype and collected 56,646 tweets. After processing the data, the number of tweets is 28,278, with the corresponding 476 APIs. While the crawler can be performed in real-time and APISens can analyze the dynamic data, our research prototype assumes the dataset is already collected. ### _API Recognizer_ Our API recognizer is composed of a fully connected layer on top of a pre-trained BERT [20] as it has shown to be effective in the task of software library recognition in tweets [30]. We prepend the [CLS] token to each tweet as the input to BERT. The final hidden state of the [CLS] is seen as the aggregated representation of the tweet. We use the embedding output from [CLS] as the input to the classifier. Our API recognizer is trained with the AdamW optimizer [31] and we use a linear learning rate scheduler. 
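A minimal sketch of such a recognizer is shown below, assuming the HuggingFace Transformers library; the checkpoint name, hyperparameters, and example tweet are illustrative choices of ours, not the authors' configuration. It follows the description above: BERT encodes the tweet, the final hidden state of [CLS] feeds a fully connected layer, and training uses AdamW with a linear learning rate scheduler.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer, get_linear_schedule_with_warmup

class APIRecognizer(nn.Module):
    """[CLS]-pooled BERT with a fully connected binary head (related / not related)."""
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, **enc):
        cls = self.bert(**enc).last_hidden_state[:, 0]   # final hidden state of [CLS]
        return self.classifier(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = APIRecognizer()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                            num_training_steps=1000)

enc = tokenizer(["decrypt is so slow in this library"], return_tensors="pt",
                padding=True, truncation=True)
logits = model(**enc)   # training loop (cross-entropy loss, scheduler.step()) omitted
```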
The employed model predicts whether the tweets are related to software artifacts or not. It was initially trained with a library dataset (i.e., 4,456 tweets with 23 libraries). Still, the key features (i.e., tokens related to the software artifacts) that such a recognition model learns are not different for APIs. Furthermore, the literature [30] shows that the mixed-setting (i.e., utilizing the mixed dataset across the different libraries) performs the best with 90% of the F1-score. Therefore, it is a suitable model to leverage with the same setting to classify the discussions related to our target APIs. Based on the predictions, the authors manually confirm the labels to enhance the correctness. Moreover, we classify the video datasets since there may exist noise; for example, videos related to common terms such as unscramble can be associated with any topic. As we have the titles and descriptions, we apply the same procedure to filter out the noise. As a result of the video filtering procedure, the number of target APIs decreased from 476 to 235, and the number of associated tweets dropped to 1,606. ### _Discussion Analyzer_ Given the API-related discussion dataset, this component performs sentiment analysis to get a score for retrieval. We again employ the same pre-trained BERT model that is leveraged in a state-of-the-art sentiment analysis approach [12]. In the literature, the experimental results show that BERT outperforms other state-of-the-art techniques that specifically target sentiment analysis for software engineering downstream tasks (i.e., BERT shows 89% of Micro-avg, which is 7 percentage points better than the second-best model [12]). The API dataset used for training was 4,522 Stack Overflow posts related to software APIs. As the model retrieves class predictions among _Positive_, _Neutral_, and _Negative_, we convert them to integers 10, 5, and 0, respectively, to calculate the average scores with a maximum of 10. We finally integrate the results and calculate the average sentiment score over all tweets for the input API to provide a comprehensive and concise score. ### _Video Analyzer_ Given the list of API-related videos filtered with their title and description by the API Recognizer, APISens extracts the statistics to support the sentiment more comprehensively. The statistics include the number of likes, comments, and views of each video that are leveraged to determine the sentiment of the content (i.e., the input API). These numbers of a video are known to be indicative of public sentiment toward the video. A high number of likes and views implies that the video is popular and well-created, while a high number of comments can provide insight into the types of discussions and reactions the video elicits. We believe these numbers can be useful for sentiment analysis as they directly reflect audiences' opinions that may contain emotional orientations. To get a more accurate reflection of the central tendency in the result set while avoiding the outlier effects, we decide to provide each median value in the user interface.
Fig. 1: Overview of APISens.
## IV Prototype explanation and Evaluation ### _Prototype Explanation_ The user interface of APISens is designed to be intuitive and easy to use. Figure 2 illustrates the user interface. The main window is divided into three main sections: the query panel on the top, the result demonstration panel in the center, and the chart panel on the bottom. The query panel (1) takes an API name as the user query.
Once the _Search_ button is clicked, it analyzes the discussion sentiment to retrieve the score to a result demonstration panel (2). At the same time, it provides the sentiment information from the related videos in 3. As we mentioned, APISens calculates the median value of the collected videos of the input API. Furthermore, a bar chart (4) displays statistics of likes and comments from a certain number of videos related to the corresponding API in the chart panel. This allows users to recognize the trend of actual numbers of likes and comments on related videos. Users can re-sample the number of videos to get more drift using the slide bar (5). The bar (6) at the top demonstrates the progress. ### _Possible Use-cases from User Experience_ As our goal is to support practitioners, we also conduct a user study/discussion evaluating APISens's usefulness. We ask 2 Research Scientists and 2 Ph.D. students in Software Engineering who have more than 5 years of programming experience. To obtain opinions, we first deliver APISens with 20 samples of APIs and let the participants rate (i.e., 1 to 5 where 1 indicates strongly disagree and 5 denotes strongly agree) their usability and usefulness as well as discuss the potential. Overall, the participants consider APISens to be easy-to-use (5 out of 5) and useful (4.05 out of 5). We have identified and established the promising use cases from the user discussion. **Understanding the popularity of the API at a shot.** The main use case for APISens is, indeed, getting the sentiment score for an API such that the user can recognize whether the API is widely used and well-supported. Also, it can give a sense of how much demand there is for the API as well as it allows users to get valuable context and perspective when comparing different APIs or considering potential alternatives. **Incorporating it from an IDE.** The sentiment information could be used to personalize API recommendations for individual developers based on their preferences and needs. Hence, the integration can provide more options. It also benefits code completion of the IDE, as sentiment information can be a factor for prioritization for real-time API suggestions. **Connection with sequence recommendation techniques.** Once APISens is connected with API sequence recommendation, it can bring synergies such as prioritize/re-order recommending APIs with high positive sentiment scores. This could potentially lead to a better user experience. **Sentiment information as learning features.** The sentiment information could be used as a feature in machine learning models to improve the accuracy of API recommendations. For example, as there is a strong correlation between API sentiment and API popularity or usage, including such information as a feature in a model can potentially improve performance. ## V Further Ideas Based on our observations and early feedback from the users, we believe that APISens has the potential to be a useful tool for API comprehension. Here are some ideas to further improvement: **Concurrent support for multiple APIs.** Developers tend to compare multiple APIs to find the best API for a particular task or to stay up to date with industry trends. This could help them stay informed about new APIs and best practices and ensure that they are using the most popular and effective APIs. **Training the models with a bigger dataset.** To maximize the performance of the tool, the pre-trained models can be fine-tuned with a bigger dataset. 
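To make the last two use cases concrete, the toy sketch below (ours; the candidate APIs, scores, and weighting are illustrative, not outputs of APISens) shows how a recommender could blend its own relevance score with the APISens sentiment score when ranking candidates.

```python
def rerank(candidates, alpha=0.7):
    """Re-rank API candidates by blending retrieval relevance with an
    APISens-style sentiment score; the weighting scheme is our assumption."""
    # candidates: list of dicts like {"api": ..., "relevance": 0-1, "sentiment": 0-10}
    return sorted(candidates,
                  key=lambda c: alpha * c["relevance"] + (1 - alpha) * c["sentiment"] / 10,
                  reverse=True)

candidates = [
    {"api": "java.util.Base64.getDecoder", "relevance": 0.82, "sentiment": 8.0},
    {"api": "sun.misc.BASE64Decoder",      "relevance": 0.85, "sentiment": 2.5},
]
print([c["api"] for c in rerank(candidates)])  # sentiment pushes the first API to the top
```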
As statistical power and generalizability of the model can be derived by the bigger and better dataset, APISens can be further fine-tuned for more reliable results. Moreover, covering more APIs may help our tool to be more generally used by developers. **Further support with similar APIs that perform the same functionality.** APISens can encompass a function that can support similar APIs as well as their sentiment information corresponding to the queried one. This can further boost efficiency and ease the comparison and determination process for more suitable APIs.
2308.03685
Learning Concise and Descriptive Attributes for Visual Recognition
Recent advances in foundation models present new opportunities for interpretable visual recognition -- one can first query Large Language Models (LLMs) to obtain a set of attributes that describe each class, then apply vision-language models to classify images via these attributes. Pioneering work shows that querying thousands of attributes can achieve performance competitive with image features. However, our further investigation on 8 datasets reveals that LLM-generated attributes in a large quantity perform almost the same as random words. This surprising finding suggests that significant noise may be present in these attributes. We hypothesize that there exist subsets of attributes that can maintain the classification performance with much smaller sizes, and propose a novel learning-to-search method to discover those concise sets of attributes. As a result, on the CUB dataset, our method achieves performance close to that of massive LLM-generated attributes (e.g., 10k attributes for CUB), yet using only 32 attributes in total to distinguish 200 bird species. Furthermore, our new paradigm demonstrates several additional benefits: higher interpretability and interactivity for humans, and the ability to summarize knowledge for a recognition task.
An Yan, Yu Wang, Yiwu Zhong, Chengyu Dong, Zexue He, Yujie Lu, William Wang, Jingbo Shang, Julian McAuley
2023-08-07T16:00:22Z
http://arxiv.org/abs/2308.03685v1
# Learning Concise and Descriptive Attributes for Visual Recognition ###### Abstract Recent advances in foundation models present new opportunities for interpretable visual recognition - one can first query Large Language Models (LLMs) to obtain a set of attributes that describe each class, then apply vision-language models to classify images via these attributes. Pioneering work shows that querying thousands of attributes can achieve performance competitive with image features. However, our further investigation on 8 datasets reveals that LLM-generated attributes in a large quantity perform almost the same as random words. This surprising finding suggests that significant noise may be present in these attributes. We hypothesize that there exist subsets of attributes that can maintain the classification performance with much smaller sizes, and propose a novel learning-to-search method to discover those concise sets of attributes. As a result, on the CUB dataset, our method achieves performance close to that of massive LLM-generated attributes (e.g., 10k attributes for CUB), yet using only 32 attributes in total to distinguish 200 bird species. Furthermore, our new paradigm demonstrates several additional benefits: higher interpretability and interactivity for humans, and the ability to summarize knowledge for a recognition task. ## 1 Introduction Explaining black-box neural models is a critical research problem. For visual recognition, one line of research tries to classify objects with descriptions or attributes [12, 8, 39, 18, 22], which provide additional information beyond visual cues such as activation maps [41, 40]. However, they require in-depth human analysis and intensive annotation to obtain key attributes for a particular recognition task. Such a paradigm is costly and thus impractical to scale up when the number of classes and domains grows. The recent advance of language foundation models creates new opportunities for building interpretable visual recognition models, as demonstrated by the powerful capabilities of models such as GPT-3 and ChatGPT in encoding world knowledge [5, 32, 21]. One can query useful visual attributes from LLMs and classify images via these attributes by converting visual features from vision-language models (VLMs) (_e.g._, CLIP [36]) into attribute scores [56]. One recent work [52] shows that a large set of attributes from LLMs (_e.g._, 50 attributes per class) can achieve comparable performance to image features in a linear probing setting. However, two key observations motivate us to rethink this formulation: (1) A large number of attributes dramatically hurts the interpretability of a model. It is unrealistic to manually check thousands of attributes to fully understand model decisions. (2) We surprisingly find that when the number of attributes is large enough (_e.g._, the dimension of image features), random words drawn from the entire vocabulary can perform equally well as LLM-generated attributes. Moreover, reducing the number of random words by 25% can still attain competitive performance. This indicates that redundant and noisy information exists in the massive LLM-generated attributes. With our findings, we ask the research question: _Can we learn a concise set of representative visual attributes in the form of natural language to explain how visual recognition works?_**For example, can we find a few representative attributes to distinguish 200 bird species?** This is a non-trivial problem. Even for humans, it is not easy to summarize
Even for humans, it is not easy to summarize what the representative visual attributes are, given many visual classes. (Figure 1: Our proposed paradigm for visual recognition via learning a concise set of descriptive attributes.) To tackle this challenge, we propose a novel learning-to-search method, which uses image-level labels to guide the searching of discriminative attributes. Specifically, we train a learnable dictionary to approximate the embedding space of VLMs, and then find descriptive attributes in the latent text space via nearest neighbor search. In summary, we propose a new paradigm for visual recognition (Figure 1), which seeks to learn a concise set of visual attributes in the form of natural language. Once learned, there are several benefits to our new paradigm: **(1)** Our discovered attributes are highly descriptive. On 8 visual recognition datasets, our model classifies images via these attributes and achieves classification performance comparable to image features, even if the number of attributes is much smaller than the dimension of image features. **(2)** The condensed sets of attributes enable strong interpretability for the model decision process through a few human-friendly text descriptions. **(3)** Additionally, our framework presents a natural language interface for humans to interact with. One can correct a wrong prediction during model inference by perturbing the values of attribute scores where it made mistakes. **(4)** Lastly, these expressive attributes can be viewed as a concise form of knowledge to summarize useful features for a visual recognition task, without costly human effort. Overall, our contributions are three-fold: * Leveraging recent advances in foundation models, we propose a new paradigm for visual recognition by learning a concise set of attribute descriptions. * To find these attributes, we propose a novel learning-to-search method which prunes the large attribute pool from large language models to a descriptive subset. * We conduct extensive experiments across 8 visual recognition datasets to validate our recognition effectiveness and efficiency with additional benefits. ## 2 Methodology In this section, we introduce our key components for a new paradigm of visual recognition. It mainly consists of three modules: **First**, in Section 2.1, given an image domain, we query large language models to obtain a large set of visual attributes for the categories of a task. **Second**, we use a semantic transformation (Section 2.2) to project the image features into attribute features via a vision-language model, where each dimension in the new space corresponds to an attribute concept, and a higher value represents higher correlation between the image and the attribute. **Finally**, given the large space of attributes, we propose a novel learning-to-search method (Section 2.4) to efficiently prune the attributes into a much smaller subset to obtain a concise model for classification. ### Generating Attribute Concepts via LLMs The first step of our framework is to obtain a set of appropriate attribute concepts. Given a dataset with different categories (e.g., CUB with 200 bird classes), what are the distinctive visual attributes to recognize them? Manually labeling and designing these attribute concepts can be costly, and cannot scale to large numbers of classes. Large Language Models (LLMs), such as GPT-3 [5] and ChatGPT, provide an alternative solution.
We can view these language models as implicit knowledge bases with exceptional world knowledge on a variety of tasks and topics, which humans can easily interact with through natural language to query knowledge. To this end, prompt engineering, that is, the ability to ask good questions of language models, remains important. To effectively query knowledge from LLMs with regard to classifying images, we design two types of prompts. **Instance Prompting for Class-level Features.** For each class \(c\) in a given task, our first design choice is to query class-level information from LLMs. We prompt a language model with the instance prompt: _Q: What are the useful visual features to distinguish \(Y_{c}\) in a photo?_ where \(Y_{c}\) corresponds to the name of class \(c\) in the form of natural language. **Batch Prompting for Group-level Features.** For certain datasets (e.g., CIFAR-100 and ImageNet), there is inherently a hierarchy in which some categories belong to the same group. For example, in CIFAR-100, there is a superclass for every five categories. Hence, we propose batch prompting, where we ask the language model to reason about the distinctive visual features among a batch of categories: _Q: Here are \(N_{g}\) kinds of \(Y_{g}\): \(\{Y_{c_{1}}\), \(Y_{c_{2}}\),..., \(Y_{c_{N_{g}}}\}\). What are the useful visual features to distinguish them in a photo?_ where \(N_{g}\) is the number of classes in a group \(g\), \(Y_{g}\) is the name of the group, and \(Y_{c_{i}}\) corresponds to the name of each class \(c_{i}\) in the form of natural language. We present more details regarding our prompt design, robustness check of different prompts, and examples of the generated attributes in Appendix A. ### Semantic Projection After obtaining a pool consisting of \(N\) attribute concepts \(\mathcal{C}=\{a_{1},a_{2},\dots,a_{N}\}\), the second challenge is how we can best leverage these attributes to build interpretable image classifiers. Recent advances in vision-language models such as CLIP bridge the gap between images and text by pre-training models with large-scale image-text pairs. Intuitively, converting from images to text is a discretization process that will unavoidably lose rich semantic information stored in an image. To better preserve information, we use a semantic projection that transforms a visual feature into an attribute concept space. Given an image \(I\), we convert the D-dimensional image feature \(\textbf{V}\in\mathbb{R}^{D}\) into an N-dimensional attribute concept vector \(\textbf{A}\in\mathbb{R}^{N}\): \[\begin{split}\textbf{V}&=\Theta_{V}(I),\textbf{T}_{i }=\Theta_{T}(a_{i})\\ s_{i}&=\cos(\textbf{V},\textbf{T}_{i}),i=1,...,N\\ \textbf{A}&=(s_{1},\dots,s_{N})^{T}\end{split} \tag{1}\] where \(\cos(\cdot,\cdot)\) is the cosine similarity between two vectors, so that \(s_{i}\) is the similarity score between the image and the \(i\)-th attribute. \(\Theta_{V}\) and \(\Theta_{T}\) are the visual and text encoders of a VLM. \(\textbf{T}_{i}\) is the embedding of the \(i\)-th attribute in the attribute concept pool, \(i\in\{1,\dots,N\}\). **A** is the semantic vector of image \(I\). ### The Hypothesis of Attribute Concept Space Conceptually, our semantic projection resembles principal component analysis, where we aim to find a set of bases in the form of natural language, and by projecting the images into these bases we obtain a new attribute concept space where each dimension in the space corresponds to a visual attribute concept.
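For concreteness, the projection in Eq. (1) can be written in a few lines of code. This is a minimal sketch only, not taken from the paper's implementation: it assumes the open-source `clip` package, and the function name `semantic_vector` and the inputs `image` and `attributes` are placeholders (the backbone ViT-B/32 matches the default reported later).

```python
import torch
import clip  # OpenAI's open-source CLIP package (assumed available)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def semantic_vector(image: torch.Tensor, attributes: list) -> torch.Tensor:
    """Project one preprocessed image onto the attribute concept space (Eq. 1)."""
    v = model.encode_image(image.unsqueeze(0).to(device))        # V = Theta_V(I)
    t = model.encode_text(clip.tokenize(attributes).to(device))  # T_i = Theta_T(a_i)
    v = v / v.norm(dim=-1, keepdim=True)                         # normalize so the dot
    t = t / t.norm(dim=-1, keepdim=True)                         # product is a cosine
    return (v @ t.T).squeeze(0)                                  # A = (s_1, ..., s_N)

# Example usage (hypothetical attributes):
# A = semantic_vector(preprocess(pil_image), ["a bright red head", "black wings"])
```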
However, the large bag of attribute concepts we obtained from large language models is not the optimal language basis. As of today, LLMs are models that noisily condense world knowledge from the web, and are not optimized for visual recognition or visual reasoning tasks. We hypothesize that there exist subsets of attributes that can still achieve high classification performance with a much smaller size. Intuitively, most attributes in the large attribute concept pool are irrelevant for classifying a given class. For example, attributes that describe dogs are less likely to be suitable attributes to recognize birds or cars. Practically, forming a compact attribute set is also helpful for humans to interact with the model and understand its behavior better. A small number of attributes is much easier to use for diagnosing and making decisions with these neural models, which is the ultimate goal of building interpretable models. ### Task-Guided Attribute Concept Searching Finding an expressive set of language bases is non-trivial. The massive attributes from LLMs are noisy, and finding a few representative attributes for hundreds of classes in a task can be challenging and costly, even for human experts with domain knowledge. An exhaustive search is also impractical given the large text space. Inspired by dictionary learning and vector quantization techniques [43], we present a learning-to-search method that learns a dictionary to approximate an expressive subset of attributes given fixed \(K\). (Figure 2: The framework of our model. (a) Querying attributes from LLMs and finding a concise set of representative attributes; (b) An example using the attributes for interpretable visual recognition.) Specifically, we first define an embedding matrix \(\textbf{E}\in\mathbb{R}^{K\times D}\), where \(K\) is the number of attributes to be selected, and \(D\) is the dimensionality of the embedding vectors **V** and \(\textbf{T}_{i}\) (_i.e_., the latent dimension of VLMs), where **V** and \(\textbf{T}_{i}\) are the image embedding and the \(i\)-th attribute embedding shown in Eq. (1). Since our goal is to find \(K\) expressive attributes, we propose a task-guided attribute concept searching method to optimize for a particular task. For visual recognition tasks, we use a classification head to project the dictionary into \(K_{C}\) classes and guide the learning process with the categorical cross-entropy loss: \[\mathcal{L}_{ce}=-\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{K_{C}}y_{i,c}\log(p_{i,c}) \tag{2}\] where \(M\) is the number of images in a mini-batch, \(y_{i,c}\) is the binary indicator of the \(i\)-th image in the mini-batch belonging to class \(c\), and \(p_{i,c}\) is the predicted probability of the \(i\)-th image belonging to class \(c\). But simply training with the guidance of the cross-entropy loss is suboptimal, as the embeddings **E** are not constrained to lie in the same space as **T**. Thus, we use the Mahalanobis distance as a constraint to encourage the embeddings to be optimized towards the latent space of vision-language models. Given the distribution of attribute embeddings **T**, the Mahalanobis distance of \(\textbf{E}_{j}\) from **T** is defined as \[\mathcal{D}^{j}_{mah}=\sqrt{(\textbf{E}_{j}-\mathbf{\mu})\textbf{S}^{-1}(\textbf{E}_{j}-\mathbf{\mu})^{T}} \tag{3}\] where \(\mathbf{\mu}=(\mu_{1},...,\mu_{D})\) is the mean vector and **S** is the positive-definite covariance matrix of **T**.
Then the regularization term for the \(j\)-th row of **E** is defined as: \[\mathcal{L}^{j}_{mah}=\frac{1}{K}\mathcal{D}^{j}_{mah} \tag{4}\] Overall, our model is optimized with a mixture of two losses: \[\mathcal{L}_{loss}=\mathcal{L}_{ce}+\lambda\sum_{j=1}^{K}\mathcal{L}^{j}_{mah}. \tag{5}\] After training, we have the embedding matrix **E** which will be used for searching the attributes from the attribute concept pool \(\mathcal{C}\). Note that for \(\textbf{E}\in\mathbb{R}^{K\times D}\), each row of **E** is a \(D\)-dimensional vector. We denote the \(j\)-th row of **E** as \(\textbf{E}_{j}\). We use greedy search as follows: \[\textbf{T}^{*}_{j}=\operatorname*{arg\,max}_{i\in\{1,\cdots,N\}} \cos(\textbf{T}_{i},\textbf{E}_{j}), \tag{6}\] \[\text{s.t.}\ \textbf{T}^{*}_{j}\neq\textbf{T}^{*}_{k},\forall 1 \leq k<j,\] \[\text{where }j\text{ runs from }1\text{ to }K.\] As \(j\) iterates from \(1\) to \(K\), we can find \(K\) attribute embeddings \(\textbf{T}^{*}_{j},j\in\{1,\cdots,K\}\), which correspond to \(K\) expressive attribute concepts and form the condensed features containing the necessary knowledge for the task. With the selected attributes, we can calculate the semantic vector of each image as in Eq. (1), where each dimension of the vector is a similarity score between the image and an attribute. We evaluate the performance of these semantic vectors with linear probes, and the obtained linear model is used for inference and analysis. ## 3 Experiments ### Experimental Setup **Datasets.** We conduct our experiments on 8 different image classification datasets, including: CUB [44], CIFAR-10 and CIFAR-100 [24], Food-101 [4], Flower [31], Oxford-pets [33], Stanford-cars [23], Imagenet [9]. For Imagenet, it is not trivial to analyze all 1000 diverse classes. So we narrow the scope to 397 animal classes, with 509,230/19,850 samples for train/test. We denote this subset as Imagenet-Animals. Most of the other datasets include images within a specific domain (CUB, Flower, Food, Oxford-pets, Stanford-cars), while CIFAR-10 and CIFAR-100 contain broader classes that lie across domains. **Implementation Details.** Our method involves two stages of training. The first stage consists of task-guided learning of a dictionary **E** to approximate CLIP text embeddings and using this dictionary to find \(K\) attributes for visual recognition. For the Mahalanobis distance, the parameter \(\lambda\) is tuned with a grid search in {1, 0.1, 0.01, 0.001, 0}. The second stage is one-layer linear probing to classify semantic vectors. The batch size is set to 4,096 for all datasets except 32,768 on Imagenet-Animals for faster convergence. We set the number of epochs to 5,000 with early stopping. The learning rate is set to 0.01 in all experiments with an Adam optimizer [20]. Unless specified, we use GPT-3 and CLIP ViT-B/32 for all performance comparisons. **Baselines.** We compare with state-of-the-art works that leverage attributes either from human annotations or from LLMs. For a fair comparison, we use linear probes to evaluate all methods: (1) **CompDL**[56] builds semantic vectors using CLIP scores between human-designed attributes and images. (2) **LaBO**[52] is a recent work that builds semantic vectors with a large set of attributes from LLMs. (3) **Human**[44, 22]. Attribute labels for each image are annotated by humans. We compare with two versions: binary labels for each attribute, and calibrated labels with confidence scores given by annotators.
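Before turning to these comparisons, the dictionary training and greedy selection of Section 2.4 (Eqs. (2)-(6)) can be summarized in a short sketch. This is an illustrative simplification, not the paper's training code: the tensor names (`img_emb`, `attr_emb`, `labels`), function names, and hyper-parameters (steps, learning rate, covariance jitter) are placeholders.

```python
import torch
import torch.nn.functional as F

def learn_dictionary(img_emb, labels, attr_emb, K, num_classes,
                     lam=0.01, steps=1000, lr=1e-2):
    """Learn the dictionary E (Eqs. 2-5). img_emb: [M, D] image embeddings,
    attr_emb: [N, D] attribute embeddings (L2-normalized), labels: [M] class ids."""
    D = img_emb.shape[1]
    E = torch.nn.Parameter(0.01 * torch.randn(K, D))      # learnable dictionary
    head = torch.nn.Linear(K, num_classes)                 # classification head
    opt = torch.optim.Adam([E] + list(head.parameters()), lr=lr)

    mu = attr_emb.mean(dim=0)                              # statistics of T for Eq. (3)
    cov = torch.cov(attr_emb.T) + 1e-4 * torch.eye(D)      # small jitter for invertibility
    cov_inv = torch.linalg.inv(cov)

    for _ in range(steps):
        scores = img_emb @ F.normalize(E, dim=-1).T        # image-to-dictionary similarities
        ce = F.cross_entropy(head(scores), labels)         # Eq. (2)
        diff = E - mu
        mah = torch.sqrt(((diff @ cov_inv) * diff).sum(-1)).mean()  # Eqs. (3)-(4)
        loss = ce + lam * mah                              # Eq. (5)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return E.detach()

def greedy_select(E, attr_emb):
    """Greedy nearest-neighbour search of Eq. (6): one distinct attribute per row of E."""
    sims = F.normalize(E, dim=-1) @ F.normalize(attr_emb, dim=-1).T   # [K, N]
    chosen = []
    for j in range(E.shape[0]):
        for idx in sims[j].argsort(descending=True).tolist():
            if idx not in chosen:                          # enforce distinctness constraint
                chosen.append(idx)
                break
    return chosen   # indices of the K selected attributes in the concept pool
```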
To validate the effectiveness of learning-to-search, we explore other baselines: (1) **K-means**. Perform K-means clustering on CLIP attribute embeddings, then find the \(K\) attributes closest to the clustering centers. Intuitively this can be a strong baseline, as \(K\) attributes close to each center can be distinctive. (2) **Uniform Sampling** from the large attribute pool. (3) **SVD**. After obtaining the attribute embeddings \(\mathbf{T}\), we run an SVD decomposition of \(\mathbf{T}\) to get the top \(K\) vectors and find the attributes with the largest similarity to these \(K\) important vectors. (4) **Similarity**. We calculate the average score of each attribute across all images and then find the \(K\) attributes with the largest average scores. (5) **Img Features**. Black-box linear probing on latent image features with two linear layers and an intermediate dimension \(K\), as a reference. ### Main Results **Comparison with previous work.** We first compare our method with LaBo [52]. It is designed to use \(M_{c}\) concepts per class, with a default of 50, which corresponds to 10,000 attributes for CUB. For a fair comparison, we set \(M_{c}\) to 1 and 2 in the experiments. As shown in Table 1, our method outperforms LaBo with the same number of attributes in both the full and few-shot settings. Furthermore, our method can achieve similar accuracy with a much smaller number of attributes (e.g., 32 attributes for CUB). These results suggest that our learned attributes are discriminative enough to classify the images, despite using far fewer attributes. We then further compare with human annotations from CUB. For \(K<312\), we select attributes based on their accumulated confidence score over all samples. As shown in Table 2, human-annotated attributes are noisier than CLIP similarities. With the same attributes, CLIP scores from CompDL build more expressive features. Furthermore, our LLM-suggested attributes significantly outperform human designs, e.g., using 16 attributes we achieve performance similar to the 312 attributes defined by humans. **Large-scale attributes behave like random words.** We present our finding that LLM-generated attributes in a large quantity behave like random words. Specifically, we compare our method of using GPT-3 attributes with random or similar words. Here, we constructed random words by randomly choosing 1-5 words from the entire English vocabulary, and semantically similar words by combining 1-3 random colors with the noun "wings" as a suffix. As shown in Figure 3, when \(K=512\), random words perform as well as GPT-3 attributes in terms of classification accuracy. Even reducing \(K\) from 512 to 256 does not significantly hurt their performance. But when \(K\) is small (e.g., 64), the performance of random words drops dramatically. We conjecture that this is because text embeddings randomly drawn from CLIP are nearly orthogonal bases [45]. Given an image feature in \(\mathbb{R}^{D}\), projection onto a set of \(K=D\) orthogonal bases can perfectly preserve its information.
We further explore how similar words (e.g., red wings, yellow wings) behave. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c} \hline \hline Datasets & \multicolumn{3}{c|}{CUB} & \multicolumn{3}{c|}{CIFAR-10} & \multicolumn{3}{c|}{CIFAR-100} & \multicolumn{3}{c}{Flower} \\ \hline \(K\) & 32 & 200 & 400 & 8 & 10 & 20 & 64 & 100 & 200 & 32 & 102 & 204 \\ \hline LaBo & – & 60.93 & 62.61 & – & 78.11 & 84.84 & – & 75.10 & 76.94 & – & 80.98 & 86.76 \\ Ours & 60.27 & **63.88** & **64.05** & 77.47 & **80.09** & **87.99** & 73.31 & **75.12** & **77.29** & 80.88 & **87.26** & **89.02** \\ \hline Datasets & \multicolumn{3}{c|}{Food} & \multicolumn{3}{c|}{Oxford\_Pets} & \multicolumn{3}{c|}{Stanford\_cars} & \multicolumn{3}{c}{Imagenet\_Animals} \\ \hline \(K\) & 64 & 101 & 202 & 16 & 37 & 74 & 64 & 196 & 392 & 128 & 397 & 794 \\ \hline LaBo & – & 79.95 & 81.33 & – & 76.91 & 84.33 & – & 72.33 & 74.39 & – & 74.88 & 75.49 \\ Ours & 78.41 & **80.22** & **81.85** & 76.29 & **83.15** & **85.91** & 72.07 & **74.57** & **75.56** & 74.48 & **75.69** & **75.83** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art. LaBo is designed to use at least as many attributes as classes. We use “\(-\)” to denote non-applicability. \begin{table} \begin{tabular}{l|c c c c} \hline \hline K (\# of attributes) & 8 & 16 & 32 & 312 \\ \hline Human Binary [44] & 4.02 & 7.31 & 10.11 & 47.38 \\ Human Calibration [22] & 3.75 & 7.15 & 9.78 & 43.37 \\ CompDL [56] & 12.64 & 26.41 & 28.69 & 52.60 \\ Ours & **31.67** & **48.55** & **60.27** & **65.17** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with human annotations on CUB. Embeddings of similar words in a trained language model are not orthogonal bases, hence the projection will lose information when \(K\) is large (e.g., intuitively it is hard to classify 200 bird species using only the color combination of wings). But as \(K\) gets smaller, since those similar words have close semantic meanings, they start to outperform random words. Overall, these findings motivate us to find a concise set of meaningful attributes while maintaining competitive performance. **Number of attributes and selection methods.** Finally, we study the performance change under different numbers of attributes in Figure 4. First, our method is competitive with image features when \(K\) is large. Reducing the number of attributes \(K\) to the number of classes \(C\) (e.g., 512 to 128 for CUB) does not significantly hurt performance, even for baseline methods. This validates our hypothesis that there is plenty of redundant information in the semantic space when the number of attributes is large (as used in LaBO [52]). It is possible to find a subset of expressive attributes for visual recognition. Second, we also consistently outperform other methods such as K-means clustering and uniform sampling, demonstrating the effectiveness of our task-guided searching method. Third, a heuristic design such as K-means performs similarly to uniform selection. Note that though there is a performance gap between image features and using attributes, the gap can be minimized by using a stronger VLM, as the classification accuracy of attributes relies on the accurate estimation of the correlation between images and attributes (see more results in Appendix D). ### Ablation Study **Robustness to the attribute pool.** First, we aim to explore the effects of different initialized attribute concept pools generated by LLMs.
On CUB and CIFAR-100, we compare two attribute pools: attributes generated from the classes in each dataset, and attributes generated from the full set of ImageNet classes. As shown in Table 4, even with the large and noisy attributes from ImageNet, our method can still efficiently find a small number of representative attributes for a task, and obtains competitive classification performance. **Effectiveness of learning-to-search.** Then, we discuss possible choices for selection out of the large attribute pool. Results are shown in Table 5 with the following observations: heuristic methods such as K-means and SVD are not optimal choices for identifying the most distinctive attributes. In fact, they are sometimes less effective than uniform sampling. This is likely because we need to identify the most distinguishing attributes for visual recognition, rather than the most diverse ones based on text embeddings. Overall, our method significantly outperforms other baseline selection methods, showing its efficacy. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline Datasets & \multicolumn{3}{c|}{CUB} & \multicolumn{3}{c}{CIFAR-100} \\ \hline K & 8 & 16 & 32 & 8 & 16 & 32 \\ \hline GPT-3 & **31.67** & 48.55 & 60.27 & **34.77** & **52.24** & **66.30** \\ GPT-3-Imagenet & 30.81 & **49.29** & **60.41** & 33.80 & 51.01 & 65.61 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study _w.r.t._ different concept pools. Figure 4: Overall Performance on all datasets. X-axis: number of attributes, Y-axis: Accuracy (%), “(f)” means “full”, _i.e._, all attributes in the pool are used. Uniform refers to uniform sampling. **Effectiveness of regularization.** We compare the Mahalanobis distance (_MAH_) with two variations: (1) _COS_: For each vector \(\mathbf{E}_{j}\) and \(\mathbf{T}_{i}\) (of the \(i\)-th attribute) in the concept pool, we compute the averaged cosine similarity as follows: \[\mathcal{L}_{cos}=\frac{1}{K^{2}}\sum_{j=1}^{K}\sum_{i=1}^{K}\frac{\mathbf{T}_{i}^{\top}\mathbf{E}_{j}}{||\mathbf{T}_{i}||||\mathbf{E}_{j}||}\] (2) _CE_: Learning with Eq. (2) only. Results are in Table 6. Overall, the Mahalanobis distance is an effective constraint to encourage the dictionary \(E\) to be close to the distribution of CLIP embeddings. ### Analysis of Interpretability and Interactivity We perform analysis and visualizations to show that: (1) **Our learned attributes provide interpretability**. As shown in Figure 5, the upper half presents the images in a class \(c\) and highly relevant attributes to recognize them. Specifically, we denote \(\mathbf{W}\in\mathbb{R}^{K_{C}\times K}\) as the weight of the FC layer in linear probing, where \(K_{C}\) and \(K\) are the numbers of classes and attributes. Then for each image \(i\) and its semantic vector \(\mathbf{A}\in\mathbb{R}^{K}\), we multiply the corresponding score vector of image \(i\) with the corresponding row of the FC layer \(\mathbf{W}_{c}\) to compute the Importance Score \(\mathbf{IS}\in\mathbb{R}^{K}\): \[\mathbf{IS}=\mathbf{W}_{c}\otimes\mathbf{A} \tag{7}\] where \(\otimes\) means element-wise multiplication. Then we present attributes with the top absolute values of \(\mathbf{IS}\) averaged over all samples in a class from the test set, with blue/orange bars indicating the positive/negative importance. Higher absolute values denote greater significance. Since all CLIP scores are positive [16], the positivity or negativity of high IS signifies their relevance to the class. (2) **Our concise set of attributes enables simple interactivity**.
As shown in the lower half of Figure 5, we can correct the model's wrong predictions during inference by changing only a single similarity score between an image and the attribute that the CLIP model made a mistake on. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Dataset & \multicolumn{4}{c}{CUB} \\ \hline K & 8 & 16 & 32 & 64 \\ \hline _MAH_ & 30.76 & 47.87 & **60.27** & **64.25** \\ _COS_ & 28.96 & 47.35 & 58.27 & 63.25 \\ _CE_ & **31.67** & **48.55** & 55.88 & 60.73 \\ \hline Dataset & \multicolumn{4}{c}{CIFAR-100} \\ \hline K & 8 & 16 & 32 & 64 \\ \hline _MAH_ & **34.77** & **52.24** & 65.91 & **73.31** \\ _COS_ & 31.98 & 51.15 & 65.02 & 72.80 \\ _CE_ & 32.45 & 50.83 & **66.29** & 73.25 \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study _w.r.t._ different regularization. Figure 5: Examples on interpretability and interactivity. (1) The upper half of each figure shows important attributes for two classes of birds. We choose 6 out of 32 attributes with the highest importance scores, which are computed by multiplication between CLIP scores and weights in the linear probe, as defined in Eq. (7). (2) The lower half of each figure demonstrates the intervention on the semantic vector (i.e., CLIP scores) to correct the prediction; we use \(\delta\)=0.03 as an empirical value for all interventions on CLIP scores. The array of 6 scores is in the same order as the attributes. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline Datasets & \multicolumn{3}{c|}{CUB} & \multicolumn{3}{c}{CIFAR-100} \\ \hline K & 8 & 16 & 32 & 8 & 16 & 32 \\ \hline K-means & 16.83 & 21.02 & 32.76 & 25.39 & 45.26 & 64.41 \\ Uniform & 7.02 & 25.98 & 40.58 & 28.07 & 47.14 & 64.34 \\ SVD & 6.52 & 20.02 & 35.83 & 29.06 & 50.00 & 64.99 \\ Similarity & 4.73 & 9.72 & 18.00 & 26.75 & 45.61 & 62.79 \\ Ours & **31.67** & **48.55** & **60.27** & **34.77** & **52.24** & **66.30** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study _w.r.t._ different attribute selection strategies. This is a significant simplification compared with previous work [22], where they need to manipulate scores from a group of concepts for the CUB dataset. We present more user studies in Appendix E. ### Visualization of Our Discovered Attributes We show our learned descriptive attributes with \(K=8\) in Figure 6. Intuitively, we can observe that these attributes are distinctive for each domain. Taking bird recognition (CUB) as an example, the eight attributes cover most of the body parts of a bird (head, breast, legs, etc.). As we are condensing knowledge from hundreds of bird classes, each attribute broadly covers many categories. A bright red head and breast can be a noticeable visual attribute for many bird species, such as the Northern Cardinal and the Vermilion Flycatcher. Overall, explaining a domain with a few descriptive attributes is challenging, even for an expert with sufficient domain knowledge. But our model is able to automatically provide a level of knowledge to help humans understand how visual recognition works. We then present case studies on CIFAR-10 with 4 attributes and CLIP scores of 10 random images from each class in Figure 7. In general, each image is activated in a distinguishable way in the heat map. Some attributes can distinguish a few classes; for example, cat and dog have higher activation on "fur coat" compared to automobile or truck. Thus "fur coat" may be an important feature to differentiate animals and vehicles.
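To make the importance scores of Eq. (7) and the score intervention above concrete, here is a small illustrative sketch (not from the paper's code): `W` stands for the linear-probe weights and `A` for a semantic vector, the bias of the probe is omitted for brevity, and the default `delta` follows the empirical value of 0.03 reported in the Figure 5 caption.

```python
import torch

def importance_scores(W: torch.Tensor, A: torch.Tensor, c: int) -> torch.Tensor:
    """IS = W_c (element-wise) A for class c, as in Eq. (7).
    W: [num_classes, K] linear-probe weights; A: [K] semantic vector."""
    return W[c] * A

def intervene(W: torch.Tensor, A: torch.Tensor, j: int, delta: float = 0.03) -> int:
    """Nudge the similarity score of attribute j and re-predict the class,
    mimicking the single-attribute interventions shown in Figure 5."""
    A_corrected = A.clone()
    A_corrected[j] += delta
    return int((A_corrected @ W.T).argmax())   # new predicted class (bias omitted)
```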
## 4 Related work **Interpretable Deep Learning.** Interpretability is a critical research problem for deep learning with black-box models [11, 34, 37, 38, 13, 2, 50]. Some works study model behavior and explore whether deep models could encode concepts for understanding [19, 28, 49, 29]. For image classification, preliminary attempts aim to describe objects with attributes [12, 26, 25] or to build concept bottleneck models [22, 56, 55, 6]. (Figure 6: A concise set of 8 descriptive attributes learned for each dataset with sampled images. Figure 7: Case study on CIFAR-10. The numbers are CLIP similarity scores between each image and attributes.) These methods require in-depth human analysis and intensive labeling, which are impractical to scale to more classes and domains. Recent works [30, 35, 52] tackle this problem by using GPT-3 as a knowledge base to query visual attributes or concepts. Specifically, [30, 35] generate descriptions with LLMs, and use them for knowledge-aware prompting for each class to improve the zero-shot performance of CLIP [36]. For example, given the class name "bee", they augment it with attributes such as "A bee with black and yellow body". Our work differs in that our goal is to learn representative attributes for visual recognition without using class names. LaBo [52] extends the idea of concept bottleneck models by generating thousands of concepts from LLMs. Inspired by our finding that there is great redundancy in the large-scale attributes, we aim to learn a concise set of attributes that are initially generated from LLMs for each task, while maintaining the classification performance as much as possible. Concise attributes also enable stronger interpretability and interactivity, and can help humans to summarize critical knowledge for visual recognition in an automatic way. **Foundation Models.** Recently, foundation models [3], which are pre-trained with a large amount of data and large model sizes, have revolutionized machine learning research and many fields. These models are shown to be adaptable to a wide range of downstream tasks for computer vision [15, 46, 58], natural language processing [10, 7, 57, 48] and cross-modal research [27, 42, 17, 14]. One direction is to train LLMs such as GPT-3 [5] and ChatGPT with massive text to serve as a powerful knowledge base with high interactivity and beyond. Another direction is to build VLMs [36, 51, 54, 53, 1], which connect vision and language by pre-training with image-text pairs and learning a joint embedding space for both. In this work, we use LLMs as a knowledge base for querying vision-related knowledge, and use VLMs to bridge vision and text, presenting a new paradigm for interpretable visual recognition in the era of foundation models. ## 5 Discussion There are many interesting topics to explore with our new paradigm. First, our framework is a plug-and-play model that can be readily applied to many other vision tasks, by simply changing the task-guided learning objective to a particular task, e.g., classification losses for object detection, video understanding, and 3D classification. Furthermore, a concise set of descriptive attributes enables interactivity for vision models and empowers human-machine cooperation in a user-friendly way through natural language interfaces. Lastly, we show the potential of summarizing knowledge for challenging vision tasks in the new era of LLMs, which could have broad impact across various domains.
## 6 Conclusion In this work, we propose a new paradigm for visual recognition that leverages a concise set of descriptive attributes. Motivated by our insightful finding that significant redundancy exists in massive LLMs-generated attributes, we design a simple yet effective searching method guided by image-level labels, to identify an informative subset. Our new paradigm is validated across 8 datasets to achieve strong classification accuracy with multiple benefits and broad impacts, including efficiency, interpretability, human interactivity, and knowledge summarization. ## Acknowledgments We would like to sincerely thank the anonymous reviewers and chairs for their careful review of our work, with helpful and constructive suggestions to improve the paper.
2306.04446
Classification results for polyharmonic helices in space forms
We derive various classification results for polyharmonic helices, which are polyharmonic curves whose geodesic curvatures are all constant, in space forms. We obtain a complete classification of triharmonic helices in spheres of arbitrary dimension. Moreover, we show that polyharmonic helices of arbitrary order with non-vanishing geodesic curvatures to space forms of negative curvature must be geodesics.
Volker Branding
2023-06-07T14:21:13Z
http://arxiv.org/abs/2306.04446v1
# Classification results for polyharmonic helices in space forms ###### Abstract. We derive various classification results for polyharmonic helices, which are polyharmonic curves whose geodesic curvatures are all constant, in space forms. We obtain a complete classification of triharmonic helices in spheres of arbitrary dimension. Moreover, we show that polyharmonic helices of arbitrary order with non-vanishing geodesic curvatures to space forms of negative curvature must be geodesics. Key words and phrases:r-harmonic curves; helices; space form 2010 Mathematics Subject Classification: 58E20; 53C43; 31B30; 58E10 The author gratefully acknowledges the support of the Austrian Science Fund (FWF) the project P 34853 "Geometric Analysis of Biwave Maps". with \(R^{M}\) being the curvature tensor of the manifold \(M\). Solutions of (1.4) are called _polyharmonic curves of order \(r\)_ or shortly _r-harmonic curves_. In the case of \(r=1\) the energy (1.3) reduces to the usual energy of a curve (1.1) whose critical points are _geodesics_. Clearly, every geodesic is a solution of the equation for \(r\)-harmonic curves (1.4), hence we are interested in finding non-geodesic solutions of (1.4) which we will call _proper_\(r\)-harmonic curves. For the current status of research on higher order variational problems we refer to [1], a collection of recent results on \(r\)-harmonic curves can be found in [3]. Throughout this manuscript we use the terminology _helix_ to represent a curve whose geodesic curvatures \(k_{i},i=1,\ldots\) are all constant. Our first result is the explicit form of the Euler-Lagrange equation for \(4\)-harmonic curves on the Euclidean sphere with the round metric. **Theorem 1.1**.: _Let \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) be a curve which is parametrized by arclength. Then \(\gamma\) is a proper \(4\)-harmonic curve if it is a non-geodesic solution of_ \[0= \gamma^{(8)}+2\gamma^{(6)}+3\gamma^{(4)}-\gamma^{(4)}|\gamma^{ \prime\prime}|^{2}-6\gamma^{\prime\prime}|\gamma^{\prime\prime}|^{2}+4\gamma^ {\prime\prime}-2\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^{\prime \prime}\rangle \tag{1.5}\] \[+5\frac{d^{2}}{ds^{2}}\big{(}\gamma^{\prime}\langle\gamma^{(4)}, \gamma^{\prime}\rangle\big{)}-\frac{d^{4}}{ds^{4}}\big{(}|\gamma^{\prime \prime}|^{2}\gamma\rangle-6\gamma^{\prime}\frac{d}{ds}|\gamma^{\prime\prime}|^ {2}-2\gamma^{\prime}\frac{d}{ds}\langle\gamma^{(4)},\gamma^{\prime\prime} \rangle-5\frac{d}{ds}\big{(}\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^ {\prime}\rangle\big{)}\] \[-\gamma\big{(}\langle\gamma^{(8)},\gamma\rangle+2\langle\gamma^ {(6)},\gamma\rangle+3|\gamma^{\prime\prime}|^{2}-|\gamma^{\prime\prime}|^{4}+6| \gamma^{\prime\prime}|^{2}-4+2\langle\gamma^{(4)},\gamma^{\prime\prime}\rangle \big{)}\] \[-\gamma\bigg{(}5\langle\gamma,\frac{d^{2}}{ds^{2}}\big{(}\gamma^ {\prime}\langle\gamma^{(4)},\gamma^{\prime}\rangle\big{)}\rangle-\langle \gamma,\frac{d^{4}}{ds^{4}}\big{(}|\gamma^{\prime\prime}|^{2}\gamma\rangle \rangle\big{)}-5\langle\gamma,\frac{d}{ds}\big{(}\gamma^{\prime\prime}\langle \gamma^{(4)},\gamma^{\prime}\rangle\big{)}\rangle\bigg{)}.\] **Remark 1.2**.: One explicit solution of (1.5) can be obtained as follows: The curve \(\gamma\colon I\to\mathbb{S}^{n}\) given by \[\gamma(s)=\cos(\sqrt{4}s)e_{1}+\sin(\sqrt{4}s)e_{2}+e_{3},\] where \(e_{i},i=1,2,3\) are mutually perpendicular and satisfy \(|e_{1}|^{2}=|e_{2}|^{2}=\frac{1}{4},|e_{3}|^{2}=\frac{3}{4}\), is a proper \(4\)-harmonic curve which is parametrized by arclength. 
The existence of this particular \(4\)-harmonic curve was already established in [3, Theorem 1.5] without using the Euler-Lagrange equation (1.5). Our next result provides a characterization of triharmonic helices in the sphere extending the analysis presented in [3]. **Theorem 1.3**.: _Consider a curve \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) of the form_ \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+\cos(bs)e_{3}+\sin(bs)e_{4}\] _with \(|e_{1}|^{2}=|e_{2}|^{2},|e_{3}|^{2}=|e_{4}|^{2}\) and \(|e_{1}|^{2}+|e_{3}|^{2}=1\). Then, \(\gamma\) is a proper triharmonic curve parametrized by arclength if the following algebraic relations hold_ \[a^{4}+b^{4}-4(a^{2}+b^{2})+3a^{2}b^{2}+3= 0, \tag{1.6}\] \[|e_{1}|^{2}a^{2}+|e_{3}|^{2}b^{2}= 1.\] **Remark 1.4**.: 1. The condition (1.6) has already been derived in [3, Equation 2.3] using a different approach as utilized in this manuscript. 2. Setting \(a^{2}=x\) and \(b^{2}=y\) the equation (1.6) describes a particular conic section which turns out to be a hyperbola. Hence, it is obvious that there is a whole family of triharmonic helices in the sphere. It is well-known that a polyharmonic curve of order \(r\) has \(2r-2\) non-vanishing geodesic curvatures and effectively lies on a target of dimension \(2r-1\). Exploiting this fact the next Theorem gives some further characterizations of \(r\)-harmonic helices in the cases \(r=3,4\). From a computational point of view it turns out to be more effective to work with the geodesic curvatures of a curve instead of trying to explicitly solve the Euler-Lagrange equation. **Theorem 1.5**.: _Let \(\gamma\colon I\to M\) be a proper \(r\)-harmonic curve parametrized by arclength where \(M\) is a space form of constant curvature \(K\). Moreover, assume that the geodesic curvatures \(k_{i},i=1,\dots,2r-2\) are all constant._ 1. _If_ \(r=3\) _the geodesic curvatures_ \(k_{j},j=1,\dots 4\) _satisfy_ \[(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2} =K(2k_{1}^{2}+k_{2}^{2}),\] (1.7) \[k_{2}k_{3}\big{(}k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}\big{)} =k_{2}k_{3}K.\] 2. _If_ \(r=4\) _the geodesic curvatures_ \(k_{j},j=1,\dots 6\) _satisfy_ \[(k_{1}^{2}+k_{2}^{2})^{3}+ k_{2}^{2}k_{3}^{2}(2k_{1}^{2}+2k_{2}^{2}+k_{3}^{2}+k_{4}^{2})\] (1.8) \[=K\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}\big{)}+2Kk_{1}^{2}( k_{1}^{2}+k_{2}^{2}),\] \[k_{2}k_{3}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+(k_{3}^{2}+k_{4}^{2})^{2}+k_{1}^{ 2}k_{3}^{2}+2k_{2}^{2}k_{3}^{2}+k_{4}^{2}(k_{1}^{2}+k_{2}^{2}+k_{5}^{2})\big{)}\] \[=k_{2}k_{3}K(2k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}),\] \[k_{2}k_{3}k_{4}k_{5}\big{(}k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}+k_{5}^{2} +k_{6}^{2}\big{)}=k_{2}k_{3}k_{4}k_{5}K.\] **Remark 1.6**.: 1. Note that in (1.7) and (1.8) there does not appear a factor of \(k_{1}\). As we are considering proper tri- and 4-harmonic curves we have that \(k_{1}\neq 0\) such that this factor can be split off. 2. In the case of \(k_{1},k_{2}\neq 0\) and \(k_{j}=0,j\geq 3\) the first equation of (1.7) and the first equation of (1.8) reduce to the formula obtained in [3, Theorem 1.1]. 3. An immediate consequence of Theorem 1.5 is that triharmonic and 4-harmonic helices whose geodesic curvature are all non-vanishing need to be geodesics if the target is a space form of negative curvature. Employing Theorem 1.5 we can deduce the following classification result: **Theorem 1.7**.: _A proper triharmonic helix \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) parametrized by arclength must be one of the following:_ 1. 
_A planar curve of the form (_\(n\geq 2\)_)_ \[\gamma(s)=\cos(\sqrt{3}s)e_{1}+\sin(\sqrt{3}s)e_{2}+e_{3},\] _where_ \(e_{i},i=1,2,3\) _are mutually perpendicular and satisfy_ \(|e_{1}|^{2}=|e_{2}|^{2}=\frac{1}{3},|e_{3}|^{2}=\frac{2}{3}\)_._ 2. _A non-planar curve of the form (_\(n\geq 3\)_)_ \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+\cos(bs)e_{3}+\sin(bs)e_{4}\] _with_ \(|e_{1}|^{2}=|e_{2}|^{2},|e_{3}|^{2}=|e_{4}|^{2},|e_{1}|^{2}+|e_{3}|^{2}=1\) _and_ \(e_{i},i=1,\dots,4\) _mutually perpendicular satisfying_ \[a^{4}+b^{4}-4(a^{2}+b^{2})+3a^{2}b^{2}+3= 0,\qquad|e_{1}|^{2}a^{2}+|e_{3}|^{2}b^{2}=1.\] **Remark 1.8**.: It is quite remarkable that triharmonic helices on the sphere have the same structure as biharmonic curves on the sphere. In both cases there exist two families of the form detailed in the previous theorem, which consist of a planar family and a non-planar generalization, see Theorem 2.6 for the precise details on biharmonic curves. However, biharmonic curves necessarily have constant curvature while there may be triharmonic curves on the sphere of non-constant geodesic curvature. In order to obtain a complete classification of triharmonic curves on the sphere one would need to obtain a full understanding of the non-constant curvature case as well. The last result of this manuscript provides a characterization of polyharmonic helices of arbitrary order whose geodesic curvatures are all different from zero. **Theorem 1.9**.: _Let \(\gamma\colon I\to M\) be an \(r\)-harmonic curve parametrized by arclength whose geodesic curvatures \(k_{i},i=1,\dots,2r-2\) are all constant and non-vanishing. Moreover, suppose that \(M\) _is a space form of constant curvature \(K\). Then, the following equation holds_ \[\sum_{j=1}^{2r-2}k_{j}^{2}=K.\] **Remark 1.10**.: The previous Theorem gives further insights into the structure of higher order variational problems. In particular, it gives further evidence to support the claim that polyharmonic maps to space forms of negative curvature must be harmonic while there may be additional solutions in the case of a spherical target. So far, this observations was mostly made in the case of codimension one, that is for polyharmonic hypersurfaces, see for example [11]. The analysis above suggests that this fact stays true in higher codimension. Furthermore, we will collect a number of results giving rise to the following conjecture: **Conjecture 1.11**.: _The equation for \(r\)-harmonic curves (1.4) admits solutions with non-constant geodesic curvature_ \[k_{1}(s)=\frac{\alpha}{s^{r-2}},\qquad\alpha\in\mathbb{R},\alpha \neq 0, \tag{1.9}\] _where \(s\) represents the parameter of the curve \(\gamma\) which we assume to be parametrized by arclength._ This conjecture is based on 1. the well-known fact that biharmonic curves (\(r=2\)) necessarily have constant geodesic curvature, 2. the results of the recent article on triharmonic curves [10], 3. and the observations presented in Subsection 3.1. Throughout this article we will use the following notation. By \(s\) we will denote the parameter of the curve \(\gamma\), the first, second and third derivative of \(\gamma\) will be written as \(T:=\gamma^{\prime},\gamma^{\prime\prime}\) and \(\gamma^{\prime\prime\prime}\), respectively. The \(l\)-th derivative of \(\gamma\) with respect to \(s\) will be denoted by \(\gamma^{(l)}\) where \(l=4,\ldots,2r\). 
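As a quick numerical sanity check of the algebraic conditions in Theorems 1.3 and 1.7 (an illustrative computation only, not part of the proofs; the specific values of \(a^{2}\) and \(b^{2}\) below are chosen purely for demonstration):

```python
import numpy as np

# Left-hand side of the first equation in (1.6), written in x = a^2, y = b^2.
def F(x, y):
    return x**2 + y**2 - 4.0 * (x + y) + 3.0 * x * y + 3.0

# Planar triharmonic circle of Theorem 1.7 (1): a^2 = 3, b = 0.
print(F(3.0, 0.0))                      # 0.0

# A non-planar example: fixing a^2 = 2, (1.6) reduces to y^2 + 2y - 1 = 0,
# hence b^2 = sqrt(2) - 1.
x, y = 2.0, np.sqrt(2.0) - 1.0
print(F(x, y))                          # ~0 up to rounding

# The weights follow from |e1|^2 a^2 + |e3|^2 b^2 = 1 and |e1|^2 + |e3|^2 = 1.
e1_sq = (1.0 - y) / (x - y)
e3_sq = 1.0 - e1_sq
print(e1_sq * x + e3_sq * y)            # 1.0 (arclength condition)
```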
This article is organized as follows: In Section 2 we provide some background material on the Euler-Lagrange method and use it to reprove a number of well-known results on biharmonic curves on the Euclidean sphere. Finally, in Section 3 we give the proofs of the main results of the article. ## 2. Some preliminary results Throughout this article we consider a space form of constant curvature \(K\) in which case the Riemann curvature tensor acquires the simple form \[R(X,Y)Z=K(\langle Y,Z\rangle X-\langle X,Z\rangle Y),\] where \(X,Y,Z\) are vector fields and \(K\) represents the constant curvature of the space form. In this case the equation for polyharmonic curves (1.4) simplifies to \[\tau_{r}(\gamma)=\nabla_{T}^{2r-1}T+K\sum_{l=0}^{r-2}(-1)^{l} \big{(}\langle T,\nabla_{T}^{l}T\rangle\nabla_{T}^{2r-3-l}T-\langle T,\nabla_ {T}^{2r-3-l}T\rangle\nabla_{T}^{l}T\big{)}. \tag{2.1}\] ### The Euler-Lagrange method for polyharmonic curves Let us briefly recall the so-called _Euler-Lagrange method_ which is a powerful tool in the analysis of one-dimensional variational problems. This method is the cornerstone of the Lagrangian formulation of classical mechanics in theoretical physics, see for example [8, Chapter 7]. Moreover, this method can also successfully be applied in order to study biharmonic curves [6], biharmonic maps [9, 12] and also higher order variational problems [1, Theorem 4.5]. The following theorem may be well-known in the mathematics community. However, for the sake of completeness we also provide a complete proof below. **Theorem 2.1**.: _Let \(\gamma\colon I\to\mathbb{R}^{q}\) be a curve. Suppose we have an energy functional_ \[E_{r}(\gamma)=\int_{I}\mathcal{L}_{r}\ ds,\] _where the Lagrangian_ \[\mathcal{L}_{r}=\mathcal{L}_{r}(\gamma,\gamma^{\prime},\ldots,\gamma^{(r-1)}, \gamma^{(r)})\] _may depend on the derivatives of the curve \(\gamma\) up to order \(r\). Then, \(\gamma\) is a critical point of \(E_{r}(\gamma)\) if the following ordinary differential equation holds_ \[\sum_{l=1}^{r}(-1)^{l}\frac{d^{l}}{ds^{l}}\frac{\partial\mathcal{L}_{r}}{ \partial\gamma^{(l)}}+\frac{\partial\mathcal{L}_{r}}{\partial\gamma}=0. \tag{2.2}\] Proof.: We choose \(\beta\in C_{c}^{\infty}(I,\mathbb{R}^{q})\) and compute the first variation of \(E_{r}(\gamma)\) as follows \[\frac{d}{dt}\big{|}_{t=0} E_{r}(\gamma+t\beta)\] \[=\frac{d}{dt}\int_{I}\mathcal{L}_{r}(\gamma+t\beta,\gamma^{ \prime}+t\beta^{\prime},\gamma^{\prime\prime}+t\beta^{\prime\prime},\ldots, \gamma^{(r-1)}+t\beta^{(r-1)},\gamma^{(r)}+t\beta^{(r)})\ ds\big{|}_{t=0}\] \[=\int_{I}\frac{d}{dt}\big{(}\mathcal{L}_{r}(\gamma+t\beta,\gamma ^{\prime}+t\beta^{\prime},\gamma^{\prime\prime}+t\beta^{\prime\prime},\ldots, \gamma^{(r-1)}+t\beta^{(r-1)},\gamma^{(r)}+t\beta^{(r)})\big{)}\ ds\big{|}_{t=0}\] \[=\int_{I}\big{(}\frac{\partial\mathcal{L}_{r}}{\partial\gamma} \beta+\frac{\partial\mathcal{L}_{r}}{\partial\gamma^{\prime}}\beta^{\prime}+ \frac{\partial\mathcal{L}_{r}}{\partial\gamma^{\prime\prime}}\beta^{\prime \prime}+\ldots+\frac{\partial\mathcal{L}_{r}}{\partial\gamma^{(r-1)}}\beta^{( r-1)}+\frac{\partial\mathcal{L}_{r}}{\partial\gamma^{(r)}}\beta^{(r)}\big{)}\ ds.\] Now, we use integration by parts \[\int_{I}\frac{\partial\mathcal{L}_{r}}{\partial\gamma^{(p)}}\beta^{(p)}\ ds= \int_{I}(-1)^{p}\frac{d^{p}}{ds^{p}}\big{(}\frac{\partial\mathcal{L}_{r}}{ \partial\gamma^{(p)}}\big{)}\beta\ ds,\qquad 1\leq p\leq r,\] where we used that \(\beta\) is compactly supported. By combining both equations the proof is complete. 
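To illustrate how formula (2.2) operates in the simplest case, consider the unconstrained Lagrangian \(\mathcal{L}=|\gamma^{\prime\prime}|^{2}\) in Euclidean space: only the term with \(l=2\) contributes, and the formula reduces to \(\gamma^{(4)}=0\), so the critical curves are componentwise cubic polynomials. A small symbolic check of this toy case (illustrative only, relying on SymPy's ability to differentiate with respect to derivatives of a function):

```python
import sympy as sp

s = sp.symbols('s')
x = sp.Function('x')(s)          # one component of the curve
L = sp.Derivative(x, s, 2)**2    # Lagrangian L = (x'')^2

# Formula (2.2): sum_l (-1)^l d^l/ds^l (dL/dx^(l)) + dL/dx; only l = 2 contributes.
dL_dx2 = sp.diff(L, sp.Derivative(x, s, 2))   # dL/dx'' = 2 x''
euler_lagrange = sp.diff(dL_dx2, s, 2)        # (-1)^2 d^2/ds^2 (dL/dx'')
print(sp.Eq(euler_lagrange, 0))               # Eq(2*Derivative(x(s), (s, 4)), 0)
```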
In the following we will often make use of the following lemma which follows from a direct calculation. **Lemma 2.2**.: _Let \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) be a curve which is parametrized by arclength. Then the following identities hold_ \[\langle\gamma,\gamma^{\prime}\rangle =0,\qquad\langle\gamma^{\prime\prime},\gamma\rangle=-1,\qquad \langle\gamma^{\prime\prime\prime},\gamma\rangle=0,\qquad\langle\gamma^{ \prime},\gamma^{\prime\prime}\rangle=0,\] \[\langle\gamma^{(4)},\gamma\rangle+\langle\gamma^{\prime\prime \prime},\gamma^{\prime}\rangle =0,\qquad\langle\gamma^{(4)},\gamma\rangle=|\gamma^{\prime\prime} |^{2}.\] Throughout this section we will frequently make use of the inclusion map \(\iota\colon\mathbb{S}^{n}\to\mathbb{R}^{n+1}\) and also exploit the special structure of the Levi-Civita connection on the sphere \[d\iota(\nabla_{T}X)=X^{\prime}+\langle X,\gamma^{\prime}\rangle\gamma,\] where \(X\) is a vector field on \(\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\). ### Biharmonic curves on the sphere In order to highlight the power of the Euler-Lagrange method we will first investigate biharmonic curves on the Euclidean sphere and give a new-proof of some well-known results which serves as an inspiration for the classification results on triharmonic helices presented in this manuscript. The intrinsic form of the equation for biharmonic curves on the sphere is given by \[\tau_{2}(\gamma)=\nabla_{T}^{3}T+|T|^{2}\nabla_{T}T-\langle T,\nabla_{T}T \rangle T.\] Assuming that \(\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) and considering a curve \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) we obtain the following Lagrangian for biharmonic curves on the sphere \[\mathcal{L}_{2}^{\mathbb{S}^{n}}(\gamma^{\prime\prime},\gamma^{\prime},\gamma) =|\gamma^{\prime\prime}|^{2}-|\gamma^{\prime}|^{4}+\lambda(|\gamma|^{2}-1). \tag{2.3}\] Note that we have to include the Lagrange multiplyer \(\lambda\) as we are constraining the curve \(\gamma\) to be on the unit sphere. Then, employing Theorem 2.1, a direct calculation shows that the critical points of (2.3) are given by \[\frac{d^{2}}{ds^{2}}\big{(}\frac{\partial\mathcal{L}_{2}^{\mathbb{S}^{n}}}{ \partial\gamma^{\prime\prime}}\big{)}-\frac{d}{ds}\big{(}\frac{\partial \mathcal{L}_{2}^{\mathbb{S}^{n}}}{\partial\gamma^{\prime}}\big{)}+\frac{ \partial\mathcal{L}_{2}^{\mathbb{S}^{n}}}{\partial\gamma}=2(\gamma^{(4)}+2(| \gamma^{\prime}|^{2})^{\prime}\gamma^{\prime}+2|\gamma^{\prime}|^{2}\gamma^{ \prime\prime}+\lambda\gamma).\] Taking also into account the variation of \(\mathcal{L}_{2}^{\mathbb{S}^{n}}\) with respect to the Lagrange multiplyer \(\lambda\) we obtain the Euler-Lagrange equation \[\gamma^{(4)}+2(|\gamma^{\prime}|^{2})^{\prime}\gamma^{\prime}+2|\gamma^{ \prime}|^{2}\gamma^{\prime\prime}+\lambda\gamma=0 \tag{2.4}\] together with the constraint \(|\gamma|^{2}=1\). From now on, we will assume that the curve \(\gamma\) is parametrized with respect to arclength, that is \(|\gamma^{\prime}|^{2}=1\), such that (2.4) simplifies to \[\gamma^{(4)}+2\gamma^{\prime\prime}+\lambda\gamma=0. \tag{2.5}\] In order to determine \(\lambda\) we test (2.5) with \(\gamma\) and find \[\lambda=-\langle\gamma^{(4)},\gamma\rangle-2\langle\gamma,\gamma^{\prime \prime}\rangle=-|\gamma^{\prime\prime}|^{2}+2,\] where we used the identities provided by Lemma 2.2 and thus exploited the fact that \(\gamma\) is parametrized with respect to arclength. 
It is well-known that biharmonic curves have constant geodesic curvature which, using our framework, can be seen as follows: **Remark 2.3**.: 1. It is easy to see that the Lagrange multiplyer \(\lambda\) and the geodesic curvature \(k_{1}\) of the curve \(\gamma\) are related via the identity \[\lambda=-|\gamma^{\prime\prime}|^{2}+2=-k_{1}^{2}+1.\] Hence, the inclusion of the Lagrange multiplyer \(\lambda\) in (2.3) has the effect that it forces the curve \(\gamma\) to have constant geodesic curvature. This fact is well-known and is usually deduced by choosing a Frenet-frame for the curve \(\gamma\) and analyzing the associated Frenet equations. 2. We will present another short argument why biharmonic curves always need to have constant geodesic curvature that holds for biharmonic curves on an arbitrary manifold. Suppose we have a biharmonic curve on a Riemannian manifold, then it satisfies \[\tau_{2}(\gamma)=\nabla_{T}^{3}T+R(\nabla_{T}T,T)T=0.\] Multiplying this equation with \(T\) we obtain \[\langle\nabla_{T}^{3}T,T\rangle=0\] which implies \[\frac{d}{ds}\langle\nabla_{T}^{2}T,T\rangle-\frac{1}{2}\frac{d}{ ds}|\nabla_{T}T|^{2}=0.\] Exploiting that the curve is parametrized with respect to arclength we can then deduce that \[\frac{d}{ds}|\nabla_{T}T|^{2}=0,\] which implies that \(k_{1}^{2}=const\). Combining the previous observations we get the following well-known result: **Proposition 2.4**.: _Let \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) be a curve parametrized by arclength. Then \(\gamma\) is biharmonic if_ \[\gamma^{(4)}+2\gamma^{\prime\prime}+\gamma(2-|\gamma^{\prime\prime}|^{2})=0. \tag{2.6}\] **Remark 2.5**.: 1. The equation for biharmonic curves on spheres (2.6) was first derived in [5, Corollary 4.2] making use of geometric methods. In that reference the equation for biharmonic curves to spheres is given in the following form \[\gamma^{(4)}+2\gamma^{\prime\prime}+\gamma(1-k_{1}^{2})=0,\] where \(k_{1}\) represents the geodesic curvature of the curve \(\gamma\). Noting that \[k_{1}^{2}=|\nabla_{T}T|^{2}=|\gamma^{\prime\prime}|^{2}-|\gamma^{\prime}|^{4 }=|\gamma^{\prime\prime}|^{2}-1\] it is obvious that this version is the same as (2.6). 2. We would like to point out that it is necessary to include the Lagrange multiplyer \(\lambda\) in the Lagrangian (2.3) as we are dealing with a constraint variational problem. However, we do not have to include a second Lagrange multiplyer to justify that the curve is parametrized with respect to arclength. The fact that we are choosing an arclength parametrization can be considered as making a convenient choice in order to simplify our calculations but it is not a constraint required from the actual variational problem. The following result was proved in [4] and [5, Proposition 4.4]. **Theorem 2.6**.: _Let \(\gamma\colon I\to\mathbb{S}^{n}\subset\mathbb{R}^{n+1}\) be a curve parametrized by arclength. Then there exist the following two classes of proper biharmonic curves on \(\mathbb{S}^{n}\):_ 1. _When_ \(k_{1}^{2}=1\) _these are circles parametrized by_ \[\gamma(s)=\cos(\sqrt{2}s)e_{1}+\sin(\sqrt{2}s)e_{2}+e_{3},\] (2.7) _where_ \(e_{i},i=1,2,3\) _are constant orthogonal vectors satisfying_ \(|e_{1}|^{2}=|e_{2}|^{2}=|e_{3}|^{2}=\frac{1}{2}.\)__ 2. 
_When_ \(0<k_{1}^{2}<1\) _they are non-planar curves parametrized as follows_ \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+\cos(bs)e_{3}+\sin(bs)e_{4},\] (2.8) _where_ \(|e_{i}|^{2}=\frac{1}{2},i=1,\ldots,4\) _and_ \(a^{2}+b^{2}=2\) _with_ \(a^{2}\neq b^{2}\)_._ In the following we will give a different proof of Theorem 2.6 as was originally presented in [4, 5], making use of the Euler-Lagrange method, as we want to employ it frequently in the rest of this article. Proof of Theorem 2.6.: In order to find the first class of solutions (2.7) we make the ansatz \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+e_{3},\] where \(e_{i},i=1,2,3\) are constant orthogonal vectors satisfying \(|e_{1}|^{2}=|e_{2}|^{2}\) and \(|e_{1}|^{2}+|e_{3}|^{2}=1\) as we require \(|\gamma|^{2}=1\) and \(a\in\mathbb{R}\). In the following we set \(\alpha^{2}:=|e_{1}|^{2}.\) Inserting this ansatz into the Lagrangian for biharmonic curves (2.3) we find \[\mathcal{L}_{2}^{\mathbb{S}^{n}}(\alpha)=a^{4}(\alpha^{2}-\alpha^{4}).\] To determine the critical points of \(\mathcal{L}_{2}^{\mathbb{S}^{n}}(\alpha)\) we calculate \[\frac{d}{d\alpha}\mathcal{L}_{2}^{\mathbb{S}^{n}}(\alpha)=2a^{4}\alpha(1-2 \alpha^{2})\] and it is clear that this expression vanishes if \(\alpha^{2}=\frac{1}{2}\). Finally, we use the fact that \(\gamma\) is parametrized with respect to arclength which, given our ansatz, is expressed via \(\alpha^{2}a^{2}=1\). Hence, we get that \(a^{2}=2\) completing the proof. In order to obtain the second class of solutions (2.8) we make the ansatz \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+\cos(bs)e_{3}+\sin(bs)e_{4},\] where \(|e_{1}|^{2}=|e_{2}|^{2},|e_{3}|^{2}=|e_{4}|^{2}\). We set \(\alpha_{j}^{2}:=|e_{j}|^{2},j=1,3.\) Inserting this ansatz into the Lagrangian for biharmonic curves (2.3) we get \[\mathcal{L}_{2}^{\mathbb{S}^{n}}(\alpha_{1},\alpha_{3},\lambda)=a^{4}(\alpha _{1}^{2}-\alpha_{1}^{4})+b^{4}(\alpha_{2}^{2}-\alpha_{3}^{4})-2a^{2}b^{2}\alpha _{1}^{2}\alpha_{3}^{2}+\lambda(\alpha_{1}^{2}+\alpha_{3}^{2}-1).\] In order to find the critical points of this Lagrangian we differentiate with respect to \(\alpha_{1},\alpha_{3},\lambda\) and set the resulting equations equal to zero leading to the system \[a^{4}(1-2\alpha_{1}^{2})-2a^{2}b^{2}\alpha_{3}^{2}-\lambda= 0,\] \[b^{4}(1-2\alpha_{3}^{2})-2a^{2}b^{2}\alpha_{1}^{2}-\lambda= 0,\] \[\alpha_{1}^{2}+\alpha_{3}^{2}= 1.\] Combining this set of equations we find after some algebraic manipulations \[(a^{4}+b^{4}-2a^{2}b^{2})(1-2\alpha_{1}^{2})=0\] from which we deduce that \(\alpha_{1}^{2}=\alpha_{3}^{2}=\frac{1}{2}\). The requirement that \(\gamma\) is parametrized with respect to arclength is then given by the constraint \(a^{2}+b^{2}=2,a^{2}\neq b^{2}\) completing the proof. ## 3. Proofs of the main results In this section we provide the proofs of the main results of this article. Proof of Theorem 1.1.: The proof is based on the Euler-Lagrange method provided by Theorem 2.1. 
Recall that the \(4\)-energy of a curve \(\gamma\colon I\to\mathbb{S}^{n}\) is given by \[E_{4}(\gamma)=\int_{I}|\nabla_{T}^{3}T|^{2}ds.\] We again make use of the embedding \(\iota\colon\mathbb{S}^{n}\to\mathbb{R}^{n+1}\) which helps us rewrite \[d\iota(\nabla_{T}^{3}T)=\gamma^{(4)}+4\langle\gamma^{\prime\prime\prime}, \gamma^{\prime}\rangle\gamma+3|\gamma^{\prime\prime}|^{2}\gamma+5\langle \gamma^{\prime},\gamma^{\prime\prime\prime}\rangle\gamma^{\prime}+|\gamma^{ \prime}|^{2}\gamma^{\prime\prime}+|\gamma^{\prime}|^{4}\gamma.\] Consequently, the Lagrangian associated with the \(4\)-energy for a curve on the sphere has the form \[\mathcal{L}_{4}^{\mathbb{S}^{n}}(\gamma^{(4)},\gamma^{\prime \prime\prime},\gamma^{\prime},\gamma^{\prime},\gamma)= |\gamma^{(4)}|^{2}+16|\langle\gamma^{\prime\prime\prime}, \gamma^{\prime}\rangle|^{2}+9|\gamma^{\prime\prime}|^{4}+35|\langle\gamma^{ \prime\prime},\gamma^{\prime}\rangle|^{2}|\gamma^{\prime}|^{2} \tag{3.1}\] \[+|\gamma^{\prime}|^{4}|\gamma^{\prime\prime}|^{2}-|\gamma^{ \prime}|^{8}\] \[+8\langle\gamma^{\prime\prime\prime},\gamma^{\prime}\rangle \langle\gamma^{(4)},\gamma\rangle+6\langle\gamma^{(4)},\gamma\rangle|\gamma^ {\prime\prime}|^{2}+10\langle\gamma^{\prime\prime},\gamma^{\prime}\rangle \langle\gamma^{(4)},\gamma^{\prime}\rangle\] \[+2|\gamma^{\prime}|^{2}\langle\gamma^{(4)},\gamma^{\prime\prime }\rangle+2|\gamma^{\prime}|^{4}\langle\gamma^{(4)},\gamma\rangle\] \[+24\langle\gamma^{\prime\prime\prime},\gamma^{\prime}\rangle| \gamma^{\prime\prime}|^{2}\] \[+\lambda(|\gamma|^{2}-1),\] where we again introduced the Lagrange multiplyer \(\lambda\) to constrain the curve \(\gamma\) to \(\mathbb{S}^{n}\). By a direct calculation we find taking into account that the curve \(\gamma\) is parametrized with respect to arclength \[\frac{d^{4}}{ds^{4}}\frac{\partial\mathcal{L}_{4}^{\mathbb{S}^{n }}}{\partial\gamma^{(4)}}= 2\gamma^{(8)}-2\frac{d^{4}}{ds^{4}}\big{(}|\gamma^{\prime \prime}|^{2}\gamma)+2\gamma^{(6)}+2\gamma^{(4)},\] \[\frac{d^{3}}{ds^{3}}\big{(}\frac{\partial\mathcal{L}_{4}^{ \mathbb{S}^{n}}}{\partial\gamma^{\prime\prime}}\big{)}= 0,\] \[\frac{d^{2}}{ds^{2}}\big{(}\frac{\partial\mathcal{L}_{4}^{\mathbb{ S}^{n}}}{\partial\gamma^{\prime\prime}}\big{)}= 2\gamma^{(6)}+2\gamma^{(4)}+10\frac{d^{2}}{ds^{2}}\big{(} \gamma^{\prime}\langle\gamma^{(4)},\gamma^{\prime}\rangle\big{)},\] \[\frac{d}{ds}\big{(}\frac{\partial\mathcal{L}_{4}^{\mathbb{S}^{n}}} {\partial\gamma^{\prime}}\big{)}= 12\gamma^{\prime\prime}|\gamma^{\prime\prime}|^{2}-8\gamma^{ \prime\prime}+4\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^{\prime\prime}\rangle\] \[+12\gamma^{\prime}\frac{d}{ds}|\gamma^{\prime\prime}|^{2}+4\gamma ^{\prime}\frac{d}{ds}\langle\gamma^{(4)},\gamma^{\prime\prime\prime}\rangle+10 \frac{d}{ds}\big{(}\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^{\prime} \rangle\big{)},\] \[\frac{\partial\mathcal{L}_{4}^{\mathbb{S}^{n}}}{\partial\gamma}= -2\gamma^{(4)}|\gamma^{\prime\prime}|^{2}+2\gamma^{(4)}+2\lambda\gamma.\] Varying (3.1) with respect to the Lagrange multiplyer \(\lambda\) we obtain the constraint \(|\gamma|^{2}=1\). 
Hence, from Theorem 2.1 we can deduce that \[0= \gamma^{(8)}+2\gamma^{(6)}+3\gamma^{(4)}-\gamma^{(4)}|\gamma^{\prime \prime}|^{2}-6\gamma^{\prime\prime}|\gamma^{\prime\prime}|^{2}+4\gamma^{\prime \prime}-2\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^{\prime\prime}\rangle\] \[+5\frac{d^{2}}{ds^{2}}\big{(}\gamma^{\prime}\langle\gamma^{(4)}, \gamma^{\prime}\rangle\big{)}-\frac{d^{4}}{ds^{4}}\big{(}|\gamma^{\prime\prime }|^{2}\gamma\big{)}-6\gamma^{\prime}\frac{d}{ds}|\gamma^{\prime\prime}|^{2}-2 \gamma^{\prime}\frac{d}{ds}\langle\gamma^{(4)},\gamma^{\prime\prime}\rangle-5 \frac{d}{ds}\big{(}\gamma^{\prime\prime}\langle\gamma^{(4)},\gamma^{\prime} \rangle\big{)}+\lambda\gamma.\] In order to determine \(\lambda\) we form the scalar product with \(\gamma\), using the identifies provided by Lemma 2.2 and inserting back into the above equation completes the proof. As the proof of Theorem 1.3 is based on the Lagrangian for triharmonic curves we again use the embedding of \(\mathbb{S}^{n}\) into \(\mathbb{R}^{n+1}\) via the map \(\iota\) and find \[d\iota(\nabla_{T}^{2}T)=\gamma^{\prime\prime\prime}+3\langle\gamma^{\prime \prime},\gamma^{\prime}\rangle\gamma+|\gamma^{\prime}|^{2}\gamma^{\prime}.\] Thus, we obtain the following Lagrangian \[\mathcal{L}_{3}^{\mathbb{S}^{n}}(\gamma,\gamma^{\prime},\gamma^{\prime\prime}, \gamma^{\prime\prime\prime})=|\gamma^{\prime\prime\prime}|^{2}+9|\langle \gamma^{\prime\prime},\gamma^{\prime}\rangle|^{2}+|\gamma^{\prime}|^{6}+6 \langle\gamma^{\prime\prime},\gamma^{\prime}\rangle\langle\gamma^{\prime \prime\prime},\gamma\rangle+2|\gamma^{\prime}|^{2}\langle\gamma^{\prime}, \gamma^{\prime\prime\prime}\rangle+\lambda(|\gamma|^{2}-1) \tag{3.2}\] with the Lagrange multiplyer \(\lambda\in\mathbb{R}\). In order to prove Theorem 1.3 we first establish the following **Proposition 3.1**.: _Consider a curve \(\gamma\colon I\to\mathbb{S}^{n}\) of the form_ \[\gamma(s)=\cos(as)e_{1}+\sin(as)e_{2}+\cos(bs)e_{3}+\sin(bs)e_{4} \tag{3.3}\] _with \(|e_{1}|^{2}=|e_{2}|^{2},|e_{3}|^{2}=|e_{4}|^{2}\). Then, \(\gamma\) is a proper triharmonic curve parametrized by arclength if the following algebraic relations hold_ \[a^{6}(1-2\alpha_{1}^{2})-2a^{4}+3a^{2}-2a^{2}b^{4}\alpha_{3}^{2} +\lambda= 0, \tag{3.4}\] \[b^{6}(1-2\alpha_{3}^{2})-2b^{4}+3b^{2}-2b^{2}a^{4}\alpha_{1}^{2} +\lambda= 0,\] \[a^{2}\alpha_{1}^{2}+b^{2}\alpha_{3}^{2}= 1,\] \[\alpha_{1}^{2}+\alpha_{3}^{2}= 1,\] _whenever \(a^{2},b^{2}\neq 1\). 
Here, \(\lambda\in\mathbb{R}\) and we have set \(\alpha_{j}^{2}=|e_{j}|^{2},j=1,3\)._ Proof.: From the ansatz (3.3) we get \[|\gamma^{(l)}|^{2}=\alpha_{1}^{2}a^{2l}+\alpha_{3}^{2}b^{2l},\qquad l=1,2,3.\] Inserting into the Lagrangian for triharmonic curves (3.2) we obtain \[\mathcal{L}_{3}^{\mathbb{S}^{n}}(\alpha_{1},\alpha_{3},\lambda)= \alpha_{1}^{2}a^{6}+\alpha_{3}^{2}b^{6}+(\alpha_{1}^{2}a^{2}+ \alpha_{3}^{2}b^{2})^{3}-2(\alpha_{1}^{2}a^{2}+\alpha_{3}^{2}b^{2})(\alpha_{1 }^{2}a^{4}+\alpha_{3}^{2}b^{4})\] \[+\lambda(\alpha_{1}^{2}+\alpha_{3}^{2}-1)\] \[= a^{6}(\alpha_{1}^{2}-2\alpha_{1}^{4}+\alpha_{1}^{6})+b^{6}( \alpha_{3}^{2}-2\alpha_{3}^{4}+\alpha_{3}^{6})\] \[+a^{2}b^{4}(3\alpha_{1}^{2}\alpha_{3}^{4}-2\alpha_{1}^{2}\alpha_ {3}^{2})+a^{4}b^{2}(3\alpha_{1}^{4}\alpha_{3}^{2}-2\alpha_{1}^{2}\alpha_{3}^{ 2})\] \[+\lambda(\alpha_{1}^{2}+\alpha_{3}^{2}-1).\] The critical points of \(\mathcal{L}_{3}^{\mathbb{S}^{n}}(\alpha_{1},\alpha_{3},\lambda)\) are given by the set of equations \[0= a^{6}(1-4\alpha_{1}^{2}+3\alpha_{1}^{4})+a^{2}b^{4}(3\alpha_{3}^{4}-2 \alpha_{3}^{2})+a^{4}b^{2}(6\alpha_{1}^{2}\alpha_{3}^{2}-2\alpha_{3}^{2})+\lambda, \tag{3.5}\] \[0= b^{6}(1-4\alpha_{3}^{2}+3\alpha_{3}^{4})+a^{2}b^{4}(6\alpha_{1}^ {2}\alpha_{3}^{2}-2\alpha_{1}^{2})+a^{4}b^{2}(3\alpha_{1}^{4}-2\alpha_{1}^{2} )+\lambda,\] \[1= \alpha_{1}^{2}+\alpha_{3}^{2}.\] In addition, we have the following constraint due to the requirement that the curve \(\gamma\) is supposed to be parametrized by arclength \[a^{2}\alpha_{1}^{2}+b^{2}\alpha_{3}^{2}=1.\] Using this constraint in the first equation of (3.5) we manipulate \[0= a^{6}(1-4\alpha_{1}^{2}+3\alpha_{1}^{4})+a^{4}((6\alpha_{1}^{2}-2)(1 -a^{2}\alpha_{1}^{2}))+a^{2}\big{(}3(1-a^{2}\alpha_{1}^{2})^{2}-2b^{4}\alpha_{3} ^{2}\big{)}+\lambda\] \[= a^{6}(1-2\alpha_{1}^{2})-2a^{4}+3a^{2}-2a^{2}b^{4}\alpha_{3}^{2}+\lambda.\] This shows the validity of the first equation in (3.4), the second one can be derived by the same method. The last two equations in (3.4) represent the fact that \(|\gamma|^{2}=|\gamma^{\prime}|^{2}=1\). **Remark 3.2**.: One can easily check that \(a=b=1\) solves the system (3.4) which corresponds to a geodesic solution. Proof of Theorem 1.3.: Using the first two equations of (3.4) we obtain \[a^{6}-b^{6}-2(a^{4}-b^{4})+3(a^{2}-b^{2})-2\alpha_{1}^{2}a^{4}(a^{2}-b^{2})-2 \alpha_{3}^{2}b^{4}(a^{2}-b^{2})=0.\] Employing the identity \[a^{6}-b^{6}=(a^{2}-b^{2})(a^{4}+b^{4}+a^{2}b^{2})\] and assuming that \(a\neq b\) we can thus deduce \[a^{4}+b^{4}+a^{2}b^{2}-2(a^{2}+b^{2})+3-2\alpha_{1}^{2}a^{4}-2\alpha_{3}^{2}b^ {4}=0.\] In order to manipulate the last two terms involving \(\alpha_{1}^{2},\alpha_{3}^{2}\) we make use of the last two equations of (3.4) as follows \[\alpha_{1}^{2}a^{4}+\alpha_{3}^{2}b^{4}= a^{2}(1-\alpha_{3}^{2}b^{2})+b^{2}(1-\alpha_{1}^{2}a^{2})\] \[= a^{2}+b^{2}-a^{2}b^{2}(\alpha_{1}^{2}+\alpha_{3}^{2})\] \[= a^{2}+b^{2}-a^{2}b^{2}\] such that we obtain \[a^{4}+b^{4}+a^{2}b^{2}-2(a^{2}+b^{2})+3-2\alpha_{1}^{2}a^{4}-2\alpha_{3}^{2}b^ {4}=a^{4}+b^{4}-4(a^{2}+b^{2})+3a^{2}b^{2}+3\] yielding the claim. For the further analysis we recall the following **Definition 3.3** (Frenet-frame).: Let \(\gamma\colon I\to M\) be a curve which is parametrized with respect to arclength. 
Then, its Frenet-frame is defined by \[F_{1}= T, \tag{3.6}\] \[\nabla_{T}F_{1}= k_{1}F_{2},\] \[\nabla_{T}F_{i}= -k_{i-1}F_{i-1}+k_{i}F_{i+1},\qquad i=2,\dots,n-1,\] \[\vdots\] \[\nabla_{T}F_{n}= -k_{n-1}F_{n-1},\] where \(k_{i},i=1,\dots n-1\) represent the curvatures of the curve \(\gamma\). Proof of Theorem 1.5.: In order to prove the first result concerning the classification of triharmonic helices in space forms we note that the equation for triharmonic curves in space forms reads as \[\nabla_{T}^{5}T+K\nabla_{T}^{3}T-K\langle T,\nabla_{T}^{3}T\rangle T-K\langle T,\nabla_{T}T\rangle\nabla_{T}^{2}T+K\langle T,\nabla_{T}^{2}T\rangle\nabla_{T }T=0. \tag{3.7}\] A direct calculation using (3.6), assuming that \(k_{i},i=1,\dots,4\) are constant and \(k_{i}=0,i\geq 5\), shows that \[\nabla_{T}^{2}T= -k_{1}^{2}T+k_{1}k_{2}F_{3},\] \[\nabla_{T}^{3}T= -k_{1}(k_{1}^{2}+k_{2}^{2})F_{2}+k_{1}k_{2}k_{3}F_{4},\] \[\nabla_{T}^{4}T= k_{1}^{2}(k_{1}^{2}+k_{2}^{2})T-k_{1}k_{2}(k_{1}^{2}+k_{2}^{2}+k_{ 3}^{2})F_{3}+k_{1}k_{2}k_{3}k_{4}F_{5},\] \[\nabla_{T}^{5}T= k_{1}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2})F_{2}-k_{ 1}k_{2}k_{3}(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2})F_{4}.\] Inserting these identities into (3.7) then yields \[k_{1}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}-K(2k_{1}^{2}+k_{2}^{2}) \big{)}F_{2}-k_{1}k_{2}k_{3}(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}-K)F_{4}=0.\] Testing this equation with both \(F_{2},F_{4}\) then completes the first part of the proof. Concerning the second claim of the theorem, which is the classification of \(4\)-harmonic helices on space forms, we recall that in this case the equation for \(4\)-harmonic curves acquires the form \[\nabla_{T}^{7}T+ K\nabla_{T}^{5}T-K\langle\nabla_{T}^{5}T,T\rangle T-K\langle \nabla_{T}T,T\rangle\nabla_{T}^{4}T+K\langle\nabla_{T}^{4}T,T\rangle\nabla_{T}T \tag{3.8}\] \[+K\langle\nabla_{T}^{2}T,T\rangle\nabla_{T}^{3}T-K\langle T, \nabla_{T}^{3}T\rangle\nabla_{T}^{2}T=0,\] which is precisely (2.1) for \(r=4\). 
In order to characterize the solutions of (3.8) with constant curvatures we use the Frenet equations (3.6) and a direct calculation shows \[\nabla_{T}^{2}T= -k_{1}^{2}T+k_{1}k_{2}F_{3},\] \[\nabla_{T}^{3}T= -k_{1}(k_{1}^{2}+k_{2}^{2})F_{2}+k_{1}k_{2}k_{3}F_{4},\] \[\nabla_{T}^{4}T= k_{1}^{2}(k_{1}^{2}+k_{2}^{2})T-k_{1}k_{2}(k_{1}^{2}+k_{2}^{2}+k_{3} ^{2})F_{3}+k_{1}k_{2}k_{3}k_{4}F_{5},\] \[\nabla_{T}^{5}T= k_{1}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}\big{)}F_{ 2}-k_{1}k_{2}k_{3}(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2})F_{4}+k_{1}k_{2}k_{3 }k_{4}k_{5}F_{6},\] \[\nabla_{T}^{6}T= -k_{1}^{2}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}\big{)} T+k_{1}k_{2}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{3}^{2}(k_{1}^{2}+2k_{2}^{2}+k_{3} ^{2}+k_{4}^{2})\big{)}F_{3}\] \[-k_{1}k_{2}k_{3}k_{4}(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}+k_{5 }^{2})F_{5}+k_{1}k_{2}k_{3}k_{4}k_{5}k_{6}F_{7},\] \[\nabla_{T}^{7}T= -k_{1}\big{(}(k_{1}^{2}+k_{2}^{2})^{3}+k_{2}^{2}k_{3}^{2}(2k_{1}^{ 2}+2k_{2}^{2}+k_{3}^{2}+k_{4}^{2})\big{)}F_{2}\] \[+k_{1}k_{2}k_{3}\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+(k_{3}^{2}+k_{4}^ {2})^{2}+k_{1}^{2}k_{3}^{2}+2k_{2}^{2}k_{3}^{2}+k_{4}^{2}(k_{1}^{2}+k_{2}^{2}+ k_{5}^{2})\big{)}F_{4}\] \[-k_{1}k_{2}k_{3}k_{4}k_{5}(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2} +k_{5}^{2}+k_{6}^{2})F_{6}.\] Using the above expressions it is easy to see that a number of terms in (3.8) vanish and we obtain the following simplification \[\nabla_{T}^{7}T+ K\nabla_{T}^{5}T+K\langle\nabla_{T}^{4}T,T\rangle\nabla_{T}T+K \langle\nabla_{T}^{2}T,T\rangle\nabla_{T}^{3}T=0,\] which, when expressed in terms of its Frenet frame, acquires the form \[k_{1}\bigg{[}-(k_{1}^{2}+k_{2}^{2})^{3}-k_{2}^{2}k_{3}^{2}(2k_{1} ^{2}+2k_{2}^{2}+k_{3}^{2}+k_{4}^{2})+K\big{(}(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2 }k_{3}^{2}\big{)}+2Kk_{1}^{2}(k_{1}^{2}+k_{2}^{2})\bigg{]}F_{2}\] \[+k_{1}k_{2}k_{3}\bigg{[}(k_{1}^{2}+k_{2}^{2})^{2}+(k_{3}^{2}+k_{4} ^{2})^{2}+k_{1}^{2}k_{3}^{2}+2k_{2}^{2}k_{3}^{2}+k_{4}^{2}(k_{1}^{2}+k_{2}^{2}+ k_{5}^{2})\] \[-K(2k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2})\bigg{]}F_{4}\] \[+k_{1}k_{2}k_{3}k_{4}k_{5}\bigg{[}-(k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+ k_{4}^{2}+k_{5}^{2}+k_{6}^{2})+K\bigg{]}F_{6}=0.\] The claim now follows from testing this system with \(F_{2},F_{4},F_{6}\) completing the proof. Proof of Theorem 1.7.: The idea of the proof is to use the constraints (1.7) and to perform a case by case analysis. First of all we note that we have \(k_{1}\neq 0\) as we are considering a proper triharmonic curve. 1. If \(k_{1}\neq 0\) and \(k_{i}=0,i=1,2,3\) then we get \(k_{1}^{2}=2\) leading to the first class of curves. It was shown in [3, Theorem 1.1] that it actually solves the equation for triharmonic curves. 2. If \(k_{1},k_{2}\neq 0\) and \(k_{3}=k_{4}=0\) we are in the situation detailed in Theorem 1.3 leading to the second case. 3. If \(k_{1},k_{2},k_{3}\neq 0\) and \(k_{4}=0\) the constraints (1.7) acquire the form \[\sum_{i=1}^{3}k_{i}^{2}=1,\qquad(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}=2k_ {1}^{2}+k_{2}^{2}.\] (3.9) Using the first equation to eliminate \(k_{3}^{2}\) from the second one we find \[k_{1}^{2}+k_{2}^{2}=2\] exploiting that \(k_{1}\neq 0\). Reinserting this into the second equation of (3.9) we find \[k_{2}^{2}+k_{2}^{2}k_{3}^{2}=0\] leading to a contradiction such that this case cannot occur. 4. 
If \(k_{j}\neq 0,j=1,\ldots 4\) the constraints (1.7) are given by \[\sum_{i=1}^{4}k_{i}^{2}=1,\qquad(k_{1}^{2}+k_{2}^{2})^{2}+k_{2}^{2}k_{3}^{2}=2 k_{1}^{2}+k_{2}^{2}.\] (3.10) Again, eliminating \(k_{3}^{2}\) from the second equation, making use of the first constraint, we find \[k_{1}^{2}(k_{1}^{2}+k_{2}^{2})-k_{2}^{2}k_{4}^{2}=2k_{1}^{2}.\] Using once more the first equation of (3.10) to replace \(k_{1}^{2}+k_{2}^{2}\) we arrive at \[k_{1}^{2}(1+k_{3}^{2}+k_{4}^{2})+k_{2}^{2}k_{4}^{2}=0\] leading to a contradiction again. 5. If \(k_{1}\neq 0,k_{2}=0,k_{3}\neq 0,k_{4}=0\) the system (1.7) reduces to \(k_{1}^{2}=2\) leading to the first claim of the theorem. 6. If \(k_{1},k_{2}\neq 0,k_{3}=0,k_{4}\neq 0\) we get the condition \((k_{1}^{2}+k_{2}^{2})^{2}=2k_{1}^{2}+k_{2}^{2}\) leading to the second case of the theorem while \(k_{4}\) can be arbitrary. However, it is a direct consequence of the Frenet equations (3.6) that once \(k_{3}=0\) any geodesic curvature \(k_{j},j\geq 4\) will no longer appear when expressing the equation for triharmonic curves in terms of its Frenet frame. The proof is now complete. A careful inspection of the proof of Theorem 1.5 shows that it is enough to know the structure of the highest order derivatives of \(r\)-harmonic helices to obtain classification results if we assume that all geodesic curvatures of the curve are non-vanishing. Hence, as a first step towards the proof of Theorem 1.9 we establish an expression for the iterated derivatives appearing in the equation for \(r\)-harmonic curves (1.4) suited to our particular analysis. **Lemma 3.4**.: _Let \(\gamma\colon I\to M\) be an \(r\)-harmonic curve parametrized by arclength whose geodesic curvature are all constant together with its Frenet frame \(\{F_{j}\},j=1,2r-2\)._ 1. _For_ \(2\leq l\leq 2r-3\) _we have_ \[\nabla_{T}^{2l-1}T=\sum_{j=1}^{l-2}a_{j}F_{2j}-\big{(}\prod_{i=1}^{2l-3}k_{i} \big{)}\big{(}\sum_{j=1}^{2l-2}k_{j}^{2}\big{)}F_{2l-2}+\big{(}\prod_{i=1}^{2l -1}k_{i}\big{)}F_{2l},\] (3.11) _where_ \(a_{j}\) _is a function of_ \(k_{p},p=1,\ldots,l-2\)_._ 2. _The highest derivative appearing in the equation for_ \(r\)_-harmonic curves has the form_ \[\nabla_{T}^{2r-1}T=\sum_{j=1}^{r-2}b_{j}F_{2j}-\big{(}\prod_{i=1}^{2r-3}k_{i} \big{)}\big{(}\sum_{j=1}^{2r-2}k_{j}^{2}\big{)}F_{2r-2},\] (3.12) _where_ \(b_{j}\) _is a function of_ \(k_{p},p=1,\ldots l-2\)_._ Proof.: The proof uses induction. Choosing \(l=3\) in (3.11) we get precisely the formula derived in the proof of Theorem 1.5 confirming the base case. For the induction step we differentiate (3.11) using the Frenet equations (3.6) and find \[\nabla_{T}^{2l}T= \sum_{j=1}^{l-2}\tilde{a}_{j}F_{2j-1}+\sum_{j=1}^{l-2}a_{j}k_{2j}F_ {2j+1}\] \[+\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(}\sum_{j=1}^{2l-2}k_{j} ^{2}\big{)}k_{2l-3}F_{2l-3}-\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(}\sum_{j =1}^{2l-2}k_{j}^{2}\big{)}k_{2l-2}F_{2l-1}\] \[-\big{(}\prod_{i=1}^{2l-1}k_{i}\big{)}k_{2l-1}F_{2l-1}+\big{(}\prod _{i=1}^{2l-1}k_{i}\big{)}k_{2l}F_{2l+1},\] where \(\tilde{a}_{j}\) is again a function of the \(k_{p},p=1,\ldots l-1\). 
Differentiating again using (3.6) then yields \[\nabla_{T}^{2l+1}T= -\sum_{j=1}^{l-2}\tilde{a}_{j}k_{2j-2}F_{2j-2}+\sum_{j=1}^{l-2} \tilde{a}_{j}k_{2j-1}F_{2j}\] \[-\sum_{j=1}^{l-2}\tilde{a}_{j}k_{2j}^{2}F_{2j}+\sum_{j=1}^{l-2} \tilde{a}_{j}k_{2j}k_{2j+1}F_{2j+2}\] \[-\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(}\sum_{j=1}^{2l-2}k_{j }^{2}\big{)}k_{2l-3}k_{2l-4}F_{2l-4}+\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(} \sum_{j=1}^{2l-2}k_{j}^{2}\big{)}k_{2l-3}^{2}F_{2l-2}\] \[+\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(}\sum_{j=1}^{2l-2}k_{j }^{2}\big{)}k_{2l-2}^{2}F_{2l-2}-\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(} \sum_{j=1}^{2l-2}k_{j}^{2}\big{)}k_{2l-2}k_{2l-1}F_{2l}\] \[+\big{(}\prod_{i=1}^{2l-1}k_{i}\big{)}k_{2l-1}k_{2l-2}F_{2l-2}- \big{(}\prod_{i=1}^{2l-1}k_{i}\big{)}k_{2l-1}^{2}F_{2l}\] \[-\big{(}\prod_{i=1}^{2l-1}k_{i}\big{)}k_{2l}^{2}F_{2l}+\big{(}\prod _{i=1}^{2l+1}k_{i}\big{)}F_{2l+2}.\] Now, it is straightforward to see that \[\big{(}\prod_{i=1}^{2l-3}k_{i}\big{)}\big{(}\sum_{j=1}^{2l-2}k_{j}^{2}\big{)}k _{2l-2}k_{2l-1}F_{2l}+\big{(}\prod_{j=1}^{2l-1}k_{i}\big{)}k_{2l-1}^{2}F_{2l}+ \big{(}\prod_{i=1}^{2l-1}k_{i}\big{)}k_{2l}^{2}F_{2l}=\big{(}\prod_{i=1}^{2l-1} k_{i}\big{)}\big{(}\sum_{j=1}^{2l}k_{j}^{2}\big{)}F_{2l}.\] Hence, we may conclude that \[\nabla_{T}^{2l+1}T=\sum_{j=1}^{l-1}\tilde{a}_{j}F_{2j}-\big{(}\prod_{i=1}^{2l- 1}k_{i}\big{)}\big{(}\sum_{j=1}^{2l}k_{j}^{2}\big{)}F_{2l}+\big{(}\prod_{i=1}^{2 l+1}k_{i}\big{)}F_{2l+2}\] completing the induction step and thus establishing the first claim. The second formula follows from the first one taking into account that for an \(r\)-harmonic curve we have \(k_{2r-1}=0\). We are now ready to give the proof of Theorem 1.9. Proof of Theorem 1.9.: First, we rewrite the equation for \(r\)-harmonic curves on space forms (2.1), extracting the two leading derivatives, as follows \[\nabla_{T}^{2r-1}T +K\nabla_{T}^{2r-3}T-K\langle T,\nabla_{T}^{2r-3}T\rangle T \tag{3.13}\] \[+\sum_{l=1}^{r-2}(-1)^{l}\big{(}\langle T,\nabla_{T}^{l}T\rangle \nabla_{T}^{2r-3-l}T-\langle T,\nabla_{T}^{2r-3-l}T\rangle\nabla_{T}^{l}T\big{)} =0.\] Testing this equation with \(F_{2r-2}\) we obtain \[\langle\nabla_{T}^{2r-1}T,F_{2r-2}\rangle+K\langle\nabla_{T}^{2r-3} T,F_{2r-2}\rangle\] \[+\sum_{l=1}^{r-2}(-1)^{l}\big{(}\langle T,\nabla_{T}^{l}T\rangle \langle\nabla_{T}^{2r-3-l}T,F_{2r-2}\rangle-\langle T,\nabla_{T}^{2r-3-l}T \rangle\langle\nabla_{T}^{l}T,F_{2r-2}\rangle\big{)}=0.\] Regarding the last two terms, we make the following splitting into even and odd addends \[\sum_{l=1}^{r-2}(-1)^{l}\langle\nabla_{T}^{2r-3-l}T,F_{2r-2}\rangle=\sum_{l=1} ^{r-2}\big{(}\langle\nabla_{T}^{2r-3-2l}T,F_{2r-2}\rangle-\langle\nabla_{T}^{ 2r-3-(2l-1)}T,F_{2r-2}\rangle\big{)}.\] As \(2r-3-(2l-1)=2r-2-2l\) is clearly even it is a direct consequence of the Frenet equations (3.6) that \(\nabla_{T}^{2r-3-(2l-1)}T\) can be written as \[\nabla_{T}^{2r-3-(2l-1)}T=\sum_{j}a_{j}F_{2j+1},\] where \(a_{j}\) are functions of the geodesic curvatures \(k_{j}\), such that \[\langle\nabla_{T}^{2r-3-(2l+1)}T,F_{2r-2}\rangle=0.\] Secondly, using (3.11), we find \[\langle\nabla_{T}^{2r-3-2l}T,F_{2r-2}\rangle= \sum_{j=1}^{r-l-3}a_{j}\underbrace{\langle F_{2j},F_{2r-2}\rangle} _{=0}\] \[-\big{(}\prod_{i=1}^{2r-2l-5}k_{i}\big{)}\big{(}\sum_{j=1}^{2r-2l- 4}k_{j}^{2}\big{)}\underbrace{\langle F_{2(r-l-2)},F_{2r-2}\rangle}_{=0}\] \[+\big{(}\prod_{i=1}^{2r-2l-3}k_{i}\big{)}\underbrace{\langle F_{2 (r-l-1)},F_{2r-2}\rangle}_{=0}\] for all \(l\geq 1\). 
Hence, we may conclude that \[\sum_{l=1}^{r-2}(-1)^{l}\langle\nabla_{T}^{2r-3-l}T,F_{2r-2}\rangle=0\] and by the same reasoning we can also deduce that \[\langle\nabla_{T}^{l}T,F_{2r-2}\rangle=0,\qquad 1\leq l\leq r-2.\] Now, using (3.11) and (3.12) we obtain from the equation for \(r\)-harmonic curves (3.13) that \[0= \langle\nabla_{T}^{2r-1}T,F_{2r-2}\rangle+K\langle\nabla_{T}^{2r- 3}T,F_{2r-2}\rangle\] \[= -\big{(}\prod_{i=1}^{2r-3}k_{i}\big{)}\big{(}\sum_{j=1}^{2r-2}k_{ j}^{2}\big{)}+K\big{(}\prod_{i=1}^{2r-3}k_{i}\big{)}.\] This completes the proof. ### Evidence in support of Conjecture 1 In this subsection we will collect a number of results which support the statement of Conjecture 1.11. First, recall that the equation for a triharmonic curve on a general Riemannian manifold is given by \[0=\tau_{3}(\gamma)=\nabla_{T}^{5}T+R^{M}(\nabla_{T}^{3}T,T)T-R^{M}(\nabla_{T} ^{2}T,\nabla_{T}T)T, \tag{3.14}\] which is precisely (1.4) for \(r=3\). **Proposition 3.5**.: _Let \(\gamma\colon I\to M\) be a triharmonic curve parametrized with respect to arclength. Then the following equation holds_ \[\frac{d^{3}}{ds^{3}}|\nabla_{T}T|^{2}-\frac{d}{ds}|\nabla_{T}^{2}T|^{2}= 0. \tag{3.15}\] Proof.: To obtain the first identity we multiply (3.14) by \(T\) and calculate \[0= \langle\nabla_{T}^{5}T,T\rangle\] \[= \frac{d}{ds}\langle\nabla_{T}^{4}T,T\rangle-\langle\nabla_{T}^{4 }T,\nabla_{T}T\rangle\] \[= \frac{d^{2}}{ds^{2}}\langle\nabla_{T}^{3}T,T\rangle-2\frac{d}{ds }\langle\nabla_{T}^{3}T,\nabla_{T}T\rangle+\langle\nabla_{T}^{3}T,\nabla_{T}^ {2}T\rangle\] \[= \frac{d^{3}}{ds^{3}}\langle\nabla_{T}^{2}T,T\rangle-3\frac{d^{2} }{ds^{2}}\langle\nabla_{T}^{2}T,\nabla_{T}T\rangle+\frac{5}{2}\frac{d}{ds}| \nabla_{T}^{2}T|^{2}.\] To finish the proof we note that \(\langle\nabla_{T}^{2}T,T\rangle=-|\nabla_{T}T|^{2}\) which holds as \(\gamma\) is parametrized with respect to arclength. **Corollary 3.6**.: _Let \(\gamma\colon I\to N\) be a triharmonic curve parametrized with respect to arclength. Then the following conservation law holds_ \[\frac{d^{2}}{ds^{2}}|\nabla_{T}T|^{2}-|\nabla_{T}^{2}T|^{2}= c_{1} \tag{3.16}\] _for some \(c_{1}\in\mathbb{R}\)._ Proof.: This is a direct consequence of the conservation law (3.15). Choosing a Frenet frame along \(\gamma\) equation (3.16) implies \[(k_{1}^{\prime})^{2}+2k_{1}k_{1}^{\prime\prime}-k_{1}^{4}-k_{1}^{2}k_{2}^{2}=c _{1},\] where \(k_{i},i=1,2\) represent the curvatures of the curve \(\gamma\). In the case of \(c_{1}=0\) this equation is solved by \[k_{1}=\frac{\alpha}{s},\qquad k_{2}=\frac{\beta}{s},\qquad\alpha^{2}+\beta^{2 }=5.\] which gives rise to the triharmonic curve with non-constant geodesic curvature and torsion constructed in [10]. In the following, we extend the previous analysis to the case of 4-harmonic curves. These are solutions of \[0=\tau_{4}(\gamma)=\nabla_{T}^{7}T+R^{M}(\nabla_{T}^{5}T,T)T-R^{M}(\nabla_{T} ^{4}T,\nabla_{T}T)T+R^{M}(\nabla_{T}^{3}T,\nabla_{T}^{2}T)T, \tag{3.17}\] which is precisely (1.4) in the case of \(r=4\). **Proposition 3.7**.: _Let \(\gamma\colon I\to M\) be a 4-harmonic curve parametrized with respect to arclength. Then the following conservation law holds_ \[\frac{d^{5}}{ds^{5}}|\nabla_{T}T|^{2}-2\frac{d^{3}}{ds^{3}}|\nabla_{T}^{2}T|^ {2}+\frac{d}{ds}|\nabla_{T}^{3}T|^{2}=0. 
\tag{3.18}\] Proof.: Testing (3.17) with \(T\), a direct calculation yields the following identity \[0=\frac{d^{2}}{ds^{2}}\langle\nabla_{T}^{5}T,T\rangle-2\frac{d}{ds}\langle\nabla_{T}^{5}T,\nabla_{T}T\rangle+\langle\nabla_{T}^{5}T,\nabla_{T}^{2}T\rangle.\] From the proof of Proposition 3.5 we know that \[\langle\nabla_{T}^{5}T,T\rangle=-\frac{5}{2}\frac{d^{3}}{ds^{3}}|\nabla_{T}T|^{2}+\frac{5}{2}\frac{d}{ds}|\nabla_{T}^{2}T|^{2}.\] Moreover, we have \[\langle\nabla_{T}^{5}T,\nabla_{T}T\rangle=\frac{1}{2}\frac{d^{4}}{ds^{4}}|\nabla_{T}T|^{2}-2\frac{d^{2}}{ds^{2}}|\nabla_{T}^{2}T|^{2}+|\nabla_{T}^{3}T|^{2}.\] Finally, a direct calculation shows \[\langle\nabla_{T}^{5}T,\nabla_{T}^{2}T\rangle=\frac{1}{2}\frac{d^{3}}{ds^{3}}|\nabla_{T}^{2}T|^{2}-\frac{3}{2}\frac{d}{ds}|\nabla_{T}^{3}T|^{2}.\] The claim follows by combining the different equations. A dimensional analysis of the conservation law (3.18) suggests the following: Assume that we are looking for a \(4\)-harmonic curve with non-constant geodesic curvature \(k_{1}\). Inspecting the terms in (3.18) suggests that \(k_{1}=\frac{C}{s^{2}}\), as all three terms then scale as \(\frac{1}{s^{9}}\). Hence, one may expect that there exist \(4\)-harmonic curves with non-constant geodesic curvature \(k_{1}=\frac{\alpha}{s^{2}},\alpha\in\mathbb{R}\). Finally, we note that the construction of conservation laws can be carried out for polyharmonic curves as well by multiplying (1.4) with \(T\) and manipulating the resulting equation as we have demonstrated for triharmonic and \(4\)-harmonic curves. Again, a simple dimensional analysis then leads to the conclusion of Conjecture 1.11.
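For the triharmonic case discussed after Corollary 3.6, this kind of scaling argument can be checked symbolically. The short sketch below (our illustration, not part of the original article) verifies that \(k_{1}=\alpha/s\), \(k_{2}=\beta/s\) with \(\alpha^{2}+\beta^{2}=5\) annihilates \((k_{1}^{\prime})^{2}+2k_{1}k_{1}^{\prime\prime}-k_{1}^{4}-k_{1}^{2}k_{2}^{2}\), i.e. solves the \(c_{1}=0\) case of the conservation law (3.16); the analogous computation with \(k_{1}\sim s^{-2}\) underlies the expectation formulated above for \(4\)-harmonic curves.

```python
import sympy as sp

s, alpha, beta = sp.symbols("s alpha beta", positive=True)

# Curvature and torsion of the candidate triharmonic curve (cf. Corollary 3.6):
k1 = alpha / s
k2 = beta / s

# Conservation law (3.16) with c_1 = 0, written in terms of the Frenet frame:
# (k1')^2 + 2*k1*k1'' - k1^4 - k1^2*k2^2 = 0
expr = sp.diff(k1, s)**2 + 2*k1*sp.diff(k1, s, 2) - k1**4 - k1**2*k2**2

# The expression vanishes precisely when alpha^2 + beta^2 = 5:
print(sp.factor(sp.expand(expr * s**4)))                    # -alpha**2*(alpha**2 + beta**2 - 5)
print(sp.simplify(expr.subs(beta, sp.sqrt(5 - alpha**2))))  # 0
```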
2308.12950
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
2023-08-24T17:39:13Z
http://arxiv.org/abs/2308.12950v3
# Code Llama: Open Foundation Models for Code ###### Abstract We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.1 Footnote 1: [https://github.com/facebookresearch/codellama](https://github.com/facebookresearch/codellama) + Footnote †: dagger\): Core contributors. \(*\): Meta AI, CERMICS École des Ponts ParisTech. \(\diamond\): Meta AI & Hebrew University of Jerusalem ## 1 Introduction Large language models (LLMs) power a rapidly increasing number of applications, having reached a proficiency in natural language that allows them to be commanded and prompted to perform a variety of tasks (OpenAI, 2023; Touvron et al., 2023b). By utilizing large, in-domain datasets, their efficacy can be greatly improved for applications that require a combination of both natural and domain-specific language and understanding of specialized terminology. By training on domain-specific datasets, they have proved effective more broadly on applications that require advanced natural language understanding. A prominent use-case is the formal interaction with computer systems, such as program synthesis from natural language specifications, code completion, debugging, and generating documentation (for a survey, see Xu & Zhu, 2022, also see Section 5). In this work, we present Code Llama, a family of LLMs for code generation and infilling derived from Llama 2 (Touvron et al., 2023b) and released under the same custom permissive license. We provide inference code for both completion and infilling models in the accompanying repository.1 Our approach is based on gradually specializing and increasing the capabilities of Llama 2 models by applying a cascade of training and fine-tuning steps (Figure 2): Footnote 1: [https://github.com/facebookresearch/codellama](https://github.com/facebookresearch/codellama) * **Code-training from foundation models.** While most LLMs for code generation such as AlphaCode (Li et al., 2022), InCoder (Fried et al., 2023) or StarCoder (Li et al., 2023) are trained on code only, Codex (Chen et al., 2021) was fine-tuned from a general language model. We also start from a foundation model (Llama 2, Touvron et al., 2023b) pretrained on general-purpose text and code data. Our comparison (Section 3.4.1) shows that initializing our model with Llama 2 outperforms the same architecture trained on code only for a given budget. 
* **Infilling.** Autoregressive training and fine-tuning of LLMs is suitable for prompt completion, but does not provide the capability to fill a missing portion of text while taking the full surrounding context into account. Our code-training for 7B and 13B Code Llama models features a multitask objective (Fried et al., 2023) consisting of both autoregressive and causal infilling prediction, enabling applications such as real-time completion in source code editors or docstring generation. Similarly to Bavarian et al. (2022); Li et al. (2023), our ablation study shows that infilling capabilities come at low cost in code generation performance for a given training compute budget (Section 3.2). * as opposed to function-level or file-level - requires prompting the model with much longer context than the 4,096 tokens supported by Llama 2. We propose an additional fine-tuning stage that extends the maximum context length from 4,096 tokens to 100,000 tokens by modifying the parameters of the RoPE positional embeddings (Su et al., 2021) used in Llama 2. Our experiments show Code Llama operating on very large contexts with a moderate impact on performances on standard coding benchmarks (Section 3.3). * Instruct variants are further fine-tuned on a mix of proprietary instruction data for improved safety and helpfulness, and a new machine-generated _self-instruct_ dataset created by prompting Llama 2 for coding problems and Code Llama to generate associated unit tests and solutions. Our results show that Code Llama - Instruct significantly improves performance on various truthfulness, toxicity and bias benchmarks at moderate cost in terms of code generation performance (Section 4). Different combinations of these approaches lead to a family of code-specialized Llama 2 models with three main variants that we release in three sizes (7B, 13B and 34B parameters): * Code Llama: a foundational model for code generation tasks, * Python: a version specialized for Python, * Instruct: a version fine-tuned with human instructions and self-instruct code synthesis data. An example of using Code Llama - Instruct is given in Figure 1. It show-cases that the model interprets natural language to determine suitable options for a command-line program and provides an explanation of the solution. We provide further qualitative examples in Appendix K. We perform exhaustive evaluations of our models on major code generation benchmarks: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (Hendrycks et al., 2021), as well as a multilingual version of HumanEval (MultiPL-E, Cassano et al., 2023), where our best models establish a new state of the art amongst open-source LLMs. The technical details of our training and fine-tuning procedures are provided in Section 2, followed by in-depth experiments and ablation studies, details of the safety/helpfulness evaluations and a discussion of related work. Figure 1: Example of response of Code Llama - Instruct (34B) when queried for a specific shell command. ## 2 Code Llama: Specializing Llama 2 for code ### The Code Llama models family Code Llama.The Code Llama models constitute foundation models for code generation. They come in three model sizes: 7B, 13B and 34B parameters. The 7B and 13B models are trained using an infilling objective (Section 2.3), and are appropriate to be used in an IDE to complete code in the middle of a file, for example. The 34B model was trained without the infilling objective. 
All Code Llama models are initialized with Llama 2 model weights and trained on 500B tokens from a code-heavy dataset (see Section 2.2 for more details). They are all fine-tuned to handle long contexts as detailed in Section 2.4. Code Llama - Python.The Code Llama - Python models are specialized for Python code generation and also come in sizes of 7B, 13B, and 34B parameters. They are designed to study the performance of models tailored to a single programming language, compared to general-purpose code generation models. Initialized from Llama 2 models and trained on 500B tokens from the Code Llama dataset, Code Llama - Python models are further specialized on 100B tokens using a Python-heavy dataset (Section 2.2). All Code Llama - Python models are trained without infilling and subsequently fine-tuned to handle long contexts (Section 2.4). Code Llama - Instruct.The Code Llama - Instruct models are based on Code Llama and fine-tuned with an additional approx. 5B tokens to better follow human instructions. More details on Code Llama - Instruct can be found in Section 2.5. ### Dataset We train Code Llama on 500B tokens during the initial phase, starting from the 7B, 13B, and 34B versions of Llama 2. As shown in Table 1, Code Llama is trained predominantly on a near-deduplicated dataset of publicly available code. We also source 8% of our sample data from natural language datasets related to code. This dataset contains many discussions about code and code snippets included in natural language questions or answers. To help the model retain natural language understanding skills, we also sample a small proportion of our batches from a natural language dataset. Data is tokenized via byte pair encoding (BPE, Sennrich et al. (2016)), employing the same tokenizer as Llama and Llama 2. Preliminary experiments suggested that adding batches sampled from our natural language dataset improves the performance of our models on MBPP.

Figure 2: **The Code Llama specialization pipeline**. The different stages of fine-tuning annotated with the number of tokens seen during training. Infilling-capable models are marked with the \(\rightleftarrows\) symbol.

### Infilling Code infilling is the task of predicting the missing part of a program given a surrounding context. Applications include code completion at the cursor's position in code IDEs, type inference and generation of in-code documentation (e.g., docstrings). We train infilling models following the concept of causal masking (Aghajanyan et al., 2022; Fried et al., 2023), where parts of a training sequence are moved to the end, and the reordered sequence is predicted autoregressively. We train the general-purpose 7B and 13B models with an infilling objective, following the recommendations of Bavarian et al. (2022). 
To limit the distribution shift between autoregressive and infilling training, we suppress the implicit leading space that SentencePiece tokenizers add upon encoding the middle part and the suffix (Kudo and Richardson, 2018). In SPM format, we concatenate the prefix and the middle part before encoding to tokens. Note that our model doesn't encounter split subtokens in the SPM format while it does in the PSM format. Results on the effect of infilling training on downstream generation tasks and the performance of our infilling models on infilling benchmarks are reported in Section 3.2. ### Long context fine-tuning Effective handling of long sequences is a major topic of research in transformer-based language modeling (Vaswani et al., 2017). The fundamental modeling challenges are extrapolation, i.e., operating on sequence lengths beyond those seen at training time, and the quadratic complexity of attention passes which favors training on short-to-medium length inputs. For Code Llama, we propose a dedicated _long context fine-tuning (LCFT)_ stage in which models are presented with sequences of 16,384 tokens, up from the 4,096 tokens used for Llama 2 and our initial code training stages. By limiting the training time spent on processing long sequences to a fine-tuning stage, we gain long-range capabilities without significantly increasing the cost of training our models. Our strategy is similar to the recently proposed fine-tuning by position interpolation (Chen et al., 2023), and we confirm the importance of modifying the rotation frequencies of the rotary position embedding used in the Llama 2 foundation models (Su et al., 2021). However, instead of downscaling frequencies linearly as Chen et al. (2023), we change the base period from which they are derived. Specifically, with rotary embeddings, the query and key vectors \(\mathbf{x}_{n}\) at position \(n\) are subject to a linear transformation \(\mathbf{R}_{\Theta,n}^{d}\mathbf{x}_{n}\), where \(\mathbf{R}_{\Theta,n}^{d}\) is a block diagonal matrix with entries of the form \[\left(\mathbf{R}_{\Theta,n}^{d}\right)_{i}=\begin{pmatrix}\cos n\theta_{i}&- \sin n\theta_{i}\\ \sin n\theta_{i}&\cos n\theta_{i}\end{pmatrix},\] and \(d\) denotes the embedding dimension. Rotation frequencies are computed as \(\theta_{i}=\theta^{-2i/d}\), and we increase the base period \(\theta\) from 10,000 to 1,000,000 for fine-tuning. This increase allows for processing much larger sequences and reduces bias towards short-distance attention (see Appendix F.1 for further discussion). Our experiments confirm that Code Llama models are not only effective within the increased sequence length used during fine-tuning, but further show extrapolation capabilities and exhibit stable behavior on very long sequences of up to 100,000 tokens (Section 3.3). ### Instruction fine-tuning Our instruction fine-tuned models Code Llama - Instruct are based on Code Llama and trained to answer questions appropriately. They are trained on three different types of data. Proprietary dataset.We use the instruction tuning dataset collected for Llama 2 and described in detail by Touvron et al. (2023). Specifically, we use the version referred to in their paper as "RLHF V5", collected trough several stages of reinforcement learning from human feedback and human feedback annotation (see their Section 3 for more details). It combines thousands of Supervised Fine-Tuning and millions of Rejection Sampling examples. 
Each example consists of a multi-turn dialogue between a _user_ and an _assistant_. For Rejection Sampling, the output was selected among several generations using a reward model. The final dataset contains both Helpfulness and Safety data. This enables Code Llama to inherit Llama 2's instruction following and safety properties. Self-instruct.Our proprietary dataset contains few examples of code-related tasks. Collecting supervised data from human annotators or training from human feedback (Ouyang et al., 2022) is expensive for coding tasks as it requires input from professional developers. Instead of human feedback, we use execution feedback to select data to train our instruct model. We construct the self-instruction dataset following the recipe below, resulting in \(\sim\)14,000 question-tests-solution triplets: 1. Generate 62,000 interview-style programming questions by prompting (Figure 9) Llama 2 70B. 2. De-duplicate the set of questions by removing exact duplicates, resulting in \(\sim\)52,000 questions. 3. For each of these questions: 1. Generate unit tests by prompting Code Llama 7B (Figure 10) 2. Generate ten Python solutions by prompting Code Llama 7B (Figure 11) 3. Run the unit tests on the ten solutions. Add the first solution that passes the tests (along with its corresponding question and tests) to the self-instruct dataset. We use Code Llama 7B to generate the tests and Python solutions, as we found it more efficient than generating fewer solutions per question with the 34B model for the same compute budget. Rehearsal.In order to prevent the model from regressing on general coding and language understanding capabilities, Code Llama - Instruct is also trained with a small proportion of data from the code dataset (6%) and our natural language dataset (2%). ### Training details Optimization.Our optimizer is AdamW (Loshchilov and Hutter, 2019) with \(\beta_{1}\) and \(\beta_{2}\) values of 0.9 and 0.95. We use a cosine schedule with 1000 warm-up steps, and set the final learning rate to be 1/30th of the peak learning rate. We use a batch size of 4M tokens which are presented as sequences of 4,096 tokens each. Despite the standard practice of using lower learning rates in fine-tuning stages than in pre-training stages, we obtained best results when retaining the original learning rate of the Llama 2 base model. We carry these findings to the 13B and 34B models, and set their learning rates to \(3e^{-4}\) and \(1.5e^{-4}\), respectively. For python fine-tuning, we set the initial learning rate to \(1e^{-4}\) instead. For Code Llama - Instruct, we train with a batch size of 524,288 tokens and on approx. 5B tokens in total. Long context fine-tuning.For long context fine-tuning (LCFT), we use a learning rate of \(2e^{-5}\), a sequence length of 16,384, and reset RoPE frequencies with a base value of \(\theta=10^{6}\). The batch size is set to 2M tokens for model sizes 7B and 13B and to 1M tokens for model size 34B, respectively. Training lasts for 10,000 gradient steps by default. We observed instabilities in downstream performance for certain configurations, and hence set the number of gradient steps to 11,000 for the 34B models and to 3,000 for Code Llama 7B. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Sampling prop. 
& Epochs & Disk size \\ \hline **Code Llama (500B tokens)** & & & \\ Code & 85\% & 2.03 & 859 GB \\ Natural language related to code & 8\% & 1.39 & 78 GB \\ Natural language & 7\% & 0.01 & 3.5 TB \\ \hline **Code Llama - Python (additional 100B tokens)** & & & \\ Python & 75\% & 3.69 & 79 GB \\ Code & 10\% & 0.05 & 859 GB \\ Natural language related to code & 10\% & 0.35 & 78 GB \\ Natural language & 5\% & 0.00 & 3.5 TB \\ \hline \hline \end{tabular} \end{table} Table 1: **Training dataset of Code Llama and Code Llama - Python. We train Code Llama on 500B additional tokens and Code Llama - Python further on 100B tokens.** ## 3 Results We report results on a variety of benchmarks. First, we evaluate our models on popular description-to-code generation benchmarks for Python: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (programming interviews and competitions, Hendrycks et al., 2021). Second, we evaluate our models on further programming languages using MultiPL-E (Cassano et al., 2023), namely on C++, Java, PHP, C#, TypeScript (TS), and Bash. We additionally report results on the GSM8K benchmark (Cobbe et al., 2021), which measures mathematical reasoning capabilities (Appendix C). Next, we perform an extensive ablation study: (i) we study the impact of training from scratch or from a pretrained Llama 2 model in Section 3.4.1; (ii) we perform ablations for infilling and additional infilling specific benchmarks in Section 3.2; (iii) we study the effect of long context fine-tuning on perplexity, a synthetic retrieval task, and code completion with long source code files (Section 3.3); and (iv) we evaluate our instruction fine-tuning procedure, which includes self-instruct training by leveraging self-generated unit tests in Section 3.4.2. 
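Throughout this section, results are reported as pass@\(k\) scores. For reference, a minimal sketch of the standard unbiased pass@\(k\) estimator of Chen et al. (2021) is shown below (our illustration; the exact evaluation harness used to produce the numbers reported here may differ):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at least
    one of k completions, drawn without replacement from n generated samples,
    passes the unit tests, given that c of the n samples are correct."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 sampled solutions per problem, 90 of which pass the tests.
print(pass_at_k(n=200, c=90, k=1))   # 0.45 (equals c/n for k=1)
print(pass_at_k(n=200, c=90, k=10))  # close to 1 at this per-sample success rate
```

For greedy decoding, where a single generation is produced per problem, pass@1 reduces to the fraction of problems whose generation passes all tests.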
\begin{table} \begin{tabular}{l r|l l l|l l l} \hline \hline Model & Size & \multicolumn{3}{c|}{HumanEval} & \multicolumn{3}{c}{MBPP} \\ & & pass@1 & pass@10 & pass@100 & pass@1 & pass@10 & pass@100 \\ \hline code-cushman-001 & 12B & 33.5\% & - & - & 45.9\% & - & - \\ GPT-3.5 (ChatGPT) & - & 48.1\% & - & - & 52.2\% & - & - \\ GPT-4 & - & **67.0\%** & - & - & - & - & - \\ PaLM & 540B & 26.2\% & - & - & 36.8\% & - & - \\ PaLM-Coder & 540B & 35.9\% & - & 88.4\% & 47.0\% & - & - \\ PaLM 2-S & - & 37.6\% & - & 88.4\% & 50.0\% & - & - \\ StarCoder Base & 15.5B & 30.4\% & - & - & 49.0\% & - & - \\ StarCoder Python & 15.5B & 33.6\% & - & - & 52.7\% & - & - \\ StarCoder Prompted & 15.5B & 40.8\% & - & - & 49.5\% & - & - \\ \hline \multirow{4}{*}{Llama 2} & 7B & 12.2\% & 25.2\% & 44.4\% & 20.8\% & 41.8\% & 65.5\% \\ & 13B & 20.1\% & 34.8\% & 61.2\% & 27.6\% & 48.1\% & 69.5\% \\ & 34B & 22.6\% & 47.0\% & 79.5\% & 33.8\% & 56.9\% & 77.6\% \\ & 70B & 30.5\% & 59.4\% & 87.0\% & 45.4\% & 66.2\% & 83.1\% \\ \hline \multirow{4}{*}{Code Llama} & 7B & 33.5\% & 59.6\% & 85.9\% & 41.4\% & 66.7\% & 82.5\% \\ & 13B & 36.0\% & 69.4\% & 89.8\% & 47.0\% & 71.7\% & 87.1\% \\ & 34B & 48.8\% & 76.8\% & 93.0\% & 55.0\% & 76.2\% & 86.6\% \\ \hline \multirow{4}{*}{Code Llama - Instruct} & 7B & 34.8\% & 64.3\% & 88.1\% & 44.4\% & 65.4\% & 76.8\% \\ & 13B & 42.7\% & 71.6\% & 91.6\% & 49.4\% & 71.2\% & 84.1\% \\ & 34B & 41.5\% & 77.2\% & 93.5\% & 57.0\% & 74.6\% & 85.4\% \\ Unnatural Code Llama & 34B & **62.2\%** & **85.2\%** & **95.4\%** & **61.2\%** & **76.6\%** & 86.7\% \\ \hline \multirow{4}{*}{Code Llama - Python} & 7B & 38.4\% & 70.3\% & 90.6\% & 47.6\% & 70.3\% & 84.8\% \\ & 13B & 43.3\% & 77.4\% & 94.1\% & 49.0\% & 74.0\% & 87.6\% \\ \cline{1-1} & 34B & 53.7\% & 82.8\% & 94.7\% & 56.2\% & 76.4\% & **88.2\%** \\ \hline \hline \end{tabular} \end{table} Table 2: **Code Llama pass@ scores on HumanEval and MBPP. The pass@1 scores of our models are computed with greedy decoding. The pass@10 and pass@100 scores are computed with nucleus sampling with p=0.95 and temperature 0.8 following our findings from Figure 6. Models are evaluated in zero-shot on Human Eval and 3-shot on MBPP. The instruct models are trained to be safe and aligned from the base Code Llama models. Results for other models as provided by Li et al. (2023) (code-cushman-001, StarCoder), OpenAI (2023) (GPT-3.5, GPT-4), and Chowdhery et al. (2022); Anil et al. 
(2023) (PaLM).** \begin{table} \begin{tabular}{l r r|r r r} \hline \hline Model & Size & Pass@ & Introductory & Interview & Competition \\ \hline GPT-Neo & 2.7B & \(\begin{array}{c}1\\ 5\end{array}\) & \(\begin{array}{c}3.9\%\\ 5.5\%\end{array}\) & \(\begin{array}{c}0.6\%\\ 0.8\%\end{array}\) & \(\begin{array}{c}0.0\%\\ 0.0\%\end{array}\) \\ \hline \multirow{4}{*}{Codex} & \multirow{4}{*}{12B} & \(\begin{array}{c}1\\ 5\end{array}\) & \(\begin{array}{c}4.1\%\\ 9.7\%\end{array}\) & \(\begin{array}{c}0.1\%\\ 0.5\%\end{array}\) & \(\begin{array}{c}0.0\%\\ 0.1\%\end{array}\) \\ & & \(\begin{array}{c}1000\\ 25.0\%\end{array}\) & \(\begin{array}{c}25.0\%\end{array}\) & \(\begin{array}{c}3.7\%\\ 3.2\%\end{array}\) \\ \hline AlphaCode & \multirow{4}{*}{1B} & \(\begin{array}{c}1000\\ 5\end{array}\) & \(\begin{array}{c}17.7\%\\ 14.4\%\end{array}\) & \(\begin{array}{c}5.2\%\\ 5.6\%\end{array}\) & \(\begin{array}{c}7.1\%\\ 4.6\%\end{array}\) \\ AlphaCode (Filtered 10000) & & \(\begin{array}{c}5\\ 5\end{array}\) & \(\begin{array}{c}18.2\%\\ 20.4\%\end{array}\) & \(\begin{array}{c}8.2\%\\ 9.7\%\end{array}\) & \(\begin{array}{c}6.7\%\\ 7.8\%\end{array}\) \\ \hline \multirow{4}{*}{Code Llama} & \multirow{4}{*}{13B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}10.8\%\\ 15.6\%\end{array}\) & \(\begin{array}{c}2.0\%\\ 3.1\%\end{array}\) & \(\begin{array}{c}0.8\%\\ 1.4\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}33.5\%\\ 3.2\%\end{array}\) & \(\begin{array}{c}9.4\%\\ 8.1\%\end{array}\) & \(\begin{array}{c}7.1\%\end{array}\) \\ \hline \multirow{4}{*}{Code Llama} & \multirow{4}{*}{13B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}23.7\%\\ 30.2\%\end{array}\) & \(\begin{array}{c}5.6\%\\ 8.1\%\end{array}\) & \(\begin{array}{c}2.1\%\\ 3.4\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}49.0\%\\ \end{array}\) & \(\begin{array}{c}18.4\%\\ 12.0\%\end{array}\) \\ \hline \multirow{4}{*}{Code Llama - Python} & \multirow{4}{*}{13B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}23.8\%\\ 32.8\%\end{array}\) & \(\begin{array}{c}7.1\%\\ 10.0\%\end{array}\) & \(\begin{array}{c}2.8\%\\ 4.3\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}51.6\%\\ 51.6\%\end{array}\) & \(\begin{array}{c}21.5\%\\ 14.6\%\end{array}\) \\ \hline \multirow{4}{*}{Code Llama - Instruct} & \multirow{4}{*}{13B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}28.9\%\\ 35.9\%\end{array}\) & \(\begin{array}{c}7.8\%\\ 11.1\%\end{array}\) & \(\begin{array}{c}\textbf{3.5\%}\\ **5.5\%**\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}54.9\%\\ 54.9\%\end{array}\) & \(\begin{array}{c}23.9\%\\ 23.9\%\end{array}\) & \(\begin{array}{c}\textbf{16.8\%}\end{array}\) \\ \hline \multirow{4}{*}{Code Llama - Python} & \multirow{4}{*}{7B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}12.9\%\\ 17.9\%\end{array}\) & \(\begin{array}{c}2.1\%\\ 3.1\%\end{array}\) & \(\begin{array}{c}1.1\%\\ 2.0\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}35.4\%\\ 35.4\%\end{array}\) & \(\begin{array}{c}9.4\%\\ 9.4\%\end{array}\) & \(\begin{array}{c}8.5\%\\ 8.5\%\end{array}\) \\ \hline \multirow{4}{*}{Code Llama - Instruct} & \multirow{4}{*}{13B} & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}24.0\%\\ 30.3\%\end{array}\) & \(\begin{array}{c}6.9\%\\ 9.6\%\end{array}\) & \(\begin{array}{c}2.4\%\\ 3.8\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & 
\(\begin{array}{c}49.0\%\\ 48.7\%\end{array}\) & \(\begin{array}{c}6.9\%\\ 9.6\%\end{array}\) & \(\begin{array}{c}2.4\%\\ 3.8\%\end{array}\) \\ \cline{1-1} & & \(\begin{array}{c}5\\ 10\end{array}\) & \(\begin{array}{c}31.6\%\\ 37.8\%\end{array}\) & \(\begin{array}{c}7.9\%\\ 11.1\%\end{array}\) & \(\begin{array}{c}3.2\%\\ 5.1\%\end{array}\) \\ & & \(\begin{array}{c}100\end{array}\) & \(\begin{array}{c}37.8\%\\ 55.7\%\end{array}\) & \(\begin{array}{c}11.1\%\\ 22.8\%\end{array}\) & \(\begin{array}{c}5.1\%\\ 16.4\%\end{array}\) \\ \hline \hline \end{tabular} \end{table} Table 3: **Code Llama pass@ scores on APPS.** We list the two-shot pass@5, pass@10, and pass@100 scores of Code Llama on APPS. For our models, we use nucleus sampling with p=0.95 and a temperature of 0.6. Code Llama is not fine-tuned on the training set of APPS and all results are calculated with raw predictions without filtering by the test cases from the prompt. Fine-tuned GPT-Neo numbers are reported by Hendrycks et al. (2021), one-shot Codex results by Chen et al. (2021), and fine-tuned AlphaCode numbers by Li et al. (2022). ### Code generation #### 3.1.1 Python code generation We start by reporting results for Python code generation using the HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021) and APPS (Hendrycks et al., 2021) benchmarks. Results are summarized in Tables 2 and 3. The full list of results on HumanEval and MBPP, including models with and without infilling and long context fine-tuning, can be found in Table 10 in Appendix B. We provide zero-shot results of our instruction fine-tuned models on APPS in Table 15 with evaluation details in Appendix E. Our main findings are as follows. The value of model specialization.We observe that model specialization yields a boost in code generation capabilities when comparing Llama 2 to Code Llama and Code Llama to Code Llama - Python. Llama 2 was trained on 2T tokens, and training on only 500B of extra tokens from a code-heavy dataset results in massive performance gains on both HumanEval and MBPP, to the point that Llama 2 70B is roughly equivalent to Code Llama 7B on Python coding benchmarks. Although Code Llama was trained on more than two epochs of our code dataset, which contains our entire Python dataset, training on 100B extra tokens of a Python-heavy data mix leads to significant gains on Python code generation benchmarks, between 4.3 and 8.3 percentage points in HumanEval pass@1 and between 1.2 and 6.4 percentage points in MBPP pass@1. These gains are smaller than for the first code training step, but still allow Code Llama - Python 7B to outperform even Code Llama 13B on MBPP and HumanEval. For the APPS benchmark, the prompts are much less direct and more complex compared to MBPP and HumanEval. Our Code Llama - Python models show slightly decreased performance on the introductory and interview level problems, where understanding the prompt is often more challenging for a language model than implementing a solution. However, Code Llama - Python shows clear gains on the competition-level problems where solutions are more complex. While large language models have enough capacity to learn to generate text on various topics, we observe that model specialization is beneficial for models between 7B and 34B parameters and after two full epochs on the training data. Scaling of specialized models.We observe that scaling the number of parameters matters for models specialized for coding. 
With the same training process, our larger models outperform their smaller counterparts on almost every metric from HumanEval, MBPP and APPS (Tables 2 and 3). For instance, we gain 5.6 percentage points on MBPP pass@1 scaling Code Llama from 7B to 13B parameters, and 8 more points when scaling to 34B. We can hypothesize that specializing larger models to code would lead to significant further gains on coding tasks. Moreover, the Chinchilla scaling laws (Hoffmann et al., 2022) indicate that larger models would benefit more from training on more tokens. #### 3.1.2 Multilingual evaluation Next, we evaluate our models on a more diverse set of programming languages. For that, we use the MultiPL-E benchmark (Cassano et al., 2023). We report results for Python, C++, Java, PHP, TypeScript, C#, and Bash in Table 4. We observe a similar improvement from Llama 2 to Code Llama in the multilingual setting as in the evaluation on Python (Section 3.1.1). The Code Llama models clearly outperform Llama 2 models of the same size on code generation in any language, and Code Llama 7B even outperforms Llama 2 70B. Compared to other publicly available models, ours are especially strong in the multilingual setting. Code Llama 7B outperforms larger models such as CodeGen-Multi or StarCoder, and is on par with Codex (code-cushman-001, Chen et al., 2021). The performance of Code Llama - Python is comparable to that of Code Llama. Code Llama - Python 34B performs slightly worse than Code Llama but Code Llama - Python 7B and 13B perform slightly better than their counterparts without Python fine-tuning. More detailed results can be found in Table 11, Appendix B. To better understand the influence of multilingual pre-training, we measure the correlations between each of the evaluated languages and report the results separately for different model sizes in Figure 3. We observe high correlation between model performance on C++, C#, Java, and PHP. Interestingly, we also notice strong correlation between model performance on Python and Bash. Lastly, as expected, the bigger and more expressive the models, the higher the correlation between the performance across all different languages. 
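As an illustration of the kind of correlation analysis summarized in Figure 3, the following sketch (our own illustration, not the analysis code behind the figure) correlates per-language pass@1 columns such as those in Table 4; the three rows shown are the Code Llama entries from that table, so with this few observations the numbers are purely indicative.

```python
import numpy as np

langs = ["C++", "Java", "PHP", "TS", "C#", "Bash"]
# One row per model, one column per language (pass@1 in %, Code Llama rows of Table 4).
scores = np.array([
    [28.6, 34.2, 24.2, 33.3, 25.3, 12.0],  # Code Llama 7B
    [39.1, 38.0, 34.2, 29.6, 27.3, 15.2],  # Code Llama 13B
    [47.8, 45.6, 44.1, 33.3, 30.4, 17.1],  # Code Llama 34B
])

# Treat languages as variables and models as observations.
corr = np.corrcoef(scores, rowvar=False)
for lang, row in zip(langs, corr):
    print(lang, np.round(row, 2))
```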
### Infilling evaluations Performance cost of infilling training.Previous studies on infilling (or _fill-in-the-middle, FIM_) code models assert that the traditional next token prediction objective can be replaced by a multitask infilling \begin{table} \begin{tabular}{l r r r r r r r|r} \hline \hline Model & Size & \multicolumn{6}{c}{Multi-lingual Human-Eval} \\ & & C++ & Java & PHP & TS & C\# & Bash & Average \\ \hline CodeGen-Multi & 16B & 21.0\% & 22.2\% & 8.4\% & 20.1\% & 8.2\% & 0.6\% & 13.4\% \\ CodeGeeX & 13B & 16.9\% & 19.1\% & 13.5\% & 10.1\% & 8.5\% & 2.8\% & 11.8\% \\ code-cushman-001 & 12B & 30.6\% & 31.9\% & 28.9\% & 31.3\% & 22.1\% & 11.7\% & 26.1\% \\ StarCoder Base & 15.5B & 30.6\% & 28.5\% & 26.8\% & 32.2\% & 20.6\% & 11.0\% & 25.0\% \\ StarCoder Python & 15.5B & 31.6\% & 30.2\% & 26.1\% & 32.3\% & 21.0\% & 10.5\% & 25.3\% \\ \hline \multirow{4}{*}{Llama-v2} & 7B & 6.8\% & 10.8\% & 9.9\% & 12.6\% & 6.3\% & 3.2\% & 8.3\% \\ & 13B & 13.7\% & 15.8\% & 13.1\% & 13.2\% & 9.5\% & 3.2\% & 11.4\% \\ & 34B & 23.6\% & 22.2\% & 19.9\% & 21.4\% & 17.1\% & 3.8\% & 18.0\% \\ & 70B & 30.4\% & 31.7\% & 34.2\% & 15.1\% & 25.9\% & 8.9\% & 24.4\% \\ \hline \multirow{4}{*}{Code Llama} & 7B & 28.6\% & 34.2\% & 24.2\% & 33.3\% & 25.3\% & 12.0\% & 26.3\% \\ & 13B & 39.1\% & 38.0\% & 34.2\% & 29.6\% & 27.3\% & 15.2\% & 30.6\% \\ & 34B & **47.8\%** & **45.6\%** & **44.1\%** & 33.3\% & 30.4\% & 17.1\% & **36.4\%** \\ \hline \multirow{4}{*}{Code Llama - Instruct} & 7B & 31.1\% & 30.4\% & 28.6\% & 32.7\% & 21.6\% & 10.1\% & 25.8\% \\ & 13B & 42.2\% & 40.5\% & 32.3\% & 39.0\% & 24.0\% & 13.9\% & 32.0\% \\ & 34B & 45.3\% & 43.7\% & 36.6\% & **40.3\%** & 31.0\% & **19.6\%** & 36.1\% \\ \hline \multirow{4}{*}{Code Llama - Python} & 7B & 32.3\% & 35.4\% & 32.3\% & 23.9\% & 24.7\% & 16.5\% & 27.5\% \\ & 13B & 39.1\% & 37.3\% & 33.5\% & 35.2\% & 29.8\% & 13.9\% & 31.5\% \\ \cline{1-1} & 34B & 42.2\% & 44.9\% & 42.9\% & 34.3\% & **31.7\%** & 14.6\% & 35.1\% \\ \hline \hline \end{tabular} \end{table} Table 4: **Multi-Lingual HE Pass@1 scores.** Pass@1 scores for different programming languages using greedy decoding. These scores are computed in zero-shot. Results for other models from Li et al. (2023). Figure 3: **Correlations between Languages.** Correlation scores between the Python, C++, Java, PHP, C#, TypeScript (TS), and Bash, reported for different model sizes. The code for this figure was generated by Code Llama - Instruct, the prompt and code can be seen in Figure 21. objective with an infilling rate of up to 90 % at no cost for left-to-right autoregressive test losses (Bavarian et al., 2022) and only small cost for downstream evaluation performance (Allal et al., 2023). In Table 5, we independently validate both findings at the scale of 7B and 13B parameters and 500B training tokens of code. The 7B model loses 0.6 percentage points on average across HumanEval and MBPP pass@1, pass@10 and pass@100 scores if trained with an infilling objective, while the 13B model loses 1.1 percentage points. Because of this modest decline in performance and the wide applicability of models with infilling capability, we decide to release Code Llama 7B and 13B in this configuration. Code infilling benchmarks.Our infilling models reach state-of-the-art performances in code infilling benchmarks among models of their size. We evaluate on two related code infilling benchmarks based on the HumanEval benchmark (Chen et al., 2021). 
The HumanEval infilling benchmark (Fried et al., 2023) turns the reference solutions of the HumanEval benchmark (Chen et al., 2021) into infilling problems by masking out either individual lines or blocks consisting of multiple consecutive lines. It has been extended in Bavarian et al. (2022) with a random span infilling task in which the masking is applied to a randomly selected substring at the character level. Predictions are scored with a pass@1 score based on the test cases of the original HumanEval problems. According to the results in Table 14, our models outperform all other infilling models of their size. Note, however, that the results in random span infilling are significantly worse in suffix-prefix-middle (SPM) format than in prefix-suffix-middle (PSM) format, as it would require token healing (Microsoft, 2023), which we have not implemented for this evaluation (see Appendix D for further discussion).

Allal et al. (2023) translates the HumanEval infilling benchmark to other programming languages using MultiPL-E (Cassano et al., 2023). Single lines are masked and predictions are scored with an exact match metric against the ground truth solution. Our models, including Code Llama 7B, outperform all open infilling models across the three programming languages contained in the benchmark (Table 6). We observe a further increase in performance when prompting the models in SPM format, as witnessed in Bavarian et al. (2022).

### Long context evaluations

We explore Code Llama's ability to work with long sequences by measuring perplexity, key retrieval accuracy and performance during generation on code completion tasks. These tasks, and our results, are detailed below. For full results and comparisons to alternative techniques of increasing the context length of LLMs, we refer to Appendix F.

Perplexity during extrapolation. In Figure 4(a), perplexity is computed over 4M tokens from the code dataset, using a subset of our validation data consisting of large source files (\(\geq\)50kB). For all model sizes, we observe a steady decrease in perplexity well beyond 16384 tokens, which is the sequence length we use for long-context fine-tuning. After 100K tokens, the perplexity increases only slightly, in contrast to the well-known instability phenomenon when testing transformer models on sequences larger than those seen during training (Press et al., 2022).

\begin{table} \begin{tabular}{l c c r r r r r r r} \hline \hline Model & FIM & Size & \multicolumn{3}{c}{HumanEval} & \multicolumn{3}{c}{MBPP} & Test loss \\ & & & pass@1 & pass@10 & pass@100 & pass@1 & pass@10 & pass@100 & \\ \hline Code Llama (w/o LCFT) & ✗ & 7B & 33.2\% & 43.3\% & 49.9\% & 44.8\% & 52.5\% & 57.1\% & 0.408 \\ & ✗ & 13B & 36.8\% & 49.2\% & 57.9\% & 48.2\% & 57.4\% & 61.6\% & 0.372 \\ \hline Code Llama (w/o LCFT) & ✓ & 7B & 33.6\% & 44.0\% & 48.8\% & 44.2\% & 51.4\% & 55.5\% & 0.407 \\ & ✓ & 13B & 36.2\% & 48.3\% & 54.6\% & 48.0\% & 56.8\% & 60.8\% & 0.373 \\ \hline \hline \end{tabular} \end{table} Table 5: **Performance cost of infilling training.** HumanEval and MBPP pass@1, pass@10 and pass@100 scores and test loss of Code Llama (without LCFT) trained without (✗) and with (✓) the fill-in-the-middle objective; on average, infilling training costs 0.6 points for the 7B model and 1.1 points for the 13B model.

Key retrieval. In Figure 4(b), we investigate key retrieval performance in a synthetic task.
The prompt consists of a large amount of syntactically valid Python code, with a function returning a scalar inserted at a specified position. The model is asked to complete an assert statement with the return value of the inserted function. Liu et al. (2023) showed that the inability to recall content placed in the middle of long prompts is a common failure mode in LLMs; our retrieval task is analogous to their setup, albeit tailored to code models which are not fine-tuned to follow instructions. All models exhibit strong retrieval performance on the sequence length they were trained on, with the exception of the 7B model for test cases in which the function is placed at the beginning of the prompt. We include OpenAI's gpt-3.5-turbo-16k-0613 as a reference. We query GPT with a system prompt of "Complete the following code." and a temperature of 0. For sequences beyond 16K tokens, i.e., when extrapolating, our models exhibit a decrease in performance (Appendix F.3). Single line completion.Finally, we test the benefits of the ability to handle long context sizes in a single line code completion task. Our task is based on the Long Code Completion (LCC) benchmark (Guo et al., 2023).2 The LCC test set is skewed towards shorter files and we hence sample a new set of examples from LCC's validation and test set with an equalized distribution over file size (Appendix F.2). In Table 7, we \begin{table} \begin{tabular}{l r r r r r r} \hline \hline Model & Size & \multicolumn{2}{c}{Python} & \multicolumn{2}{c}{Java} & \multicolumn{2}{c}{JavaScript} \\ & & PSM & SPM & PSM & SPM & PSM & SPM \\ \hline InCoder & 6B & & 31.0\% & & 49.0\% & & 51.0\% \\ SantaCoder & 1.1B & & 44.0\% & & 62.0\% & & 60.0\% \\ StarCoder & 15.5B & & 62.0\% & & 73.0\% & & 74.0\% \\ \hline \multirow{2}{*}{Code Llama} & 7B & 67.6\% & 72.7\% & 74.3\% & 77.6\% & 80.2\% & 82.6\% \\ \cline{2-7} & 13B & **68.3\%** & **74.5\%** & **77.6\%** & **80.0\%** & **80.7\%** & **85.0\%** \\ \hline \hline \end{tabular} \end{table} Table 6: **Multilingual HumanEval single line infilling with MultiPL-E. Exact match rates on the line infilling benchmark from Allal et al. (2023) with greedy decoding. Evaluated in both prefix-suffix-middle (PSM) and suffix-prefix-middle (SPM) format. Numbers for InCoder, SantaCoder and StarCoder are reported from Li et al. (2023).** Figure 4: **Code Llama behavior on long sequences.****(a)** Perplexity on large source files (\(\geq\)50 kB) from the validation data from the code dataset. The dashed line marks the fine-tuning context length. Perplexity decreases for up to 100K tokens for all Code Llama sizes. **(b)** Accuracy on a synthetic key retrieval task, with a context of 16K tokens and comparison to gpt-3.5-turbo. compare the completion accuracy of the Code Llama models to their counterparts prior to long-context fine-tuning. Non-LCFT models fail to generate meaningful completions on long sequences and we thus truncate their prompts to the 4,000 tokens immediate preceding the line to complete. Across all metrics, models fine-tuned to handle long contexts achieve significantly higher performance. This demonstrates that long contexts are informative for code completion, and that with LCFT our models are able to leverage this information to improve their generations. We note that the longest example's prompt in this test consists of 103K tokens, for which all Code Llama models generate syntactically correct completions, with the 7B model producing an exact match. 
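The synthetic key retrieval task described above can be reproduced in a few lines: a small function returning a scalar is inserted at a chosen depth inside a long stretch of valid Python code, and the model must complete an assert on its return value. A hedged sketch; the filler code, key value, and prompt wording are illustrative choices, not the exact ones used in the paper:

```python
def build_key_retrieval_prompt(filler_files: list[str], insert_at: int,
                               key: int = 42) -> str:
    """Assemble a long prompt of valid Python code with a 'key' function
    buried at position `insert_at`, then ask for the assert completion."""
    key_fn = (
        "def get_magic_number():\n"
        f"    return {key}\n"
    )
    parts = filler_files[:insert_at] + [key_fn] + filler_files[insert_at:]
    context = "\n\n".join(parts)
    # The model is expected to complete the line with the return value.
    return context + "\n\nassert get_magic_number() == "

# Toy usage with tiny filler snippets; real prompts pad to ~16K tokens or more.
filler = [f"def f{i}(x):\n    return x + {i}\n" for i in range(5)]
prompt = build_key_retrieval_prompt(filler, insert_at=2)
print(prompt[-80:])
```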
Performance impact on short contexts. While our models are effective on long sequences, we observe that LCFT slightly hurts performance on standard code synthesis benchmarks consisting of short sequences. In Table 10, we observe an average decrease of 0.52 percentage points on HumanEval pass@1 and 1.9 points on MBPP for the pass@1 metric. Similarly, a breakdown of the code completion results in Table 7 by the number of tokens in each example shows that for prompts shorter than 4k tokens, long context fine-tuning induces a reduction of up to 2 BLEU points from base models after code training (Figure 7(b)). We observe similar decreases in performance for infilling tasks (Table 14). LCFT comes at a cost for short sequences, and slightly decreases our scores on standard coding benchmarks such as HumanEval and MBPP. However, many real-world use cases are not captured by these benchmarks, and we believe that this cost is more than offset by the potential of handling long sequences for real downstream applications. Hence we opt to release all our Code Llama, Code Llama - Python and Code Llama - Instruct models with long-context capabilities.

\begin{table} \begin{tabular}{l l l r r r r r r} \hline \hline Model & Size & LCFT & EM & BLEU & EM & BLEU & EM & BLEU \\ \hline Code Llama & 7B & ✗ & 36.86 & 60.16 & 47.82 & 69.20 & 46.29 & 67.75 \\ Code Llama & 7B & ✓ & **39.23** & **61.84** & **51.94** & **71.89** & **50.20** & **70.22** \\ \hline Code Llama & 13B & ✗ & 37.96 & 61.33 & 50.49 & 69.99 & 49.22 & 69.87 \\ Code Llama & 13B & ✓ & **41.06** & **62.76** & **52.67** & **72.29** & **52.15** & **71.00** \\ \hline Code Llama & 34B & ✗ & 42.52 & 63.74 & 54.13 & 72.38 & 52.34 & 71.36 \\ Code Llama & 34B & ✓ & **44.89** & **65.99** & **56.80** & **73.79** & **53.71** & **72.69** \\ \hline \hline \end{tabular} \end{table} Table 7: **Average single line completion performance on LCC-balanced.** Comparison of models before and after long-context fine-tuning in terms of exact match (EM) and BLEU. For non-LCFT models, context size limits are respected by truncating prompts to 4,000 tokens.

### Ablation studies

#### 3.4.1 Fine-tuning Llama 2 vs. training from scratch on code

Code Llama is based on the Llama 2 models, which are trained on 2T tokens of text, including only 80B tokens of code. We tune these models on 500B extra tokens, consisting mostly of code (85%). Figure 5(a) shows the training curves of Code Llama. We compare the 7B parameter model to an identical model trained from scratch on the same data mix (Figure 5(b)). At the end of training, the loss of the model trained from scratch is equal to the loss of Code Llama 7B at about half of its training (i.e., with 240B fewer training tokens). Moreover, this gap becomes larger over time.

#### 3.4.2 Instruction fine-tuning

General helpfulness vs. coding ability. We evaluate Code Llama - Instruct and compare it to Llama 2-Chat for coding tasks and helpfulness (Figure 5(c)). We observe that Code Llama improves its coding abilities for each model size, while preserving the general helpfulness performance inherited from Llama 2. The results on the helpfulness axis are an indication that Code Llama performs well at following general instructions. But we emphasize that this result should be taken with a grain of salt, since we limited our automatic evaluation to scoring the models' answers with the Llama 2 reward model.

The value of self-instruct data. We also perform ablations, showing the value of the self-instruct data that we generate with our own model.
To evaluate the capacity of the model to answer questions, we use a zero-shot version of MBPP. We prompt the model to generate the code between [PYTHON] and [/PYTHON] tags to make it easy to parse the result. Our exact prompt is shown in Figure 12 in the Appendix. Table 8 show the impact of training on data generated using our models and filtered with unit tests as described in Section 2.5. The self-instruct data allows us to improve our scores on benchmarks such as HumanEval and MBPP. It also makes the training more reliable. With self-instruct, the model easily learns to follow the format requested for MBPP zero-shot while it sometimes fails without it. Unnatural model.For comparison purposes, we also finetuned Code Llama - Python 34B on 15,000 unnatural instructions similarly to Honovich et al. (2023) using the same prompts as for the self-instruct dataset. We do not release this model, but we observe clear improvements on HumanEval and MBPP which are indicative of the improvements that can be reached with a small set of high-quality coding data. The results of the unnatural model are shown in Table 2. Figure 5: (a) **Training perplexity of Code Llama models.** The continued decrease at 500B tokens suggests further training would be beneficial. Results are presented without infilling for 7B and 13B models. (b) **Training losses** of both Code Llama 7B versus an identical model trained from scratch (c) **MBPP (coding benchmark) vs. Helpfulness** according to the helpfulness reward model from Llama 2 (Touvron et al., 2023b). \begin{table} \begin{tabular}{c c c c c} \hline \hline Size & SI & HumanEval & \multicolumn{2}{c}{MBPP} \\ & & & 3-shot & zero-shot \\ \hline \multirow{2}{*}{7B} & ✗ & 30.5\% & 43.4\% & 37.6\% \\ & ✓ & 34.8\% & 44.4\% & 37.4\% \\ \hline \multirow{2}{*}{13B} & ✗ & 40.9\% & 46.2\% & 20.4\% \\ & ✓ & 42.7\% & 49.4\% & 40.2\% \\ \hline \hline \end{tabular} \end{table} Table 8: **Impact of self-instruct data.** Impact of self-instruct data (SI) on the MBPP and HumanEval scores of our self-instruct models. The scores are computed using greedy decoding. In MBPP zero-shot, we prompt the model to generate the solution between [PYTHON]/PYTHON] tags. Removing SI results in generally lower scores on HumanEval and MBPP, and makes learning to generate code with the right format for MBPP zero shot much less reliable. #### 3.4.3 Pass@k evaluation We study the effect of the sampling temperature on the pass@k performance. Specifically, we report pass@1, 10, and 100 using temperature \(\in\{0.1,0.4,0.6,0.8\}\) on both HumanEval and MBPP. Results are depicted in Figure 6. As expected, as we increase the temperature, the pass@1 scores are getting worse while the pass@10 and pass@100 improve. ## 4 Responsible AI and safety Large language models have been shown to have the potential to produce known falsehoods due to misconceptions or false beliefs (Lin et al., 2022), generate toxic or offensive content (Hartvigsen et al., 2022) and reproduce or even amplify the biases that are contained in the training data (Dhamala et al., 2021). As mentioned in Section 2.5, we make Code Llama - Instruct safer by fine-tuning on outputs from Llama 2, including adversarial prompts with safe responses, as well as prompts addressing code-specific risks. In this section, we perform evaluations on three widely-used automatic safety benchmarks from the perspectives of truthfulness, toxicity, and bias, respectively. 
Specifically, we assess the safety capabilities of both pretrained Code Llama and fine-tuned Code Llama - Instruct with Falcon (Almazrouei et al., 2023), MPT (MosaicML, 2023), and StarCoder (Li et al., 2023). Although we have chosen certain standard benchmarks commonly used in the language model community to highlight some of the problems with these models, it's important to note that these evaluations alone do not provide a comprehensive understanding of the risks associated with them. We complement the safety analysis of Code Llama - Instruct with additional red teaming from various domain experts in offensive security, malware development, responsible AI and software engineering, similar to Touvron et al. (2023b). Truthfulness.We use **TruthfulQA**(Lin et al., 2022) to gauge the factuality and common sense of our models. The TruthfulQA benchmark comprises 817 questions spread across 38 categories, encompassing topics such as health, finance, law, and politics (Lin et al., 2022). The questions are designed to be challenging, even Figure 6: **Code Llama scores different temperature values. Results are presented for 7B, 13B, and 34B models on HumanEval and MBPP benchmarks. We report Pass@1, Pass@10, and Pass@100 for different temperature values. We use nucleus sampling with p=0.95.** for humans, causing them to answer incorrectly due to unfounded beliefs or misconceptions. To evaluate the generated outputs from LLMs, we utilize GPT-3-based metrics following Lin et al. (2022) to determine the truthfulness and informativeness of the outputs. For the QA prompt, we use a few-shot prompt containing 6 random QA pairs, structured according to the InstructGPT format (Ouyang et al., 2022). The results are reported as the percentage of generations that are both truthful and informative, as well as the percentage that are either truthful or informative. **Toxicity.** We use **ToxiGen**(Hartvigsen et al., 2022) to quantify the extent of toxic language and hate speech generation across various demographic groups. The ToxiGen dataset contains implicitly toxic and benign sentences mentioning 13 minority groups. Following Touvron et al. (2023b), we utilize an improved version of the dataset, which minimizes noise by removing prompts with disagreements among annotators regarding the target demographic group. To measure the toxicity of the generated outputs from each of the LLMs, we employ the default ToxiGen classifier, tuned on RoBERTa (Liu et al., 2019). **Bias.** We employ the Bias in Open-Ended Language Generation Dataset (**BOLD**) (Dhamala et al., 2021) to investigate how the sentiment in the model's outputs may differ based on demographic attributes. The BOLD benchmark consists of a total of 23,679 English Wikipedia prompts that span five domains: race, gender, religion, political ideology, and profession. These prompts cover 43 different subgroups. In our analysis, we exclude prompts belonging to the religious ideology subgroups Hinduism and Atheism due to their limited representation, consisting of only 12 and 29 prompts, respectively. To assess the sentiments conveyed by the combination of the prompt prefix and model generation, we employ sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner (VADER) (Hutto and Gilbert, 2014). The VADER produces sentiment scores between -1 and 1, where a positive (negative) score indicates a positive (negative) sentiment towards the population mentioned in the prompt. A score closer to 0 indicates a neutral sentiment. 
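The VADER scoring used for BOLD above can be computed with the standard `vaderSentiment` package; the compound score lies in [-1, 1] as described. A small sketch, with a made-up prompt/continuation pair for illustration (not an actual BOLD prompt):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def bold_sentiment(prompt_prefix: str, model_generation: str) -> float:
    """Score the concatenation of prompt prefix and model output; the
    'compound' score is in [-1, 1], with values near 0 read as neutral."""
    text = prompt_prefix + model_generation
    return analyzer.polarity_scores(text)["compound"]

# Illustrative example only.
score = bold_sentiment("As a software engineer, Alice ",
                       "is widely respected for her careful, elegant code.")
print(score)  # positive value -> positive sentiment
```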
**Benchmark evaluation results.** Table 9 shows the evaluation results of the three safety benchmarks. We follow the decoding setting as in Touvron et al. (2023b) where a temperature of 0.1 and top-p of 0.9 are used. Regarding TruthfulQA, we provide the percentage of generations that are both truthful and informative, where a higher percentage indicates better performance. Regarding ToxiGen, we present the percentage of generations deemed toxic by the metric, with a lower percentage indicating better results. Regarding BOLD, we present the average sentiment scores across demographic groups within the five domains in the BOLD dataset. The fine-tuned Code Llama - Instruct exhibits significant improvements over the pretrained Code Llama in terms of truthfulness (from 34.64 to 47.37 for 34B) and toxicity (from 17.62 to 0.00 for 34B). The percentage of toxic generations drastically reduces to virtually 0% across all Code Llama sizes, making it the least toxic among all the models compared. When compared to Falcon and MPT fine-tuned models, the fine-tuned Code Llama demonstrates the second-best performance level in both toxicity and truthfulness, right after Llama 2 Chat. Additionally, similar to Llama 2 Chat, the Code Llama - Instruct, after fine-tuning, also tends to show an overall increase in positive sentiment for many demographic groups in BOLD. More detailed results split by different demographic groups can be found in Appendix H. **Red teaming.** It is important to also proactively identify risks with adversarial testing or red teaming. We conducted 3 red teaming exercises with 25 Meta employees, including domain experts in responsible AI, malware development, and offensive security engineering. The red teamers provided a nuanced evaluation specifically on the risk from so called "dual intent prompts." Dual intent prompts are requests for help with writing code that could be used maliciously but the prompt does not directly address the topic (example "Mosaic Prompts" Glukhov et al. (2023)). For example, the model rightfully refuses to provide support with writing ransomware code but it complies when asked to provide a script to encrypt all files in the user's home directory since such a script could be used for benign purposes. After conducting red team exercises, we asked participants (who had also participated in Llama 2 Chat exercises) to also provide qualitative assessment of safety capabilities of the model. Some participants who had expertise in offensive security and malware development questioned the ultimate risk posed by "malicious code generation" through LLMs with current capabilities. One red teamer remarked, "While LLMs being able to iteratively improve on produced source code is a risk, producing source code isn't the actual gap. That said, LLMs may be risky because they can inform low-skill adversaries in production of scripts through iteration that perform some malicious behavior." According to another red teamer, "[v]arious scripts, program code, and compiled binaries are readily available on mainstream public websites, hacking forums or on 'the dark web.' Advanced malware development is beyond the current capabilities of available LLMs, and even an advanced LLM paired with an expert malware developer is not particularly useful- as the barrier is not typically writing the malware code itself. That said, these LLMs may produce code which will get easily caught if used directly." 
In addition to red teaming sessions, we ran a quantitative evaluation on risk from generating malicious code by scoring Code Llama's responses to ChatGPT's (GPT3.5 Turbo) with LLAMAv2 70B's safety reward model. For this second quantitative evaluation, we selected prompts that the red teamers generated specifically attempting to solicit malicious code (even though the red teaming included consideration of a broad set of safety risks). These prompts were a mix of clear intent and slightly obfuscated intentions (see some examples in Figure 15. We show a KDE plot of the distribution of the safety score for all models in Figure 7). We observe that Code Llama tends to answer with safer responses; the distribution of safety scores for Code Llama has more weight in the safer part of the range. False refusals.LLMs that are too safe can have a tendency to over-refuse valid claims similar to what was reported after the release of Llama 2. We specifically asked red teamers to test for this behavior. They found some limited evidence of false refusals (when not using a system preprompt). False refusals could also be solved by rephrasing the prompt e.g. "Can you tell me how to kill a process?" rephrased to "How do I kill a process?". We show some examples in Appendix Table 14. This behavior is something we plan to investigate in more details in the future. Safety and coding performance.As our instruction finetuning set prioritizes safety, longer finetunings tend to degrade coding performance. We trained our models to reach high coding performances, while not compromising on safety. As shown in Figure 7, our Code Llama - Instruct models are safer than ChatGPT. ## 5 Related work Early observations with LLMs such as GPT-Neo (Black et al., 2021) or GPT-J (Wang and Komatsuzaki, 2021) showed that adding code in the training data makes program synthesis possible even with medium size LLMs. Code from open-source software is now a standard part of the training data for general-purpose LLMs such Figure 7: KDE plot of the risk score output by the Llama 2 safety reward model on prompts with clear intent specific to code risk created by red teamers with background in cybersecurity and malware generation. as PaLM (Chowdhery et al., 2022), Chinchilla (Hoffmann et al., 2022), Gopher (Rae et al., 2021), GPT-4 (OpenAI, 2023), and Llama(Touvron et al., 2023a;b). In parallel, models specifically trained or fine-tuned for code understanding and program synthesis from natural language prompts emerged with LLMs such as Codex (Chen et al., 2021), CodeT5 (Wang et al., 2021), InCoder (Fried et al., 2023), AlphaCode (Li et al., 2022), CodeGen (Nijkamp et al., 2023b) and CodeGen 2 (Nijkamp et al., 2023a), GPT-NeoX (Black et al., 2022), SantaCoder (Allal et al., 2023), StarCoder (Li et al., 2023) and phi-1 (Gunasekar et al., 2023), consistently demonstrating better performance on code benchmarks than general-purpose LLMs of comparable or even larger size. This paper follows this line, by fine-tuning the recent general-purpose language model Llama 2 on code data. Closed-source vs open-source models.The landscape of LLMs is marked by whether the technology is free and the code is available for research or commercial use. ChatGPT and GPT-4 (OpenAI, 2023), PaLM (Chowdhery et al., 2022) and Chinchilla (Hoffmann et al., 2022) are closed source, while BLOOM (Scao et al., 2022), OPT (Zhang et al., 2022b), and the seminal work of Llama are public (Touvron et al., 2023a). 
The more recent Llama 2 has been released under a custom licence for commercial use (Touvron et al., 2023b). A similar dichotomy exists for code models, with Codex/copilot (Chen et al., 2021), AlphaCode (Li et al., 2022), GPT-4 or phi-1 (Gunasekar et al., 2023) being closed source, whereas the recent SantaCoder (Allal et al., 2023) and StarCoder (Li et al., 2023) have been released open-source and allow for commercial use. In this work, we allow for commercial use of the models under the same terms as Llama 2. Moreover, our largest model, with its 34B parameters, is significantly larger than previous open-source models - GPT-NeoX-20B (Black et al., 2022) and StarCoder with 15.5B parameters - which allows it to achieve state-of-the-art performances on HumanEval, MBPP and MultiPL-E among open-source models. Data.It is well-known that data quality is critical in the training and responsible development of LLMs (e.g., Hoffmann et al., 2022; Penedo et al., 2023), and this is also true for code as discussed by Allal et al. \begin{table} \begin{tabular}{l c c c} \hline \hline & TruthfulQA \(\uparrow\) & ToxiGen \(\downarrow\) & BOLD \\ \hline Pretrained models & & & \\ \hline Falcon TB & 25.95 & 14.53 & 0.283 \\ MPT 7B & 29.13 & 22.32 & 0.322 \\ StarCoder (Python) 15.5B & 22.77 & **10.36** & 0.310 \\ Llama 2 7B & 33.29 & 21.25 & 0.304 \\ Llama 2 13B & 41.86 & 26.10 & 0.330 \\ Llama 2 34B & **43.45** & 21.19 & 0.318 \\ Code Llama 7B & 26.19 & 22.64 & 0.230 \\ Code Llama 13B & 33.29 & 22.45 & 0.176 \\ Code Llama 34B & 34.64 & 17.62 & 0.255 \\ \hline \hline Instruct (aligned) & & & \\ \hline Falcon-instruct 7B & 28.03 & 7.89 & 0.332 \\ MPT-instruct 7B & 29.99 & 16.33 & 0.302 \\ Llama 2 Chat 7B & 57.04 & **0.00** & 0.482 \\ Llama 2 Chat 13B & 62.18 & **0.00** & 0.471 \\ Llama 2 Chat 34B & **67.20** & 0.02 & 0.461 \\ Code Llama - Instruct 7B & 31.46 & 0.04 & 0.503 \\ Code Llama - Instruct 13B & 36.84 & 0.01 & 0.365 \\ Code Llama - Instruct 34B & 47.37 & **0.00** & 0.452 \\ \hline \hline \end{tabular} \end{table} Table 9: **Evaluations on safety datasets** for both pretrained (base) models and aligned (instruct) models. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher, the better). For ToxiGen, we present the percentage of toxic generations (the smaller, the better). For BOLD, we present the average sentiment scores across demographic groups. A score closer to 0 indicates a neutral sentiment, while a positive (negative) score indicates a positive (negative) sentiment towards the population mentioned in the prompt. (2023). Modern models are trained on publicly available, open-source code. In addition, Allamanis (2019) and Allal et al. (2023) discuss the impact of effective deduplication and of selecting code from repositories based on the number of GitHub stars (as a proxy for popularity), while Li et al. (2023) augment their data with GitHub issues and commits collected from BigQuery. Gunasekar et al. (2023) filter data up to only containing "textbook"-quality code and add synthetic problems collected using GPT-3.5, following Jung et al. (2023), in order to obtain good performance on simple benchmarks such as HumanEval and MBPP. We follow the approach of learning from publicly available code only, without additional meta-level or temporal information such as issues or commits. 
We also do not train our foundation models on additional synthetic exercises, since we did not want to take the risk of reducing the scope of our models to simple coding exercises similar to those contained in HumanEval and MBPP. Code understanding and synthesis tasks.In addition to program synthesis from natural language prompts or infilling (Fried et al., 2023; Bavarian et al., 2022; Li et al., 2023; Nguyen et al., 2023), many tasks related to code understanding or synthesis have been addressed since the early 2020s with NLP models adapted for code (Raffel et al., 2020; Feng et al., 2020; Guo et al., 2021; Wang et al., 2021; Ahmad et al., 2021), also see the survey by Xu and Zhu (2022). These tasks include code summarization, refinement, translation (Roziere et al., 2020; 2021; Szafraniec et al., 2023) fixing bugs (Yasunaga and Liang, 2021; Zhang et al., 2022; Prenner et al., 2022), fixing build errors (Tarlow et al., 2020) or generating unit tests (Tufano et al., 2020; Li et al., 2022; Chen et al., 2023), as well as solving math problems as demonstrated by PaLM (Chowdhery et al., 2022) or Codex (Chen et al., 2021). 14 code understanding tasks are represented in the CodeXGlue benchmark (Lu et al., 2021). Here we focused on the main problem of program synthesis, as well as infilling/completion for our 7B and 13B models where the ability comes with little impact on the generation performance as previously observed by Bavarian et al. (2022). Additional modifications to LLM training and inference.A number of works proposed to incorporate within the training objective structural knowledge of programs, with specialized objectives for code deobfuscation (Lachaux et al., 2021), contrastive learning through semantic-preserving code transformations (Jain et al., 2021), leveraging Abstract Syntax Trees to learn tree-aware positional encodings (Shiv and Quirk, 2019; Peng et al., 2021). A recent stream of work takes into account program execution or unit tests to filter, cluster, or improve the correctness of programs when few candidates must be submitted (Li et al., 2022; Chen et al., 2023; Le et al., 2022; Zhang et al., 2023), or unit tests them within a reinforcement learning objective to enrich the training signal (Le et al., 2022; Liu et al., 2023). We focused here on improving the base model rather than tweaking the inference scheme, since we believe this is where most of the long-term progress comes from; it is nonetheless an interesting direction to experiment with more elaborated inference schemes on top of Code Llama. Long sequences in LLMs.Scaling Transformers and LLMs to long input sequences has attracted much recent interest (Dai et al., 2019; Beltagy et al., 2020; Yu et al., 2023; Ding et al., 2023). The context lengths supported by available models and APIs has seen a steady increase, with StarCoder being trained on 8K token sequences ((Li et al., 2023), up from the 4K of Allal et al. (2023)), recent GPT versions supporting 16K (gpt-3.5-turbo-16k) and 32K tokens (gpt-4-32k), MPT-7b fine-tuned on 65K tokens (MosaicML, 2023), and Claude featuring 100K context windows (Anthropic, 2023). Previous research focuses on alleviating the \(O(n^{2})\) space and time complexity of self-attention (Vaswani et al., 2017) by introducing sparsity patterns, as well as by encoding positional information in such a way that models can leverage input sizes larger than those presented at training time (length extrapolation). 
In our work, we do not rely on hand-crafted sparsity patterns such as those proposed for code input by Guo et al. (2023), who operate on sequences of up to 4,096 tokens, so as not to curtail the model's expressivity, and modify the encoding of positions instead. Starting from pretrained Llama 2 models that utilize RoPE (Su et al., 2021), Chen et al. (2023) propose additional fine-tuning for long sequence handling, an approach we pursue as well. However, we tailor our hyper-parameter modifications to allow for extrapolation at inference time. Our modification of the RoPE hyper-parameters (Su et al., 2021) is a simple modification which does not require any architectural changes or restrictions and can be readily applied to existing implementations. Press et al. (2022) propose a linear bias for tackling extrapolation; in contrast, our approach seeks to reduce existing bias towards short-range attention. Recent work suggests that causal models do not require an explicit encoding of position information (Haviv et al., 2022; Kazemnejad et al., 2023), a hypothesis we did not test in this work as we demonstrated that starting from pretrained Llama 2 models is significantly more efficient than training from scratch.

## 6 Discussion

We release a family of code-specialized Llama 2 models called Code Llama, with three main variants that we release with three sizes (7B, 13B and 34B parameters): Code Llama, Code Llama - Python, Code Llama - Instruct. With real-world applications in mind, we trained our 7B and 13B models to support infilling, and all our models to leverage large contexts. We tested their stability in inference up to 100K tokens (Figure 4(a)). Large context fine-tuning and infilling come at a cost on standard left-to-right code generation benchmarks (Table 10), which are all based on short sequences (i.e., function level). Still, our 34B model is state-of-the-art among public models on standard Python completion benchmarks, and our other models are competitive compared to models with similar numbers of parameters. On multilingual benchmarks, even our smallest model (Code Llama 7B) outperforms every other public model. The Code Llama - Instruct models are trained to provide zero-shot instruction ability to Code Llama. In this further fine-tuning, where we partially distill Llama 2-Chat, we focused not only on being more directly helpful (Figure 5(c)) but also sought to provide a safer model to use and deploy (Section 4). Following instructions and being overly safe can cost some points on evaluations (e.g. on HumanEval for the 34B model in Table 2), as exemplified in Figure 14. Further work is needed for LLMs to understand context and nuance in their instructions.
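As a concrete illustration of the RoPE hyper-parameter change discussed in the related-work paragraph above: the modification amounts to recomputing the rotary frequencies with a larger base period, with no architectural change. A minimal sketch; the base value of 1e6 below is an illustrative large base, and the exact value adopted by Code Llama is specified in the paper's long-context fine-tuning section, not here:

```python
import torch

def rope_frequencies(head_dim: int, max_pos: int, base: float = 1_000_000.0):
    """Rotary embedding angles with a configurable base period.

    Standard RoPE uses base=10_000; a larger base slows the rotation of
    low-frequency dimensions, which favors long-range attention.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float()
    angles = torch.outer(positions, inv_freq)      # (max_pos, head_dim // 2)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x, cos, sin):
    """Rotate pairs of channels of x (..., seq, head_dim) by the precomputed angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

cos, sin = rope_frequencies(head_dim=128, max_pos=16_384)
q = torch.randn(1, 16_384, 128)
print(apply_rope(q, cos, sin).shape)  # torch.Size([1, 16384, 128])
```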
2302.09495
Free-electron Brewster radiation
Free-electron radiation offers an enticing route to create light emission at arbitrary spectral regime. However, this type of light emission is generally weak, which is intrinsically limited by the weak particle-matter interaction and unavoidably impedes the development of many promising applications, such as the miniaturization of free-electron radiation sources and high-energy particle detectors. Here we reveal a mechanism to enhance the particle-matter interaction by exploiting the pseudo-Brewster effect of gain materials - presenting an enhancement of at least four orders of magnitude for the light emission. This mechanism is enabled by the emergence of an unprecedented phase diagram that maps all phenomena of free-electron radiation into three distinct phases in a gain-thickness parameter space, namely the conventional, intermediate, and Brewster phases, when an electron penetrates a dielectric slab with a modest gain and a finite thickness. Essentially, our revealed mechanism corresponds to the free-electron radiation in the Brewster phase, which also uniquely features ultrahigh directionality, always at the Brewster angle, regardless of the electron velocity. Counterintuitively, we find that the intensity of this free-electron Brewster radiation is insensitive to the Fabry-Perot resonance condition and thus the variation of slab thickness, and moreover, a weaker gain could lead to a stronger enhancement for the light emission. The scheme of free-electron Brewster radiation, especially along with its compatibility with low-energy electrons, may enable the development of high-directionality high-intensity light sources at any frequency.
Ruoxi Chen, Jialin Chen, Zheng Gong, Xinyan Zhang, Xingjian Zhu, Yi Yang, Ido Kaminer, Hongsheng Chen, Baile Zhang, Xiao Lin
2023-02-19T06:57:06Z
http://arxiv.org/abs/2302.09495v1
# Free-electron Brewster radiation ###### Abstract **Free-electron radiation offers an enticing route to create light emission at arbitrary spectral regime.** However, this type of light emission is generally weak, which is intrinsically limited by the weak particle-matter interaction and unavoidably impedes the development of many promising applications, such as the miniaturization of free-electron radiation sources and high-energy particle detectors. Here we reveal a mechanism to enhance the particle-matter interaction by exploiting the pseudo-Brewster effect of gain materials - presenting an enhancement of at least four orders of magnitude for the light emission. This mechanism is enabled by the emergence of _an unprecedented phase diagram_ that maps all phenomena of free-electron radiation into three distinct phases in a gain-thickness parameter space, namely the conventional, intermediate, and Brewster phases, when an electron penetrates a dielectric slab with a modest gain and a finite thickness. Essentially, our revealed mechanism corresponds to the free-electron radiation in the Brewster phase, which also uniquely features ultrahigh directionality,
2306.02037
A Peer-to-peer Federated Continual Learning Network for Improving CT Imaging from Multiple Institutions
Deep learning techniques have been widely used in computed tomography (CT) but require large data sets to train networks. Moreover, data sharing among multiple institutions is limited due to data privacy constraints, which hinders the development of high-performance DL-based CT imaging models from multi-institutional collaborations. Federated learning (FL) strategy is an alternative way to train the models without centralizing data from multi-institutions. In this work, we propose a novel peer-to-peer federated continual learning strategy to improve low-dose CT imaging performance from multiple institutions. The newly proposed method is called peer-to-peer continual FL with intermediate controllers, i.e., icP2P-FL. Specifically, different from the conventional FL model, the proposed icP2P-FL does not require a central server that coordinates training information for a global model. In the proposed icP2P-FL method, the peer-to-peer federated continual learning is introduced wherein the DL-based model is continually trained one client after another via model transferring and inter institutional parameter sharing due to the common characteristics of CT data among the clients. Furthermore, an intermediate controller is developed to make the overall training more flexible. Numerous experiments were conducted on the AAPM low-dose CT Grand Challenge dataset and local datasets, and the experimental results showed that the proposed icP2P-FL method outperforms the other comparative methods both qualitatively and quantitatively, and reaches an accuracy similar to a model trained with pooling data from all the institutions.
Hao Wang, Ruihong He, Xiaoyu Zhang, Zhaoying Bian, Dong Zeng, Jianhua Ma
2023-06-03T07:31:45Z
http://arxiv.org/abs/2306.02037v2
A Peer-to-peer Federated Continual Learning Network for Improving CT Imaging from Multiple Institutions ###### Abstract Deep learning techniques have been widely used in computed tomography (CT) but require large data sets to train networks. Moreover, data sharing among multiple institutions is limited due to data privacy constraints, which hinders the development of high-performance DL-based CT imaging models from multi-institutional collaborations. Federated learning (FL) strategy is an alternative way to train the models without centralizing data from multi-institutions. In this work, we propose a novel peer-to-peer federated continual learning strategy to improve low-dose CT imaging performance from multiple institutions. The newly proposed method is called peer-to-peer continual FL with intermediate controllers, i.e., icP2P-FL. Specifically, different from the conventional FL model, the proposed icP2P-FL does not require a central server that coordinates training information for a global model. In the proposed icP2P-FL method, the peer-to-peer federated continual learning is introduced wherein the DL-based model is continually trained one client after another via model transferring and inter-institutional parameter sharing due to the common characteristics of CT data among the clients. Furthermore, an intermediate controller is developed to make the overall training more flexible. Numerous experiments were conducted on the AAPM low-dose CT Grand Challenge dataset and local datasets, and the experimental results showed that the proposed icP2P-FL method outperforms the other comparative methods both qualitatively and quantitatively, and reaches an accuracy similar to a model trained with pooling data from all the institutions. Keywords:Low-Dose CT CT image denoising Peer-to-peer Federated Learning Continual learning Deep learning ## 1 Background Low-dose computed tomography (CT) imaging has been widely used in clinics. However, lowering the dose level can result in degraded image quality [1]. Deep learning (DL)-based methods have been developed to improve low-dose CT image quality, including data-driven DL methods [2][3], and model-driven deep unrolling methods [4][5]. However, most DL-based methods for low-dose CT imaging usually require large amounts of diversity-rich data which can be labor-intensive to collect. In addition, it is difficult to collect and share CT data from multiple institutions efficiently due to patient privacy [6], Moreover, a single-institution training network may suffer from inadvertent bias that ultimately limits imaging performance even on the data collected at the same institution due to the considerable heterogeneity exists among multiple institutions. Federated learning (FL) enables decentralized model training across geographically dispersed multi-institutional data silos to address these limitations. In the FL, distributed institutions collaboratively learn a shared model while keeping data local for privacy [7]. The FL is also used in the medical imaging field and obtains better benefits over traditional DL-based methods [8][9][10][11]. In centralized (center-to-peer) federated learning, a central server is required to orchestrate the process of training a single shared model, which needs high communication costs between the central server and multiple institutions. Various strategies have been investigated to reduce communication costs [12][13]. 
Among them, the decentralized FL (peer-to-peer) model avoids the need for a central server to orchestrate the process and is more efficient for different issues specific to centralized FL models [14][15]. In the peer-to-peer (P2P) FL model, to avoid the catastrophic forgetting problem, continual learning plays a critical role [16][17][18]. The continual learning strategy in the P2P FL can train the model with the ability to remember old knowledge and learn new knowledge. Figure 1: The framework of the presented icP2P-FL. In this work, inspired by the P2P FL and continual learning strategy, we propose a novel P2P federated continual learning strategy to improve low-dose CT imaging performance from multiple institutions. The newly proposed method is termed as P2P continual FL with intermediate controllers, i.e., icP2P-FL. To the best of our knowledge, this is the first one to utilize P2P continual FL in low-dose CT image reconstruction. Specifically, the proposed icP2P-FL does not require a central server and adopts a P2P model where the institutions only communicate with their one-hop neighbors to reduce certain communication costs. In the proposed icP2P-FL method, due to the stable and effective common features among different institutions, all the institutions collaboratively share a global model and directly interact with each other without depending on a central server. Each institution can initiate an update process dynamically. Furthermore, an intermediate controller is developed to make the overall training more flexible. Due to the high frequency of interaction and the intermediate controller, the proposed icP2P-FL converges quickly. The CT dataset with different protocols from multiple institutions is collected to validate and evaluate the denoising performance of the proposed icP2P-FL for image reconstruction. Experimental results show that the proposed icP2P-FL method obtains competitive denoising performance in the CT reconstruction task compared with the other competing methods, and reaches an accuracy similar to a model trained with centralized data from all the institutions. ## 2 Methods Let us consider an environment with \(K\) institutions, where each institution has training data \(D^{k}=\left\{x_{i}^{k},y_{i}^{k}\right\}_{i=1}^{n^{k}}\) and characteristic data \(C^{k}=\left\{z_{i}^{k}\right\}_{i=1}^{n^{k}}\) with \(n^{k}\) labeled samples. ### The description of icP2P-FL Figure 1 shows the framework of the presented icP2P-FL method. The presented icP2P-FL method consists of a peer-to-peer FL framework, an intermediate controller, and a DL network. There is no central server for coordinating the training process, and all the institutions are connected directly in a P2P manner. The training process of one cycle is conducted as the following steps: 1. The initial institution order is set (Initial_sequence: [Institution I, Institution II, Institution III,..., Institution k]), and network parameters are initialized. 2. The dataset \(D^{1}\) of Institution I is used to train the shared global model \(\Omega_{\omega}\) for CT image denoising. 3. After the training procedure is completed in Institution I, the trained model \(\Omega\) is employed to evaluate the performance on the characteristic data from Institution I to obtain quantitative measurement vectors, i.e., \(\left[p,s,m\right]_{i}^{k}\). 4. The parameters \(\omega_{1}^{*}\) in the model are transferred to the one-hop neighbor, i.e., Institution II. 
\(D^{2}\) is used to fine-tune the shared global model with the gradient correcting constraints \(\left\|g_{i}^{k}-\widetilde{g}_{i}^{k}\right\|_{1}\) included to retain the significant model weights and archive the catastrophic forgetting compensation. 5. After the training is completed in Institution II, the trained model \(\Omega\) is also employed to evaluate the performance on the characteristic data from Institution II to obtain quantitative measurement vectors, and the another institutions are trained similarly. 6. When all the institutions finish training, the intermediate controller grades all the quantitative measurement vectors \([p,s,m]\), of all the institutions online. This will immediately determine the inter-institutional training strategy and whether the global shared model is well-trained. ``` 0: Initialize the global model \(\Omega\) parameters; Initialize \(K\) institution sequence: [Institution I, Institution II, Institution III,..., Institution k]; Datasets: \(D^{k}=\left\{x_{i}^{k},y_{i}^{k}\right\}_{i=1}^{n^{k}}\) and \(C^{k}=\left\{z_{i}^{k}\right\}_{i=1}^{n^{k}}\); Training: while Transmission \(\gets\) 1 to \(T\)do whilek \(\leftarrow\) 1 to \(K\)do Institution k training: ResNet \(\gets\)\(D^{k}\), and initial \(\omega\) while site-round \(\leftarrow\) 1 to \(S\)do for\(i\) to \(n^{k}\)do loss \(=\left\|\Omega_{\omega}\left(x_{i}^{k}\right)-y_{i}^{k}\right\|_{2}^{2} \gets x_{i}^{k}\) and \(y_{i}^{k}\) loss \(\leftarrow\)\(\varepsilon\left\|g_{i}^{k}-\widetilde{g}_{i}^{k}\right\|_{1}=\left\|\frac{ \partial\left\|\Omega_{\omega}(x_{i}^{k})-y_{i}^{k}\right\|_{2}^{2}}{\partial N _{y_{i}^{k}}}-QP(g_{i}^{k},G_{i}^{k})\right\|_{1}\) \(\omega_{t+1}=\omega_{t}-\sigma\left\{\min_{\omega}\left\|\Omega_{\omega}\left(x _{i}^{k}\right)-y_{i}^{k}\right\|_{2}^{2}+\varepsilon\left\|g_{i}^{k}-\widetilde {g}_{i}^{k}\right\|_{1}\right\}_{t}\) Calculate: \([p,s,m]_{i}^{k}\)\(\leftarrow\) the characteristic data \(z_{i}^{k}\) endfor Calculate: \(\sum_{i=0}^{n}\left[p,s,m\right]_{i}^{k}\) endwhile \(k:\left[p,s,m\right]\)\(\rightarrow\) Intermediate controller \(\overset{MLP}{\Longrightarrow}\)\(\rho\)\({}^{k}\) \(\omega_{k}^{*}\): Institution k \(\rightarrow\) Institution k+1 endwhile if\(\rho\)\({}^{k}\)\(\geq\) the set determination threshold then Adjust institution sequence, the number of trans_round and site_round endif endwhile ``` **Algorithm 1** icP2P-FL Training. It should be noted that the scope of the intermediate controller can help P2P-FL train the shared global model more flexibly and efficiently with the on line assessment. After completing multiple cycles, we can obtain the well-trained share global model in the proposed icP2P-FL method to obtain high-quality low-dose CT images from all the institution efficiently. ### Inter-institutional Incremental Learning In this work, we adopt a cyclic task-incremental continual learning (CICL) to guarantee the model's performance across multiple institutions in the transmission cycles. In incremental learning (ICL), multiple institutions continuously train a shared model in a set order. Each institution trains the shared model only once in a cycle and finally aggregates it to obtain the final model. Cyclic Incremental continual Learning (CICL) repeats the ICL process, i.e., cyclic training across multiple institutions and reducing forgetting by determining the number of rounds in each institution. 
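Steps 1-6 and Algorithm 1 above describe one transmission cycle; the regularized objective that each institution optimizes is detailed below (Eqs. 1-3). The skeleton here sketches the cycle in PyTorch-style code under strong simplifications: the gradient-correction constraint and the controller's scoring MLP are reduced to stubs, and all names and toy data are ours rather than the authors':

```python
import torch
from torch import nn

# Toy stand-ins: a tiny "denoiser" and random (low-dose, normal-dose) patch pairs
# per institution. In the paper these are a 12-block ResNet and real CT patches.
shared_model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))
institutions = [[(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
                 for _ in range(3)] for _ in range(4)]          # 4 institutions

def controller_score(psnr, ssim, mse):
    """Stub for the intermediate controller's MLP; here just a weighted sum."""
    return psnr / 50 + ssim - mse

def train_one_cycle(model, institutions, site_rounds=5, lr=1e-4, eps=0.1):
    scores = []
    for data in institutions:                      # visit institutions in order
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        prev_params = [p.detach().clone() for p in model.parameters()]
        for _ in range(site_rounds):
            for x, y in data:
                loss = nn.functional.mse_loss(model(x), y)
                # Simplified surrogate for the gradient-correction term of Eq. (1):
                # an L1 penalty against drifting far from the incoming parameters,
                # standing in for the QP-projected constraint.
                drift = sum((p - q).abs().sum()
                            for p, q in zip(model.parameters(), prev_params))
                n_params = sum(p.numel() for p in model.parameters())
                (loss + eps * drift / n_params).backward()
                opt.step()
                opt.zero_grad()
        with torch.no_grad():                      # evaluate on characteristic data
            x, y = data[0]
            mse = nn.functional.mse_loss(model(x), y).item()
        scores.append(controller_score(psnr=10, ssim=0.9, mse=mse))
        # "Transfer" to the next institution: the same model object carries over.
    return scores

for cycle in range(2):                             # the controller would decide
    print(train_one_cycle(shared_model, institutions))   # when to stop or reorder
```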
Borrowing from the CICL strategy [19][19], the proposed icP2P-FL method adopts the regularization-based CICL to keep old knowledge in the previous institution and learn new knowledge in a new institution, and it can be expressed as follows: \[\min_{\omega}\left\|\Omega_{\omega}\left(x_{i}^{k}\right)-y_{i}^{k}\right\|_{2 }^{2}+\varepsilon\left\|g_{i}^{k}-\widetilde{g}_{i}^{k}\right\|_{1} \tag{1}\] where \(x_{i}^{k}\) is the \(i\)th low-dose CT image, \(y_{i}^{k}\) is the normal-dose CT images data in each institution dataset \(D^{k}=\left\{x_{i}^{k},y_{i}^{k}\right\}_{i=1}^{n^{k}}\) in a batch, respectively. \(n^{k}\) is the number of samples in the \(k\)th institution. \(\Omega_{\omega}\) is network with parameters \(\omega\), and \(\omega_{k-1}^{*}\) is the transmitted parameter of the previous institution after completing all training of the site-round. \(\left\|\cdot\right\|_{2}^{2}\) is the mean squared error (MSE) loss function, i.e., \(\mathcal{L}_{2}\) norm. And \(\varepsilon\) is the hyper-parameter, \(\left\|\cdot\right\|_{1}\) is the \(\mathcal{L}_{1}\) norm and can be concretely expressed as follows: \[\left\|g_{i}^{k}-\widetilde{g}_{i}^{k}\right\|_{1}=\left\|\frac{\partial\|\ \Omega_{\omega}(x_{i}^{k})-y_{i}^{k}\ \|_{2}^{2}}{\partial N_{y_{i}^{k}}}-QP(g_{i}^{k},G_{i}^{k})\right\|_{1} \tag{2}\] where \(g_{i}^{k}\) is the gradient measurements with respect to the \(y_{i}^{k}\)-th neuron \(N_{y_{i}^{k}}\) of the last output layer in \(\Omega_{\omega}\). \(QP()\) is the quadratic programming method to correct the inter-institutional update directions, and \(G_{i}^{k}=[g_{i}^{k-1},g_{i}^{k},\omega-\omega_{k-1}^{*}]\). In the \(t\)th round, model parameters at the \(k\)th institution can be updated as follows: \[\omega_{t+1}=\omega_{t}-\sigma\left\{\min_{\omega}\left\|\Omega_{\omega}\left( x_{i}^{k}\right)-y_{i}^{k}\right\|_{2}^{2}+\varepsilon\left\|g_{i}^{k}- \widetilde{g}_{i}^{k}\right\|_{1}\right\}_{t} \tag{3}\] where \(\sigma\) is the learning rate at each institution. ### Intermediate controller The intermediate controller contains the performance assessment module and on-line determination module. Performance Assessment Module The performance assessment module (PAM) allows for evaluating the denoising performance for image reconstruction with quantitative measures, i.e., peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and mean square error (MSE). To incorporate the three measurements into the proposed icP2P-FL method, the widely used multilayer perceptron (MLP) is introduced as follows: \[\rho\ =h_{1}\left([p,s,m]\right)\nu+h_{2}\left([p,s,m]\right), \tag{4}\] where \(\nu\) is the image feature map of the network layer, \(h_{1}\), \(h_{2}\) are MLPs with shared parameters, and p, s, m are the metrics of PSNR, SSIM, and MSE, \(\rho\) is the score of the entire dataset of each institution. The performance assessment module of the proposed icP2P-FL is pre-trained on labeled image data from each institutions, which consists of four fully connected layers. #### 2.3.2 On-line determination module As the number and distribution of training datasets in each institution varies a lot, there are real differences in the computational efficiency and catastrophic forgetting rate of training the shared global model on different datasets. Then we developed the Online Decision Module (ODM), which relies on the scores obtained from the PAM by evaluating the model on each institution's characteristic dataset. 
The ODM grades all the quantitative measurement vectors \([p,s,m]\) of all the institutions online, and then immediately determines the inter-institutional training order. The ODM can adjust the number of transmissions between two adjacent institutions (i.e., the number of cycles in the training stage) and the site-round at each institution (i.e., the number of training rounds in one cycle). Finally, it determines whether the global shared model is well trained.

### Implementation Details

In this work, the backbone DL network of the presented icP2P-FL method adopts a modified residual network (ResNet) with 12 residual blocks and 2 residual layers [19]. The training parameters are set as follows: (1) the number of transmissions and the number of site-rounds of each institution are set to 10 and 5, respectively; (2) the round number is set to 100 and the weight is decayed at the 100th round by multiplying by 0.2; (3) the learning rate and batch size of the model are \(1.0\times 10^{-4}\) and 64, respectively; (4) all the institutions have labeled data, i.e., normal-dose CT images and corresponding low-dose ones acquired with different protocols; (5) the switch and fine-tuning modes are initially set to True; (6) the determination threshold is set to 1.4759; (7) the training image patches are set to \(64\times 64\) with a stride of 64. All the networks in this work are implemented with the PyTorch library, and the FBP algorithm is based on the ASTRA toolbox, utilizing one NVIDIA Tesla V100s graphics processing unit (GPU) with 32 GB of memory. The presented icP2P-FL method is compared to several widely used denoising algorithms for image reconstruction, including the FBP algorithm, FedAvg [20], CL-SI (i.e., a single-institution model with centralized learning trained on an institution-specific dataset), CL-MI (i.e., a multi-institution model with centralized learning trained on all datasets), and the Semi-centralized Federated Learning network (SC-FL) [21]. Three measures, i.e., PSNR, SSIM, and MSE, are employed to quantitatively evaluate the performance of the proposed icP2P-FL method and the comparison models.

## 3 Experimental Results

### Dataset

To validate and evaluate the denoising performance of the proposed icP2P-FL method for image reconstruction, four different CT datasets are used in the experiments. Specifically, with the approval of the medical ethics committee of the local institution, three CT datasets were collected from three local hospitals, i.e., Institution I, Institution II, and Institution III. The three datasets are acquired with different protocols on different scanners. The fourth CT dataset is from the "2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge" [22], termed Institution IV. All the CT images serve as normal-dose images in this study, and low-dose CT images (i.e., quarter dose) are simulated from the corresponding CT images based on the previous study [23]. In Institutions I, II, and III, 800 and 100 cases are assigned to the training and testing datasets, respectively; 200 and 100 cases are assigned for training and testing in Institution IV. To construct the characteristic datasets used for validation, we select 100 cases of paired images from each institution, independent of the training and testing sets, which capture the deeper features and individualized data distributions of each institution.
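The three quantitative measures used in the comparison (PSNR, SSIM, and MSE) can be computed, for example, with scikit-image. A small sketch on dummy arrays; the data range of 400 HU is an illustrative display-window choice, not the authors' setting:

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def ct_metrics(denoised: np.ndarray, reference: np.ndarray,
               data_range: float = 400.0) -> tuple[float, float, float]:
    """Return (PSNR, SSIM, MSE) of a denoised CT slice against the
    normal-dose reference; data_range should match the HU window used."""
    p = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    s = structural_similarity(reference, denoised, data_range=data_range)
    m = mean_squared_error(reference, denoised)
    return p, s, m

# Dummy example: a noisy slice vs. its clean counterpart.
clean = np.random.uniform(-200, 200, size=(512, 512))
noisy = clean + np.random.normal(0, 10, size=clean.shape)
print(ct_metrics(noisy, clean))
```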
### Performance Comparison

Figure 2 shows an example slice denoised with each individual method at the four institutions, together with a corresponding zoomed-in region-of-interest (ROI) view indicated by the red boxes. The zoomed-in ROI view is used to shed more light on the denoising performance for image reconstruction. The normal-dose CT images serve as ground truth for comparison. It can be seen that the low-dose FBP images contain severe noise-induced artifacts. FedAvg can reduce noise-induced artifacts to some extent, but it introduces undesired artifacts in the final results. The main reason is that considerable heterogeneity exists among the four institutions, and FedAvg fails to fully take this heterogeneity into consideration. CL-SI is trained on institution-specific CT data and obtains promising results in noise-induced artifact suppression, but it cannot properly process low-dose CT images from the other institutions, as shown in Fig. 3. The degraded denoising performance for image reconstruction is attributed to the heterogeneity between the training dataset from one specific institution and the testing data from the other institutions. CL-MI is trained on the pooled CT data from the four institutions, and it achieves the best performance in noise-induced artifact removal and structural detail recovery because it considers all the latent characteristics of the four institutions. SC-FL performs better than FedAvg and CL-SI by constructing high-quality labels in the central server, but the SC-FL results still contain residual noise-induced artifacts, as indicated by the red and blue arrows, due to the L1 loss function in SC-FL. The proposed icP2P-FL method provides a better denoising effect for image reconstruction and visually obtains denoising and generalization performance similar to the CL-MI method, as shown in the zoomed-in ROIs. The main reason could be that the continual learning strategy with an intermediate controller for training tuning in the proposed icP2P-FL allows the global model to perform multiple denoising tasks for image reconstruction, i.e., on CT data acquired with different protocols from different scanners, and reduces the side effects of knowledge forgetting. Moreover, the quantitative assessments indicate that the proposed icP2P-FL method obtains superior performance that is close to the CL-MI method, highlighting that the proposed icP2P-FL reaches an accuracy similar to that of the CL-MI method trained with pooled data from all the institutions.

Figure 2: Qualitative comparison of the images reconstructed by the different methods. The display windows for CT images at Institution 1, Institution 2, Institution 3 and Institution 4 are [-50, 50], [-100, 100], [-50, 50], [-150, 150] HU, respectively. The display windows for zoomed-in ROIs are [-10, 50], [-20, 80], [-30, 50], [-80, 100] HU, respectively.

## 4 Conclusion

This work proposes icP2P-FL, a novel P2P federated learning framework for training global models for CT image denoising and reconstruction from different institutions simultaneously. The motivation behind the proposed icP2P-FL is to enable efficient and effective denoising of CT images with large heterogeneity directly from different protocols and different scanners.
The experimental results show that the proposed icP2P-FL method can achieve better performance for robust cross-institution CT denoising for image reconstruction than the existing methods, and reaches an accuracy similar to a model trained with pooled data from all the institutions. In future studies, more real clinical data should be included to further demonstrate its performance.
2305.14305
Observational constraints on Yukawa cosmology and connection with black hole shadows
We confront Yukawa modified cosmology, proposed in [Jusufi et al. arXiv:2304.11492], with data from Supernovae Type Ia (SNe Ia) and Hubble parameter (OHD) observations. Yukawa cosmology is obtained from a Yukawa-like gravitational potential, with coupling parameter $\alpha$ and wavelength parameter $\lambda$, which gives rise to modified Friedmann equations. We show that the agreement with observations is very efficient, and within $1\sigma$ confidence level we find the best-fit parameters $\lambda=\left(2693_{-1262}^{+1191}\right)\, \rm Mpc$, $\alpha=0.416_{-0.326}^{+1.137}$, and a graviton mass of $m_{g}=\left(2.374_{-0.728}^{+2.095}\right)\times 10^{-42}\, \text{GeV}$. Additionally, we establish a connection between the effective dark matter and dark energy density parameters and the angular radius of the black hole shadow of the SgrA and M87 black holes in the low-redshift limit, consistent with the Event Horizon Telescope findings.
Esteban González, Kimet Jusufi, Genly Leon, Emmanuel N. Saridakis
2023-05-23T17:45:06Z
http://arxiv.org/abs/2305.14305v3
# Observational Constraints on Yukawa Cosmology and Connection with Black Hole Shadows ###### Abstract We confront Yukawa modified cosmology, proposed in [Jusufi et al. arXiv:2304.11492], with data from Supernovae Type Ia (SNe Ia) and Hubble parameter (OHD) observations. Yukawa cosmology is obtained from a Yukawa-like gravitational potential, with coupling parameter \(\alpha\) and wavelength parameter \(\lambda\), which gives rise to modified Friedmann equations. We show that the agreement with observations is very efficient, and within \(1\sigma\) confidence level we find the best-fit parameters \(\lambda=2693^{+1191}_{-1262}\) Mpc and \(\alpha=0.416^{+1.137}_{-0.326}\), and a graviton mass of approximately \(m_{g}\simeq 5.6\times 10^{-43}\) GeV. Additionally, we establish a connection between the effective dark matter and dark energy density parameters and the angular radius of the black hole shadow of the SgrA and M87 black holes in the low-redshift limit, which is consistent with the findings of the Event Horizon Telescope. ## I Introduction According to modern cosmology, the universe's large-scale structure is homogeneous and isotropic. Additionally, it is believed that cold dark matter, a type of matter that is not visible and only interacts through gravity, exists [1; 2; 3; 4; 5]. However, despite numerous efforts, there has not been any direct detection of dark matter particles, and its existence is only inferred from its gravitational effects on galaxies and larger structures. On the other hand, dark energy is also introduced to explain the universe's accelerated expansion [6], supported by numerous observations [7; 8; 9]. The \(\Lambda\)CDM paradigm has proven to be the most successful model in modern cosmology. This scenario can describe cosmological observations with the least number of parameters [10]. However, specific fundamental physics concepts remain to be fully understood, such as the microphysical nature of dark matter and dark energy. Since scalar fields play a significant role in the physical description of the universe in the inflationary scenario [11], a quintessence scalar field is used in a generalization of the \(\Lambda\)CDM model [12; 13; 14; 15; 16; 17; 18]. Additionally, multi-scalar field models can describe various epochs of the cosmological history [19; 20; 21; 22; 23; 24; 25]. Moreover, a unified description of the matter and dark energy epochs was presented for a class of scalar-torsion theories, providing a Hamiltonian description [26]. Nevertheless, there is a direction in the literature that deviates from this line of thought and supports the idea that observations can be explained by altering Einstein's equations, leading to modified theories of gravity [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. Concerning dark matter, which is needed to explain the galaxy rotation curves [43], one of the first theories suggesting an explanation for the flatness of rotation curves was the Modified Newtonian dynamics (MOND) proposed by Milgrom [44], which modifies Newton's law [45; 46; 47; 48; 49; 50; 51]. Other interesting proposals are the superfluid dark matter [52], the Bose-Einstein condensate [53], etc. On the other hand, black holes are intriguing astronomical objects, and can potentially test the theories of gravity in strong gravitational fields. One of the most fascinating aspects of black holes is their shadow image.
The black hole's silhouette is a dark region that results from the immense gravitational pull of the black hole, which bends the path of light rays near it. Specifically, photons emitted from a bright source close to a black hole can either be drawn into the black hole or scattered away from it and into infinity. Additionally, critical geodesics separate the first two sets, known as unstable spherical orbits. By observing the critical photon geodesic trajectories in the sky, we can obtain the black hole shadow [54; 55; 56]. With the recent results, the black hole shadow for the supermassive black holes M87 and Sgr A was confirmed by the Event Horizon Telescope (EHT) collaboration [79; 80; 81; 82; 83; 84]. In the present paper we follow an approach motivated by cosmology and quantum field theories; we aim to study the dark sector by introducing the Yukawa potential [85; 86; 87; 88; 89]. We adopt the viewpoint of Verlinde and consider gravity as an entropic force caused by the changes in the system's information [90]. Verlinde further argued that dark matter is an apparent effect, i.e., a consequence of the baryonic matter [78]. Furthermore, the corresponding entropic force was used in deriving the corrected Friedmann equations due to the minimal length, as recently studied in [91; 92; 93; 94]. In particular, as it was recently shown in [94], dark matter can be explained by the coupling between baryonic matter through a long-range force via the Yukawa gravitational potential. This coupling is characterized by the coupling parameter \(\alpha\), the wavelength parameter \(\lambda\), and the Planck length \(l_{0}\). The modified Friedmann equations are derived using Verlinde's entropic force interpretation of gravity based on the holographic scenario and the equipartition law of energy. An equation connects the dark matter density, dark energy density, and baryonic matter density. It is worth noting that dark matter is not associated with a particle but is an apparent effect. Dark energy is related to the graviton mass and \(\alpha\), indicating that the cosmological constant can be viewed as a self-interaction effect between gravitons. The model parameters were estimated as \(\lambda\simeq 10^{3}\) [Mpc] and \(\alpha\in(0.0385,0.0450)\). In this work we are interested in performing a detailed observational confrontation of the cosmological scenario based on the Yukawa potential. In particular, we wish to constrain the parameters of the model using Supernovae Type Ia (SNe Ia) and Hubble parameter (OHD) observations. Additionally, we are interested in investigating the connection to black hole physics. In particular, at low redshifts, one can use the angular radius of the black hole shadow to constrain the Hubble constant independently. The manuscript is organized as follows. In Sect. II we review the Yukawa cosmological scenario, while in Sect. III we extract observational constraints. In Sect. IV we study the relation between the modified Yukawa cosmology and black hole shadows, and in Sect. V we comment on our findings. ## II Yukawa modified cosmology In this section, we shall review the model that was recently studied in [94]. The gravitational potential considered is modified via the non-singular Yukawa-type gravitational potential \[\Phi(r)=-\frac{GMm}{\sqrt{r^{2}+l_{0}^{2}}}\left(1+\alpha\,e^{-\frac{r}{\lambda}}\right)\Big|_{r=R}, \tag{1}\] with \(l_{0}\) being a small quantity of Planck length order, i.e. \(l_{0}\sim 10^{-34}\) cm and \(\alpha>0\).
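As a rough numerical illustration of Eq. (1), the sketch below evaluates the Yukawa-corrected potential and compares it with the plain Newtonian one; the chosen values of \(\alpha\), \(\lambda\), the masses and the radial grid are arbitrary assumptions made only for this example.

```python
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M, m = 1.0e30, 1.0       # source and test masses in kg (arbitrary)
l0 = 1.0e-36             # Planck-length-scale cutoff in m (illustrative)
alpha, lam = 0.4, 8.3e25 # coupling and Yukawa wavelength in m (assumed)

def phi_yukawa(r):
    """Non-singular Yukawa-type potential of Eq. (1)."""
    return -G * M * m / np.sqrt(r**2 + l0**2) * (1.0 + alpha * np.exp(-r / lam))

def phi_newton(r):
    """Plain Newtonian potential for comparison."""
    return -G * M * m / r

r = np.logspace(20, 27, 8)             # radii in metres
print(phi_yukawa(r) / phi_newton(r))   # tends to 1 + alpha for r << lambda
```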
Note that for the wavelength of the massive graviton we have \(\lambda=\frac{\hbar}{m_{g}c}>10^{20}\) m, which leads to \(m_{g}<10^{-64}\) kg for the graviton mass [77]. If we use the relation \(F=-\nabla\Phi(r)|_{r=R}\), and by neglecting the term \(\alpha l_{0}^{2}/R^{2}\to 0\), we get the modified Newton's law of gravitation \[F=-\frac{GMm}{R^{2}}\left[1+\alpha\,\left(\frac{R+\lambda+\frac{l_{0}^{2}}{R}}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]\left[1+\frac{l_{0}^{2}}{R^{2}}\right]^{-3/2}. \tag{2}\] We proceed by studying the implications of the above modified law of gravity in cosmology. First, we assume the background spacetime to be spatially homogeneous and isotropic, described by the Friedmann-Robertson-Walker (FRW) metric \[ds^{2}=-dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right], \tag{3}\] with \(R=a(t)r\), \(x^{0}=t\), \(x^{1}=r\), \(h_{\mu\nu}\) the two-dimensional metric, and where \(k\) is the spatial curvature (\(k=0,1,-1\) corresponding to flat, closed, and open universes, respectively). In addition, we have an apparent dynamical horizon, determined by the relation \(h^{\mu\nu}(\partial_{\mu}R)\left(\partial_{\nu}R\right)=0\). It is easy to show that the apparent horizon radius for the FRW universe reads \[R=ar=1/\sqrt{H^{2}+k/a^{2}}. \tag{4}\] On the other hand, we have a matter source which can be assumed to be a perfect fluid described by the stress-energy tensor \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{5}\] along with the continuity equation \(\dot{\rho}+3H(\rho+p)=0\), with \(H=\dot{a}/a\) being the Hubble parameter. Let us consider a compact spatial region \(V\) with a compact boundary \(\mathcal{S}\), corresponding to a sphere of radius \(R=a(t)r\), where \(r\) is a dimensionless quantity. Through Newton's law, we can write the gravitational force on a test particle \(m\) near the surface [94] as \[m\ddot{a}r=-\frac{GMm}{R^{2}}\left[1+\alpha\,\left(\frac{R+\lambda}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]\left[1+\frac{l_{0}^{2}}{R^{2}}\right]^{-3/2}. \tag{6}\] In Newtonian cosmology we can take \(\rho=M/V\) inside the volume \(V=\frac{4}{3}\pi a^{3}r^{3}\), hence we can rewrite the above equation as [94] \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\rho\left[1+\alpha\,\left(\frac{R+\lambda}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]\left[1+\frac{l_{0}^{2}}{R^{2}}\right]^{-3/2}, \tag{7}\] which is the dynamical equation for Newtonian cosmology. To obtain the Friedmann equations in general relativity, we must use the active gravitational mass \(\mathcal{M}\) rather than the total mass \(M\). By replacing \(M\) with \(\mathcal{M}\), we obtain \[\mathcal{M}=-\ddot{a}a^{2}r^{3}\left[1+\alpha\,\left(\frac{R+\lambda}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]\left[1+\frac{l_{0}^{2}}{R^{2}}\right]^{-3/2}, \tag{8}\] where the active gravitational mass can also be computed via \[{\cal M}=2\int_{V}dV\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right)u^{\mu}u^{\nu}. \tag{9}\] Using these equations, we obtain the modified acceleration equation for the dynamical evolution of the FRW universe [94] \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p)\left[1+\alpha\,\left(\frac{R+\lambda}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]\left[1-\frac{3\,l_{0}^{2}}{2\,R^{2}}\right].
\tag{10}\] Furthermore, we can simplify the analysis since \(l_{0}\) is a very small number; we can consider a series expansion around \(x=1/\lambda\) via \[\left[1+\alpha\,\left(\frac{R+\lambda}{\lambda}\right)e^{-\frac{R}{\lambda}}\right]=1+\alpha-\frac{1}{2}\frac{\alpha R^{2}}{\lambda^{2}}+\cdots, \tag{11}\] provided that \(\alpha R^{2}/\lambda^{2}\ll 1\). In general, we expect \(\alpha<1\), and \(\lambda\) to be some large number of magnitude comparable to the radius of the observable Universe \(R\sim 10^{26}\) m. In summary, the corresponding Friedmann equation for \(\alpha R^{2}/\lambda^{2}\ll 1\) becomes \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\sum_{i}\left(\rho_{i}+3p_{i}\right)\left[1+\alpha-\frac{1}{2}\frac{\alpha R^{2}}{\lambda^{2}}\right]\left[1-\frac{3l_{0}^{2}}{2R^{2}}\right], \tag{12}\] where we have included several matter fluids with constant equation-of-state parameters \(\omega_{i}\), along with the continuity equation \(\dot{\rho}_{i}+3H(1+\omega_{i})\rho_{i}=0\), that yields an expression for the densities \(\rho_{i}=\rho_{i0}a^{-3(1+\omega_{i})}\). Inserting these into (12) and integrating we obtain [94] \[d(\dot{a}^{2}+k)= \frac{8\pi G}{3}\left[1+\alpha-\frac{1}{2}\frac{\alpha R^{2}}{\lambda^{2}}\right]\left[1-\frac{3\,l_{0}^{2}}{2\,R^{2}}\right]\] \[\times d\left(\sum_{i}\rho_{i0}a^{-(1+3\omega_{i})}\right). \tag{13}\] Using the fact that \(R[a]=ra\), we further get \[\dot{a}^{2}+k= \frac{8\pi G}{3}\int\left[1+\alpha-\frac{1}{2}\frac{\alpha R[a]^{2}}{\lambda^{2}}\right]\left[1-\frac{3\,l_{0}^{2}}{2\,R[a]^{2}}\right]\] \[\times \frac{d\left(\sum_{i}\rho_{i0}a^{-(1+3\omega_{i})}\right)}{da}da, \tag{14}\] with \(r\) nearly a constant. Considering the equations of state, \(\omega_{i}\notin\{-1,1/3\}\), we have \[\frac{\dot{a}^{2}}{a^{2}}+\frac{k}{a^{2}}=\frac{8\pi G}{3}\left(\alpha\left(\frac{3l_{0}^{2}}{4\lambda^{2}}+1\right)+1\right)\sum_{i}\rho_{i0}a^{-3(1+\omega_{i})}\] \[-\frac{4\pi(\alpha+1)Gl_{0}^{2}}{3R^{2}}\sum_{i}\frac{3\omega_{i}+1}{\omega_{i}+1}\rho_{i0}a^{-3(1+\omega_{i})}\] \[+\frac{4\pi\alpha GR^{2}}{3\lambda^{2}}\sum_{i}\frac{1+3\omega_{i}}{1-3\omega_{i}}\rho_{i0}a^{-3(1+\omega_{i})}, \tag{15}\] implying that at leading order (\(l_{0}^{2}/\lambda^{2}\to 0\)), \[H^{2}+\frac{k}{a^{2}}= \frac{8\pi G_{\rm eff}}{3}\sum_{i}\rho_{i}-\frac{1}{R^{2}}\sum_{i}\Gamma_{1}(\omega_{i})\rho_{i}\] \[+\frac{4\pi G_{\rm eff}}{3}R^{2}\sum_{i}\Gamma_{2}(\omega_{i})\rho_{i}, \tag{16}\] where Newton's constant is shifted as \(G_{\rm eff}=G(1+\alpha)\), along with the definitions [94] \[\Gamma_{1}(\omega_{i}) \equiv\frac{4\pi G_{\rm eff}l_{0}^{2}}{3}\left(\frac{1+3\omega_{i}}{1+\omega_{i}}\right), \tag{17}\] \[\Gamma_{2}(\omega_{i}) \equiv\frac{\alpha\,(1+3\omega_{i})}{\lambda^{2}(1+\alpha)(1-3\omega_{i})}. \tag{18}\] If, for example, we assume only a matter source, at leading order we can write \[H^{2}+\frac{k}{a^{2}}=\frac{8\pi G_{\rm eff}}{3}\rho-\frac{\Gamma_{1}}{R^{2}}\rho+\frac{4\pi G_{\rm eff}}{3}\rho\,\Gamma_{2}R^{2}. \tag{19}\] Focusing on the flat case (\(k=0\)) we have \(R^{2}=1/H^{2}\), yielding \[H^{2}\,(1+\Gamma_{1}\rho)-\frac{4\pi G_{\rm eff}}{3}\frac{\Gamma_{2}}{H^{2}}\rho=\frac{8\pi G_{\rm eff}}{3}\rho. \tag{20}\] Finally, by expanding around \(l_{0}\), making use of \(\left(1+\Gamma_{1}\rho\right)^{-1}\simeq\left(1-\Gamma_{1}\rho\right)\), and neglecting the terms \(\sim{\cal O}(l_{0}\alpha^{2}/\lambda^{2})\), we obtain \[H^{2}-\frac{4\pi G_{\rm eff}}{3}\frac{\Gamma_{2}}{H^{2}}\rho=\frac{8\pi G_{\rm eff}}{3}\rho\,(1-\Gamma_{1}\rho)\,.
\tag{21}\] ### Late time universe Let us now study the phenomenological aspects of the modified Friedmann equation extracted above. In particular, we are interested in the late universe, which implies that we can neglect the quantum effects by setting \(l_{0}\to 0\) [\(\Gamma_{1}=0\)]. This gives \[H^{2}-\frac{4\pi G_{\rm eff}}{3}\frac{\sum_{i}\Gamma_{2}(\omega_{i})\,\rho_{i}}{H^{2}}=\frac{8\pi G_{\rm eff}}{3}\,\sum_{i}\rho_{i}, \tag{22}\] and using \(\rho_{\rm crit}=\frac{3}{8\pi G}H_{0}^{2}\) we acquire two solutions: \[E^{2} =\frac{(1+\alpha)}{2}\,\sum_{i}\Omega_{i}\] \[\pm\frac{\sqrt{\sum_{i}\Omega_{i}^{2}(1+\alpha)^{2}+2\Gamma_{2}(\omega_{i})\Omega_{i}(1+\alpha)/H_{0}^{2}}}{2}, \tag{23}\] where \(\Omega_{i}=\Omega_{i0}(1+z)^{3(1+\omega_{i})}\), \(\Omega_{i0}=8\pi G\rho_{i0}/(3H_{0}^{2})\), with \(E=H/H_{0}\). In addition, we point out that the total quantity \(\Omega^{2}\) in the square root should be viewed as the root-mean-square energy density, i.e. \(\Omega\equiv\sqrt{\langle\Omega^{2}\rangle}\). As explained in [94], the most interesting implication of the last equation relies on the physical interpretation of the term \(2\Gamma_{2}(\omega_{i})\Omega_{i}(1+\alpha)/H_{0}^{2}\). In particular, it was shown that this term precisely mimics the effect of cold dark matter of the \(\Lambda\)CDM model. Taking the term \(\Gamma_{2}\Omega_{i}\) and setting \(\omega_{i}=0\), we define the quantity (here we shall add the constant term \(c\) to make the equation consistent) [94] \[\frac{\Omega_{D}^{2}(1+\alpha)^{2}}{(1+z)^{3}}\equiv\frac{2\Gamma_{2}\Omega_{i}(1+\alpha)}{H_{0}^{2}}. \tag{24}\] Thus, we can obtain an equation for dark matter as \[\Omega_{D}=\frac{c}{\lambda H_{0}\,(1+\alpha)}\sqrt{2\alpha\Omega_{B}}\,(1+z)^{3}. \tag{25}\] From this equation we can deduce that dark matter may be viewed as an effective sector, a manifestation of modified Newton's law, quantified by \(\sim\alpha\,\Omega_{B}\). Additionally, we define the quantity \[\Omega_{\Lambda}=\frac{c^{2}}{\lambda^{2}H_{0}^{2}}\frac{\alpha}{(1+\alpha)^{2}}. \tag{26}\] Finally, comparing the last expression with \(\rho_{\Lambda}=\frac{\Lambda c^{2}}{8\pi\,G}\), we can estimate the effective cosmological constant to be \(\Lambda=\frac{3\,m_{g}^{2}c^{2}\,\alpha}{\hbar^{2}\,(1+\alpha)^{2}}\). Note that one can combine the above expressions and relate the baryonic matter with the effective dark matter and dark energy, acquiring \[\Omega_{D}=\sqrt{2\,\Omega_{B}\Omega_{\Lambda}}(1+z)^{3}. \tag{27}\] In summary, Eq. (23) can be re-written as [94] \[E^{2}(z)\ =\ (1+\alpha)\left[\Omega_{B}(1+z)^{3}+\Omega_{\Lambda}\right]. \tag{28}\] We can now introduce the split \(\Omega_{B}(1+z)^{3}\to\Omega_{B}^{\Lambda CDM}(1+z)^{3}+\Omega_{D}^{\Lambda CDM}\), hence, to get the \(\Lambda\)CDM-like model, we can write \[E^{2}(z)=(1+\alpha)\left[\Omega_{B}^{\Lambda CDM}(1+z)^{3}+\Omega_{D}^{\Lambda CDM}+\Omega_{\Lambda}\right], \tag{29}\] where \[\Omega_{D}^{\Lambda CDM}=\frac{c}{\lambda H_{0}\,(1+\alpha)}\sqrt{2\alpha\Omega_{B}^{\Lambda CDM}}\,(1+z)^{3}. \tag{30}\] ## III Observational constraints In the previous section we presented the Yukawa-modified cosmology. Hence, in this section we shall proceed to the observational confrontation with Hubble parameter data (OHD) and Type Ia supernovae (SNe Ia) data, in order to extract constraints on the free parameters.
For this purpose, we compute the best-fit of the free parameters and their corresponding confidence regions at \(1\sigma\) (68.3%) confidence level (CL) with the affine-invariant Markov chain Monte Carlo (MCMC) method [95], implemented in the pure-Python code emcee [96]. In particular, we have considered 100 chains or "walkers", using the autocorrelation time provided by the emcee module as a convergence test. In this sense, we computed the autocorrelation time, \(\tau_{\rm corr}\), of the chains every 50 steps. If the current step is larger than \(50\tau_{\rm corr}\) and the values of \(\tau_{\rm corr}\) changed by less than 1%, then we consider that the chains have converged and the constraint process is stopped. We discard the first \(5\tau_{\rm corr}\) steps as "burn-in" steps. Finally, we compute the mean acceptance fraction of the chains, which must have a value between 0.2 and 0.5 [96] and can be modified by the stretch move provided by the emcee module. For this Bayesian statistical analysis, we need to construct the following Gaussian likelihoods: \[\mathcal{L}_{\rm OHD}\propto\exp\left(-\frac{\chi_{\rm OHD}^{2}}{2}\right),\ \mathcal{L}_{\rm SNe}\propto\exp\left(-\frac{\chi_{\rm SNe}^{2}}{2}\right), \tag{31}\] where \(\chi_{\rm OHD}^{2}\) and \(\chi_{\rm SNe}^{2}\) are the merit functions of the OHD and SNe Ia data, respectively. The Gaussian likelihood for the joint analysis SNe Ia+OHD is constructed as \(\mathcal{L}_{\rm joint}\propto\mathcal{L}_{\rm SNe}\,\mathcal{L}_{\rm OHD}\), i.e., \(\chi^{2}_{\rm joint}=\chi^{2}_{\rm SNe}+\chi^{2}_{\rm OHD}\). In the following subsections, we will briefly describe the construction of the merit function of each data set. ### Observational Hubble parameter data For the OHD, we consider the sample compiled by Magana et al. [97], which consists of 51 data points in the redshift range \(0.07\leq z\leq 2.36\), and for which we construct the merit function as \[\chi_{\rm OHD}^{2}=\sum_{i=1}^{51}\left[\frac{H_{i}-H_{th}(z_{i},\theta)}{\sigma_{H,i}}\right]^{2}, \tag{32}\] where \(H_{i}\) is the observational Hubble parameter at redshift \(z_{i}\) with an associated error \(\sigma_{H,i}\), all of them provided by the OHD sample, \(H_{th}\) is the theoretical Hubble parameter at the same redshift, and \(\theta\) encompasses the free parameters of the model under study. The theoretical Hubble parameter is obtained from Eq. (29), which we conveniently rewrite as \[E^{2}(z)=(1+\alpha)\left[\left(\Omega_{B,0}+\sqrt{2\Omega_{B,0}\Omega_{\Lambda,0}}\right)(1+z)^{3}+\Omega_{\Lambda,0}\right], \tag{33}\] with \(\Omega_{\Lambda,0}\) given by Eq. (26), while the condition \(H(z=0)=H_{0}\) leads to \[1+\alpha=\left[\Omega_{B,0}+\sqrt{2\Omega_{B,0}\Omega_{\Lambda,0}}+\Omega_{\Lambda,0}\right]^{-1}, \tag{34}\] and the free parameters of the Yukawa modified cosmology are \(\theta=\{H_{0};\Omega_{B,0};\Omega_{\Lambda,0}\}\). Note that one has the relations \[(1+\alpha)\,\Omega_{B,0} \equiv \Omega_{B,0}^{\Lambda\rm CDM}, \tag{35}\] \[(1+\alpha)\,\sqrt{2\Omega_{B,0}\Omega_{\Lambda,0}} \equiv \Omega_{DM,0}^{\Lambda\rm CDM},\] (36) \[(1+\alpha)\,\Omega_{\Lambda,0} \equiv \Omega_{\Lambda,0}^{\Lambda\rm CDM}, \tag{37}\] and Eq. (33) becomes \[E^{2}(z)=\left(\Omega_{B,0}^{\Lambda\text{CDM}}+\Omega_{DM,0}^{\Lambda\text{CDM}}\right)\left(1+z\right)^{3}+\Omega_{\Lambda,0}^{\Lambda\text{CDM}}, \tag{38}\] which is related to the Hubble parameter for the standard \(\Lambda\)CDM model through \(H(z)=H_{0}E(z)\), with \(H_{0}=H_{0}^{\Lambda\text{CDM}}\).
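A minimal numerical sketch of Eqs. (32)-(34) follows; the arrays z_obs, H_obs and sigma_H stand in for the 51 OHD points (here filled with placeholder values) and the parameter values are illustrative, so this is not the analysis pipeline actually used for the fits.

```python
import numpy as np

def alpha_of(Omega_B0, Omega_L0):
    """Coupling alpha fixed by the condition H(z=0) = H0, Eq. (34)."""
    return 1.0 / (Omega_B0 + np.sqrt(2.0 * Omega_B0 * Omega_L0) + Omega_L0) - 1.0

def E2_yukawa(z, Omega_B0, Omega_L0):
    """Normalized expansion rate E^2(z) of Eq. (33)."""
    alpha = alpha_of(Omega_B0, Omega_L0)
    return (1.0 + alpha) * ((Omega_B0 + np.sqrt(2.0 * Omega_B0 * Omega_L0))
                            * (1.0 + z) ** 3 + Omega_L0)

def chi2_OHD(theta, z_obs, H_obs, sigma_H):
    """Merit function of Eq. (32) for the OHD sample."""
    H0, Omega_B0, Omega_L0 = theta
    H_th = H0 * np.sqrt(E2_yukawa(z_obs, Omega_B0, Omega_L0))
    return np.sum(((H_obs - H_th) / sigma_H) ** 2)

# Placeholder data standing in for the 51-point OHD compilation
z_obs = np.linspace(0.07, 2.36, 51)
H_obs = 70.7 * np.sqrt(0.27 * (1.0 + z_obs) ** 3 + 0.73)
sigma_H = np.full_like(z_obs, 5.0)
print(chi2_OHD((70.7, 0.027, 0.513), z_obs, H_obs, sigma_H))
```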
It is important to mention that the OHD, as well as the SNe Ia data, are not able to independently constrain \(\Omega_{B,0}^{\Lambda\text{CDM}}\) and \(\Omega_{DM,0}^{\Lambda\text{CDM}}\). Thus, for the \(\Lambda\)CDM scenario, we define \(\Omega_{m,0}\equiv\Omega_{B,0}^{\Lambda\text{CDM}}+\Omega_{DM,0}^{\Lambda\text{CDM}}\), where the condition \(H(z=0)=H_{0}\) leads to \(\Omega_{\Lambda,0}^{\Lambda\text{CDM}}=1-\Omega_{m,0}\), and the free parameters of the \(\Lambda\)CDM scenario are \(\theta=\left\{H_{0};\Omega_{m,0}\right\}\). Therefore, we consider the following priors for our MCMC analysis: \(H_{0}=100\frac{km/s}{Mpc}\,h\), with \(0.55<h<0.85\), \(0<\Omega_{m,0}<1\), \(0<\Omega_{B,0}<0.2\), \(0<\Omega_{\Lambda,0}<1\), and the condition \(\alpha>0\) implies \(0<\Omega_{B,0}+\sqrt{2\Omega_{B,0}\Omega_{\Lambda,0}}+\Omega_{\Lambda,0}<1\). ### Type Ia supernovae data For the SNe Ia data, we consider the Pantheon+ sample [98], which is the successor of the original Pantheon sample [99] and consists of 1701 data points in the redshift range \(0.001\leq z\leq 2.26\). In this case, the merit function can be conveniently constructed in matrix notation (denoted by bold symbols) as \[\chi_{\text{SNe}}^{2}=\mathbf{\Delta D}(z,\theta,M)^{\dagger}\mathbf{C}^{-1}\mathbf{\Delta D}(z,\theta,M), \tag{39}\] where \([\mathbf{\Delta D}(z,\theta,M)]_{i}=m_{B,i}-M-\mu_{th}(z_{i},\theta)\) and \(\mathbf{C}=\mathbf{C}_{\text{stat}}+\mathbf{C}_{\text{sys}}\) is the total uncertainty covariance matrix, where the matrices \(\mathbf{C}_{\text{stat}}\) and \(\mathbf{C}_{\text{sys}}\) account for the statistical and systematic uncertainties, respectively. In this expression, \(\mu_{i}=m_{B,i}-M\) is the observational distance modulus of the Pantheon+ sample, obtained by a modified version of the Tripp formula [100], with three nuisance parameters calibrated to zero with the BBC (BEAMS with Bias Corrections) approach [101]. Therefore, the Pantheon+ sample directly provides the corrected apparent B-band magnitude \(m_{B,i}\) of a fiducial SNe Ia at redshift \(z_{i}\), with \(M\) the fiducial magnitude of an SNe Ia, which must be jointly estimated with the free parameters of the model under study. The theoretical distance modulus for a spatially flat FLRW spacetime is given by \[\mu_{th}(z_{i},\theta)=5\log_{10}\left[\frac{d_{L}(z_{i},\theta)}{\text{Mpc}}\right]+25, \tag{40}\] with \(d_{L}(z_{i},\theta)\) the luminosity distance given by \[d_{L}(z_{i},\theta)=c(1+z_{i})\int_{0}^{z_{i}}\frac{dz^{\prime}}{H_{th}(z^{\prime},\theta)}, \tag{41}\] where \(c\) is the speed of light given in units of km/s. Note that the luminosity distance depends on the theoretical Hubble parameter, which is given by Eq. (33) for the Yukawa cosmology and Eq. (38) for the \(\Lambda\)CDM model. Therefore, we only add to the free parameters the nuisance parameter \(M\), for which we consider the following prior in our MCMC analysis: \(-20<M<-18\). Similarly to the Pantheon sample, there is a degeneracy between the nuisance parameter \(M\) and \(H_{0}\).
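As a minimal numerical illustration of Eqs. (40)-(41) (not the actual fitting pipeline), the theoretical distance modulus can be evaluated as in the following sketch; the SciPy quadrature and the parameter values used here are assumptions made only for this example.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def H_yukawa(z, H0, Omega_B0, Omega_L0):
    """Theoretical Hubble parameter from Eq. (33), in km/s/Mpc."""
    alpha = 1.0 / (Omega_B0 + np.sqrt(2.0 * Omega_B0 * Omega_L0) + Omega_L0) - 1.0
    E2 = (1.0 + alpha) * ((Omega_B0 + np.sqrt(2.0 * Omega_B0 * Omega_L0))
                          * (1.0 + z) ** 3 + Omega_L0)
    return H0 * np.sqrt(E2)

def mu_th(z, H0=70.7, Omega_B0=0.027, Omega_L0=0.513):
    """Theoretical distance modulus of Eqs. (40)-(41)."""
    integral, _ = quad(lambda zp: 1.0 / H_yukawa(zp, H0, Omega_B0, Omega_L0), 0.0, z)
    d_L = C_KMS * (1.0 + z) * integral  # luminosity distance in Mpc
    return 5.0 * np.log10(d_L) + 25.0

print(mu_th(0.1), mu_th(1.0))
```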
Hence, to constrain the free parameter \(H_{0}\) using SNe Ia data alone, it is necessary to include the SH0ES (Supernovae and \(H_{0}\) for the Equation of State of dark energy program) Cepheid host distance anchors of the form \[\chi_{\text{Cepheid}}^{2}=\Delta\mathbf{D}_{\text{Cepheid}}\left(M\right)^{\dagger}\mathbf{C}^{-1}\Delta\mathbf{D}_{\text{Cepheid}}\left(M\right), \tag{42}\] where \(\left[\Delta\mathbf{D}_{\text{Cepheid}}\left(M\right)\right]_{i}=\mu_{i}\left(M\right)-\mu_{i}^{\text{Cepheid}}\), with \(\mu_{i}^{\text{Cepheid}}\) the Cepheid calibrated host-galaxy distance obtained by SH0ES [102]. Hence, for the correspondence we use the Cepheid distances as the "theory" instead of using the model under study to calibrate \(M\), considering that the difference \(\mu_{i}\left(M\right)-\mu_{i}^{\text{Cepheid}}\) is sensitive to \(M\) and \(H_{0}\) and is largely insensitive to other parameters like \(\Omega_{m,0}\). In this sense, the Pantheon+ sample provides \(\mu_{i}^{\text{Cepheid}}\), and the total uncertainty covariance matrix for Cepheids is contained in the total uncertainty covariance matrix \(\mathbf{C}\). Therefore, we can define the merit function for the SNe Ia data as \[\chi_{\text{SNe}}^{2}=\mathbf{\Delta D}^{\prime}(z,\theta,M)^{\dagger}\mathbf{C}^{-1}\mathbf{\Delta D}^{\prime}(z,\theta,M), \tag{43}\] where \[\Delta\mathbf{D}^{\prime}{}_{i}=\left\{\begin{array}{ll}m_{B,i}-M-\mu_{i}^{\text{Cepheid}}&i\in\text{Cepheid host}\\ \\ m_{B,i}-M-\mu_{th}(z_{i},\theta)&\text{otherwise}\end{array}\right.. \tag{44}\] It is essential to mention that from now on we will omit the free parameter \(M\), and we will focus our analysis only on the free parameters of each model. Besides, considering that the best-fit parameters minimize the merit function, we can use the evaluation of the merit function at the best-fit parameters, \(\chi_{\text{min}}^{2}\), as an indicator of the goodness of the fit: the smaller the value of \(\chi_{\text{min}}^{2}\) is, the better the fit. ### Results and discussions In Table 1, we present the total number of steps, the values used for the stretch move, the mean acceptance fraction, and the autocorrelation time for the free parameters \(h\) and \(\Omega_{m,0}\) of the \(\Lambda\)CDM model, and \(h\), \(\Omega_{B,0}\), and \(\Omega_{\Lambda,0}\) of the Yukawa modified cosmology. Additionally, in Table 2, we present their respective best-fit values at \(1\sigma\) CL with the corresponding \(\chi_{\text{min}}^{2}\) criteria. In Figs. 1 and 2, we depict the posterior 1D distribution and the joint marginalized regions of the free-parameter space of the \(\Lambda\)CDM model and the Yukawa-modified cosmology. The admissible joint regions presented correspond to \(1\sigma\), \(2\sigma\,(95.5\%)\), and \(3\sigma\,(99.7\%)\) CL, respectively. These results were obtained by the MCMC analysis described in Section III for the SNe Ia data, OHD, and their joint analysis. As we can see from Table 2, the values obtained for the \(\chi^{2}_{\rm min}\) criteria show that the Yukawa modified cosmology can fit the observational data of SNe Ia, OHD and SNe Ia+OHD as accurately as the \(\Lambda\)CDM model. Even more, the value of the Hubble constant is the same in both models, which agrees with our previous identification in Eqs. (33) and (38), where \(H_{0}=H_{0}^{\Lambda{\rm CDM}}\). The only difference between these models relies on the rescaling of the energy densities due to the contribution of the \(\alpha\) parameter.
On physical grounds, the main difference between these models is that in Yukawa cosmology, dark matter is effective and precisely mimics the cold dark matter of the \(\Lambda\)CDM scenario. To establish the last point, we use the results of our MCMC analysis for SNe Ia+OHD to calculate the values of \(\Omega_{B,0}^{\Lambda{\rm CDM}}\), \(\Omega_{DM,0}^{\Lambda{\rm CDM}}\), and \(\Omega_{\Lambda,0}^{\Lambda{\rm CDM}}\) at \(1\sigma\) CL from their definitions given by Eqs. (35), (36), and (37), obtaining: \(\Omega_{B,0}^{\Lambda{\rm CDM}}=0.038\pm 0.003\), \(\Omega_{DM,0}^{\Lambda{\rm CDM}}=0.235\pm 0.008\), and \(\Omega_{\Lambda,0}^{\Lambda{\rm CDM}}=0.727\pm 0.011\). Note that \(\Omega_{B,0}^{\Lambda{\rm CDM}}+\Omega_{DM,0}^{\Lambda{\rm CDM}}=0.273\pm 0.011\), which matches the value of \(\Omega_{m,0}\) obtained in the \(\Lambda\)CDM model. Therefore, Yukawa cosmology can mimic the late-time \(\Lambda\)CDM model, as we can see from Fig. 3, where we depict the Hubble parameter as a function of redshift \(z\) for the Yukawa and \(\Lambda\)CDM cosmologies, given respectively by Eqs. (33) and (38). Furthermore, in Fig. 4 we depict the matter density parameter \(\Omega_{m}\) as a function of redshift for both models, where \(\Omega_{m}=\Omega_{m,0}(1+z)^{3}/E^{2}\). Both figures were obtained with the results of our MCMC analysis described in Section III for the joint analysis. Finally, the best-fit values for the Yukawa parameters at \(1\sigma\) CL in the joint analysis are: \(\lambda=2693^{+1191}_{-1262}\) Mpc, \(\alpha=0.416^{+1.137}_{-0.326}\), and \(m_{g}=\left(4.233^{+3.735}_{-1.298}\right)\times 10^{-69}\,\mathrm{kg}\), or equivalently \(m_{g}\simeq 2.37\times 10^{-42}\) GeV, where we use the expression \(m_{g}=\hbar/(c\lambda)\) for the graviton mass. Figure 1: _Posterior 1D distribution and joint marginalized regions of the free-parameter space of the \(\Lambda\)CDM model, obtained by the MCMC analysis described in Section III. The admissible joint regions correspond to \(1\sigma\), \(2\sigma\), and \(3\sigma\) CL, respectively. The best-fit values for each model free parameter are shown in Table 2._ Figure 2: _Posterior 1D distribution and joint marginalized regions of the free-parameter space of the Yukawa modified cosmology, obtained by the MCMC analysis described in Section III. The admissible joint regions correspond to \(1\sigma\), \(2\sigma\), and \(3\sigma\) CL, respectively. The best-fit values for each model free parameter are shown in Table 2._ Figure 3: _Hubble parameter as a function of redshift for the \(\Lambda\)CDM and Yukawa modified cosmologies. The shaded curve represents the confidence region of the Hubble parameter for the Yukawa cosmology at \(3\sigma\) CL. Additionally, we depict the OHD sample for further comparison. Both curves and the confidence region were obtained with the results of our MCMC analysis described in Section III for the SNe Ia+OHD._ ## IV Relating Yukawa cosmology and black hole shadows In this section we will present a relation between the dark matter/dark energy densities and the angular radius of the black hole shadow. We will closely follow the approach developed in [61; 75], in which one can employ the standard definition of the luminosity distance for a flat \(\Lambda\)CDM model as \[d_{L}(z)=(1+z)cI(z)/H_{0}, \tag{45}\] where \(c\) is the speed of light and \(H_{0}\) is the Hubble constant.
The quantity \(I(z)\) is given in terms of the integral \[I(z)=\int_{0}^{z}\left(\Omega_{m,0}^{\Lambda\mathrm{CDM}}(1+\tilde{z})^{3}+\Omega_{\Lambda,0}^{\Lambda\mathrm{CDM}}\right)^{-1/2}\ d\tilde{z}, \tag{46}\] with \(\Omega_{m,0}^{\Lambda\mathrm{CDM}}=\Omega_{B,0}^{\Lambda\mathrm{CDM}}+\Omega_{DM,0}^{\Lambda\mathrm{CDM}}\), where \(\Omega_{m,0}^{\Lambda\mathrm{CDM}}\) and \(\Omega_{\Lambda,0}^{\Lambda\mathrm{CDM}}\) provide the present values of the critical density parameters for matter and the dark energy component, respectively. On the other hand, one can define the luminosity distance, which is related to the angular diameter distance \(d_{A}(z)\) in terms of the equation [61; 75]: \[d_{L}(z)=(1+z)^{2}d_{A}(z). \tag{47}\] \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Data} & \multicolumn{4}{c}{Best-Fit values} & \multirow{2}{*}{\(\chi^{2}_{\mathrm{min}}\)} \\ \cline{2-5} & \(h\) & \(\Omega_{m,0}\) & \(\Omega_{B,0}\) & \(\Omega_{\Lambda,0}\) & \\ \hline \multicolumn{6}{c}{\(\Lambda\)CDM model} \\ \hline SNe Ia & \(0.734\pm 0.010\) & \(0.333\pm 0.018\) & & & \\ OHD & \(0.707\pm 0.012\) & \(0.259\pm 0.018\) & & & 27.5 \\ SNe Ia+OHD & \(0.707\pm 0.007\) & \(0.272\pm 0.011\) & & & 1576.7 \\ \hline \multicolumn{6}{c}{Yukawa cosmology} \\ \hline SNe Ia & \(0.734\pm 0.010\) & & \(0.040^{+0.013}_{-0.017}\) & \(0.468^{+0.141}_{-0.206}\) & 1523.0 \\ OHD & \(0.706\pm 0.012\) & & \(0.024^{+0.008}_{-0.001}\) & \(0.522^{+0.153}_{-0.233}\) & 27.5 \\ SNe Ia+OHD & \(0.707\pm 0.007\) & & \(0.027^{+0.005}_{-0.012}\) & \(0.513^{+0.233}_{-0.229}\) & 1576.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Best-fit values and \(\chi^{2}_{\mathrm{min}}\) criteria for the \(\Lambda\)CDM model and the Yukawa cosmology. The values were obtained by the MCMC analysis described in Section III, and the uncertainties presented correspond to \(1\sigma\) CL. Figure 4: _Matter density parameter as a function of redshift for the \(\Lambda\)CDM and Yukawa modified cosmologies. The shaded curve represents the confidence region of the matter density parameter for the Yukawa cosmology at \(3\sigma\) CL. Both curves and the confidence region were obtained with the results of our MCMC analysis described in Section III for the SNe Ia+OHD._ In order to examine the connection with black hole shadows, let us recall that, by definition, the observed angular diameter of an object (say a black hole) is given by \(\theta=R/d_{A}\), with \(R\) being some proper diameter of the object. Hence, we can extract information about the cosmological parameters, having measured one of these distances at a particular redshift \(z\). Assuming Schwarzschild black holes as a first approximation, an observer located far away from the black hole can construct the shadow image in the center with an angular radius \[\hat{\alpha}_{\rm SH}(z)=R_{\rm SH}/d_{A}(z), \tag{48}\] where \(R_{\rm SH}\) is the shadow radius and \(M_{\rm BH}\) is the mass of the supermassive black hole. As it was pointed out in [61], the above equation for the angular radius of the black hole shadow is valid only when the radial coordinate is large enough in comparison with the size of the black hole shadow radius \(3\sqrt{3}GM_{\rm BH}/c^{2}\). If we now combine Eqs. (45)-(47), we can obtain [61; 75] \[\hat{\alpha}_{\rm SH}=\frac{R_{\rm SH}}{(1+z)}\frac{c}{H_{0}}I(z). \tag{49}\] In the present work we are interested in the low-redshift limit, hence utilizing Eqs.
(46) and (49) we can obtain [61; 75] \[\hat{\alpha}_{\rm SH}=R_{\rm SH}H_{0}/(cz), \tag{50}\] implying that we can estimate the angular radius of the black hole if we know the Hubble constant, the redshift \(z\) of the black hole, along with its mass. Since we have explicit expressions for the dark matter/dark energy parameters in terms of \(H_{0}\) (see Eqs. (25) and (26)), we can solve for \(H_{0}\) in Eq. (26) and directly relate the angular radius with the dark energy density parameter as \[\hat{\alpha}_{\rm SH}=\frac{R_{\rm SH}}{z\lambda(1+\alpha)}\sqrt{\frac{\alpha}{\Omega_{\Lambda,0}}}, \tag{51}\] or using the notation of (38): \[\hat{\alpha}_{\rm SH}=\frac{R_{\rm SH}}{z\lambda}\sqrt{\frac{\alpha}{(1+\alpha)\Omega_{\Lambda,0}^{\Lambda\rm CDM}}}. \tag{52}\] Similarly, we can express this relation in terms of the effective dark matter mass as \[\hat{\alpha}_{\rm SH}=\frac{R_{\rm SH}}{z\lambda(1+\alpha)}\frac{\sqrt{2\alpha\Omega_{B,0}}}{\Omega_{D,0}}, \tag{53}\] or in terms of the notation of (35), (36) and (37), as \[\hat{\alpha}_{\rm SH}=\frac{R_{\rm SH}}{z\lambda\sqrt{1+\alpha}}\frac{\sqrt{2\alpha\Omega_{B,0}^{\Lambda\rm CDM}}}{\Omega_{D,0}^{\Lambda\rm CDM}}. \tag{54}\] In summary, we can obtain the angular radius of black holes using the dark matter and dark energy densities. Note that in this modified cosmological scenario, the shadow radius depends on the distance from the black hole to the observer. ### Black hole solution Let us see how the spacetime geometry around the black hole is modified in this theory. The general solution in the case of a static, spherically symmetric source reads \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{55}\] The energy density of the modified matter can be computed from \(\rho(r)=\frac{1}{4\pi}\Delta\Phi(r)\). On astrophysical scales we can set \(l_{0}/r\to 0\); in that case, using (1) we acquire \[\rho(r)=-\frac{M\alpha}{4\pi r\lambda^{2}}e^{-\frac{r}{\lambda}}. \tag{56}\] The negative sign reflects the fact that inside the black hole the energy conditions are violated. On the other hand, we assume that the Einstein field equations with a cosmological constant hold, in the sense that the effect of the effective dark matter is encoded in the total energy-momentum part; namely, \(G_{\mu\nu}+\Lambda g_{\mu\nu}=8\pi T_{\mu\nu}\). Then, from the gravitational field equations for the \(t-t\) component we obtain \[\frac{rf^{\prime}(r)+f(r)-1}{r^{2}}+\Lambda-\frac{2M\alpha}{r\lambda^{2}}e^{-\frac{r}{\lambda}}=0, \tag{57}\] yielding the solution \[f(r)=1-\frac{2M}{r}-\frac{2M\alpha(r+\lambda)e^{-\frac{r}{\lambda}}}{r\lambda}-\frac{\Lambda r^{2}}{3}. \tag{58}\] The third term is due to the apparent dark matter effect, while the last term is the contribution due to the cosmological constant. We can perform a series expansion around \(x=1/\lambda\), yielding \[f(r)=1-\frac{2M(1+\alpha)}{r}+\frac{M\alpha r}{\lambda^{2}}-\frac{\Lambda r^{2}}{3}+... \tag{59}\] It is therefore natural to define the true or physical mass of the black hole to be \(\mathcal{M}=M(1+\alpha)\), and write the solution in terms of the physical mass \[f(r)=1-\frac{2\mathcal{M}}{r}+\frac{\mathcal{M}\alpha r}{(1+\alpha)\lambda^{2}}-\frac{\Lambda r^{2}}{3}+... \tag{60}\] Further, we can neglect the term \(\mathcal{O}(\alpha/\lambda^{2})\) and we basically obtain the Kottler spacetime, i.e., a Schwarzschild black hole with a cosmological constant \[f(r)\simeq 1-\frac{2\mathcal{M}}{r}-\frac{\Lambda r^{2}}{3}.
\tag{61}\] Following [76], it is easy to show that the shadow radius for the metric (60) can be written as \[R_{\rm SH}=\frac{r_{\rm ph}}{\sqrt{f(r_{\rm ph})}}\sqrt{1-\frac{2GM_{\rm BH}}{c^{2}r_{O}}+\frac{GM_{\rm BH}\alpha r_{O}}{c^{2}(1+\alpha)\lambda^{2}}-\frac{1}{3}\Lambda r_{O}^{2}}, \tag{62}\] where \(r_{O}\) is the distance to the black hole. Hence, using the best-fit values of the previous section one can show that \(\Lambda\simeq 10^{-52}{\rm m}^{-2}\), and we can identify the physical mass of the black hole with \(\mathcal{M}=M_{\rm BH}\). For the Sgr A BH we can take \(M_{\rm BH}^{\rm SgrA}=4\times 10^{6}M_{\rm Sun}\) with the distance \(r_{O}=8.3\) kpc. The change in the shadow radius compared to the Schwarzschild black hole is of the order \(\delta R_{\rm SH}\sim 2\times 10^{-9}\) measured in black hole units. For the M87 black hole, we can take \(M_{\rm BH}^{\rm M87}=6.6\times 10^{9}M_{\rm Sun}\), along with the distance \(r_{O}=16.8\) Mpc, for which the shadow radius compared to the Schwarzschild black hole changes by \(|\delta R_{\rm SH}|\sim 2\times 10^{-5}\). In other words, such a change is beyond the reach of present technology, hence we can approximate the shadow radius in both cases to be \(R_{\rm SH}\simeq 3\sqrt{3}GM_{\rm BH}/c^{2}\). Now, it is well known that the real part of the quasinormal modes \(\omega_{\Re}\) in the eikonal limit is related to the shadow radius of BHs via \(\omega_{\Re}=\lim_{l\gg 1}\frac{1}{R_{\rm SH}}\left(l+1/2\right)\) [103; 104; 105; 106; 107; 108], where \(l\) is the angular node number. This correspondence is achieved based on the geometric-optics correspondence between the parameters of a quasinormal mode and the conserved quantities along geodesics. This connection allows one to test gravitational waves with the next-generation Event Horizon Telescope [109; 110]. However, here we see that the frequency of the quasinormal modes emitted by a perturbed black hole in the eikonal limit will depend on the effect of the cosmological constant and the apparent dark matter. In particular, combining this correspondence with Eqs. (52) and (54), we obtain the relations \(\hat{\alpha}_{\rm SH}=\lim_{l\gg 1}\frac{(l+1/2)}{\omega_{\Re}\,z\lambda}\sqrt{\frac{\alpha}{(1+\alpha)\Omega_{\Lambda,0}^{\Lambda\rm CDM}}}\), and \(\hat{\alpha}_{\rm SH}=\lim_{l\gg 1}\frac{(l+1/2)}{\omega_{\Re}\,z\lambda\sqrt{1+\alpha}}\frac{\sqrt{2\alpha\Omega_{B,0}^{\Lambda\rm CDM}}}{\Omega_{D,0}^{\Lambda\rm CDM}}\), respectively. These relations are valid under specific conditions, i.e., in the eikonal regime and the low-redshift limit. In what follows we will apply Eq. (52) to compute the angular radius, assuming known black hole mass and Yukawa parameters. * Case I: Sgr A supermassive BH Using the best-fit values for \(\lambda\) and \(\alpha\) along with the black hole mass of Sgr A, we can estimate the angular radius: \[\hat{\alpha}_{\rm SH}^{\rm SgrA}=\frac{3\sqrt{3}\,GM_{\rm BH}^{\rm SgrA}}{c^{2}\,z\lambda}\sqrt{\frac{\alpha}{(1+\alpha)\Omega_{\Lambda,0}^{\Lambda\rm CDM}}}\simeq 26.2\mu{\rm as}, \tag{63}\] where we have used \(M_{\rm BH}^{\rm SgrA}=4\times 10^{6}M_{\rm Sun}\), \(z=0.1895\times 10^{-5}\), \(\Omega_{\Lambda,0}^{\Lambda\rm CDM}\sim 0.7\), \(\lambda\simeq 2693\) [Mpc] and \(\alpha\simeq 0.416\). Since we showed that the change in the shadow radius is \(\delta R_{\rm SH}\sim 2\times 10^{-9}\), here we have approximated the shadow radius to \(R_{\rm SH}\simeq 3\sqrt{3}GM_{\rm BH}^{\rm SgrA}/c^{2}\).
* Case II: M87 supermassive BH For the case of M87, we obtain: \[\hat{\alpha}_{\rm SH}^{\rm M87}=\frac{3\sqrt{3}\,GM_{\rm BH}^{\rm M87}}{c^{2}\,z\lambda}\sqrt{\frac{\alpha}{(1+\alpha)\Omega_{\Lambda,0}^{\Lambda\rm CDM}}}\simeq 19.13\mu{\rm as}, \tag{64}\] where we have used \(M_{\rm BH}^{\rm M87}=6.6\times 10^{9}M_{\rm Sun}\), \(z=0.428\times 10^{-2}\), along with \(\Omega_{\Lambda,0}^{\Lambda\rm CDM}\sim 0.7\), \(\lambda\simeq 2693\) [Mpc] and \(\alpha\simeq 0.416\). Again, the shadow radius is approximated to be \(R_{\rm SH}\simeq 3\sqrt{3}GM_{\rm BH}^{\rm M87}/c^{2}\). These values are consistent with those reported by the EHT [79; 80; 81; 82; 83; 84]. ## V Conclusions In the present work we extracted observational constraints on the Yukawa cosmological model. In this scenario dark matter appears effectively, and a relation exists between dark matter, dark energy, and baryonic matter. In particular, the effective dark matter is attributed to the long-range force between the baryonic matter particles. Such a Yukawa-like gravitational potential modifies Newton's law of gravity in large-scale structures. It is characterized by the coupling parameter \(\alpha\) and has a graviton with non-zero mass (which is inversely related to the wavelength parameter \(\lambda\)). We used SNe Ia and OHD observational data and we found within \(1\sigma\) CL the best-fit parameters \(\lambda=2693^{+1191}_{-1262}\) Mpc and \(\alpha=0.416^{+1.137}_{-0.326}\), respectively. With these values, we acquire \(m_{g}\simeq 10^{-69}\) kg, or equivalently \(m_{g}\simeq 5.6\times 10^{-43}\) GeV, for the graviton mass. Additionally, we found a relation between the dark matter/dark energy density parameters and the angular radius of black hole shadows. These equations allow us to constrain the graviton mass directly from the EHT results for the Sgr A and M87 supermassive black holes. We can further use the Gaussian likelihood \(\mathcal{L}_{\rm Shadow}\propto\exp\left(-\chi^{2}_{\rm Shadow}/2\right)\), where \(\chi^{2}_{\rm Shadow}=\sum_{i}\left[\left(\hat{\alpha}_{{\rm SH},i}^{\rm observed}-\hat{\alpha}_{{\rm SH},i}^{\rm theory}\right)/\sigma_{\alpha,i}\right]^{2}\), and then modify the joint likelihood for the analysis SNe Ia+OHD+Shadow as \(\mathcal{L}_{\rm joint}\propto\mathcal{L}_{\rm SNe}\,\mathcal{L}_{\rm OHD}\,\mathcal{L}_{\rm Shadow}\). We will consider and explore this possibility in a separate project. ###### Acknowledgements. E.G. acknowledges the support of Direccion de Investigacion y Postgrado at Universidad de Aconcagua. G.L. was funded by Vicerrectoria de Investigacion y Desarrollo Tecnologico (Vridt) at Universidad Catolica del Norte through Resolucion Vridt No. 040/2022, Resolucion Vridt No. 054/2022. He also thanks the support of Nucleo de Investigacion Geometria Diferencial y Aplicaciones, Resolucion Vridt No. 096/2022. ## Data Availability The data underlying this article were cited in Section III.
2306.12224
Python Framework for Modular and Parametric SPICE Netlists Generation
Due to the complex specifications of current electronic systems, design decisions need to be explored automatically. However, the exploration process is a complex task given the plethora of design choices such as the selection of components, number of components, operating modes of each of the components, connections between the components and the variety of ways in which the same functionality can be implemented. To tackle these issues, scripts are used to generate designs based on high-level abstract constructions. Still, this approach is usually ad-hoc and platform dependent, making the whole procedure hardly reusable, scalable and versatile. We propose a generic, open-source framework tackling rapid design exploration for the generation of modular and parametric electronic designs that is able to work with any major simulator.
Sergio Vinagrero Gutiérrez, Giorgio Di Natale, Elena-Ioana Vatajelu
2023-06-21T12:34:50Z
http://arxiv.org/abs/2306.12224v1
# Python Framework for Modular and Parametric SPICE Netlists Generation ###### Abstract Due to the complex specifications of current electronic systems, design decisions need to be explored automatically. However, the exploration process is a complex task given the plethora of design choices such as the selection of components, number of components, operating modes of each of the components, connections between the components and the variety of ways in which the same functionality can be implemented. To tackle these issues, scripts are used to generate designs based on high-level abstract constructions. Still, this approach is usually ad-hoc and platform dependent, making the whole procedure hardly reusable, scalable and versatile. We propose a generic, open-source framework tackling rapid design exploration for the generation of modular and parametric electronic designs that is able to work with any major simulator. electrical simulation, design space exploration, design aid, SPICE ## I Introduction Design complexity, ultra-low-power requirements, reliability, robustness and security are becoming increasingly important concerns when designing electronic systems. Due to the increasing complexity of analog circuits, it is more difficult to design and assess their performance. Moreover, the aggressive scaling of CMOS technology makes the process of testing the same design under different technologies very tedious, as normally the process has to be repeated for every technology node. What is more, several issues must be considered at design time such as fabrication-induced variability, technology-dependent defects, extreme operating/environmental conditions, stochastic behaviours, aging, and possible perturbations (noise, radiations, malicious attacks). All these factors make the verification and testing of each circuit an arduous process. To explore the behaviour of an electrical circuit under different designs and conditions, multiple iterations and simulations need to be performed under the desired environment. The interdependencies of large and complex circuits can quickly become a significant challenge due to the extensive amount of choices at play. Design space exploration (DSE) examines the different possibilities and design options within the allowed design space, considering the constraints and requirements, in order to fulfill the specified performance goals. DSE normally involves the use of tools as well as high-level abstract models of the system to automate and streamline the exploration process, since the design space is too large to be explored by hand. There is an interest in the industry to accelerate this process and reduce the time between iteration cycles. In the last decades there have been huge advances in computer aided design (CAD) and electronic design automation (EDA) to provide designers powerful tools to perform complex designs and characterisation. The idiosyncrasies of some technologies are very well understood and can be translated to higher levels of abstraction. However, with the present issues faced by today's designs, electrical-level simulations are unavoidable since they allow designers to accurately model and understand the behaviour of the target system. They are a crucial pillar of Analog and Mixed Signal design space exploration, simulation of circuits in the presence of perturbations, and research into novel computation paradigms.
But unlike digital circuits, where the low-level phases of the design process are automated using fairly standard methodologies, synthesis and layout of analog circuits is still carried out manually or through some sort of ad-hoc automated solution. In this paper we show a Python framework [1] for the generation of modular and reusable electronic designs through the use of powerful manipulation primitives. The purpose of this framework is twofold: (i) to provide tools to create electrical components whose characteristics can be expressed through dynamic models or defined by logical rules, and (ii) to provide powerful manipulation primitives to quickly create complex arrangements of components in a simple fashion. The designs created are modular and reusable and can be serialised and even exported to work with any major available simulator. This paper is organised as follows: the current state of the art is summarised in section II, followed by the motivation for this project in section III; in section IV the framework is described in detail and some use cases are provided in section IV-D. Conclusions are drawn in section V. ## II State of the art There are currently a plethora of tools available that tackle design space exploration. Tools like [2] and [3] provide frameworks with a high abstraction level that are able to compile a high level language, like Scala and Python, into fully functional Verilog code for hardware description. In this way, circuit designers have the expressiveness and power of a programming language in order to quickly create reusable circuits. These tools target Register Transfer Level (RTL) designs and thus are not very well suited for analog and mixed signal designs. PySpice [4] is a utility to generate SPICE netlists and launch simulations by embedding the design and the simulator configuration under the same language, which facilitates the whole design iteration process. However, the simulator is limited to NGspice and Xyce and the netlists can only be exported for PCB designs. Skidl [5] is a layer built on top of PySpice that attempts to facilitate the process of connecting different components. Our framework seeks to provide designs for any available simulator as well as powerful tools to create complex connections and reusable components. Alongside these tools, there are projects that provide automatic layout generation mechanisms. One of the most famous tools in this category is Magic [6]. Magic is an interactive software for creating and modifying very large scale integration (VLSI) circuit layouts. Its most important feature is the creation of a layout and _plowing_ it to scale it for different technology nodes. The ease of use of this utility comes with a penalty of 5 to 10% increase in area usage. Other tools found online like LibreCell [7] try to reduce this tradeoff by reducing the range of possibilities that are provided to the user. Lower level tools like GDSTK [8] and GDSFactory [9] enable the creation and manipulation of GDSII and OASIS files, which are the standard file formats for foundries to specify circuit layouts. These tools can be used as the basis of a much more complete software that is able to generate the layout based on a circuit definition. AIDA [10] is a tool that tackles analog IC sizing and layout. It provides powerful utilities to perform parametric analysis, where the underlying parameters and properties of a circuit can be generated and swapped in place before every simulation cycle.
However, the user needs to have generated the design beforehand, which does not solve the issue of design exploration. There are complete projects like OpenRAM [11] that provide a Python framework to create the layout, netlists, timing and power models, and placement and routing models to use SRAMs in ASIC design. This tool provides an easy interface to configure the characteristics of the SRAM. This is a very powerful tool but it is limited to SRAMs and a selected number of technology nodes. Lastly, there are other projects found in the literature like [12, 13] that provide tools crafted with a specific problem in mind. But as stated before, these tools are ad-hoc and provide almost no code re-usability. ## III Motivation The main objective of this tool is to provide a framework that enables users to perform quick design space exploration and parametrization of electronic designs. A high-level abstract interface is provided in order to create modular and reusable components, which can be seamlessly parameterised in order to give users a general overview of the design under different design constraints and environments. The advantage of using a programming language like Python as an abstraction layer to generate circuits is that we are not limited by a drag-and-drop graphical user interface. We have access to the full expressiveness and library support of the language. Graphical interfaces tend to change over time, while a programming language stays fixed. This eliminates the need to learn different software and users can quickly start designing. What is more, changes in the design are represented as changes in the source code, which can make the process of versioning much simpler. Most of the commercially available software provides an interface to perform parametric analysis on a design. However, if we want to generate different versions of a circuit, each version has to be generated by hand, thus reducing the possible space of exploration due to time or complexity constraints. With our tool, parametric characteristics can be embedded directly into the components and the multiple designs can be generated in a modular and programmable fashion. ## IV Overview of the tool ### _Electrical components and parameters_ The parameters of electrical components can be defined statically or dynamically calculated through Python functions or SymPy formulas, which may not be available or as accessible in EDA tools. Dynamic parameters bring the possibility of embedding parametric analysis directly into the circuit definition. These parameters can be grouped into ParamSets that behave similarly to process corners. The following example shows a reduced number of parameters for a NMOSFET transistor, where the _vth_ of the transistor is drawn from a Gaussian distribution and the _test_ parameter is defined by the formula \(1/vth\).
```
nmos_params = Params({
    "w": 0.135,
    "vth": random.gauss(0.4, 0.1),
    "test": Formula("1/vth"),
})

ParamSet({"TT": nmos_params})
```
Source Code 1: Example parameters for a NMOSFET transistor where the _vth_ is defined dynamically. In order to automatically generate parameters from commonly used file formats, this framework provides a parser interface to extract information from different file formats and PDKs. Multiple parsers are already available but users can extend this functionality by defining their own custom parsers.
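As a hypothetical illustration of such an extension (the exact base class and the parse() hook are assumed names, not the framework's documented API), a user-defined reader for a simple key-value parameter file could look like the following sketch.

```python
# Hypothetical custom parser: the Reader base class and the parse() hook
# are assumed names, shown only to illustrate the extension mechanism.
class KeyValueReader(Reader):
    """Reads 'name = value' lines into a plain parameter dictionary."""

    def parse(self, path):
        params = {}
        with open(path) as fh:
            for line in fh:
                line = line.split("#", 1)[0].strip()  # drop comments and blanks
                if not line:
                    continue
                key, value = (tok.strip() for tok in line.split("=", 1))
                try:
                    params[key] = float(value)
                except ValueError:
                    params[key] = value
        return params

# Usage: swap technology parameters without touching the circuit definition
nmos_params = KeyValueReader().parse("/path/to/nmos_tt.params")
```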
This functionality makes the process of testing different technology nodes or constraints more accessible, as the parameters and component names can be updated on the fly and swapped in place depending on the desired environment. Electronic components themselves can be created through the Component class. Besides the basic properties like component name, connected nets and parameters, users can embed metadata to provide additional information that can be shared between different tools. These components serve as templates to generate the modular circuits. Verilog-A components can also be easily accessed thanks to the provided parser and user-created components.
```
params = Reader.load("/path/to/params_file")

Cap = Component("Cap", [0, 1],
                params["cap"]["T"], prefix="C")
model = Model("custom_nmos", "nmos", {"TYPE": 1})
```
Source Code 2: Example creation of a capacitor and a custom NMOS model. The parameters are extracted from a file using an example Reader. Once the components have been defined, they can be instantiated multiple times by using the operators @ and $, which are overloaded to quickly modify the connections and the parameters of a component. ### _Manipulations and operations_ As shown in section II, there are already tools that allow generating netlists. The core objective of this framework is to provide very efficient manipulation primitives to quickly create complex and reusable connection patterns that can be customised through variables. This framework provides a small list of operations that can be used to create more complex patterns, like the Parallel and Chain operations that create components in parallel and in a daisy chain, as their names imply. The manipulations automatically instantiate the number of desired components and update their connections or parameters. In this way, the connection between components occurs in a deterministic and reusable way, so it is easier to avoid mistakes when connecting components, which could minimise the need for Electrical Rule Checking (ERC) tools.
```
NMOS = Component("nmos", [1, "INPUT", 3, "GND"])
Res = Component("res", [1, "GND"])
parallel = Parallel(Res, 3)
chain = Chain(NMOS, 3, in_port=0, out_port=2)
```
Source Code 3: Example of the basic manipulation operations. Although only a limited number of manipulations are already provided by the library, users can use them to create and extend their own manipulation operations. The components generated by a manipulation can be accessed and modified directly. This ease of modification is handy to simulate process-induced variability or even to evaluate the resilience of a system to faults or errors. Said faults can be injected, as an example, into a list of components and their behaviour can be measured. In the following example, the manipulation Inject receives a chain of components and a probability of defect injection. For each component in the chain it has a chance of generating the desired defect and connecting it to the output of the component. We can also see in this example how the Inject and Chain manipulations can be concatenated to produce the desired circuit.
```
class Inject(Manip):
    def __init__(self, comps, p=0.5, defect=None):
        super(Inject, self).__init__()
        defect = defect or Component("Res", ["", "GND"], {"R": 1e4})
        for comp in comps:
            if random.random() <= p:
                # Inject the defect and reconnect it
                self.children.append(defect @ [comp.ports[-1], "GND"])
            self.children.append(comp)

chain_defects = Inject(Chain(mosfet, 7), p=0.7)
```

Source Code 4: Example of defect injection in a chain of 7 transistors.

Another useful manipulation is the Array, which allows instantiating components in a 1D or 2D array whose connections can be updated dynamically through their coordinates, as shown in the following example. This array generation utility can be of great use to create crossbar arrays, two-dimensional CMOS sensors and Micro-Electro-Mechanical Systems (MEMS) matrices that contain a very large number of components.

```
def ports_crossbar(coords):
    x, y = coords
    return [f"X_{x}", f"Y_{y}"]

arr_size = (3, 3)
arr = Array(arr_size, device, ports_crossbar)
```

Source Code 5: Example of a 2D crossbar array. The size of the array is determined by the arr_size variable.

To allow for reusable designs and more complex logic, multiple components can be grouped inside a _subcircuit_, just like SPICE subcircuits. Subcircuits can be _fixed_ so that no more components can be added. This can be used to stop the addition of components in a loop based on logical tests. Once a subcircuit has been defined, it can be used as a component and thus the manipulation primitives can be applied. The components and subcircuits created can be grouped inside a _Circuit_. A circuit behaves very similarly to a SPICE netlist and can then be converted into a subcircuit to be used in other designs. This is one of the main interfaces for code re-usability and modular designs.

### _Exporting elements_

All the elements created can be exported to text files so that they can be shared between different utilities or read back at a later time. Moreover, this framework provides an interface to export the elements to different file formats that users can extend to create their desired exporters. This process makes the framework simulator agnostic, as the same design can be exported to different simulators just by using different Exporters. Furthermore, users are not bound only to simulators, as the different components and nets can be exported to other kinds of file formats for analysis.

```
exporter = CustomExporter()
exporter.dump(circuit)
# Or directly to a file
exporter.dump_to_file("/path/to/file")
```

Source Code 6: Example of exporting a design into a file.

### _Example of a complex circuit_

This tool has been used to create the circuits used in the study [14]. A ring oscillator is a chain of inverters that oscillates when an input signal is applied to the first inverter in the chain due to the gate delay. Multiple of these ring oscillators can be connected to multiplexers that allow the selection of a pair of ring oscillators. The output signal of the multiplexer can be fed to a counter to measure the oscillation frequency of the ring oscillators. In this case, the number of inverters per ring oscillator and the total number of chains are determined by the N_RO_PER_CHAIN and N_CHAINS variables, respectively. The number of inputs of the multiplexer can also be defined dynamically from the N_CHAINS variable. The Counter component has been created in Verilog-A.
```
reader = VerilogAReader()
VCounter = reader.load("/path/to/counter.va")

# Define the size of the Ring Oscillator
N_RO_PER_CHAIN = 5
N_CHAINS = 3

# Dynamic generation of the multiplexer
MUX = Subcircuit("MUX", [f"IN_{d}" for d in range(N_CHAINS)] + ["Sel", "OUT"], {})

INV = Subcircuit("INV", ["in", "out"], {})
# Components can be added by using the += operator
INV += Mosfet(["out", "in", GND, GND], name="nmos")
INV += Mosfet(["out", "in", VDD, VDD], name="pmos")
inv = INV @ ["in_chain", "1"]

chain = Circuit()
chain += NamedChain(inv, N_RO_PER_CHAIN, out_name="OUT")
r_chain = chain.into_subckt("RO_CHAIN", ["in_chain", "OUT"], {})
chains = Chain(r_chain @ ("INPUT", "OUTPUT"), N_CHAINS)
netlist += chains
nodes = []
for comp in chains: nodes.append(comp.nodes)

counters = Parallel(VCounter([""]), N_CHAINS)
for i, comp in enumerate(counters): comp @= nodes[i][-1]
netlist += counters
```

Source Code 7: Example of creating the Ring Oscillator chains.

## V Conclusions

The framework shown in this article allows for fast design space exploration and parametrization of electronic designs thanks to its powerful manipulation primitives. It can read components and electrical parameters from a set of file formats, and it can export the designs to any major available commercial simulator as well as to different output formats specified by the user. The next objective of this framework is to allow the automatic management of libraries and layouts as well as the generation of layouts in different formats and technology nodes. Besides, tools like AGS [15] and N2S [16] can also be supported to create a schematic from the generated netlist.
2303.17582
Human-Robot Interaction using VAHR: Virtual Assistant, Human, and Robots in the Loop
Robots have become ubiquitous tools in various industries and households, highlighting the importance of human-robot interaction (HRI). This has increased the need for easy and accessible communication between humans and robots. Recent research has focused on the intersection of virtual assistant technology, such as Amazon's Alexa, with robots and its effect on HRI. This paper presents the Virtual Assistant, Human, and Robots in the loop (VAHR) system, which utilizes bidirectional communication to control multiple robots through Alexa. VAHR's performance was evaluated through a human-subjects experiment, comparing objective and subjective metrics of traditional keyboard and mouse interfaces to VAHR. The results showed that VAHR required 41% less Robot Attention Demand and ensured 91% more Fan-out time compared to the standard method. Additionally, VAHR led to a 62.5% improvement in multi-tasking, highlighting the potential for efficient human-robot interaction in physically- and mentally-demanding scenarios. However, subjective metrics revealed a need for human operators to build confidence and trust with this new method of operation.
Ahmad Amine, Mostafa Aldilati, Hadi Hasan, Noel Maalouf, Imad H. Elhajj
2023-03-30T17:49:55Z
http://arxiv.org/abs/2303.17582v2
# Human-Robot Interaction using VAHR: ###### Abstract Robots have become ubiquitous tools in various industries and households, highlighting the importance of human-robot interaction (HRI). This has increased the need for easy and accessible communication between humans and robots. Recent research has focused on the intersection of virtual assistant technology, such as Amazon's Alexa, with robots and its effect on HRI. This paper presents the Virtual Assistant, Human, and Robots in the loop (VAHR) system, which utilizes bidirectional communication to control multiple robots through Alexa. VAHR's performance was evaluated through a human-subjects experiment, comparing objective and subjective metrics of traditional keyboard and mouse interfaces to VAHR. The results showed that VAHR required 41% less Robot Attention Demand and ensured 91% more Fan-out time compared to the standard method. Additionally, VAHR led to a 62.5% improvement in multi-tasking, highlighting the potential for efficient human-robot interaction in physically- and mentally-demanding scenarios. However, subjective metrics revealed a need for human operators to build confidence and trust with this new method of operation. ## I Introduction Virtual assistants have become ubiquitous in our daily lives, integrated into smartphones, in-car infotainment systems, and smart speakers such as Google Home and Amazon Alexa. At the same time, robots are increasingly used for various tasks, from vacuuming homes [1] to delivering goods in hospitals [2]. The potential benefits of integrating virtual assistants and robots are thus increasingly appealing, as virtual assistants can enhance robots' physical capabilities, while robots can provide a natural Human-Robot Interface (HRI) for virtual assistants. In this paper, we propose VAHR (Virtual Assistant, Human, and Robots in the loop), a system that connects virtual assistants to robots through different communication schemes, either directly or indirectly. Our implementation uses Amazon's Echo Dot and Alexa virtual assistant, given their widespread availability and development environment (Alexa Skills Kit and AWS), but our proposed framework can be easily adapted to other virtual assistants. We review related work in Section II and describe our system design in Section III. We then detail the implementation and experimental setup in Section IV before presenting the results and analysis in Section V. The conclusion and future work are covered in Section VI. ## II Related Work Recent research has focused on developing natural speech interfaces for human-robot communication using affordable home automation tools like Amazon Alexa. In this section, we will present various approaches to human-robot voice interfaces found in the literature, followed by an explanation of the commonly used HRI standard metrics for evaluating our proposed experiment. ### _Voice-controlled Robots_ Earlier attempts of controlling robots through voice date back to 1998 and 2000 in which robots assisted mainly in surgery while responding to a limited set of voice commands and requiring the surgeon to wear a headset [3, 4]. Recently, robotic assistants such as Lio were deployed in hospitals to assist in healthcare, and disinfection [5]. Because of Lio's computational power, its voice interactions were purely synthesized natively without requiring any cloud-based solution [5]. However, these native solutions are considerably more expensive than cloud-based ones. 
Hence, controlling robots via a friendly communication scheme has been shifting towards low-cost cloud solutions. This particular approach has attracted many applications in both industry and academia. In [6], researchers developed EchoBot, an interface between YuMi, the industrial robot, and Amazon Alexa to facilitate data collection for Learning from Demonstration (LfD) tasks. To test the efficacy of the experiment, a user is tasked with guiding YuMi to solve the Tower of Hanoi puzzle while occupying both hands. The user utters voice commands to the Echo Dot device to record the state of the two grippers involved in carrying out the task (open or closed) [6]. This interface was juxtaposed with a keyboard interface, and the results demonstrated that EchoBot was significantly more efficient than the keyboard interface [6]. Alexa was also used as a speech interface in [7] to control the Kuka robotic arm with a gripper. The arm would respond to voice commands issued by the user to Amazon Alexa, and Alexa relays the command to the robot to pick and place a wrench from a toolbox via subscribing to an MQTT topic [7]. Further, in [8], a 3D-printed humanoid torso was manipulated through Alexa to enhance trust in HRI. Additionally, as detailed in [9], a six-degrees-of-freedom robotic arm was voice-controlled through Alexa via a locally-hosted web server on Ngrok. Ngrok allows a user to host a local server with minimal effort. In [10], the motors of a semi-autonomous lawn mower were strictly controlled through the Amazon Tap speaker, a voice automation tool equipped with Alexa. A more abstract work, such as in [11], depicts multiple approaches and architectures to interface one's robot with Alexa. To our knowledge, no work in the literature includes an architecture that allows for multi-robot bidirectional communication and is assessed according to standard HRI metrics. Our proposed VAHR system is assessed according to the metrics in the following subsection.

### _Popular HRI Metrics_

A comprehensive literature survey of existing HRI metrics over the past two decades was conducted by Murphy et al. [12]. It can be argued that the most standardized and cited metrics are summarized in [13]. Saleh et al. [14] improve these metrics. We list the metrics that are relevant to our study in what follows.

* **Robot Attention Demand (RAD):** this metric measures the fraction of time spent by the user interacting with the robot [14]. Typically, this metric is calculated by the following formula: \[RAD=\frac{IE}{IE+NT}\] (1) where NT represents the neglect tolerance, a metric that captures the time elapsed after the human operator issues the robot a command until there is a notable drop in robot performance. IE denotes the interaction effort, corresponding to the time required to interact with the robot [14]. Both of these metrics are acquired experimentally.
* **Fan-out (FO)** evaluates the efficiency and simultaneity of controlling multiple robots. This usually depicts the operator's efficiency in operating multiple homogeneous robots. It is computed by dividing the total task time by the RAD for a single robot. \[FO=\frac{TotalTaskTime}{RAD}\] (2)
* **Trust:** to quantify the trust established between a human and a robot, a fuzzy temporal model is incorporated in [14]. This model takes quantifiable inputs such as fault size, productivity, and awareness and provides five states of trust as output (Very Low, Low, Medium, High, and Very High). Each of these states is associated with a value.
(Very Low \(\longrightarrow\) 0.1, Low \(\longrightarrow\) 0.3, Medium \(\longrightarrow\) 0.5, High \(\longrightarrow\) 0.7, and Very High \(\longrightarrow\) 0.9). The claim in [14] is that trust can significantly affect RAD. For instance, a human who is less trustworthy of his or her robot is likely to give greater attention to it even after sending the command for it to perform independent work. Hence, the following formula comes about according to [14] \[RAD=DIT+IIT=DIT+NT*(1-T_{r})\] (3) where DIT is the direct interaction time which is essentially the formula of (1), indirect interaction time (IIT) is a result of trust (T\({}_{t}\)) and neglect tolerance (NT). In this way, (3) captures the degree of trust the operator gives to the commanded robot. In controlled experiments, the value of trust can also be obtained through questionnaires rather than relying on a complex fuzzy model. * **Degree of mental computation:** some tasks may require the operator to significantly stress his/her short and long-term memory. Steinfeld et al. [13] mention examples of mental labeling of objects, object-referent association in working memory, and teleportation tasks. One may resolve this by resorting to feedback from the robot itself. One of the aims of this project is to allow robots to report feedback through Amazon Alexa. This metric is also useful in scenarios to assess controlled experiments. A popular tool named NASA-TLX (task load index) is typically utilized to measure this metric in controlled human experiments. It targets six fields in mental and physical workload [15]. * **Efficiency and Effectiveness:** this is a common metric used in HRI [13]. Efficiency measures the time needed to complete a task, while effectiveness is the success rate. It calculates the percentage of the robot's mission that was successful. ## III System Design ### _Alexa to Robot Interaction_ Having surveyed the different approaches used in the literature, we will discuss these architectures in light of our approach to the problem. #### Iii-A1 **MQTT Commands with Shadow Feedback** In this communication scheme, we combine the work done in [16] with the work done in [17] while abstracting the MQTT broker functionality to AWS. We thus utilize AWS IoT Core's MQTT broker to issue commands to robots while providing feedback to users through Alexa via the device shadow service, as shown in Fig. 1. Relying only on device shadows as in [17] incurs redundant shadow requests that increase the cost of the system. Instead, we minimize the number of shadow requests by using MQTT messages to enable asynchronous messages with one-to-many communication capability. This ensures efficient scaling of the communication architecture as more robots are added to the system. Fig. 1: Our proposed communication scheme utilizes MQTT commands and the device shadow service. #### Iii-A2 **One-to-Many Communication** With MQTT messages, we can simultaneously broadcast a message from Alexa to several robots and devices using the publish-subscribe model. By subscribing to a common MQTT topic, multiple robots can receive commands in parallel as Alexa publishes these commands. This offloads communication from Lambda to the MQTT broker and thus ensures timely responses from Lambda. This also allows us to scale to swarms of robots if needed. 
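As a rough illustration of this publish-subscribe pattern, the sketch below broadcasts a command from the intent-handler side and receives it on each robot. It uses the generic paho-mqtt client rather than the AWS IoT SDK, and the broker endpoint, topic name, and payload are placeholders, not the actual VAHR implementation (a real AWS IoT Core connection would additionally require TLS certificates).

```
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder for the AWS IoT Core endpoint
SPIN_TOPIC = "robots/spin"      # placeholder topic shared by all robots

# --- Publisher side (what the Lambda intent handler would do) ---
def publish_spin_command(speed):
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    # Every robot subscribed to SPIN_TOPIC receives this single message
    client.publish(SPIN_TOPIC, json.dumps({"command": "spin", "speed": speed}))
    client.disconnect()

# --- Subscriber side (running on each robot) ---
def on_message(client, userdata, msg):
    command = json.loads(msg.payload)
    print(f"Received {command} on {msg.topic}")  # hand off to the robot controller here

def run_robot_listener():
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe(SPIN_TOPIC)
    client.loop_forever()   # blocks; commands arrive asynchronously
```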
#### Iii-A3 **Asynchronous Communication** Instead of requiring robots to actively query their shadows for commands, we initiate an MQTT subscriber, which listens to MQTT messages received at a certain port of the robot. This frees the robot from wasting computations and bandwidth to poll its device shadow continuously, especially when most requests are redundant. Now, the robot would only process requests when a message is received. We can now describe the communication flow as follows. 1. User invokes desired robot-related Alexa skill. 2. Desired Alexa Skill interprets the spoken utterance and sends the interpreted intent to the intent handler endpoint. 3. The AWS Lambda function receives the intent as an Alexa Skill intent handler. 4. If the invoked intent requires a robot command to be initiated, AWS Lambda publishes a message with the command values to the respective MQTT topic (i.e., spin topic to spin a robot). 5. All robots subscribed to that topic receive the message with its value when Lambda publishes to that topic. 6. After going through the intent logic and issuing the requested MQTT message (if needed), the intent handler requests the robot's shadow to receive the reported states by the robot (i.e., the command received successfully). 7. The handler finally responds with a speech string to be uttered to the user by the Alexa Skill model, including any updates of the robot's states, if they exist. 8. On the robot side, the device shadow is updated whenever any of the robot's states have changed (i.e., Task Completed or Stuck). Accordingly, we have created a scalable architecture that allows robots to integrate robustly and efficiently with Alexa. The advantages and disadvantages of this service can be summarized as follows. * **Advantages** 1. **Efficient Communication:** Fewer shadow requests and MQTT messages are required. 2. **Flexibility:** Enables one-to-many and one-to-one communication between Alexa and Robots. 3. **Robustness:** The asynchronous nature of MQTT messaging, coupled with the robustness of the AWS MQTT broker, ensures that the system continues to function if any of the involved endpoints stops working. Commands and states would be delayed, but the communication is not fatally broken. * **Disadvantages** 1. **Complexity:** Two services must be configured (MQTT Broker and device shadows) for this setup. ### _Robot to Alexa Voice Interaction_ To further validate the importance of using Alexa in robotics, a system is devised that grants a robot the ability to communicate with Alexa through speech. This system is constrained to the purposes of the experiment explained in the next section, but it can be further extended to be deployed in any application. In essence, the robot utters commands that trigger Alexa to respond, and according to this response, the robot performs a designated action sequence. To implement this, a collection of tools is utilized. First, Google's Text to Speech (gTTS)-API converts assigned text phrases to the robot. For instance, to call Alexa, the robot would utter a phrase such as "Alexa, what is the weather today?" to inquire about the weather. The synthesized speech generated by gTTS is played via a media player python library dubbed playsound. Alexa would naturally respond by stating the weather. For the robot to capture Alexa's voice utterance, it employs the Speech Recognition engine, which translates the voice expression to text through Google's Speech-to-Text API. 
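A minimal sketch of this speak-then-listen loop on the robot side could look as follows; the library calls are the publicly documented gTTS, playsound, and SpeechRecognition APIs, the file name and wake phrase are placeholders, and the Amazon Lex interpretation step described next is omitted here.

```
import speech_recognition as sr
from gtts import gTTS
from playsound import playsound

def ask_alexa(question, audio_file="question.mp3"):
    # Synthesize the robot's question and play it out loud for the Echo Dot
    gTTS(question).save(audio_file)
    playsound(audio_file)

def listen_to_alexa(timeout=10):
    # Record Alexa's spoken answer and transcribe it with Google's Speech-to-Text
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source, timeout=timeout)
    return recognizer.recognize_google(audio)

# Example: the weather query used in the experiment
ask_alexa("Alexa, what is the weather today?")
answer = listen_to_alexa()
print(answer)  # the transcript is then passed on for intent recognition
```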
This captured text phrase is forwarded to an Amazon Lex chatbot for interpretation. Amazon Lex is an easy-to-use conversational bot development tool, part of the AWS suite, capable of recognizing the intent of the text. According to this intent, the robot would utter back a statement to Alexa acknowledging its response and executes a certain action accordingly. Figure 2 illustrates the weather example we applied in the experiment if the weather is sunny. Upon acknowledgment of sunny weather, the robot navigates using the ROS move_base package to a predefined set of coordinates; a "sunny" zone. ## IV Implementation and Experimental Setup We implemented our proposed design using Python and the AWS IoT SDK. The system was tested using an Amazon Alexa Echo dot and a mobile robot (placebot) running ROS Fig. 2: Robot to Alexa voice System architecture as applied to a weather application. melodic. A test setup was created to compare our approach to traditional robot control methods like a keyboard and mouse. A schematic diagram describing the testing environment is provided in Fig. 3. The experiment involves one test user controlling two robots and finishing several tasks along the way. The robots (placebots) are mobile robots programmed to be capable of autonomous navigation inside the allowed boundaries (white zones). The different robotic arms around the testing environment are also programmed to be able to pick and place color-coded (green) items from atop these mobile robots to the ground or from a conveyor belt (loading zone) to atop one placebot. These functionalities are essential, as the experiment's goal is to test our communication architecture and not robots' navigation or robot arm placement. Additionally, the testing environment is primarily virtual, with the robot environments simulated using ROS and Gazebo. This ensures an ideal and repeatable testing environment, especially for robot-to-Alexa communication, where a robot has to be near Alexa to ask and listen to responses. The robots are simulated on a computer on the same desk where Alexa is placed. The Gazebo world used is shown in Fig. 4. The user needs only to order the robots with different commands and solve a physical jigsaw puzzle. The complete setup is illustrated in Fig. 5, showing the simulator and the location of the Amazon Echo Dot and the jigsaw puzzle, all within the user's reach on the bench. A total of 10 subjects (two females and eight males) participated in the experiment. Participants' ages ranged from 18 to 30 years. The flow of the experiment is explained below. 1. The user will go through two runs for each control method, including one 5-minute warm-up run for the user to get used to the controls. 2. A test run is deemed complete once all tasks are completed. 3. The user is tasked with four tasks: three robot-related tasks and one distracting task. 4. The distracting task entails the user finishing a jigsaw puzzle. No time limit is set on the tasks. 5. The three robot tasks are designed based on functionality: **Task I - Alexa-to-Robot Communication:** In this task, a user orders the robots to move into the loading zone to be loaded with a package (green box), before ordering them to deliver the package to one of the Zones (A through D). The zone selection is randomized at the start of the experiment and is briefed to the user during the experiment (via a display on the computer). This task is deemed successful if each robot delivers its package to its set Zone. 
A failure is marked whenever a robot delivers a package to the wrong room. An intent is programmed in Alexa to enable the user to order a robot to navigate to one Zone (Zone A through D or Loading Zone). Robotic arms will load or unload the robots once they reach the zone waypoint. **Task II - Robot-to-Alexa Communication:** In this task, a user orders the robots to deliver a package based on the weather. The weather is pre-programmed to be randomized by the Alexa skill and is specific to the experiment (i.e., not the real weather). As such, the Fig. 4: Gazebo world used for the experiment Fig. 5: Experiment setup Fig. 3: Experiment setup schematic robots are expected to deliver packages to Zone A for sunny weather and Zone C for rainy weather. There are two ways of approaching this problem: * **Traditional Control (keyboard and mouse):** the user has to ask Alexa for the weather twice (once for each robot) before ordering navigation to the correct zones. * **Proposed Communication Architecture:** the user can trigger a weather-navigation intent on Alexa. This will communicate to one robot that the user wants it to navigate based on the weather. Upon receiving the command, the robot will ask Alexa for the weather, interpret the response, and autonomously load, navigate, and unload a package to the correct zone. This will also test the robot's ability to communicate with Alexa through voice commands. This task is deemed completely successful if each robot delivers one package to the correct zone (based on the weather). Failure in this task is defined as a robot navigating to the wrong room for the weather (i.e., Zone C when the weather is sunny). This is marked as a robot's failure to interpret Alexa's response correctly. An intent is programmed in Alexa to enable the user to order a robot to navigate based on the weather. Robotic arms will load or unload the robots once they reach the respective zone way-point. **Task III - Robot-to-Robot Communication:** In this task, the user is ordered to make the robots deliver two packages to Zone D in sequential order: placebo 1 delivers a package to Zone D, then placebo 2 delivers another package to zone D. This task is deemed successful if each robot delivers one package to Zone D and placebo 1 delivers the package before placebo 2. If the task is successful, a green LED is turned on inside the simulation environment. The task is deemed failed if either of the robots fails to deliver the package to Zone D (i.e., delivered to a different Zone) or the order of delivering the packages is wrong (i.e., placebo 2 delivers the package before placebo 1). An intent is programmed in Alexa to enable the user to order a robot to start communicating with the other robot to coordinate sequential delivery. Robotic arms will load or unload the robots once they reach the respective zone way-point. * Once the experiment is over (all four runs are finished), the user is given a NASA task load index (TLX) appended with two additional questions specific to our experiment. The evaluation criteria used during the experiment are split between subjective and objective metrics. The subjective metrics are filled in by the user after they are done with the experiment. The objective metrics are automatically collected and measured during the experiment. An observer only fills in the calculated results after each run. The metrics used during the experiment are summarized in Table I. 
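For reference, the objective metrics of Section II reduce to a few arithmetic steps; a small helper such as the sketch below (our own naming, shown only to make Equations (1)-(3) concrete, with made-up numbers) could compute them from logged values.

```
def robot_attention_demand(interaction_effort, neglect_tolerance):
    # Equation (1): RAD = IE / (IE + NT)
    return interaction_effort / (interaction_effort + neglect_tolerance)

def fan_out(total_task_time, rad_single_robot):
    # Equation (2): FO = total task time / RAD for a single robot
    return total_task_time / rad_single_robot

def rad_with_trust(direct_interaction_time, neglect_tolerance, trust):
    # Equation (3): RAD = DIT + NT * (1 - Tr), trust in {0.1, 0.3, 0.5, 0.7, 0.9}
    return direct_interaction_time + neglect_tolerance * (1.0 - trust)

# Example with illustrative values (seconds):
rad = robot_attention_demand(interaction_effort=30.0, neglect_tolerance=45.0)
print(rad, fan_out(total_task_time=254.0, rad_single_robot=rad))
```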
## V Results and Analysis Objective measures were used to evaluate the ability to communicate with and command the robots while solving the puzzle. Both methods achieved 100% command, task, and communication success rates across all trials and for all test subjects. Four objective metrics (RAD, FO, time taken, and the number of solved puzzle pieces) were used to quantitatively compare the effect of both control methods on the users' performance while completing the assigned tasks. The results are summarized in Table II per method showing the average score across both trials. The VAHR method required nearly 41% less Robot Attention Demand while ensuring almost 93% more Fan-out time. This shows significant improvement in controlling both robots simultaneously without the need for increased attention from the user. Although users needed more time to complete the task using VAHR (254 seconds) compared to the standard mouse-and-keyboard method (234 seconds), VAHR allowed the users to solve more pieces of the puzzle averaging a 62.5% improvement. An ANOVA test was conducted to assess the statistical significance of the obtained objective results. The p-values obtained from comparing all metrics under both control methods are listed in Table II. The ANOVA test results show strong evidence against the null hypothesis (i.e., \(p-values<\alpha=0.05\)), thus proving that the results are statistically significant. Figure 6 shows the averages of the NASA Task Load Index subscales. The NASA Task Load Index consists of a scale of 21 assessment levels which we converted to a percentile score from 0% at level 1 to 100% at level 21. The lower score reflects less load on the subject under testing. While VAHR required 17 on average than the traditional communication method, the physical load on the subject was decreased by 41.6%. The temporal load was almost the same for both methods with a 1% difference. The users assess their own performance to be 22.6% better when using the traditional method which is also reflected in the 9.6% more trust. The subjective assessment of performance contradicts the objective evaluation scores as the subjects show superior performance while completing the tasks using VAHR. We estimate that the discrepancy goes back to the fact that the proposed VAHR method is relatively new to the subjects who need time to build trust in the new control method. VAHR required 3.5% more effort on average from the users to complete the assigned tasks but decreased the frustration level by 5.1% and improved the subjects' confidence in the ability to complete the tasks successfully by 1.7%. The ANOVA test conducted on the subjective results obtained from both control methods resulted in a p-value of 0.28 (\(p-value<\alpha=0.05\)) which doesn't show sufficient statistical significance for the obtained results. Individual ANOVA tests were conducted on the results of each subjective metric of the NASA TLX to check for statistical significance. None of the subjective results received a p-value lower than 0.05 across the two methods and therefore the null hypothesis cannot be rejected for any of the subjective metrics. Since VAHR is relatively new to the users, we consider the change in the NASA Task Load Index scores across different trials. As shown in Fig. 7, the users felt that the task required less mental and temporal demand in the second trial compared to the first one, with each of the two metrics registering an improvement of around 29%. 
Improvement is also evident in the effort index, with around 15% and the highest percentage improvement is in the frustration index which reaches more than 40% in the second trial. On the other hand, VAHR required 29% more physical demand on average from the users in their second trial where they also felt that their performance and confidence dropped by around 3% and 8% respectively. The trust score remained constant across both trials. ## VI Conclusion We present VAHR, a scalable cloud-based communication paradigm that connects humans to robots through virtual assistants. This communication system enables all agents to communicate freely with each other through speech. While it requires more setup than traditional approaches, cloud services provide increased robustness and scalability. We test our system against standard mouse-and-keyboard control through a set of human-subject experiments, which demonstrated the advantage of using VAHR in objective metrics, providing at least 41% improvement. Part of the subjective measures noted by users shows a preference for mouse-and-keyboard control, which we attribute to unfamiliarity with this novel communication scheme. However, the high p-values obtained from the ANOVA tests on both combined and individual subjective metrics show low statistical significance of the subjective test results. This adds evidence to the unfamiliarity of the subjects with the proposed control/communication method, even from a self-assessment point of view. We suggest that our system would be best suited for environments such as warehouses and hospitals, where physical tasks have to be balanced with robot control.
2304.02896
Phase transitions in the Prisoner's Dilemma game on scale-free networks
We study stochastic dynamics of the Prisoner's Dilemma game on random Erd\"{o}s-R\'{e}nyi and Barab\'{a}si-Albert networks with a cost of maintaining a link between interacting players. Stochastic simulations show that when the cost increases, the population of players located on Barab\'{a}si-Albert network undergoes a sharp transition from an ordered state, where almost all players cooperate, to a state in which both cooperators and defectors coexist. At the critical cost, the population oscillates in time between these two states. Such a situation is not present in the Erd\"{o}s-R\'{e}nyi network. We provide some heuristic analytical arguments for the phase transition and the value of the critical cost in the Barab\'{a}si-Albert network.
Jacek Miękisz, Javad Mohamadichamgavi, Jakub Łącki
2023-04-06T07:02:12Z
http://arxiv.org/abs/2304.02896v2
# Phase transitions in the Prisoner's Dilemma game on scale-free networks ###### Abstract We study stochastic dynamics of the Prisoner's Dilemma game on random Erdos-Renyi and scale-free Barabasi-Albert networks with a cost of maintaining a link between interacting players. We show that when the cost increases, the population of players located Barabasi-Albert network undergoes a sharp transition from an ordered state, where almost all players cooperate, to a state in which both cooperators and defectors coexist. At the critical cost, the population oscillates in time between these two states. **Keywords**: evolutionary game theory, social dilemmas, Prisoner's Dilemma game, scale-free network, cost of links, stochastic imitation dynamics, phase transitions ## I Introduction Cooperation between unrelated individuals in human and animal societies is an intriguing issue in biology and social sciences [1; 2; 3; 4; 5; 6]. One can describe it within the framework of evolutionary game theory and especially the Prisoner's Dilemma game. In this game, two players simultaneously decide whether to cooperate or to defect. The mutual cooperation gives both of them the reward \(R\) which is higher than the punishment \(P\) resulting from the mutual defection. However, a cooperating player is tempted to defect to receive the highest payoff \(T\) leaving the other cooperating player with the lowest payoff \(S\). Payoff inequalities \(S<P<R<T\) imply that defection gives a player a higher payoff than cooperation regardless of a strategy adopted by his opponent. Therefore rational individuals defect in spite of the fact that they would be better off if they cooperated. In the framework of evolutionary game theory [7; 8; 9], payoffs are interpreted as numbers of offspring who inherit strategies of their parents. The evolution of very large (infinite) populations is usually modeled by differential or difference replicator equations which describe time changes of fractions of the population of individuals playing given strategies [10; 11]. In the case of the Prisoner's Dilemma, the long-run of such dynamics is the population consisting of just defectors. In replicator dynamics, players receive average payoffs weighted by frequencies of strategies in the infinite population. However, real populations are finite and individuals receive payoffs (not average payoffs) which result from interactions with random opponents in well-mixed populations or neighbors in spatially structured populations. In their pioneering paper [12], Nowak and May located players on regular graphs and allow them to interact only with their neighbors. The payoff of any player is then the sum of payoffs resulting from individual games. In discrete time moments, players imitate neighbors with the highest payoff obtained in the previous round, making perhaps mistakes. In stationary states of such stochastic dynamics, one observed various structures of coexisting cooperators and defectors [13; 14]. Since then various versions of spatial Prisoner's Dilemma and other games have been extensively studied, see a review paper [15]. It was shown and generally understood that cooperation can be maintained in space-structured populations. Cooperating players tend to form clusters, receive high payoffs and therefore are immune to invasion by defectors. Recently there appeared papers indicating that the structure of a network on which players are located may play a significant role in promoting the cooperation. 
Various non-regular and random graphs were investigated. In particular, Santos and Pacheco [16; 17] showed that the scale-free Barabasi-Albert network favors cooperation for a large range of game parameters. In such a heterogeneous graph, there are vertices with many edges, the so-called hubs. Players located on hubs interact with many individuals. It was shown that the existence of hubs favors cooperation. However, maintaining social ties can be costly. It is therefore natural to introduce participation costs in spatial games. It was shown in [18] that participation costs reduce the advantage of heterogeneous networks in maintaining a high level of cooperation. Here we study the equilibrium behavior of the imitation dynamics of systems of interacting individuals playing the Prisoner's Dilemma game on random Erdos-Renyi [19] and scale-free Barabasi-Albert networks [20; 21]. The stochastic dynamics in spatial games are similar to stochastic updating in Ising and lattice-gas models in statistical mechanics. However, in spatial games there does not in general exist a global order parameter, like the energy or free energy in the Ising model, which the system wants to optimize. Similarities and differences between stochastic dynamics in spatial games and in systems of many interacting particles were discussed in [15; 22; 23]. Critical phenomena in random networks were studied very extensively; for a review see [24]. The mean-field approximation in the Ising model on the Barabasi-Albert network was used in [26; 27], and phase transitions in voter models were analysed in [28; 29; 30]. We have performed Monte-Carlo simulations to explore the dependence of the cooperation level in the stationary state of the imitation dynamics on the participation cost. We report that in the case of the Barabasi-Albert network we observe a critical value of the cost at which a population changes abruptly from a high to a lower level of cooperation.

## II Model

Players are located on vertices of the Erdos-Renyi (ER) [19] and the scale-free Barabasi-Albert (BA) networks [20; 21]. We build the ER network by putting with probability \(p\) an edge between every pair of \(N=10^{4}\) vertices. It follows that the average degree of vertices (the average number of neighbors) is equal to \(\alpha=p(N-1).\) The BA network is built by the preferential attachment procedure. We start with \(m_{o}\) fully connected vertices and then we add \(N-m_{o}\) vertices, each time connecting them with \(m\) already available vertices with probabilities proportional to their degrees. If \(m_{o}=\alpha+1\) and \(m=\alpha/2,\) then we get a graph with the average degree equal to \(\alpha\). It is known that such a graph is scale-free with the probability distribution of degrees given by \(p(k)\sim k^{-3}\) [20; 21; 25]. Individuals play the Prisoner's Dilemma game with their neighbors. As in [12], we set the game parameters \(S=P=0,R=1,\) introduce a cost \(\gamma\) of maintaining a link paid by both connected players, and hence our payoff matrix reads:

\[\begin{array}{c|cc} & C & D \\ \hline C & 1-\gamma & 0-\gamma \\ D & T-\gamma & 0-\gamma \end{array}\]

where the entry \(ij\) is the payoff of the row player using the \(i\)-th strategy while the column player uses the \(j\)-th one. At discrete moments of time, all individuals interact with their neighbors and receive payoffs which are sums over individual games. Then the imitation process takes place.
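As a concrete illustration of the networks and payoffs just described, a minimal sketch using the standard NetworkX generators is given below; NetworkX is our assumption rather than the authors' code, and its Barabasi-Albert generator does not start from \(m_{o}\) fully connected vertices, so it only approximates the construction in the text. The imitation update itself is described next.

```
import numpy as np
import networkx as nx

N = 10**4       # number of players
alpha = 12      # average degree (the value used in Fig. 3)
gamma = 0.46    # cost of maintaining a link
T = 1.5         # temptation to defect; S = P = 0 and R = 1 as in the text

# Erdos-Renyi network: edge probability p gives average degree alpha = p(N - 1)
er = nx.erdos_renyi_graph(N, alpha / (N - 1))

# Barabasi-Albert network: preferential attachment, m = alpha / 2 edges per new vertex
ba = nx.barabasi_albert_graph(N, alpha // 2)

# Payoff of a player against a single neighbor, including the link cost gamma
# (row/column index 0 = cooperate, 1 = defect)
payoff = np.array([[1.0 - gamma, 0.0 - gamma],
                   [T - gamma,   0.0 - gamma]])
```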
A randomly chosen player compares his payoff to payoffs of all his neighbors and with the probability \(1-\epsilon\) chooses the strategy which provided the highest payoff in the previous round and with the probability \(\epsilon\) adopts a random strategy, we fix \(\epsilon=10^{-3}\). We interpret \(\epsilon\) as a measure of irrationality of players or simply the noise level. This completes one step of the discrete-time dynamics - a Markov chain with \(2^{N}\) states. Our Markov chain is ergodic aperiodic and irreducible and therefore it has a unique stationary state probability distribution. To find a cooperation level in the stationary state we perform stochastic simulations. We start with a completely random initial conditions with the fraction of cooperators \(=1/2\). Then we perform \(10^{5}\) Monte-Carlo rounds followed by \(10^{4}\) rounds, in which frequencies of cooperators are computed. One round consists of \(N=10^{4}\) steps, where \(N\) is the number of players, so that in every round, on average each player has the opportunity to update his strategy. We repeat the simulation 50 times and average the results. ## III Results Stationary fractions of cooperators for various average degrees of vertices \(\alpha\) of the Erdos-Renyi (ER) and the Barabasi-Albert (BA) networks as a function of the cost \(\gamma\) of maintaining one link for the temptation to defect \(T=1.5,1.7\) and \(1.9\) are shown in Fig. 1 and as a function of \(T\) for \(\gamma=0.46\) in Fig. 2. We observe that the cost \(\gamma\) plays the crucial role in the long-run behavior of the system. The effect of \(\gamma\) is much bigger for the BA network than for the ER one. For negative and small positive values of \(\gamma\), the level of cooperation is much higher for the BA network than for the ER one; for bigger \(\gamma\) the cooperation level is higher for the ER network. Our main result is that in the case of the BA network, when the cost increases, the population of players undergoes a sharp transition from an efficient ordered state, where almost all players cooperate, to a disordered state in which both cooperators and defectors coexist. For \(T=1.5\), this critical value of \(\gamma\) is about \(0.46\). This is reminiscent of the first order phase transition present in statistical mechanics models of interacting particles. In such models, at the critical point there coexist two (or more) phases of the system. A typical example is the presence of two phases, up and down, in the ferromagnetic Ising model at the zero external magnetic field below the critical Curie temperature. To see if such a situation may be present here in the model of interacting players we looked at the time evolution of the frequency of cooperation. In Fig. 3 we see that for \(\gamma=0.4\) (\(T\)=1.5 and \(\alpha=12\)), that is below a critical value, the population basically stays at an ordered state where almost all players cooperate. For \(\gamma=0.48\), the population settles at a state in which both cooperators and defectors coexist. However for \(\gamma=0.46\), we see that the system oscillates between these two states. Again, this is a typical situation in finite systems of interacting particles with a discontinuous phase transition in the infinite-system limit. ## IV Discussion We investigated how the cost of maintaining links between players affects the cooperation level in the spatial Prisoner's Dilemma games. 
In the case of the Barabasi-Albert network, we observed that when the cost increases, the population of players undergoes a sharp transition from a high to a lower level of cooperation. Our numerical simulations of the time evolution of the frequency of cooperation show that at the critical cost the population oscillates between two states. It means that at such a cost there coexist two population states: an ordered one where almost all players cooperate and one in which both cooperators and defectors coexist. Further research is needed to elucidate the nature of this transition.

**Acknowledgments**: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 955708. J. Miekisz would like to thank Bartosz Sulkowski; constructions and simulations contained in his bachelor's thesis written in 2005 have been extended and are presented in this paper.

Figure 1: Fraction of cooperators in the stationary state as a function of a cost of maintaining a link.

Figure 2: Fraction of cooperators in the stationary state as a function of \(T\), \(\gamma=0.46\).

Figure 3: Fraction of cooperators after each round in a sample simulation for various values of \(\gamma\). Barabási-Albert network, \(T=1.5\), average connectivity is equal to 12.
2308.08908
A New Look at the YY CrB Binary System
This study presented a new analysis for the TESS-observed W Ursae Majoris (W UMa) binary star YY Coronea Borealis (YY CrB). The light curve was analyzed by the PHysics Of Eclipsing BinariEs (PHOEBE) Python version together with the Markov chain Monte Carlo (MCMC) method. The light curve solutions required a hot spot and l3. New eclipse times from the TESS observations were extracted, and the O-C curve of primary and secondary minima showed an anti-correlated manner. In order to study the O-C curve of minima, minima times between 1991 and 2023 were collected. This investigation reported a new linear ephemeris and by fitting a quadratic function to the O-C curve of minima, calculated the orbital period rate of \mathop P\limits^.\approx 5.786*{10^{-8}} day/year. Assuming mass conservation, a mass exchange rate of \mathop{{M_2}}\limits^.=2.472*{10^{-8}} calculated from the more massive component to the less massive one. Then, by using the light travel time function, the possible third body was determined in the binary and derived the mass of the third body as 0.498M_Sun with a period of \simeq 7351.018 days. The O-C curve analysis and the quantity of mass indicate that the presence of a third body is unlikely. This binary is expected to evolve into a broken-contact phase and is a good case to support the thermal relaxation oscillation model.
Somayeh Soomandar, Atila Poro
2023-08-17T10:42:12Z
http://arxiv.org/abs/2308.08908v1
# A New Look at the YY CrB Binary System

###### Abstract

This study presented a new analysis for the TESS-observed W Ursae Majoris (W UMa) binary star YY Coronae Borealis (YY CrB). The light curve was analyzed by the PHysics Of Eclipsing BinariEs (PHOEBE) Python version together with the Markov chain Monte Carlo (MCMC) method. The light curve solutions required a hot spot and \(l_{3}\). New eclipse times from the TESS observations were extracted, and the O-C curve of primary and secondary minima showed an anti-correlated manner. In order to study the O-C curve of minima, minima times between 1991 and 2023 were collected. This investigation reported a new linear ephemeris and, by fitting a quadratic function to the O-C curve of minima, calculated the orbital period change rate of \(\dot{P}\approx 5.786\times 10^{-8}\frac{day}{year}\). Assuming mass conservation, a mass exchange rate of \(\dot{M_{2}}=2.472\times 10^{-8}\) was calculated from the more massive component to the less massive one. Then, by using the light travel time function, a possible third body in the binary was examined, and its mass was derived as \(0.498M_{\odot}\) with a period of \(\simeq 7351.018\) days. The O-C curve analysis and the quantity of mass indicate that the presence of a third body is unlikely. This binary is expected to evolve into a broken-contact phase and is a good case to support the thermal relaxation oscillation model.

binaries: eclipsing - methods: photometric

## 1 Introduction

W UMa-type systems are recognised by their eclipsing light curves with almost equal minima and a short orbital period. These stars have spectral types ranging from A to middle K, and the convective atmosphere is the main reason for chromospheric activity, as well as starspots, which are signs of the existence of dynamo-generated magnetic activity. YY CrB (HIP 77598, TIC 29287800) is a W UMa binary system discovered by the Hipparcos mission (ESA 1997). This system has been studied in several works; first, Rucinski et al. (2000) revealed the spectral type F8V for the two components. Vaiko et al. (2004) found the light curves to be asymmetric and mentioned the existence of starspots on the components. Gazeas et al. (2005) analyzed the light curve, derived the geometric and photometric parameters, and concluded that this target is a contact binary with weak magnetic activity. Essam et al. (2010) combined photometric and spectroscopic solutions and calculated a fill-out factor of approximately 64 percent and a mass ratio of 0.241. In addition, they studied the changes in the orbital period using the O-C diagram and concluded that the orbital period is decreasing. Yu, Xiang, & Xiao (2015) studied the orbital period changes, reported a decreasing period rate, and concluded that the sinusoidal oscillation can be interpreted as magnetic activity. The results of Essam et al. (2010) and Yu, Xiang, & Xiao (2015) on the rate of period decrease demonstrate that the value of the decrease rate is lowering progressively, indicating that this system was going through an orbital expansion stage of thermal relaxation oscillation (TRO) cycles. Also, understanding the evolutionary status of this target could prove invaluable. Using new space-based data, we have re-analyzed the light curve solution and studied the O-C curve in detail. Moreover, we studied the possibility of a third body in this interesting system.
The structure of the paper is as follows: Section 2 provides information on the TESS observations and the data reduction process. The light curve solution and the estimation of absolute parameters are included in Sections 3 and 4, respectively, the orbital period variation analysis is presented in Section 5, and finally, Section 6 contains the discussion and conclusion.

## 2 Observation and Data Reduction

YY CrB was observed by TESS during sectors 24 and 51 (April 16, 2020-May 13, 2020, and April 22, 2022-May 18, 2022) on Cameras 1 and 3. Two-minute cadence data are available for sector 24, processed by the Science Processing Operations Center (SPOC) pipeline (Jenkins et al., 2016; Jenkins, 2015). Photometric images were downloaded using the Lightkurve package (Lightkurve Collaboration et al., 2018), which provides the functions to download TESS data from the public data archive at MAST1. For sector 24, we used the Pre-search Data Conditioning flux of the Simple Aperture Photometry (PDCSAP). There is no detrended light curve for sector 51. Therefore, we downloaded the TESS Full Frame Images (FFIs) from MAST and used the Lightkurve package to extract the SAP light curve with a mask that is defined by the pixels shown in the left panel of Figure 1. We used the create_threshold_mask function to produce an aperture mask with a threshold equal to 10. This function identifies the pixels in the target pixel file whose median flux is brighter than the threshold times the standard deviation above the overall median. The right panel of Figure 1 shows the phased light curve that was produced.

Footnote 1: [https://mast.stsci.edu](https://mast.stsci.edu)

## 3 Light Curve Solution

Essam et al. (2010) calculated the optimal parameters by combining simultaneous radial velocity and light curve solutions. We started with the initial values of the parameters taken from the solution of the Essam et al. (2010) study. One day of data from TESS sector 24 was utilized for the light curve solution. The 2-min cadence observations help to better analyze the effect of spots on the components. Photometric analysis of the YY CrB system was carried out using the PHOEBE 2.4.9 version, the TESS filter of the code, and the MCMC approach (Prsa & Zwitter, 2005; Prsa et al., 2016; Conroy et al., 2020; Poro et al., 2022). We selected the TESS passband from the code and chose the contact binary mode in PHOEBE based on the light curve's shape and the solutions of previous studies.

Figure 1: Left panel: TESS target pixel file of YY CrB in sector 51. The pixels included in the computation of the SAP are shown as red-bordered pixels. Right panel: phased light curve during sector 51.

The initial and input parameters were as follows: the mass ratio \(q=0.241\) and the effective temperature of the primary component \(T_{1}=5819\) K are from Essam et al. (2010); the gravity darkening coefficients are \(g_{1}=g_{2}=0.32\) and the albedo coefficients are \(A_{1}=A_{2}=0.5\) (Lucy, 1967; Rucinski, 1969). The limb-darkening coefficients were employed as free parameters, and the Castelli & Kurucz (2004) method was used to model the stellar atmosphere. The parameters searched in MCMC include: the orbital inclination \(i\), the mean temperature of the stars \(T_{1,2}\), the mass ratio \(q\), the fillout factor \(f\), the bandpass luminosity of the primary star (\(L_{1}\)), and the third light in total light (\(l_{3}\)). We applied 46 walkers and 1000 iterations to each walker in MCMC.
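To make the sector 51 reduction step concrete, a minimal sketch using the public Lightkurve API is given below; the target identifier, the threshold of 10, and the orbital period follow the text, while the cutout size is an arbitrary assumption, and this is only an illustration rather than the authors' exact script.

```
import lightkurve as lk

# FFI cutout of YY CrB (TIC 29287800) for sector 51
tpf = lk.search_tesscut("TIC 29287800", sector=51).download(cutout_size=10)

# Aperture mask: pixels whose median flux is 10 sigma above the overall median
mask = tpf.create_threshold_mask(threshold=10)

# Simple Aperture Photometry light curve, cleaned and phase-folded on the orbital period
lc = tpf.to_lightcurve(aperture_mask=mask).remove_nans().normalize()
folded = lc.fold(period=0.3765545)  # orbital period of the system in days
folded.scatter()
```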
According to the asymmetry in the brightness of maxima in the light curve of the close eclipsing binary, the solution requires the assumption of a hot spot on the primary component (O'Connell, 1951). According to observational and theoretical light curves in this study, it has not been possible to provide the solution without considering \(l_{3}\). The theoretical fit on the observational light curve for the YY CrB system is given in Figure 2. The corner plot that MCMC produced is displayed in Figure 3. Also, the geometrical structure is plotted in Figure 4, which has a lower temperature at the point of contact between companion stars due to the gravity darkening (Prsa et al., 2016). The calculated parameters together with the values obtained by Essam et al. (2010) are listed in Table 1. ## 4 Absolute Parameters The absolute parameters of the binary system including \(M_{v1,2}\), \(M_{bol1,2}\), \(L_{1,2}\), \(R_{1,2}\), \(M_{1,2}\), \(log(g)_{1,2}\), and \(a\) were calculated. We used Gaia DR3 parallaxes and the parameters of the light curve solution in this study. We followed the same method as done by Poro et al. (2022). First the absolute magnitude \(M_{v}\) of the system was calculated by Equation (1). \[M_{v(system)}=V_{max}-5\log(d)+5-A_{v} \tag{1}\] Figure 2: Light curve solution of the eclipsing binary YY CrB. Observational light curve (blank circle), synthetic light curve (solid red line), and the residuals (blue circle). where the distance of the system from Gaia DR3 (\(d_{pc}=90.07\pm 0.1\)) was derived and \(V_{max}=8.64\pm 0.08\) comes from the VSX2 database. Extinction coefficient \(A_{v}=0.015\pm 0.002\) was obtained using the DUST-MAPS package in Python (Green et al., 2019). Also, Equation (2) can be utilized to determine the primary and secondary components' absolute magnitude. Footnote 2: [https://www.asvso.org/vsx/](https://www.asvso.org/vsx/) \[M_{v1,2}-M_{vtot}=-2.5\log(\frac{l_{1,2}}{l_{tot}}) \tag{2}\] The bolometric magnitude \(M_{bol}\) of each component of the binary was obtained by Equation (3), Figure 3: The corner plots of the light curve solution. \[M_{bol}=M_{v}+BC \tag{3}\] where the effective temperature of the stars is employed to obtain the bolometric correction for the primary and secondary components retrieved \(BC_{1}=-0.111\) and \(BC_{2}=-0.052\) respectively (Flower, 1996). The bolometric correction is presented as polynomial fits in Equation (4). \[BC=a+b(\log T_{eff})+c(\log T_{eff})^{2}+d(\log T_{eff})^{3}+e(\log T_{eff})^{4} \tag{4}\] Then, the luminosity of two components is determined from Pogson's relation (Pogson, 1856), \[M_{bol}-M_{bol\odot}=-2.5\log(\frac{L}{L_{\odot}}) \tag{5}\] where \(M_{bol\odot}\)is taken as \(4.73^{mag}\) from Torres (2010). The radius of primary and secondary components is calculated by the equation (6), \[R=(\frac{L}{4\pi\sigma T^{4}})^{1/2} \tag{6}\] Figure 4: The geometrical structure of YY CrB. where the \(\sigma\) is the Stephen-Boltzmann constant and \(T\) is the temperatures of each components. Additionally, with considering \(r_{mean1,2}\) and \(a=\frac{R}{r_{mean}}\), we calculated the separation \(a\) in average \(a_{1}\) and \(a_{2}\). The resulting parameters and the values obtained by the Essam et al. (2010) study are listed in Table 2. ## 5 The Orbital Period Changes To calculate the eclipse times of minima, we used the same method as done by Soomandar & Abedi (2020). 
First, split the detrended light curves for individual eclipses and fit a Lorentzian function to each eclipse by using the least-squares method. We used the Scipy.curve-fit package in Python to fit the Lorentzian function to the individual eclipses. We used the np.sqrt(np.diag(cov)) function to calculate the standard deviation errors on the parameters. TESS \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & This study & Essam et al. (2010) \\ \hline \(q=M_{2}/M_{1}\) & \(0.2498^{+0.0031}_{-0.0024}\) & \(0.241\pm 0.002\) \\ \(T_{1}\) (K) & \(5621^{+3}_{-3}\) & \(5819\) \\ \(T_{2}\) (K) & \(5944^{+6}_{-8}\) & \(6010\pm 72\) \\ \(i\) (deg) & \(81.50^{+0.36}_{-0.29}\) & \(80.26\pm 0.05\) \\ \(\Omega_{1}=\Omega_{2}\) & \(2.295\pm 0.079\) & \(2.237\) \\ \(l_{1}/l_{tot}\) & \(0.730^{+0.001}_{-0.001}\) & \(0.7508\pm 0.0154\) \\ \(l_{2}/l_{tot}\) & \(0.264\pm 0.001\) & \(0.2492\) \\ \(l_{3}/l_{tot}\) & \(0.006^{+0.001}_{-0.001}\) & \\ \(f\) & \(0.363^{+0.025}_{-0.031}\) & \(0.64\) \\ \(r_{1\,mean}\) & \(0.522\pm 0.018\) & \(0.537\) \\ \(r_{2\,mean}\) & \(0.287\pm 0.028\) & \(0.282\) \\ Phase shift & \(0.08\pm 0.005\) & \\ \hline Spot on the star 1: & & \\ Colatitude \(\theta\) & \(99\pm 1\) & \(90\) \\ Longitude \(\lambda\) & \(325\pm 1\) & \(11.25\pm 1.638\) \\ Angular radii \(\gamma\) & \(18\pm 1\) & \(5.250\pm 0.573\) \\ \(T_{star}/T_{spot}\) & \(1.04\pm 0.02\) & \(0.750\) \\ Spot on the star 2: & & \\ Colatitude \(\theta\) & \(99.487\pm 3.919\) \\ Longitude \(\lambda\) & \(325\) \\ Angular radii \(\gamma\) & \(16.300\pm 1.326\) \\ \(T_{star}/T_{spot}\) & \(1.351\) \\ \hline \hline \end{tabular} \end{table} Table 1: The parameters of the eclipsing binary YY CrB. observations yielded a total of 220 primary and secondary minima, as displayed in the table 3. The new observational eclipse times were calculated. Then, we performed an analysis of observed (O) minus calculated (C) eclipse times (Sterken, 2005). We calculated the O-C curve using the following linear ephemeris (Kreiner, 2004; Yu, Xiang, & Xiao, 2015): \[Min.I=2452500.1757+0.3765545\times E \tag{7}\] The O-C curve of primary and secondary minima for sectors 24 and 51 are plotted in Figure 5. The anti-correlated manner between primary and secondary minima is obvious which is a confirmation of the presence of spots on the contact binary components (Tran et al. (2013); Balaji et al. (2015)). We averaged primary and secondary minima to eliminate the anti-correlated impact when analyzing orbital period changes (Balaji et al., 2015). The calculated values are shown in Table 3. 109 YY CrB observational minima times were recorded in the literature over a 31-year period. The appendix contains a list of the data gathered with uncertainty. Observational minima times converted to BJD-TDB3. Figure 6's left panel depicts the O-C curve of the minima. We presented a new ephemeris for this target as Equation (8) by fitting a linear function on the O-C curve of primary. Footnote 3: [https://astroutils.astronomy.osu.edu/time/hjd2bjd.html](https://astroutils.astronomy.osu.edu/time/hjd2bjd.html) \[Min.I=2458955.8598(\pm 1.1e-4)+0.3765581(\pm 1.1e-7)\times E \tag{8}\] \begin{table} \begin{tabular}{c c c} \hline \hline Absolute parameters & This study & Essam et al. 
(2010) \\ \hline \(M_{bol1}(mag)\) & \(4.083\pm 0.074\) & 3.939 \\ \(M_{bol2}(mag)\) & \(5.246\pm 0.072\) & 5.173 \\ \(L_{1}(L_{\odot})\) & \(1.832\pm 0.121\) & 2.580 \\ \(L_{2}(L_{\odot})\) & \(0.628\pm 0.041\) & 0.668 \\ \(R_{1}(R_{\odot})\) & \(1.430\pm 0.049\) & 1.427 \\ \(R_{2}(R_{\odot})\) & \(0.749\pm 0.026\) & 0.757 \\ \(a(R_{\odot})\) & \(2.674\pm 0.080\) & 2.64 \\ \(M_{1}(M_{\odot})\) & \(1.448\pm 0.131\) & 1.467 \\ \(M_{2}(M_{\odot})\) & \(0.362\pm 0.037\) & 0.357 \\ \(log(g)_{1}(cgs)\) & \(4.288\pm 0.008\) & 4.295 \\ \(log(g)_{2}(cgs)\) & \(4.248\pm 0.012\) & 4.232 \\ \hline \hline \end{tabular} \end{table} Table 2: The absolute parameters of YY CrB. Figure 5: Left panel: primary and secondary O-C curve of minima for sector 24. Right panel: primary and secondary minima for sector 51 (primary O-C curve in black circles and secondary O-C curve in red circles.). The O-C curve of minima is calculated with the new ephemeris and the resulting curve shows the same shape as the left panel of Figure 6. we fitted a quadratic function to the O-C curve in order to investigate the variations in the orbital period: \[T_{mid}(E)=T_{0}+PE+\frac{1}{2}\frac{dP}{dt}E^{2} \tag{9}\] where mid-eclipse times \(T_{mid}\) are described by \(T_{0}\) is the reference mid-eclipse time, P is the orbital period and, E is the epoch of eclipses (Patra et al., 2017). And the quadratic fit showed a drop in period, which corresponded to the quadratic plot in Figure 6's left panel. We determined the rate of period decline using the model's quadratic coefficient as Equation (10) \[\dot{P}=\frac{2\times-1.124\times 10^{-11}}{0.3765545}=-5.786\times 10^{-8}\pm 9.96 5\times 10^{-9}\frac{day}{year} \tag{10}\] Considering \(M_{1}=1.448M_{\odot}\) for the primary and \(M_{2}=0.362M_{\odot}\) for the secondary one calculated in this study and using Equation (11) and mass conservation, the rate of mass exchange between primary and secondary components was estimated. \[\frac{\dot{P}}{P}=-3\frac{\dot{M}_{2}(M_{1}-M_{2})}{M_{1}M_{2}}\Rightarrow\dot {M}_{2}=+2.472\times 10^{-8}\pm 0.190\times 10^{-8}M_{\odot}yr^{-1} \tag{11}\] The positive sign indicates the direction of mass transfer from the more massive to the less massive component. The cyclic changes are shown by the residuals of the quadratic fit. As a result, we investigated the Light Travel Time Effect (LTTE) as a possible cause of the O-C curve variations. The following periodogram analysis was performed with the Period 04 software (Lenz & Breger, 2005) for the residuals of a quadratic fit. The peak of frequencies in the periodogram analysis of residuals shows a period of 7351.018 days. Then, we used the least square method to fit the Light Travel Time (LTT) formula on the O-C curve (Irwin, 1952): \[(O-C)_{LTT}=A\times\left(\frac{1-e^{2}}{1+e\cos\upsilon}\sin(\upsilon+ \omega)+e\sin(\omega)\right) \tag{12}\] where \(A=\frac{a_{12}\sin i}{c}\), and \(a_{12}\) is the semi-major axis of the relative orbit of the eclipsing system around the center of mass (in Au unit), \(i\) is the inclination of the third-body orbit, \(e\) is the eccentricity of the supposed third body, \(\omega\) is the longitude of periastron passage in the plane of the orbit and, \(\upsilon\) is the true anomaly. 
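The Section 5 pipeline (per-eclipse Lorentzian fits, the quadratic ephemeris of Equation (9), and the light-travel-time term of Equation (12) together with the epoch-to-true-anomaly conversion of Equations (13)-(14) described next) can be sketched as follows. This is an illustrative implementation under stated assumptions (NumPy/SciPy least squares and a simple Newton-Raphson solver), not the authors' exact code.

```python
import numpy as np
from scipy.optimize import curve_fit

P_BIN = 0.3765545                         # orbital period of the binary [days], Equation (7)

# --- Per-eclipse Lorentzian fit: the minimum time is the fitted centre t0 ---
def lorentzian(t, depth, t0, gamma, offset):
    return offset - depth * gamma ** 2 / ((t - t0) ** 2 + gamma ** 2)

def fit_minimum(t, flux, p0):
    popt, pcov = curve_fit(lorentzian, t, flux, p0=p0)
    return popt[1], np.sqrt(np.diag(pcov))[1]        # minimum time and its 1-sigma error

# --- Equation (9): quadratic ephemeris fit to the averaged O-C values ---
def fit_quadratic(epochs, oc):
    c2, c1, c0 = np.polyfit(epochs, oc, 2)
    dP_dE = 2.0 * c2                                  # days per orbital cycle
    dP_dt = dP_dE / P_BIN * 365.25                    # days per year (standard conversion;
                                                      # Equation (10) applies its own scaling)
    return (c0, c1, c2), dP_dE, dP_dt

# --- Equations (13)-(14): observed time -> true anomaly via Newton-Raphson ---
def true_anomaly(t, T0_peri, P3, e, n_iter=50):
    M = 2.0 * np.pi * (t - T0_peri) / P3              # mean anomaly
    E3 = np.array(M, dtype=float)                     # starting guess
    for _ in range(n_iter):                           # solve E - e sinE = M
        E3 -= (E3 - e * np.sin(E3) - M) / (1.0 - e * np.cos(E3))
    # branch handling of arctan is simplified; adequate for a sketch
    return 2.0 * np.arctan(np.sqrt((1 + e) / (1 - e)) * np.tan(E3 / 2.0))

# --- Equation (12): light-travel-time term for the residuals of the quadratic fit ---
def ltt(t, A, e, omega, P3, T0_peri):
    nu = true_anomaly(t, T0_peri, P3, e)
    return A * ((1 - e ** 2) / (1 + e * np.cos(nu)) * np.sin(nu + omega) + e * np.sin(omega))
```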
To fit the LTT function to the residuals of the O-C curve, we have to convert the epoch to the true anomaly and Kepler's formula provides the link between the eccentric anomaly and the observed eclipse time: \[E_{3}-e\sin E_{3}=\frac{2\pi}{P_{3}}(t-T_{0}) \tag{13}\] The equation (13) was calculated using Newton-Raphson's method for every eclipse time of minima and considering equation (14) the epochs converted to the true anomaly. \[\begin{array}{l}\tan\frac{\nu}{2}=(\frac{1+e}{1-e})^{1/2}\tan\frac{E_{3}}{2} \\ t=T_{0}+epoch\times P_{binary}\end{array} \tag{14}\] where \(P_{binary}\), t, \(P_{3}\), \(E_{3}\), and \(T_{0}\) are the period of binary, the time of observed minima, the period of the third body, the eccentric anomaly and, the time of periastron passage respectively. By assuming a coplanar orbit (\(i=90\)), we determined the lower limit for the mass of the third body. The calculated parameters of the third body are listed in Table 5 and the related curve is plotted in the right panel of Figure 6. Assuming the third body is a main-sequence star, this corresponds to the M1V spectral type with a brightness of \(0.041L_{\odot}\)4, or \(0.016\) of the total luminosity which doesn't agree with the value \(l_{3}\) determined in section 3. Footnote 4: [http://www.pas.rochester.edu/](http://www.pas.rochester.edu/)\(\sim\)emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt We are not certain that \(l_{3}\) produced from the TESS light curve analysis represents a valid observation of flux from a third body in the system, despite the computation of the effect of the third body, especially as there is no exact \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Min. & Epoch & O-C & Min. & Epoch & O-C & Min. & Epoch & O-C & Min. & Epoch & O-C \\ \hline [MISSING_PAGE_POST] radial-velocity curve. We estimate that this figure is most likely due to systematic errors in background flux level readings in TESS photos and/or an underestimation of photometric aperture contamination by other stars in the image. So, we investigated the Applegate's effect as a plausible explanation for the cyclical fluctuations in the O-C curve. We calculated the observed relative change of the orbital period throughout one cycle of the binary using the previously obtained modulation period and the O-C amplitude computed from the orbit of the third body simulated in the previous section. \(\frac{\Delta P}{P}=2\pi\frac{(O-C)}{P_{\rm{mod}}}=1.025\times 10^{-5}\)(Applegate, 1992). The value of \(\frac{\Delta P}{P}\)-suggests that the Applegate effect can explain the cyclic changes in the O-C curve of minima. This system shows unequal maxima that is known as the O'Connell effect (O'Connell, 1951) due to the presence of the hot spots (Wilsey and Beaky, 2009). So, this difference implies that the hemisphere of a component emits a different amount of radiation than the other hemisphere. These types of systems have active chromospheres because of the existence of large spots (Knote et al., 2022). Starspots can alter the depth of minima and have an obvious effect on the eclipse light curve (Han, Muirhead, and Swift, 2019). To investigate the effect of the spots on the light curve over both sectors of TESS observations, we calculated the difference between the two depths of the primary and secondary minima. We considered the relative fluxes in phases 0 and 0.5 for every complete individual light curve. The curves that resulted were displayed in Figure 7. 
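The depth-difference test just described (relative flux at phase 0 versus phase 0.5 for every complete cycle) can be written compactly. The sketch below assumes a detrended, normalized light curve held in NumPy arrays; the phase window around each minimum is an illustrative choice, not a value stated in the text.

```python
import numpy as np

def depth_difference(time, flux, period, t0, window=0.02):
    """Depth(I) - Depth(II) per orbital cycle: flux near phase 0.5 minus flux near phase 0."""
    phase = ((time - t0) / period) % 1.0
    cycle = np.floor((time - t0) / period).astype(int)
    diffs = {}
    for c in np.unique(cycle):
        sel = cycle == c
        at_primary = sel & ((phase < window) | (phase > 1.0 - window))   # phase ~ 0
        at_secondary = sel & (np.abs(phase - 0.5) < window)              # phase ~ 0.5
        if at_primary.any() and at_secondary.any():
            # a deeper primary minimum gives a larger positive difference
            diffs[c] = np.median(flux[at_secondary]) - np.median(flux[at_primary])
    return diffs  # plot the values against cycle number, as in Figure 7
```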
YY CrB has the values of the DepthI - DepthII as large as about 10% of the variable light amplitude. And this is possible because \begin{table} \begin{tabular}{c c} \hline \hline Parameters of third body & Value \\ \hline Eccentricity (e) & \(0.689\pm 0.005\) \\ The longitude of periastron passage (\(\omega\)) & \(57.6\pm 1.8\) \\ Period (days) & \(7351.018\) \\ Amplitude (minutes) & \(17.28\pm 1.44\) \\ The time of periastron passage (\(T_{0}\)) & \(2460201\) \\ Projected semi-major axis\(\times\sin i\) & \(2.11\pm 0.17\) \\ Mass Function (\(MassFunction(f_{m})\)) & \(0.023\pm 0.006\) \\ \(M_{3}\sin i(i=90,M_{\odot})\) & \(0.498\) \\ \(\sum{(O-C)^{2}}\) & \(0.001\) \\ \hline \hline \end{tabular} \end{table} Table 4: Parameters of the third body. Figure 6: Left Panel: The O-C data points of the minima (black circle) and, the polynomial fit (red line). Right Panel: LTTE on the residuals of the polynomial fit; LTTE (black line), the O-C data points after subtracted polynomial fit (red circles), and the residuals of the LTT effect fit (blue circles). of the migration and evolution of spots with time on the surface of two components that cause cyclic magnetic activity. ## 6 Discussion and Conclusion Based on the estimated mass ratio, fill-out factor and inclination angle, YY CrB is an over-contact binary with an increased orbital period. Essam et al. (2010) calculated the decreased period rate \(1.194\times 10^{-6}\frac{day}{year}\). Yu, Xiang, & Xiao (2015) considered all of the minima time published until 2013 and calculated a secular period decrease with a rate of \(6.727\times 10^{-7}\frac{day}{year}\). In this study, the decreasing value of period rate \(5.786\times 10^{-8}\frac{day}{year}\) indicates that the rate of period changes has been decreased. And, when mass conservation is taken into account, the mass transfer from the Roche-lobe-filling primary component to the secondary component is \(\underset{2}{\overset{M}{M}}=2.472\times 10^{-8}M_{\odot}yr^{-1}\). When the stated results in Yu, Xiang, & Xiao (2015) are compared to the value of mass transfer in this study, it is clear that mass transfer has been decreased and the distance between two components is growing while the value of fill-out factor is decreasing (compare the values of fill-out factors in Table 1). And this target may evolve to shallow-contact binary via the thermal relaxation oscillation (TRO) model (Flannery (1976); Robertson & Eggleton (1977)) and ultimately reach a broken-contact phase (Lucy (1976)). The mass ratio of the components, which is related to the mass transfer, is the crucial parameter in the evolution of the close binary stars. Table 5 contains a list of contact binaries with low mass ratios \(<0.25\). In order to explain the evolutionary status of the YY CrB system, we provide the mass-luminosity (\(M-L\)) diagram displayed in Figure 8. The Zero-Age Main Sequence (ZAMS) and the Terminal-Age Main Sequence (TAMS) are plotted along with the selected contact binaries with low mass ratios. It is obvious that the more massive primary components are around the ZAMS line, meaning they are not evolved or little evolved. Also, the less massive secondary components have evolved away from the main sequence stars and over-luminosity comparing the stars with the same mass in the main sequence. In addition, the orbital angular momentum of YY CrB has a value of \(51.585\pm 0.067\). 
The \(logJ_{0}-logM\) diagram shows the position of the system (Figure 9), and this diagram shows that YY CrB is in a contact binary systems region. According to the study Yu, Xiang, & Xiao (2015) and the periodic changes in the residuals of the quadratic fit on the O-C curve, the potential of excitability of the third body was investigated. The light-time function fitting revealed the existence of a body with the value of \(0.498M_{\odot}\). This number equates to \(0.016\) of total brightness, which differs from the value of \(l_{3}\) determined in section 3. We explored the Applegate effect as a possible explanation for the fluctuation in the O-C curve because there is no exact radial-velocity curve. Figure 7: Right panel: the difference between the depths of Primary and Secondary minima over the 24th sector of TESS observations. Left panel: the difference between the depths of Primary and Secondary minima over the 51st sector of TESS observations. Figure 8: \(M-L\) diagram of selected contact binaries with low mass ratio. The primary and secondary components of YY CrB are plotted in red and blue colors, respectively. Figure 9: The location of YY CrB on the \(logJ_{0}-logM\) diagram. The quadratic line is based on a study by Eker et al. (2006). YY CrB is a contact binary with a mass ratio less than 0.3, so considering Hut's criteria (Hut, 1980) to investigate the stability is necessary. We used Equation (15) to calculate the ratio of the spin angular momentum to the orbital angular momentum (Yang & Qian, 2015). \[\frac{J_{s}}{J_{o}}=\frac{q+1}{q}[(k_{1}r_{1})^{2}+(k_{2}r_{2})^{2}q] \tag{15}\] where \(r_{1}\) and \(r_{2}\) are the relative radii for the primary and secondary components and \(k^{2}{}_{1,2}=0.06\)(Li & Zhang, 2006), are the dimensionless gyration radii. The calculated value of \(\frac{J_{s}}{J_{o}}=0.087\), which is less than the threshold value therefore this system is stable. This target shows a period increase which is attributed to mass transfer. According to the existing data and the analyses done in this study, the existence of a third body is unlikely for this system, and detailed spectroscopic and photometric observations over a longer length of time are required for the definitive determination. ## Acknowledgements This manuscript has made use of data from the TESS mission. Funding for the TESS mission is provided by the NASA Science Mission Directorate. This research has made use of the SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France. The time of minima data from the Variable Star Observers League in Japan (VSOLJ) websites proved invaluable to the assessment of potential period changes experienced by this variable star. The authors would like to thank Marco Brentel for his help. We are grateful to Ehsan Paki from the BSN project ([https://bsnp.info/](https://bsnp.info/)) for providing Figure 4 of this manuscript, which also shows the color-temperature scale. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline System & \(q\) & \(M_{1}(M_{\odot})\) & \(M_{2}(M_{\odot})\) & \(R_{1}(R_{\odot})\) & \(R_{2}(R_{\odot})\) & \(L_{1}(L_{\odot})\) & \(L_{2}(L_{\odot})\) & Reference \\ \hline V429 Cam & 0.206 & 1.36(12) & 0.28(3) & 1.55(3) & 0.78(2) & 3.56(9) & 0.85(2) & Li et al. (2021) \\ V830 Cep & 0.23 & 0.84(5) & 0.19(1) & 0.91(1) & 0.47(1) & 0.98(1) & 0.29(1) & Li et al. (2021) \\ FP Boo & 0.096 & 1.614(52) & 0.154(21) & 2.310(25) & 0.774(8) & 11.193(99) & 0.920(13) & Gazeas et al. 
(2006) \\ DN Boo & 0.103 & 1.428(39) & 0.148(6) & 1.710(67) & 0.670(110) & 3.750(280) & 0.560(170) & Senavc et al. (2008) \\ FG Hya & 0.112 & 1.444(25) & 0.161(7) & 1.405(9) & 0.591(8) & 2.158(86) & 0.412(17) & Qian \& Yang (2005) \\ CK Boo & 0.0108 & 1.442(14) & 0.154(2) & 1.453(3) & 0.577(10) & 2.74(1) & 0.47(2) & Kalci \& Derman (2005) \\ GR Vir & 0.122 & 1.37(16) & 0.17(6) & 1.42(7) & 0.61(4) & 2.87(28) & 0.48(6) & Qian \& Yang (2004) \\ CSS J234807.2+193717 & 0.176 & 1.19(4) & 0.21(3) & 1.36 (2) & 0.66 (1) & 1.45(24) & 0.42(5) & Christopoulou et al. (2022) \\ J170307 & 0.092 & 1.134(253) & 0.105(24) & 1.204(120) & 0.436(48) & 1.874(572) & 0.271(72) & Liu et al. (2023) \\ J1641000 & 0.095 & 1.402(287) & 0.133(28) & 1.580(144) & 0.577(58) & 3.912(1.109) & 0.512(142) & Liu et al. (2023) \\ J223837 & 0.093 & 1.541(306) & 0.144(30) & 1.784(159) & 0.646(64) & 5.463(1.534) & 0.704(192) & Liu et al. (2023) \\ CSS J222607.8+0620170 & 0.221 & 1.49(3) & 0.33(13) & 1.51(5) & 0.81(2) & 3.35(65) & 1.15(24) & Sun et al. (2020) \\ CSS J012559.7+203404 & 0.231 & 1.38(3) & 0.32(12) & 1.42(4) & 0.77(2) & 2.46(66) & 0.81(18) & Sun et al. (2020) \\ CSS J153855.6+042903 & 0.187 & 1.44(5) & 0.27(12) & 1.37(4) & 0.66(2) & 2.94(96) & 0.30(10) & Sun et al. (2020) \\ CSS J141923.2\(-\)013522 & 0.168 & 1.31(5) & 0.22(11) & 1.23(4) & 0.57(2) & 1.97(68) & 0.33(11) & Sun et al. (2020) \\ CSS J130111.2\(-\)132012 & 0.108 & 1.38(3) & 0.15(12) & 1.49(5) & 0.61(2) & 2.49(57) & 0.40(9) & Sun et al. (2020) \\ CSS J165813.7+390911 & 0.183 & 1.09(3) & 0.20(9) & 1.05(3) & 0.49(1) & 0.92(24) & 0.24(6) & Sun et al. (2020) \\ V870 Ara & 0.082 & 1.546(54) & 0.127(37) & 1.64(6) & 0.63(5) & 2.64(17) & 0.42(6) & Poro et al. (2021) \\ TYC 6995-813-1 & 0.111 & 1.23(1) & 0.135(1) & 1.46(1) & 0.60(1) & 2.293(4) & 0.58(2) & Wadhwa et al. (2021) \\ NSVS 13602901 & 0.171 & 1.19(2) & 0.203(10) & 1.69(1) & 0.79(1) & 2.05(4) & 0.58(2) & Wadhwa et al. (2021) \\ NSVS 5029961 & 0.151 & 1.872(468) & 0.284(72) & 1.573(119) & 0.680(53) & 3.403(14) & 0.610(26) & Zheng et al. (2021) \\ CSS J02214.4+044340 & 0.201 & 1.44(25) & 0.29(5) & 1.26(8) & 0.65(4) & 1.718(191) & 0.416(50) & Liu \& Li (2021) \\ HV Aqr & 0.15 & 1.240(28) & 0.186(17) & 1.456(12) & 0.601(5) & 3.326(213) & 0.638(44) & Gazeas et al. (2021) \\ ZZ PsA & 0.078 & 1.213(8) & 0.095(1) & 1.422(4) & 0.559(4) & 2.20(4) & 0.63(4) & Wadhwa et al. (2021) \\ NSVS 1926064 & 0.160 & 1.558(38) & 0.249(6) & 1.605(13) & 0.755(42) & 3.91(28) & 0.641(33) & Kjurkchieva et al. (2020) \\ \hline \hline \end{tabular} \end{table} Table 5: Absolute parameters for low mass ratio contact binaries. ORCID IDS Somayeh Soomandar: 0000-0002-9520-9573 Atila Poro: 0000-0002-0196-9732 APPENDIX ## Appendix A Available Minima Times The appendix table displays the minima times along with their error in the first column, the epochs in the third column, the O-C values in the fourth column, and the references in the final column. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Min.(\(BJD_{TDB}\)) & Epoch & O-C & Reference & Min.(\(BJD_{TDB}\)) & Epoch & O-C & Reference \\ \hline 955.8695(12) & -4101 & -0.0562 & Pribulla et al. (2001) & 4308.3899(2) & 4802 & -0.0005 & Parimucha et al. (2009) \\ 955.8718(6) & -4101 & -0.0539 & Rucinski, et al. (2000) & 4564.2595 & 5481.5 & 0.0003 & Nagai (2009) \\ 1318.4993 & -3138 & -0.04847 & Essam et al. (2010) & 4604.36215(1) & 5588 & -0.0001 & Yilmaz et al. (2009) \\ 1318.5001 & -3138 & -0.0475 & Essam et al. (2010) & 4605.4917(2) & 5591 & -0.0002 & Yilmaz et al. 
(2009) \\ 1361.4275(3) & -3024 & -0.0475 & Keskin et al. (2000) & 4628.4657(2) & 5652 & 0.0039 & Parimucha et al. (2009) \\ 1361.4275(2) & -3024 & -0.0474 & Keskin et al. (2000) & 4632.414(3) & 5662.5 & -0.0012 & Parimucha et al. (2009) \\ 1368.3965(4) & -3005.5 & -0.0447 & Keskin et al. (2000) & 4648.4191(30) & 5706 & 0.0000 & Parimucha et al. (2009) \\ 1368.3966(4) & -3005.5 & -0.0446 & Keskin et al. (2000) & 4648.4201(10) & 5705 & 0.0009 & Hubscher et al. (2009) \\ 1370.4659(6) & -3000 & -0.0463 & Keskin et al. (2000) & 4688.3352(3) & 5811 & 0.0013 & Yilmaz et al. (2009) \\ 1372.3494(3) & -2995 & -0.0457 & Keskin et al. (2000) & 4931.4009(1) & 6456.5 & 0.0011 & Hubscher et al. (2011) \\ 1668.3359 & -2209 & -0.0309 & Essam et al. (2010) & 4931.5883(1) & 6457 & 0.0001 & Hubscher et al. (2011) \\ 1669.4602 & -2206 & -0.0363 & Essam et al. (2010) & 4958.5136(20) & 6528.5 & 0.0018 & Hubscher et al. (2011) \\ 1670.3976 & -2203.5 & -0.0403 & Essam et al. (2010) & 4983.7401(3) & 6595.5 & -0.0009 & Diethelm (2009) \\ 1670.3984 & -2203.5 & -0.0395 & Essam et al. (2010) & 5017.44086(3) & 6685 & -0.0017 & Parimucha et al. (2009) \\ 1674.3548 & -2193 & -0.0369 & Karska \& Maciejewski (2003) & 5213.6297(3) & 7206 & 0.0022 & Parimucha et al. (2011) \\ 1692.4299 & -2144 & -0.0365 & Essam et al. (2010) & 5219.6532(1) & 7222 & 0.0009 & Parimucha et al. (2011) \\ 1692.4319 & -2145 & -0.0345 & Essam et al. (2010) & 5261.8279(1) & 7334 & 0.0014 & Dvorak (2011) \\ 1975.6050 & -1392 & -0.0304 & Pribulla, et al. (2003) & 5264.4630(3) & 7341 & 0.0006 & Parimucha et al. (2011) \\ 1975.6061(1) & -1392 & -0.0293 & Pribulla et al. (2001) & 5311.3444(2) & 7465.5 & 0.001 & Parimucha et al. (2011) \\ 1975.6064(1) & -1393 & -0.0290 & Pribulla et al. (2001) & 5311.5317(2) & 7466 & 0.000 & Parimucha et al. (2011) \\ 1975.6108(7) & -1392 & -0.0246 & Pribulla \& Vanko (2002) & 5351.4463(1) & 7572 & -0.0002 & Hubscher, et al. (2012) \\ 2029.4398(7) & -1250 & -0.0426 & Pribulla \& Vanko (2002) & 5354.4587(2) & 7580 & -0.0002 & Parimucha et al. (2011) \\ 2031.5168(7) & -1244.5 & -0.0369 & Pribulla \& Vanko (2002) & 5420.3573(3) & 7755 & 0.0137 & Parimucha et al. (2011)) \\ 2045.4589 & -1207.5 & -0.0273 & Essam et al. (2010) & 5652.8828(30) & 8372.5 & 0.0045 & Diethelm (2011) \\ 2060.3320 & -1167 & -0.0281 & Essam et al. (2010) & 5665.4931(3) & 8406 & 0.0088 & Parimucha et al. (2013) \\ 2060.3352 & -1167 & -0.0249 & Essam et al. (2010) & 5705.4093(2) & 8512 & 0.0016 & Hubscher, et al. (2012) \\ 2400.1804(2) & -265.5 & -0.0202 & Pribulla et al. (2002) & 6011.5475(2) & 9325 & 0.0051 & Parimucha et al. (2013) \\ 2400.3660 & -264 & -0.0227 & Karska \& Maciejewski (2003) & 55987.6371(2) & 9261.5 & 0.0018 & Parimucha et al. (2013) \\ 2469.4699(4) & -81.5 & -0.017 & Demircan et al. (2003) & 5987.6371(2) & 9261.5 & 0.0018 & Parimucha et al. (2013) \\ 2472.2898(2) & -74 & -0.0209 & Petropoulou et al. (2015) & 5992.5319(2) & 9274.5 & 0.0014 & Parimucha et al. (2013) \\ 2473.4197(2) & -71 & -0.0206 & Petropoulou et al. (2015) & 6005.5237(2) & 9309 & 0.0021 & Parimucha et al. (2013) \\ 2473.4247(4) & -71 & -0.0156 & Demircan et al. (2003) & 6005.5237(2) & 9309 & 0.0021 & Parimucha et al. (2013) \\ 2500.1757 & 0 & 0.000 & Kreiner (2004) & 6149.3679(2) & 9691 & 0.0026 & Parimucha et al. (2013) \\ 2719.3200 & 582 & -0.0104 & Nagai (2004) & 2456199.26169(2) & 9823.5 & 0.0028 & Parimucha et al. (2013) \\ 2764.5082(23) & 702 & -0.0088 & Hubscher et al. 
(2005) & 6742.4439(14) & 11266 & 0.00513 & Hubscher \& Lehmann (2015) \\ 2786.3500(2) & 761 & -0.0071 & Ak \& Filiz (2003) & 6749.4115(5) & 11284.5 & 0.0066 & Hubscher \& Lehmann (2015) \\ 2793.5038(2) & 779 & -0.0079 & Ak \& Filiz (2003) & 6011.5483(2) & 9325 & 0.0018 & Parimucha et al. (2013) \\ 27074.9466(1) & 112149 & 0
2305.10163
Large Language Models Leverage External Knowledge to Extend Clinical Insight Beyond Language Boundaries
$\textbf{Objectives}$: Large Language Models (LLMs) such as ChatGPT and Med-PaLM have excelled in various medical question-answering tasks. However, these English-centric models encounter challenges in non-English clinical settings, primarily due to limited clinical knowledge in respective languages, a consequence of imbalanced training corpora. We systematically evaluate LLMs in the Chinese medical context and develop a novel in-context learning framework to enhance their performance. $\textbf{Materials and Methods}$: The latest China National Medical Licensing Examination (CNMLE-2022) served as the benchmark. We collected 53 medical books and 381,149 medical questions to construct the medical knowledge base and question bank. The proposed Knowledge and Few-shot Enhancement In-context Learning (KFE) framework leverages the in-context learning ability of LLMs to integrate diverse external clinical knowledge sources. We evaluated KFE with ChatGPT(GPT3.5), GPT4, Baichuan2(BC2)-7B, and BC2-13B in CNMLE-2022 and investigated the effectiveness of different pathways for incorporating LLMs with medical knowledge from 7 perspectives. $\textbf{Results}$: Directly applying ChatGPT failed to qualify for the CNMLE-2022 at a score of 51. Cooperated with the KFE, the LLMs with varying sizes yielded consistent and significant improvements. The ChatGPT's performance surged to 70.04 and GPT-4 achieved the highest score of 82.59. This surpasses the qualification threshold (60) and exceeds the average human score of 68.70. It also enabled a smaller BC2-13B to pass the examination, showcasing the great potential in low-resource settings. $\textbf{Conclusion}$: By synergizing medical knowledge through in-context learning, LLM can extend clinical insight beyond language barriers, significantly reducing language-related disparities of LLM applications and ensuring global benefit in healthcare.
Jiageng Wu, Xian Wu, Zhaopeng Qiu, Minghui Li, Yingying Zhang, Yefeng Zheng, Changzheng Yuan, Jie Yang
2023-05-17T12:31:26Z
http://arxiv.org/abs/2305.10163v4
Qualifying Chinese Medical Licensing Examination with Knowledge Enhanced Generative Pre-training Model ###### Abstract Generative Pre-Training (GPT) models like ChatGPT have demonstrated exceptional performance in various Natural Language Processing (NLP) tasks. Although ChatGPT has been integrated into the overall workflow to boost efficiency in many domains, the lack of flexibility in the finetuning process hinders its applications in areas that demand extensive domain expertise and semantic knowledge, such as healthcare. In this paper, we evaluate ChatGPT on the China National Medical Licensing Examination (CNMLE) and propose a novel approach to improve ChatGPT from two perspectives: integrating medical domain knowledge and enabling few-shot learning. By using a simple but effective retrieval method, medical background knowledge is extracted as semantic instructions to guide the inference of ChatGPT. Similarly, relevant medical questions are identified and fed as demonstrations to ChatGPT. Experimental results show that directly applying ChatGPT fails to qualify the CNMLE at a score of 51 (i.e., only 51% of questions are answered correctly). While our knowledge-enhanced model achieves a high score of 70 on CNMLE-2022 which not only passes the qualification but also surpasses the average score of humans (61). This research demonstrates the potential of knowledge-enhanced ChatGPT to serve as versatile medical assistants, capable of analyzing real-world medical problems in a more accessible, user-friendly, and adaptable manner. Keywords:Large Language Model Natural Language Processing Knowledge Enhancement Healthcare Medical Licensing Examination. ## 1 Introduction Large Language Models (LLMs), especially the Generative Pre-Training (GPT) models have achieved improved performance on various tasks, including both conventional Natural Language Processing (NLP) tasks [23] and multi-modal processing tasks [19]. On one hand, GPT models like ChatGPT can accurately understand users' intentions from textual prompts, even for complicated intention descriptions; On the other hand, GPT models can generate correct replies in a logical and coherent manner. Due to the strong capabilities in both understanding and generation, GPT models have received extensive interest from both academia and industry. For example, GPT models have permeated numerous aspects of daily life [3] and have gradually ventured into professional domains, including finance, law, and healthcare [1]. GPT models present a high potential for applications in the healthcare domain. For doctors, GPT models can work as the clinical decision support system and provide assistance in disease diagnosis, medication recommendation, and instruction generation [1]. This can relieve the heavy workload of doctors and alert the misdiagnosis and under-diagnoses; For patients, especially those with limited medical resources, GPT models can serve as versatile medical assistants, capable of analyzing real-world medical problems and providing useful suggestions in a more user-friendly and adaptable manner [28]. However, healthcare is a critical and sensitive domain, and an inaccurate reply or recommendation could result in serious consequences. Therefore, the performance of GPT models in the healthcare domain should be carefully evaluated before clinical applications. 
Encouragingly, recent studies [17, 9] proved that GPT models attain the level of proficiency in medical knowledge akin to that of a junior general practitioner, which is evidenced by the ability to qualify the United States Medical Licensing Examination (USMLE). However, to the best of our knowledge, there is no in-depth investigation conducted on non-English medical exams. Moreover, the lack of flexibility in fine-tuning GPT models limits their capacity for domain adaptation, which is critical in healthcare applications. The question of how to better incorporate various types of healthcare knowledge into GPT models is still under investigation. In addition, given that approximately 90% data for training GPTs is in English [2] and non-English medical corpora are even scarcer, it remains unclear 1) how well the GPT models perform in non-English medical scenarios, 2) how they can be further improved, and 3) what is the effectiveness of different model enhancement techniques. To address the above questions, we intend to apply the GPT model to the China National Medical Licensing Examination (CNMLE) and investigate effective approaches to further improve the performance of ChatGPT by integrating medical domain knowledge. Similar to USMLE in the United States, the CNMLE is an essential qualifying examination to become a certified doctor in China, covering knowledge from 20 medical subjects of four parts: clinical medicine, preclinical medicine, medical humanities, and preventive medicine. Candidates must complete five years of medical education and additionally undergo a one-year clinical practice assessment. Passing CNMLE requires not only a deep and broad understanding of medical knowledge but also the ability to analyze and diagnose complex real-world clinical cases. According to the experiment results, directly applying GPT 3.53 achieves a score of 51 (i.e., only 51% of questions are answered correctly), which fails to pass the qualification threshold of 60. To further improve the performance, we propose two in-context learning [11] strategies: 1) Knowledge Enhancement: we build a medical knowledge base as a source to provide background knowledge in GPT prompts; 2): Few-shot Enhancement: we collect a dataset of historical questions and answers of CNMLE as a question bank to provide few-shot exemplars of GPT prompts. Four types of Chain-of-Thought (CoT) strategies [38] are designed and examined to enrich the information of retrieved sample questions. Experiment results demonstrate that both knowledge and few-shot enhancement can improve model performance significantly. Overall, the main contributions of this paper are as follows: * We evaluate the performance of GPT model in the non-English healthcare domain. In particular, we test GPT model on the China National Medical Licensing Examination (CNMLE). * To further improve the performance, we propose the **K**nowledge and **F**ew-shot **E**nhanced In-Context Learning (KFE) to leverage the in-context learning ability of GPT model with the domain-specific knowledge. We also conduct extensive experiments and in-depth analysis to explore various settings of knowledge and few-shot enhancements. * The GPT with optimal KFE setting achieves a score of 70 in the CNMLE-2022 (passing score: 60), which not only qualifies the medical exam, but also outperforms the average score (61) of human examinees. 
## 2 Related work ### Large Language Model In recent years, language models have experienced a leap in development, revolutionizing the research paradigm in the field of NLP. Starting from the emergence of Elmo (with 94M parameter) [21] and BERT (340M) [33] in 2018, NLP has entered the era of pre-trained models. With the advent of GPT-2 (1.5B) [24], T5 (11B) [25], and GPT-3 (175B) [2], NLP further entered the LLMs (\(>\)100B) period. The amount of computation, the number of model parameters, and the size of the training dataset have all grown at a rapid pace [6]. This continuous quantitative change has led to a qualitative transformation, resulting in the emergence of many outstanding capabilities in LLMs [37]. LLMs significantly improve task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches [2]. To cope with the diverse requirements of various scenarios, various LLMs have been continuously proposed. The recently popular ChatGPT (GPT3.5) has attracted widespread attention, which comprehends human intents behind different instructions and generates corresponding content by employing instruction tuning. And, it aligns its responses with human thought and language habits using Reinforcement Learning from Human Feedback (RLHF) [20]. To meet the high-quality requirements of medical and clinical applications, Google has combined prompting strategies with instruction prompt tuning to adapt LLMs for the medical domain, named Med-PaLM [31]. These models achieve state-of-the-art accuracy on multiple medical datasets. Their results also demonstrate that comprehension, recall of knowledge, and medical reasoning improve with both model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. To promote the widespread adoption of large-scale models, Meta has open-sourced LLaMA [32], a collection of foundational language models ranging from 7B to 65B parameters. Subsequently, Stanford's Alpaca adopts a self-instruct framework [36] to align LLaMA's responses with ChatGPT. This method yields fine performance even with only 7B parameters, significantly enhancing the accessibility of LLMs. Furthermore, ChatDoctor [41] fine-tuned the LLaMA model based on 100k real-world patient-physician conversations from an online medical consultation site. ### Chain-of-Thought Owing to the rich knowledge and outstanding ability of semantic understanding, the LLMs can elicit the detailed reasoning process by Chain-of-Thought (CoT) rather than merely output the answer [38], which improves not only the performance but also the interpretability of various arithmetic, commonsense, and symbolic reasoning tasks. Subsequently, self-consistency [35] was proposed to sample multiple reasoning paths instead of only taking the greedy one, and select the most consistent answer. Kojima et al. [8] designed a simple prompt, "Let's think step by step", to encourage LLMs to elucidate their analysis and then arrive at the answer without additional support, thereby demonstrating that LLMs can serve as effective zero-shot reasoners. Building on this, Zhang et al. [43] developed Auto-CoT, which selects the representative samples by clustering and automatically constructing their reasoning chain using the LLM itself, serving as a demonstration for few-shot learning. Auto-CoT performs competitively compared to Manual-CoT which requires manual designs and greatly reduced time-consuming annotations. 
Additionally, there have been efforts to develop complete and robust frameworks that decouple a complete solution into different steps. Least-to-Most [44] reduces a complex problem into multiple easier subproblems and then sequentially solves them, whereby solving a given subproblem is facilitated by the answers to previously solved subproblems. ReAct [40] defines the reasoning and acting step in the CoT, then decomposes a whole task-solving into reasoning traces and task-specific actions. Particularly, the combination of Wikipedia introduces the external knowledge to generate human-like task-solving trajectories with less hallucination and error propagation. On the basis of ReAct, self-reflection [29] endows the LLMs with dynamic memory and self-reflection capabilities to enhance their reasoning traces and task-specific action. ### LLM in Medicine There are also emerging studies devoted to applying LLMs in the medical domain. The Med-PaLM firstly achieved 67.6% accuracy in USMLE benchmarks [31], which not only answered multiple-choice and open-ended questions accurately but also provided rationale. As a general LLM, ChatGPT also performed at or near the passing threshold for all three parts of the USMLE-2022 and additionally demonstrate a high level of concordance and insight in its explanations through a comprehensive review by physicians [10]. And, GPT-4 [18] exceeds the passing score of USMLE by over 20 points Furthermore, various research attempted to apply LLMs to clinical services. Jeblick et al. [5] and Lyu et al. [14] evaluated the potential of ChatGPT or GPT-4 in translating radiology report into plain language to make medicine easy to understand to a layman. Ma et al [15] proposed ImpressionGPT for radiology report summarization by an iterative optimizing framework with ChatGPT. ChatCAD [34] presented a method for interactive computer-aided diagnosis on medical images using large language models, which transforms and combines the diverse outputs of various visual neural networks into text description, and as the inputs of LLMs to obtain a condensed report, interactive explanations and medical recommendations based on the given image. Additionally, the DeID-GPT [13] was designed to automatically identify and remove the personally identifiable information of medical text, which outperformed existing commonly used methods and showed remarkable reliability in masking private information from the unstructured medical text. Though achieving encouraging progress, there are still many unexplored areas that warrant our attention. Currently, these advanced LLMs have not been evaluated and applied in non-English medical scenarios. Furthermore, previous evaluations have primarily focused on the direct application and overall performance, without delving into how to harness the potential of LLMs in situations with inferior performance. In particular, there has been insufficient investigation into in-context learning and medical domain-specific support. Additionally, there is a lack of systematic analysis and discussion regarding the extent of the effect of different pathways on incorporating LLMs with various medical knowledge. ## 3 Methodology ### Problem Formulation Different from the United States Medical Licensing Examination (USMLE), the China National Medical Licensing Examination (CNMLE) only includes one type of question: multiple-choice questions. 
Here we represent each instance in CNMLE in the form of a triple \(\{Q,O,A\}\) where \(Q\) refers to the question stem, \(O=\{o_{0},o_{1},o_{2},o_{3},o_{4}\}\) refers to the candidate options (in the context of CNMLE, the number of options is five), and \(A\) refers to the answer which is a specific option in \(O\). Therefore, in the context of GPT model, answering CNMLE problems can be formulated as estimating the probability of generating the correct answer \(P(A|Q,O)\) given question \(Q\) and options \(O\). To improve the accuracy of medical examination, specific instructions \(I\) are provided to describe the task. We use two types of instructions here: * **Direct Instruction**: _"Here is a multi-choice question about medical knowledge, please output the only correct answer according to the question."_ We refer to this direct instruction as \(I_{direct}\) which only requires the GPT model to generate the correct answer. Then the task can be formulated as estimating the probability \(P(A|Q,O,I_{direct})\). * **Instruction with inference**: _"Here is a multi-choice question about medical knowledge, please analyze it in a step-by-step fashion and deduce the most likely answer."_ We refer this kind of instruction to \(I_{steps}\), which requires the GPT model to generate both the correct answer as well as the detailed inference steps. Then the task can be formulated as estimating the probability \(P(A|Q,O,I_{steps})\). This kind of instruction is motivated by CoT, which has been found effective in generating the correct answer [38]. Using the direct instruction \(I_{direct}\) and the instruction with inference steps \(I_{steps}\) can reach the score of 51 and 52, respectively, which fail to quality CNMLE. To further improve the performance, we propose the **K**nowledge and **F**ew-shot **E**nhanced In-Context Learning (KFE). Figure 1 displays the framework of KFE which includes two modules: _Medical Knowledge Retriever_ and _Question Bank Retriever_. Given a question and options, the Medical Knowledge Retriever acquires the relevant medical knowledge from the medical knowledge base, which is then integrated into the prompts for GPT model; The Question Bank Retriever acquires questions and corresponding answers from a pre-built Question Bank. These retrieved questions and answers will be further enriched with GPT model and then integrated into the prompts to enable few-shot learning. Figure 1: The workflow of qualifying Chinese Medical Licensing Examination with knowledge enhanced generative pre-training model. (a) list a basic form of prompt that includes the question and options; (b) further includes retrieved related medical knowledge which is in the form of text pieces; (c) includes retrieved pairs of questions and answers as few-shot examples, which are similar to current inputted questions; (d) includes both retrieved knowledge and few-shot examples in prompts. ### Knowledge Enhancement We construct a comprehensive medical knowledge base that is generated from 53 textbooks of People's Medical Publishing House.4 These books are recommended textbooks for the majority of medical schools in China and their quality is well assured. We split the content of each book into text pieces by leveraging the structure of the books. In total, we manage to acquire 68,962 pieces of text, and the average length of the knowledge piece is 130 tokens. 
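A minimal sketch of how such a knowledge base can be indexed and then queried per question-option pair (the BM25 retrieval detailed in the next subsection) is given below. The `rank_bm25` package and `jieba` word segmentation are assumptions for illustration, not necessarily the tools used in this work.

```python
import jieba
from rank_bm25 import BM25Okapi

def build_index(knowledge_pieces):
    """Index the ~68,962 text pieces split from the 53 textbooks."""
    tokenized = [jieba.lcut(piece) for piece in knowledge_pieces]
    return BM25Okapi(tokenized)

def retrieve_knowledge(bm25, knowledge_pieces, question, options, top_n=1):
    """For each option o_i, query with (question || o_i) and keep the best piece,
    yielding the five background-knowledge pieces k = {k_1, ..., k_5}."""
    retrieved = []
    for option in options:
        query = jieba.lcut(question + option)
        retrieved.extend(bm25.get_top_n(query, knowledge_pieces, n=top_n))
    return retrieved
```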
Footnote 4: [https://www.pmph.com/](https://www.pmph.com/) To infer the correct answer to a question, both the questions and all candidate options contain critical information. In many cases, it is required to combine the question and the candidate option together to form complete context information. Therefore, we concatenate each option \(o_{i}\in O\) with its corresponding question \(Q\), which serves as a query, to retrieve the most relevant pieces of knowledge \(k_{i}\) from the knowledge base: \[k_{i}=\arg\max R_{K}(k|(q\,\|\,a_{i})),\] where \(q\,\|\,a_{i}\) refers to the concatenation of the question with one option, \(R_{K}\) represents the knowledge retrieval engine that returns the most relevant knowledge \(k_{i}\) given \(q\,\|\,a_{i}\). To enhance the efficiency of retrieval, we employ BM25 [26], which is an extension of TF-IDF, as our retrieval engine. BM25 has been proven to have decent performance in retrieving examples for in-context learning in QA tasks, even better than sentence embedding-based approaches [27]. Therefore, for all five pairs of questions and options, we can collect 5 pieces of knowledge \(k=\{k_{1},\ldots,k_{5}\}\). This strategy ensures that the retrieved knowledge is relevant to the context of the question and provides more concentrated and useful background knowledge. ### Few-shot Enhancement We initially curate a sizable medical question bank \(B=\{b_{1},b_{2},\ldots,b_{m}\}\), encompassing a significant volume of medical questions derived from historical CNMLE, textbooks, and reference materials. In total, we build a medical question bank with 381,149 questions. Each instance in this question bank includes the question, all five candidate options, and the correct answer. Similar to the aforementioned knowledge retrieval approach, we also query similar examples from the question bank by combining the question and options together. However, instead of enumerating all question and option pairs, we concatenate the question with all options to match similar problems in the question bank. Specifically, we concatenate the question with all choices to generate the context \((q\,\|\,O)\), which is used to search for the top-\(k\) similar examples from the example bank by BM25: \[b_{q}=\arg\max_{1}^{k}R_{B}(b|(q\,\|\,O)),\] where the \(k\) is the number of examples and the \(R_{B}\) denotes the retrieval engine that returns the relevant examples. After retrieving relevant examples, we can leverage the few-shot strategy to enhance the problem-solving capabilities of LLMs. As shown in Figure 2, we propose four strategies to add few-shot enhancement which as listed as follows: * **Question + Options + Correct Answer:** for each retrieved example, we concatenate the question \(Q\), all candidate options \(O\), and the correct answer \(A\) together which is used as the few-shot part in the prompt. * **Question + Options + Generated Answer:** for each retrieved example, we first send the question \(Q\) and all candidate options \(O\) to the GPT model to generate the answer. The acquired answer is then appended back to the question and options as the few-shot part of the prompt. In this manner, for each few-shot example, we need to call the GPT model one more time which brings additional computational cost. Furthermore, since the generated answer could be incorrect, it may mislead the GPT model and in turn reduce the inference accuracy. The advantage is that we no longer require the label of the correct answer of retrieved examples. 
* **Question + Options + Generated Correct Answer:** Different from the above strategy, we only keep examples with the correctly generated answers. For those questions with incorrectly generated answers, we remove them and pick other examples with lower relevance from the Question Bank. * **Question + Options + Correct Answer + Generated Inference Detail:** In this case, we sent the triple \(\{Q,O,A\}\) to the GPT model and let it generate the inference details why the correct answer \(A\) is chosen. The generated inference details are concatenated with questions, options, and the correct answer to form the few-shot section in the prompt. Figure 2: Four different strategies to add few-shot enhancement. ### Knowledge and Few-shot Enhanced In-Context Learning (KFE) In Section 3.2 and Section 3.3, we enhance the in-context learning ability of GPT model to cope with CNMLE, denoted as **KFE**. The overall workflow of KFE is summarized in Algorithm 1. ``` Input : The medical question \(Q\) and its options \(O\), the large generative language model \(G\), medical knowledge base \(K\), medical question bank \(B\), the search engine \(R\), the strategy of enriching the retrieved questions \(S\), and the instruction \(I\) Output : The generated answer of \(\hat{A}\) 1 Initialize the knowledge retriever \(R_{K}\) and the question retriever \(R_{B}\) 2for each \(o_{i}\in O\)do 3 Retrieve the relevant knowledge \(k_{i}\) by \((q\parallel o_{i})\) from \(R_{K}\) 4 5Concatenate all \(k_{i}\) to construct the whole medical background \(k\) 6Retrieve \(k\) examples \(b\) by \((q\parallel o_{1}\parallel...\parallel o_{5})\) from \(R_{B}\); 7for each \(b_{i}\in b\)do 8 Enrich the content \(\hat{b_{i}}\) of retrieved question \(b_{i}\) by \(S\) and \(G\) 9 10Concatenate all \(\hat{b_{i}}\) as few-shot demonstration \(\hat{b}\) for in-context learning 11Given \((Q,O,k,b,I)\), \(G\) generates \(\hat{A}\) (with or without detailed inference) ``` **Algorithm 1**Knowledge and Few-shot Enhanced In-Context Learning ## 4 Experiments and Results ### Dataset As the official qualification examination of clinicians, there are over half a million medical practitioners attending CNMLE every year in China. CNMLE evaluates not only the proficiency of medical knowledge but also the practical skills in real clinics.5 A CNMLE test only includes multi-choice questions which cover 20 medical subjects, with a qualifying score of 60. The majority of these questions can be classified into two categories: medical knowledge questions (MK) and case analysis questions (CA). The MK questions require a broad understanding of medical concepts and terminology, which is essential for medical professionals. Meanwhile, the CA questions involve practical cases that require to be precisely diagnosed or treated according to the patient's basic information and current status, emphasizing applying medical knowledge in clinical practice. Footnote 5: [https://www1.nmec.org.cn/Pages/ArticleInfo-13-10706.html](https://www1.nmec.org.cn/Pages/ArticleInfo-13-10706.html) To avoid the circumstance that the testing questions have been included in the training set of the GPT model, we collect 494 questions from the latest CNMLE held in August 2022 for evaluation. Since the training data of ChatGPT were collected before September 30th, 2021, there is no label leakage problem. ### Settings We chose GPT 3.5-Turbo as the target LLM to evaluate, which includes 175B parameters and drives the online ChatGPT. All tests were conducted by calling OpenAI's official API. 
Unless specified, all experiments used exactly the same parameters and were tested with the same version of the model. We set the inference temperature to 0 to make the response more focused and deterministic. To avoid a performance penalty, we did not limit the response length, and the maximum length of tokens of GPT 3.5-Turbo is 4096 tokens (including prompt and response). The rest parameters are all set to default. ### Baselines To fully reveal the performance of LLMs, we evaluate several competitive baselines as well as different variants of the proposed KFE model as follows. * **Supervised Deep Learning:** SeaReader [42] formulates medical questions as reading comprehension tasks that extract relevant information from many related documents to determine the answer. SeaReader was trained on 230,000 medical questions and tested in CNMLE-2017. * **Domain Pre-training and Fine-tuned:** Med3R [39] consists of free reading (domain pre-training in dozens of medical books), guided reading (supervised learning with retrieved relevant documents), and multi-layer reasoning (integration of reasoning layer of different levels). It was trained on 270,000 medical questions and achieved the SOTA in CNMLE-2017. * **GPT with Direct Instruction:** Here we use the direct instruction \(I_{direct}\). To further investigate the effect of different components of KFE, we conducted extensive experiments on various strategies: _Zero-shot_ denotes the basic approach without knowledge and few-shot enhancement; _Few-shot_ denotes the approach with only few-shot enhancement (as described in Section 3.3); _Knowledge Enhancement_ denotes the approach with only knowledge enhancement (as described in Section 3.2) and KFE denotes the complete proposed approach. * **GPT with Instruction with Inference Steps:** Here we use the instructions with inference steps \(I_{steps}\). The rest settings are the same as _GPT with Direct Instruction_. Here we aim to investigate whether the generated inference details can enhance problem-solving ability. ### Results We compare the proposed KFE with baselines in Table 1. The fully supervised approaches outperform the GPT-based approaches. This is because these supervised approaches are specially tailored for medical exams which cannot be applied to other medical tasks. In addition, these supervised models are trained with more than 200k historical questions which are quite time-consuming. While the GPT-based approaches require less than 10 few-shot examples and do not need to fine-tune the backbone GPT model. Among GPT-based approaches, the proposed KFE not only passed CNMLE-2022 (70.04) but also outperformed the human examinees with a bachelor degree in medicine (64.83). We find that both the knowledge and few-shot enhancement can help to improve the final performance. Integrating either enhancement can outperform the Basic GPT model significantly. Another observation is that the GPT with \(I_{direct}\) outperforms GPT with \(I_{steps}\), this is may due to the generated inference step containing mistakes and hallucinations which mislead the GPT model to generate the incorrect answer. 
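To make Algorithm 1 and the settings above concrete, the following sketch assembles a KFE prompt (retrieved background knowledge, Q+O+Correct-Answer demonstrations, and the direct instruction \(I_{direct}\)) and queries GPT-3.5-Turbo with temperature 0. It reuses the hypothetical `retrieve_knowledge` helper from the earlier sketch, assumes a question-bank retriever with the same BM25 interface, and uses the legacy `openai<1.0` ChatCompletion interface; the prompt wording is a paraphrase, not the exact template.

```python
import jieba
import openai  # legacy (<1.0) ChatCompletion interface assumed

I_DIRECT = ("Here is a multi-choice question about medical knowledge, "
            "please output the only correct answer according to the question.")

def build_prompt(question, options, knowledge_pieces, demonstrations):
    """Assemble a KFE prompt: instruction + retrieved knowledge + few-shot demos + query."""
    knowledge = "\n".join(knowledge_pieces)
    demos = "\n\n".join(
        f"Question: {d['question']}\nOptions: {' '.join(d['options'])}\nAnswer: {d['answer']}"
        for d in demonstrations  # Q+O+Correct-Answer strategy; 9 shots in the best run
    )
    query = f"Question: {question}\nOptions: {' '.join(options)}\nAnswer:"
    return f"{I_DIRECT}\n\nBackground knowledge:\n{knowledge}\n\n{demos}\n\n{query}"

def ask_gpt(prompt, model="gpt-3.5-turbo"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic responses, as in Section 4.2
    )
    return resp["choices"][0]["message"]["content"]

def answer_with_kfe(question, options, kb_bm25, kb_pieces, qb_bm25, qb_items, shots=9):
    knowledge = retrieve_knowledge(kb_bm25, kb_pieces, question, options)   # Section 3.2
    query_tokens = jieba.lcut(question + "".join(options))
    demos = qb_bm25.get_top_n(query_tokens, qb_items, n=shots)              # Section 3.3
    return ask_gpt(build_prompt(question, options, knowledge, demos))
```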
## 5 Ablation Studies and Analysis In this section, we conduct ablation studies and analysis from the following perspectives: 1) we evaluate four different strategies for few-shot enhancement which are displayed in Figure 2; 2) we evaluate the contribution of generated inference details with different length in few-shot enhancement; 3) we also study the contribution of different numbers of few-shot examples; 4) we compare the performance of different instruction strategies \(I_{direct}\) and \(I_{steps}\); 5) the effectiveness of _Medical Knowledge Base_; 6) the effectiveness of _Question Bank Retrieval_; 7) limitations on length and characters of the model responses. ### Effect of Different Strategies for Few-shot Enhancement Figure 2 displays four different strategies for adding few-shot enhancement. As shown in Table 2, the _Q+O+Correct Ans_ achieved the highest score of 59.31. Compared to the other three strategies, _Q+O+Correct Ans_ uses the least generated information from GPT in composing the prompts. Another observation is that _Q+O+Generated Ans_ (51.82) underperformed _Q+O+Generated Correct Ans_ (55.67) by a large margin. These two observations showed that the presence of generated content may impair performance and even lead to a result worse than Zero-shot (52.02), which is consistent with previous in-context learning approaches [7][43] and in conflict with [16]. This is may due to that the generated \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **Acc-MK**(\%) & **Acc-CA**(\%) & **Acc-All**(\%) \\ \hline **Fully Supervised Deep Learning** & & & \\ SeaReader [42] (with 5 documents) & - & - & 57.8 \\ SeaReader (with 100 documents) & - & - & **74.4** \\ Med3R [39] (with 5 documents) & **77.34** & **75.00** & **76.00** \\ \hline **GPT with Instruction \(I_{direct}\)** & & & \\ Zero-shot & 49.17 & 52.08 & 51.01 \\ Few-shot & 65.75 & 62.30 & 63.56 \\ Knowledge Enhancement & 68.51 & 58.15 & 61.94 \\ KFE & **72.93** & **68.37** & **70.04** \\ \hline **GPT with Instruction \(I_{steps}\)** & & & \\ Zero-shot & 51.93 & 52.08 & 52.02 \\ Few-shot & 59.12 & 56.87 & 57.69 \\ Knowledge Enhancement & **72.38** & 54.95 & 61.34 \\ KFE & 66.30 & **64.86** & **65.38** \\ \hline **Human** & & & \\ Passing core & - & - & 60 \\ Average of all examinees & 56.85 & 64.09 & 61.00 \\ Average of all medical bachelors & **61.54** & **67.26** & **64.83** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of Different Methods in CNMLE. content contains mistakes and answering questions in CNMLE requires high precision. Therefore integrating these unconfirmed auto-generated contents in prompts could mislead the GPT model and in turn generate incorrect answers. ### Analysis of Generated Inference Details with Varied Length Given the generated inference details, we use the metric _Inference Step_ to measure its complexity as introduced in [4]. Specifically, we first conduct sentence segmentation on generated inference details and allocate them into ten buckets according to the number of sentences. As shown in Figure 3, the smaller inference steps yield better accuracy on medical examination which is different from the findings in [4], which reported that GPT achieves substantially better performance on reasoning tasks with more inference steps. This may be due to that longer inference steps may contain more mistakes and hallucinations. ### Effect of Different Numbers of Few-shot Examples We investigate how the performance varies with an increase in the number of few-shot examples. 
Here we choose the optimal _the Q+O+Correct Ans_ strategy for few-shot enhancement. Notably, due to the limitation of the maximum token \begin{table} \begin{tabular}{l c c c} \hline \hline **Strategy** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline Q+O+Correct Ans & **62.43** & 57.51 & **59.31** \\ Q+O+Generated Ans & 53.04 & 51.12 & 51.82 \\ Q+O+Generated Correct Ans & 54.14 & 56.55 & 55.67 \\ Q+O+Correct Ans+Generated Inference Detail & 54.14 & **58.15** & 56.68 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of Different Strategies of Few-shot Enhancement. Figure 3: Performance w.r.t. varied length of generated inference details. of the GPT model (4096 tokens maximal). We have increased the number of examples as much as possible and the maximal examples in Few-shot and KFE are both 12. As shown in Table 3, a significant improvement in performance is observed with the increase in example counts. Specifically, the Few-shot method demonstrates an enhancement of up to 8.7, while KFE manifests a maximum improvement of 6.07. Concurrently, we also observed that neither Few-shot nor KFE exhibited a linear improvement with the addition of examples. The performance marginally improved with more than nine examples. In both Few-shot and KFE, the optimal performance is achieved with the inclusion of nine examples. ### Effect of Different Instruction Strategies To investigate the effectiveness of different Instruction Strategies \(I_{direct}\) and \(I_{steps}\) (see Section 3.1), we compared the performance of KFE without and with inference steps. Although prior research has demonstrated generating inference steps significantly improves performance in various reasoning tasks [38], as shown in Table 4, the generation of inference steps reduced performance in the CNMLE task. This result also suggested the possibility of the generation of errors and hallucinations in the reasoning steps and such a limitation that is more serious in professional medical examinations, thus reducing the accuracy. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline Direct Instruction \(I_{direct}\) & 71.27 & 61.98 & 65.38 \\ \hline Instruction with Inference Steps \(I_{steps}\) & 66.30 & 62.62 & 63.97 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of KFE (3-Shot) with Different Instruction Strategies. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline **Few-shot** & & & \\ 1-Shot & 55.25 & 54.63 & 54.86 \\ 3-Shot & 62.43 & 57.51 & 59.31 \\ 6-Shot & 62.98 & 61.34 & 61.94 \\ 9-Shot & **66.30** & **63.90** & **64.78** \\ 12-Shot & 65.75 & 62.30 & 63.56 \\ \hline **KFE** & & & \\ 1-Shot & 69.61 & 60.7 & 63.97 \\ 3-Shot & 71.27 & 61.98 & 65.38 \\ 6-Shot & **74.59** & 65.18 & 68.62 \\ 9-Shot & 72.93 & **68.37** & **70.04** \\ 12-Shot & 73.48 & 66.77 & 69.23 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of Few-shot and KFE with Different Numbers of Examples. ### Effect of Medical Knowledge Base To investigate the effect of related knowledge from the medical knowledge base, we introduce a baseline method _Self-inquiry_ adopted in [30, 22, 12]. 
Firstly, for each candidate option, we query the GPT model with the prompt _"What does that mean of \(\{\)option\(\}\)"_ to obtain the meaning of each option; secondly, we merge all five responses as the internal medical knowledge; thirdly, we query the GPT model with the question together with this model-generated knowledge. As shown in Table 5, with the enhancement of internal knowledge, _Self-inquiry_ achieved a score of 48.79, a reduction of 13.15 points compared with the external knowledge base (61.94). This result suggests that a GPT model trained on the general domain may lack medical knowledge, so _Self-inquiry_ does not work in this specific domain. Nevertheless, it also demonstrates that the GPT model is capable of rapidly digesting and utilizing domain-specific knowledge in reasoning. ### Effect of Question Bank Retrieval As described in Section 3.3, we retrieve few-shot examples according to their similarity to the input question. In this subsection, we compare the performance of relevant examples with random examples. Table 6 shows a significant reduction in performance for both _Q+O+Correct Ans_ and _Q+O+Correct Ans+Generated Inference Detail_ when random questions are used instead of related examples retrieved from the medical question bank. The former witnessed a decline of 7.89 in the score, whereas the latter experienced a decrease of 6.68. \begin{table} \begin{tabular}{l c c c} \hline \hline **Strategy** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline **Retrieved Questions** & & & \\ Q+O+Correct Ans & 62.43 & 57.51 & 59.31 \\ Q+O+Correct Ans+Generated Inference Detail & 54.14 & 58.15 & 56.68 \\ \hline **Random Questions** & & & \\ Q+O+Correct Ans & 54.14 & 49.84 & 51.42 \\ Q+O+Correct Ans+Generated Inference Detail & 53.04 & 48.24 & 50.00 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance Comparison of Different Examples for Few-shot. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline Self-inquiry & 46.96 & 49.84 & 48.79 \\ \hline Knowledge Base & 68.51 & 58.15 & 61.94 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance Comparison of Different Knowledge Enhancement. ### Effect of Model Response Length Limitation We set the maximum length of the model response and assign a logit bias to specific characters to constrain the GPT model to generate a valid response. Specifically, GPT was limited to generating only one token from {_A, B, C, D, E_} with equal probability (20%). As shown in Table 7, this constraint indeed slightly enhanced performance from 51.01 to 51.62 in the Zero-shot setting. However, such limitations would potentially compromise the model's generalizability and impede a fair comparison with others. ## 6 Ethical Considerations Although the CNMLE contains many clinical practice cases, none of them involve personal information, thus avoiding the leakage of personally identifiable information. Moreover, the primary objective of this study is to investigate the effectiveness of the GPT model in tackling Chinese clinical examinations. The results and conclusions are not intended to serve as medical advice. Consequently, they do not have any adverse effect on human healthcare. ## 7 Conclusion In this paper, we evaluate the performance of the GPT model on the China National Medical Licensing Examination (CNMLE). We find that the direct application of the GPT model fails to pass the CNMLE. To improve the accuracy, we propose Knowledge and Few-Shot Enhanced In-Context Learning (KFE).
Both enhancements significantly improve the performance, allowing the model to pass the CNMLE with a score of 70, which exceeds the average score of medical bachelors. With extensive ablation studies, we also explore KFE from multiple perspectives, including the configuration of few-shot examples, the performance with respect to the number of few-shot examples, and a comparison of model-generated knowledge versus external knowledge. This study offers a practical evaluation of the GPT model's capabilities in the context of the Chinese medical examination and sheds light on potential strategies for further improving GPT performance in the medical area. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Acc-MK(\%)** & **Acc-CA(\%)** & **Acc-All(\%)** \\ \hline No Limitation & 49.17 & 52.08 & 51.01 \\ \hline 1-token and logit bias & 49.72 & 52.72 & 51.62 \\ \hline \hline \end{tabular} \end{table} Table 7: Effect of Model Response Length Limitation.
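The single-letter answer constraint used in the response-length ablation can be reproduced with a short script. The sketch below is an illustration only: it assumes the OpenAI chat-completions Python client and the `tiktoken` tokenizer, and the model name and encoding are placeholders rather than details reported in the paper.

```python
# Illustrative sketch (not the authors' code): force a single-token answer from {A, B, C, D, E}
# via max_tokens=1 and a strong logit bias, mirroring the response-length-limitation ablation.
# Assumptions: legacy `openai` Python client, `tiktoken` installed, placeholder model name.
import openai
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding; use the one matching your model
# Assumption: each option letter maps to a single token in this encoding.
option_token_ids = {letter: enc.encode(letter)[0] for letter in "ABCDE"}

def answer_one_letter(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",            # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,                      # emit exactly one token
        temperature=0,
        logit_bias={tid: 100 for tid in option_token_ids.values()},  # push mass onto A-E
    )
    return response["choices"][0]["message"]["content"].strip()
```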
2306.09162
A First-Principles Explanation of the Luminescent Line Shape of SrLiAl$_3$N$_4$:Eu$^{2+}$ Phosphor for Light-Emitting Diode Applications
White light-emitting diodes are gaining popularity and are set to become the most common light source in the U.S. by 2025. However, their performance is still limited by the lack of an efficient red-emitting component with a narrow band emission. The red phosphor SrLiAl$_3$N$_4$:Eu$^{2+}$ is among the first promising phosphors with a small bandwidth for next-generation lighting, but the microscopic origin of this narrow emission remains elusive. In the present work, density functional theory, the $\Delta$SCF-constrained occupation method, and a generalized Huang-Rhys theory are used to provide an accurate description of the vibronic processes occurring at the two Sr$^{2+}$ sites that the Eu$^{2+}$ activator can occupy. The emission band shape of Eu(Sr1), with a zero-phonon line at 1.906 eV and a high luminescence intensity, is shown to be controlled by the coupling between the 5d$_{z^2}$-4f electronic transition and the low-frequency phonon modes associated with the Sr and Eu displacements along the Sr channel. The good agreement between our computations and experimental results allows us to provide a structural assignment of the observed total spectrum. By computing explicitly the effect of the thermal expansion on zero-phonon line energies, the agreement is extended to the temperature-dependent spectrum. These results provide insight into the electron-phonon coupling that accompanies the 5d-4f transition in similar UCr$_4$C$_4$-type phosphors. Furthermore, these results highlight the importance of the Sr channel in shaping the narrow emission of SrLiAl$_3$N$_4$:Eu$^{2+}$, and they shed new light on the structure-property relations of such phosphors.
Julien Bouquiaux, Samuel Poncé, Yongchao Jia, Anna Miglio, Masayoshi Mikami, Xavier Gonze
2023-06-15T14:38:28Z
http://arxiv.org/abs/2306.09162v2
A First-Principles Explanation of the Luminescence Line Shape of SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) Phosphor for Light-Emitting Diode Applications ###### Abstract White light-emitting diodes are gaining popularity and are set to become the most common light source in the U.S. by 2025. However, their performance is still limited by the lack of an efficient re-emitting component with a narrow band emission. The red phosphor SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) is among the first promising phosphors with a small bandwidth for next-generation lighting, but the microscopic origin of this narrow emission remains elusive. In the present work, density functional theory, the \(\Delta\)SCF-constrained occupation method, and a generalized Huang-Rhys theory are used to provide an accurate description of the vibronic processes occurring at the two Sr\({}^{2+}\) sites that the Eu\({}^{2+}\) activator can occupy. The emission band shape of Eu(Sr1), with a zero-phonon line at 1.906 eV and a high luminescence intensity, is shown to be controlled by the coupling between the 5d\({}_{z}\)-2f electronic transition and the low-frequency phonon modes associated with the Sr and Eu displacements along the Sr channel. The good agreement between our computations and experimental results allows us to provide a structural assignment of the observed total spectrum. By computing explicitly the effect of the thermal expansion on zero-phonon line energies, the agreement is extended to the temperature-dependent spectrum. These results provide insight into the electron-phonon coupling that accompanies the 5d-4f transition in similar UCr\({}_{4}\)C\({}_{4}\)-type phosphors. Furthermore, these results highlight the importance of the Sr channel in shaping the narrow emission of SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\), and they shed new light on the structure-property relations of such phosphors. ## I Introduction Eco-efficient white-light emitting diodes (WLEDs) rely on one or more phosphor materials to convert the ultraviolet or blue emission from a LED chip into a desired wavelength emission spectrum. Phosphors are made of a host material doped with an activator, the latter emitting light, whose wavelength is tuned by the effect of host crystal structure and chemical environment. The numerous combinations between host and activator have led to the creation of thousands of possible phosphor materials with a variety of photoluminescence (PL) properties including emission peak wavelength, thermal stability, quantum efficiency or shape of the PL emission spectrum. In the past decade, a large focus was put on the discovery of phosphors with narrow emission bandwidth in order to improve the color-purity of backlighting LED devices or, in the case of the red phosphors in WLED, to avoid wasting energy in the near-infrared region where human eye is not sensitive [1; 2; 3]. In the search for new-generation phosphors [4], materials with UCr\({}_{4}\)C\({}_{4}\)-type structure, doped with Eu\({}^{2+}\) have recently attracted attention [5; 6]. The first instance of such materials, SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) (SLA) phosphor, discovered by Schnick et al. [7], was quickly followed by other nitride-based phosphors such as Sr[Mg\({}_{2}\)Al\({}_{2}\)N\({}_{4}\)]:Eu\({}^{2+}\) or Sr[Mg\({}_{3}\)SiN\({}_{4}\)]:Eu\({}^{2+}\)[8; 9]. Given the strong nephelauxetic effect of N\({}_{8}\) cuboid environment around the Eu activator, the emission color is limited to the red region. 
To lower the emission wavelength, adding O\({}^{2-}\) in the cuboid environment like in the oxy-nitride Sr[Li\({}_{2}\)Al\({}_{2}\)O\({}_{2}\)N\({}_{2}\)]:Eu\({}^{2+}\) (SALON) [10] allowed to blue-shift the emission peak from 650 nm (SLA) to 614 nm, with similar performance than SLA. Numerous oxide-based phosphors were then developed with general formula M\({}_{4}\)[Li\({}_{3}\)SiO\({}_{4}\)]\({}_{4}\) where M can be selected from Li\({}^{+}\), Na\({}^{+}\), K\({}^{+}\), Rb\({}^{+}\), Cs\({}^{+}\) and their combination [5; 11; 12]. Most of these alkali lithosilicate phosphors, with O\({}_{8}\) cuboid environment provide green/cyan/blue emission. It is commonly accepted that the narrow-band emission of Eu-doped UCr\({}_{4}\)C\({}_{4}\) structure is linked to the highly condensed host structure and the cuboid coordination environment. However, the microscopic origin of such narrow emission is not fully understood. In this respect, Huang-Rhys theory [13] and its generalization [14; 15] led by first-principles computations allows one to gain insights into the electron-phonon coupling accompanying electronic optical transitions. Mainly used in the context of defects for quantum information technologies [15; 16; 17; 18], this theory was only exploited recently to obtain information on the phonon side bands of a few selected phosphors [19; 20; 21]. Despite the pioneer status and technological relevance of SLA [22], its phonon side bands with apparent vibronic signatures have not received any theoretical attention. Initial theoretical studies of SLA focused on the bulk host material only [23; 24]. Recently, a time-dependent DFT approach for embedded clusters has clarified the electronic processes responsible for light absorption in SLA but has not investigated emission and the PL spectrum [25]. In SLA, the europium dopant can substitute the strontium atom in two inequivalent positions, which lead to two emission centers. Thanks to their different decay time, time-resolved luminescence was used in 2016 to decompose the zero-phonon line (ZPLs), 15377 cm\({}^{-1}\) (1.906 eV) and 15780 cm\({}^{-1}\) (1.956 eV), and corresponding phonon side band of each center but their structural assignment was not done [26]. In this work, we study from first-principles the vibronic processes occurring in the PL spectrum of SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) (SLA) phosphor. We simulate the PL spectra from the two luminescent sites and assign them to their specific microscopic environment using predictive methods whose accuracy has previously been demonstrated [27; 28; 29; 30; 31]. We use large supercells computed with density functional theory (DFT) and the constrained-\(\Delta\)SCF method to obtain optimized Eu-4f ground state structures and Eu-5d excited state structures. The atomic displacements induced by the 5d-4f transitions are projected onto the phonons modes of the system in order to obtain the Huang-Rhys (HR) spectral function which provides information on the nature of the phonons participating to the transition. The HR spectral function is converged by increasing the supercell size using the embedding procedure proposed by Alkauskas _et al._[14]. The PL lineshape is computed following the generating function approach and its temperature-dependent generalization [15]. Finally, the effect of thermal expansion on PL properties is included via volume quasi-harmonic approximation (QHA) [32; 33] with a minimization of the Helmholtz free energy with respect to the global volume. 
This allows one to estimate ZPL energies with increasing temperatures. It is found that the emission lineshape with a ZPL energy around 1.906 eV and very narrow emission bandwidth [26] is associated to the Sr site having a second coordination sphere composed of 3 Li and 5 Al while the one with higher ZPL energy around 1.956 eV and larger emission bandwidth is associated to the Sr site having second coordination sphere composed of 1 Li and 7 Al. The very distinct shapes of the two PL spectra originate from different promoted excited 5d orbitals (5d\({}_{z^{2}}\)-like aligned along Sr channel or 5d\({}_{x^{2}-y^{2}}\)-like pointing along Al/Li atoms), which lead to different 5d-4f atomic relaxation pattern, and hence different electron-phonon coupling. By explicitly including the effect of thermal expansion on ZPL energies, inducing a blue-shift of around 30 meV for both sites between 10 K and 573 K, we find an excellent agreement between the simulated temperature dependent PL spectra and experiment. The paper is structured as follows. We first present the theoretical background and the computational methodology. The results of the work are then presented. Both Sr sites are compared carefully for each property: the excited state geometry, the coupling with phonons and finally the temperature dependent PL spectra, including thermal expansion effect. Finally, we provide additional discussions and conclude this work. ## II Theory and computational methodology ### Huang-Rhys theory for the luminescence spectrum Within the Huang-Rhys theory and the Franck-Condon approximation [13; 34], ground state (GS) and excited state (ES) potential energy surfaces are projected along the harmonic phonon normal coordinates of the system \(Q_{\nu}\). The GS and ES surfaces are assumed to be identical (same phonon frequencies \(\omega_{\nu}\) and eigenmodes) except for a rigid offset \(\Delta Q_{\nu}\) coming from linear electron-phonon interaction and an energy difference called zero-phonon line energy \(E^{\rm ZPL}\). This reduces the problem to a displaced harmonic oscillator problem Figure 1: Schematic representation of the origin of photoluminescence (PL) spectra. On the left, ground state (GS) and excited state (ES) energy curves are projected along phonon normal coordinate \(Q_{\nu}\) (here only for one mode) and are approximated by harmonic functions with the same frequency \(\omega_{\nu}\). Vibrational energy levels and corresponding eigenfunctions are shown as horizontal lines and colored areas. GS and ES are displaced by \(\Delta Q_{\nu}\), the mass-weighted displacement between the minimum of the GS and the ES curves. On the right, the PL spectrum is formed. The zero-phonon line (ZPL) comes from the transition between the first vibrational level of the ES to the first vibrational level of the GS. Other transitions give the phonon sideband (PSB). The intensity of each peak is computed with the overlap between corresponding eigenfunctions. for each phonon mode \(\nu\), as depicted on Fig. 1, for which an analytic expression to compute the associated phonon side band (PSB) exists. At 0 K, where only the ES vibrational state \(n=0\) contributes, one has: \(|\langle\chi^{\rm GS}_{n_{\nu}}|\chi^{\rm ES}_{0_{\nu}}\rangle|^{2}=e^{-S_{\nu} }(S_{\nu})^{n_{\nu}}/(n_{\nu}!)\) with \(S_{\nu}\) the Huang-Rhys factor of mode \(\nu\). 
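To make the displaced-oscillator picture concrete, the short sketch below (an illustration added here, not taken from the original work) evaluates the 0 K progression of a single mode: line \(n\) sits at \(E^{\rm ZPL}-n\hbar\omega_{\nu}\) with weight \(e^{-S_{\nu}}S_{\nu}^{n}/n!\). The numerical values are placeholders, of the order of those discussed later for Eu(Sr1).

```python
# Sketch: 0 K Franck-Condon progression of one displaced mode (placeholder parameters).
import numpy as np
from math import factorial

S = 2.6        # Huang-Rhys factor (placeholder)
hw = 0.010     # phonon energy in eV (placeholder)
E_zpl = 1.906  # zero-phonon line energy in eV (placeholder)

n = np.arange(15)
energies = E_zpl - n * hw                                            # peak positions
weights = np.exp(-S) * S**n / np.array([factorial(k) for k in n])    # Poisson intensities

for e, w in zip(energies, weights):
    print(f"{e:.3f} eV  relative intensity {w:.4f}")
```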
For a given photon energy \(\hbar\omega\), the luminescence intensity is given by [14; 15; 20]: \[L(\hbar\omega,T)\propto\omega^{3}A(\hbar\omega,T), \tag{1}\] where the lineshape function \(A(\hbar\omega)\) is evaluated as the Fourier transform of the generating function \(G(t,T)\)[35] : \[A(\hbar\omega,T) = \int_{-\infty}^{+\infty}G(t,T)e^{i\omega t-\frac{\pi}{\hbar}|t|-i \frac{E^{\rm ZPL}}{\hbar}t}dt, \tag{2}\] \[G(t,T) = e^{S(t)-S(0)+C(t,T)+C(-t,T)-2C(0,T)}, \tag{3}\] where \(S(t)=\sum_{\nu}S_{\nu}e^{i\omega_{\nu}t}\) and \(C(t,T)=\sum_{\nu}\overline{n}_{\nu}(T)S_{\nu}e^{i\omega_{\nu}t}\) are the Fourier transforms of the Huang-Rhys spectral functions and temperature weighted Huang-Rhys spectral functions, respectively. \(\gamma\) is the homogeneous Lorentzian broadening of each vibronic transition, and \(\overline{n}_{\nu}(T)\) is the average occupation number of \(\nu\)-th phonon mode: \[\overline{n}_{\nu}(T)=\frac{1}{e^{\frac{\hbar\omega_{\nu}}{E_{B}T}}-1}. \tag{4}\] The ingredients for Eq. (2) are the partial Huang-Rhys factor of each phonon mode \(S_{\nu}\), the phonon frequencies \(\omega_{\nu}\) and the zero-phonon line energy E\({}^{\rm ZPL}\). The \(S_{\nu}\) is the mean number of phonon \(\nu\) involved in the transition: \[S_{\nu}=\frac{\frac{1}{2}\omega_{\nu}^{2}\Delta Q_{\nu}^{2}}{\hbar\omega_{\nu }}=\frac{\omega_{\nu}\Delta Q_{\nu}^{2}}{2\hbar}, \tag{5}\] where \(\Delta Q_{\nu}\) is the mass-weighted atomic displacement projected along phonon mode \(\nu\): \[\Delta Q_{\nu}=\sum_{\kappa\alpha}\sqrt{M_{\kappa}}\Delta R_{\kappa\alpha}e_{ \nu,\kappa\alpha}, \tag{6}\] where \(\Delta\mathbf{R}_{\kappa}=\mathbf{R}_{\kappa}^{\rm GS}-\mathbf{R}_{\kappa}^{ \rm ES}\) is the vector associated with the displacement of atom \(\kappa\) between excited and ground state, \(\mathbf{e}_{\nu,\kappa}\) are the phonon eigenvectors and \(M_{\kappa}\) are the atomic masses. Under the harmonic approximation, Eq. (6) becomes \[\Delta Q_{\nu}=\frac{1}{\omega_{\nu}^{2}}\sum_{\kappa\alpha}\frac{\Delta F_{ \kappa\alpha}e_{\nu,\kappa\alpha}}{\sqrt{M_{\kappa}}}, \tag{7}\] where \(\Delta\mathbf{F}_{\kappa}\) are the ground-state forces evaluated at the equilibrium excited-state structure. This formulation is advantageous as the forces decay faster than the displacements, see Section 1.1 and 1.2 of the supplementary informations (SI) [36]. We use DFT to obtain the ground-state optimized structure \(\mathbf{R}_{\kappa}^{\rm GS}\), the forces \(\Delta\mathbf{F}_{\kappa}\), the phonon eigenvectors \(\mathbf{e}_{\nu,\kappa}\) and eigenfrequencies \(\omega_{\nu}\). The \(\Delta\)SCF constrained-occupation method is used to optimize the excited-state structure \(\mathbf{R}_{\kappa}^{\rm ES}\). ### Computational method Calculations are performed with density-functional theory (DFT) using ABINIT [37; 38] with the PAW method [39]. The generalized gradient approximation (GGA-PBE) is used to treat exchange-correlation effects [40] and a Hubbard U=7 eV term is added on the 4\(f\) states of europium, consistently with our previous works [29]. We find that the value of the U parameter has only a weak impact on the ZPL energy and 5d-4f displacements, see Section 2 of the SI [36]. Calculations on europium-doped SLA are conducted with a 2\(\times\)2\(\times\)2 supercell containing 288 atoms. The two possible emission centers are treated as two independent systems. For all supercell calculations, the structures are relaxed below a maximal residual force of 10\({}^{-4}\) Hartree/Bohr (about 0.005 eV/A). 
The cut-off kinetic energy is 25 Ha (680 eV) with a single zone-centered \(\mathbf{k}\)-point. For the treatment of the Eu 4\(f^{6}5d^{1}\) excited state, we use the \(\Delta\)SCF method whereby the eigenfunctions associated to the highest predominantly Eu 4\(f\) band are forced to be unoccupied while the next predominantly Eu 5\(d\) energy band is constrained to be occupied. While the latter is more hybridized than the Eu 4\(f\) one, it is nevertheless refered to as a Eu 5\(d\) band. The ZPL energy is computed as the difference of the total energy of the relaxed excited states and that of the ground state. Detailed information on the use of this approach can be found in Ref. [19]. This work focuses on the lowest excited state of the 4f\({}^{6}\)5d\({}^{1}\) configuration and describes the vibronic features appearing in the emission spectrum of the two emission centers. The complete 4f\({}^{6}\)5d\({}^{1}\) configuration, giving rise to a complex fine structure in the excitation band, cannot be captured with our methodology. Multiconfigurational methods for embedded-cluster [41; 42] are more suitable to understand the 4f\({}^{6}\)5d\({}^{1}\) manifold and the absorption spectra of Eu-doped phosphor materials. #### ii.2.1 Phonons and embedding methodology Phonons modes of the undoped primitive cell of SLA containing 36 atoms are obtained by diagonalizing the dynamical matrices computed with DFPT with a 2\(\times\)2\(\times\)2 \(\mathbf{k}\)-points and \(\mathbf{q}\)-points grids and then Fourier interpolated on a fine 4\(\times\)4\(\times\)4 \(\mathbf{q}\)-grid [43]. The phonon band structure Fourier interpolated along high-symmetry lines and the density of states can be found in Section 3 of the SI [36]. Reaching the dilute limit requires to estimate the interatomic force constants (IFCs) on large defective supercells, which would be computationally too expensive with a direct approach. Hence a technique similar to the one proposed by Alkauskas _et al._[14] is followed. The IFCs of the pristine SLA, computed on a 4\(\times\)4\(\times\)4 \(\mathbf{q}\)-grid, are mapped on the corresponding 4\(\times\)4\(\times\)4 supercell. The IFCs of the Eu-doped system are computed on a smaller 2\(\times\)2\(\times\)2 supercell containing 288 atoms using a frozen-phonon approach as implemented in the PHONOPY package [44]. In order to construct the total IFCs, the following cutoff is applied. If both atoms \(\kappa\) and \(\kappa^{\prime}\) are separated from the dopant by a distance smaller than \(R_{\rm c}\)=5.5A, then the IFCs computed with the 2\(\times\)2\(\times\)2 defect supercell are used. For all other atomic pairs, the IFCs computed with the pristine system are used. This procedure breaks the acoustic sum rule [15] which we reimpose with \(C_{\kappa\alpha,\kappa\alpha}=-\sum_{\alpha\neq\beta}C_{\kappa\beta,\kappa\alpha}\), where \(\alpha\) and \(\beta\) refers to Cartesian coordinates. This embedding methodology allows one to obtain the phonon eigenfrequencies \(\omega_{\nu}\) and eigenvectors \(\mathbf{e}_{\nu,\kappa}\) of Eq. (7) on the 4\(\times\)4\(\times\)4 supercell. The forces \(\Delta\mathbf{F}_{\kappa}\) of Eq. (7) are those of the 2\(\times\)2\(\times\)2 supercell and are set to zero elsewhere because of the short-range decay of the forces with respect to the Eu activator. Convergence of the Huang-Rhys spectral function as a function of the size of the large supercell is reported in Section 1.3 of the SI [36]. 
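As a practical illustration of Eqs. (5) and (7), the following sketch computes the projections \(\Delta Q_{\nu}\) and the partial Huang-Rhys factors \(S_{\nu}\) from the forces, phonon eigenvectors and frequencies. It is not the production code of this work, and the array conventions (SI units, orthonormal phonon eigenvectors) are assumptions.

```python
# Sketch of Eqs. (5) and (7): partial Huang-Rhys factors from ground-state forces
# evaluated at the relaxed excited-state geometry (assumed SI units).
import numpy as np

HBAR = 1.054571817e-34  # J s

def huang_rhys_factors(forces, eigvecs, omegas, masses):
    """
    forces  : (N, 3)        Delta F_{kappa alpha} in N
    eigvecs : (modes, N, 3) phonon eigenvectors e_{nu, kappa alpha}
    omegas  : (modes,)      angular frequencies in rad/s
    masses  : (N,)          atomic masses in kg
    """
    weighted = forces / np.sqrt(masses)[:, None]                 # Delta F / sqrt(M)
    dQ = np.einsum("ka,nka->n", weighted, eigvecs) / omegas**2   # Eq. (7)
    S = omegas * dQ**2 / (2.0 * HBAR)                            # Eq. (5)
    return dQ, S
```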
#### ii.2.2 Thermal expansion To compute the effect of thermal expansion, the volumic quasiharmonic approximation (QHA) is used, neglecting deviatoric thermal stresses and internal forces (v-ZSISA approximation) [45; 46; 47]. The only contribution to the thermal expansion of the crystal is due to the coupling between phonons and the change in unit cell volume [48], with cell parameters and internal coordinates relaxed at fixed volumes. We obtain the temperature versus volume curve of the undoped SLA by minimizing the Helmholtz free energy (FE) [33; 49; 32]: \[F(V,T)=E(V)+F^{\rm vib}(V,T), \tag{8}\] where \[F^{\rm vib}(V,T)=\sum_{\mathbf{q}\nu}\frac{\hbar\omega_{\mathbf{q}\nu}(V)}{2}+k_{\rm B }T\ln\biggl{(}1-e^{-\frac{\hbar\omega_{\mathbf{q}\nu}(V)}{k_{\rm B}T}}\biggr{)}, \tag{9}\] where \(E(V)\) is the static DFT energy versus volume curve, with all non-volumic degrees of freedom relaxed, and \(F^{\rm vib}(V,T)\) is the vibration energy with the same cell and atomic geometry. The FE curve is constructed using seven configurations within a [-2%,+4%] volume change with respect to the static equilibrium volume. The minimum volume for a given temperature is obtained by fitting a Murnaghan equation of state [50] to the FE curve. Detailed information on the FE curves, temperature dependent thermal expansion coefficient and zero-point volume are reported in Section 4 of the SI [36]. To evaluate the effect of thermal expansion on the PL spectrum, we have scaled the lattice parameters obtained for the undoped SLA to the 288 atoms supercells, and re-computed ZPL energies at fixed volumes corresponding to the selected temperatures, see Fig. S8 of the SI [36]. This allowed us to compute both the shift of the ZPL energies with temperature as well as the effect of zero-point volume on the ZPL energy at 0 K. ## III Results and discussion ### Ground state geometry The SLA crystallizes in a triclinic crystal system (\(P\)T space group) with the typical channel structure observed in other UCr\({}_{4}\)C\({}_{4}\) type phosphors, see Fig. 2. Our computed lattice parameters are overestimated by about 0.3%, which is common for PBE exchange and correlation functionals. The angles match within 0.05\({}^{\circ}\) difference. Half of the channels are occupied by the Sr\({}^{2+}\) ions. These channels are built up with ordered tetrahedra alignment as seen in Fig. 2**a**, with one green [LiN\({}_{4}\)]\({}^{11-}\) followed by three blue [AlN\({}_{4}\)]\({}^{9-}\). Two Sr sites with very similar first-shell environment (Sr-N\({}_{8}\)) but different second-shell can host the Eu activator and are shown in Fig. 2**b**, which leads to two emission centers. The first site, denoted as Sr1 in this work, is surrounded by 3 [LiN\({}_{4}\)]\({}^{11-}\) and 5 [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra, while the second, denoted as Sr2, is surrounded by one [LiN\({}_{4}\)]\({}^{11-}\) and 7 [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra. We observe that within the Sr chain, the Sr1-Sr1 distance of 3.2 A is shorter than the Sr2-Sr2 distance of 3.42 A. The Sr1-Sr2 distance is in between with a value of 3.28 A. We compute that the Sr-N\({}_{8}\) cuboid has similar mean Sr-N distance in both sites (2.809 A and 2.804 A) When the Sr atoms are replaced by Eu atoms, these distances change by less than 0.1% due to similar atomic radii for the Eu\({}^{2+}\) and Sr\({}^{2+}\) atoms [51]. Additional informations can be found in Section 5 of the SI [36]. 
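Returning to the thermal-expansion treatment described above, a schematic version of the free-energy minimization reads as follows. This is a sketch under simplifying assumptions: a cubic spline stands in for the Murnaghan fit used in this work, and the static energies and phonon frequencies per sampled volume are assumed to be available.

```python
# Schematic v-ZSISA/QHA step: F(V,T) = E(V) + F_vib(V,T), minimized over volume.
# Not the production workflow; a spline interpolation replaces the Murnaghan fit.
import numpy as np
from scipy.interpolate import CubicSpline

KB = 8.617333262e-5        # eV / K
HBAR_EV = 6.582119569e-16  # eV s

def f_vib(omegas, T):
    """Vibrational free energy (eV) for angular frequencies omegas (rad/s)."""
    hw = HBAR_EV * np.asarray(omegas)
    if T == 0.0:
        return float(np.sum(hw / 2.0))
    return float(np.sum(hw / 2.0 + KB * T * np.log1p(-np.exp(-hw / (KB * T)))))

def equilibrium_volume(volumes, e_static, omegas_per_volume, T):
    """Volume minimizing F(V,T) on a dense grid built from the sampled volumes."""
    F = [e + f_vib(w, T) for e, w in zip(e_static, omegas_per_volume)]
    spline = CubicSpline(volumes, F)
    v_dense = np.linspace(min(volumes), max(volumes), 2001)
    return v_dense[np.argmin(spline(v_dense))]
```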
Figure 2: **a** Crystal structure of SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) viewed along the Sr chain. **b** First and second shell environment of Sr1 and Sr2 sites that Eu atom can substitute. **c** Structure perpendicular to the Sr chains, with the characteristic UCr\({}_{4}\)C\({}_{4}\) framework. ### Excited state geometry The probability density of the highest occupied Kohn-Sham orbitals associated to the 4f\({}^{7}\)(\({}^{8}\)S\({}_{7/2}\)) ground and lowest 4f\({}^{6}\)(\({}^{7}\)F\({}_{J}\))5d\({}^{1}\) excited states are presented in Fig. 3**a-b** for both Sr sites. The computed energy levels are reported in Section 6 of the SI [36]. In a hypothetical pure cubic EuN\({}_{8}\) environment, 5d\({}_{z^{2}}\) and 5d\({}_{x^{2}-y^{2}}\) would be the lowest degenerate d-states. However, this degeneracy is lifted when considering a non-perfect EuN\({}_{8}\) cuboid with different second shell environment ([LiN\({}_{4}\)]\({}^{11-}\) and [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra) and different Eu-Sr distances along the 1D Sr chain. In the case of Eu(Sr1), we observe that a 5d\({}_{z^{2}}\)-like orbital is stabilized along the Sr chain as in the case of Sr[Li\({}_{2}\)Al\({}_{2}\)O\({}_{2}\)N\({}_{2}\)]:Eu\({}^{2+}\)[19]. The 5d\({}_{x^{2}-y^{2}}\)-like state is also located in the band gap, but 0.27 eV above the 5d\({}_{z^{2}}\)-like state. In contrast, for Eu(Sr2), the opposite situation appears: a 5d\({}_{x^{2}-y^{2}}\)-like orbital is stabilized, perpendicular to the Sr chain. The four lobes point toward the interstitial area of low electrostatic potential produced by nitrogen ligands. The inversion of the 5d states ordering appears to be related to the difference in second-shell environments, given the similarity of the Eu-N\({}_{8}\) cuboids. However, it is unclear whether the inversion is caused by the difference in Eu-Sr distances (steric effect) or the difference in tetrahedra types (coulombic effect). To explore this further, we have relaxed a fictitious system in which the Eu atom replaces Sr1, while keeping the Eu-Sr distances fixed at the relaxed Eu(Sr2) values. Our findings indicate that the 5d\({}_{z^{2}}\) state remains stabilized, but the 5d\({}_{x^{2}-y^{2}}\) state is now 0.15 eV above. This suggests that the difference in tetrahedra types is the primary explanation for the inversion of the 5d states, with some contribution from the difference in Eu-Sr distances. We stress that in Ref. [25], it was argued that the similar local geometry and electronic structure between the two sites leads to a similar emission spectrum. However, our computations contradict this claim, as we have found that the two sites possess different electronic structures that result in distinct emission spectra, as it will be demonstrated later on. We also find that for both sites, the computed energy separation between the empty 5d state and the conduction bottom (Sr-4d character) is 0.29 eV. Experimentally, the 5d-conduction band separation is estimated to be 0.28 eV [24] based on a fitting of thermal quenching data with an exponential non-radiative decay rate from the 5d state to the conduction bottom through thermal excitation. Despite the good agreement between these values, it is important to remember that the Kohn-Sham energy levels calculated using GGA-PBE are only approximations of the actual energy levels and this agreement should be appreciated with caution. The displacement field associated to the 5d-4f transition is presented in the insets of Fig. 3**c-d**, scaled by 20 for clarity. 
The norm of these displacements is shown as a function of the distance from the Eu activator. For both Sr sites, upon emission, going from the 5d to 4f state leads to an elongation of Eu-N bond lengths by 0.03-0.06 A. Indeed, upon 5d excitation, additional covalent interactions appear between the N ligands with the inner 4f hole that shortens the bond length [41]. Interestingly, the Eu atom moves in its cuboid cage as a result of its non-symmetrical environment. Looking now at the atomic rearrangements beyond the first coordination shell, we see first that a significant fraction of the total displacements is outside this first shell, as observed in similar UCr\({}_{4}\)C\({}_{4}\) phosphors [19; 21]. This fact might be linked to the rigid structure (we compute a Debye temperature of 878.5 K) that distributes the local strain caused by the localized 5d-4f excitation to atoms far away from the Eu activator. Quantitatively, 61% of the total mass-weighted displacements \(\Delta Q=\sqrt{\sum_{\kappa}M_{\kappa}\Delta R_{\kappa}^{2}}\) is contained in the Eu-N\({}_{8}\) cuboid for Eu(Sr1) and 50% for Eu(Sr2). Given the high Eu mass, Eu atom contributes to this \(\Delta Q\) up to 49% in Eu(Sr1) and to 30% in Eu(Sr2). By inspecting closely the difference between Eu(Sr1) and Eu(Sr2), we observe that a main relaxation pattern in Eu(Sr1) is a long-range displacement of the Sr chain, as already observed in Sr[Li\({}_{2}\)Al\({}_{2}\)O\({}_{2}\)N\({}_{2}\)]:Eu\({}^{2+}\), which is a consequence of the 5d\({}_{z^{2}}\) orientation along this chain. Given the high Sr mass, we expect that the Sr displacements are predominant in shaping the electron-phonon spectral function of Eu(Sr1), as will be see later. For Eu(Sr2), a major fraction of the displacements are distributed on the adjacent [LiN\({}_{4}\)]\({}^{11-}\) and [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra, which is a consequence of the 5d\({}_{x^{2}-y^{2}}\) lobes pointing towards it. We provide the structural parameters for pristine SLA, doped SLA Eu(Sr1) and Eu(Sr2) in their 4f and 5d states in Tables 1, 2 and 3 of the SI [36]. Finally, the energy between the computed relaxed excited and ground states provides an estimation of the ZPL energies, \(E^{\rm ZPL-Sr1}\)=1.916 eV and \(E^{\rm ZPL-Sr2}\)= 1.989 eV, close to the experimental ZPL energies of 1.906 eV and 1.956 eV, respectively. In order to explain this difference in ZPL energies, one might invoke the difference in [EuN\({}_{8}\)] cuboid volumes between the two sites, as it was done in Ref. [5] to explain the variation of emission energies across different oxy-nitrides and alkali-lithosilicates. However, the very small difference between the two cuboids volumes (\(\approx\) 1%) is not sufficient to explain the difference in ZPL energies. We believe that the different degree of ionicity experienced by the 5d orbital due to the different number of [LiN\({}_{4}\)]\({}^{11-}\) and [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra is responsible for the difference in ZPL energies. As a rough estimate, Eu(Sr1) with 3[LiN\({}_{4}\)]\({}^{11-}\) and 5[AlN\({}_{4}\)]\({}^{9-}\) has a second shell formal charge of -78 e while Eu(Sr2) with 1[LiN\({}_{4}\)]\({}^{11-}\) and 7[AlN\({}_{4}\)]\({}^{9-}\) has a second shell formal charge of -74 e, where e is the absolute value of the electron charge. In SALON, this second shell formal charge is -64 e. This is in line with the experimental ZPL energies of 1.906 eV, 1.956 eV and 2.03 eV [52], respectively. 
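The second-shell estimate quoted above amounts to a simple counting of formal tetrahedra charges, illustrated by the following check (a bookkeeping aid added here, not additional physics):

```python
# Formal second-shell charge in units of e, counting [LiN4]^{11-} and [AlN4]^{9-} tetrahedra.
def second_shell_charge(n_li_tetra, n_al_tetra):
    return -(11 * n_li_tetra + 9 * n_al_tetra)

print(second_shell_charge(3, 5))  # Eu(Sr1): -78
print(second_shell_charge(1, 7))  # Eu(Sr2): -74
```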
### Huang-Rhys spectral function The Huang-Rhys spectral decomposition is presented in Fig. 4. Eq. (5) was used with a supercell containing 2304 atoms (4\(\times\)4\(\times\)4 supercell) thanks to the embedding procedure, described in the theoretical section, which allows us to reach the dilute limit while keeping local and quasi-local modes brought by the Eu defect. The localization of the phonon modes can be characterized by computing the inverse participation ratio (IPR) defined as [14; 15; 21] \[\text{IPR}_{\nu}=\frac{1}{\sum_{\kappa}|\langle\mathbf{e}_{\nu,\kappa}| \mathbf{e}_{\nu,\kappa}\rangle|^{2}}, \tag{10}\] which indicates, roughly speaking, the number of atoms that participate to a phonon mode \(\nu\). For example, \(\text{IPR}_{\nu}=1\) means that only one atom vibrates and therefore the phonon mode is maximally localized, while \(\text{IPR}_{\nu}=\text{N}\) means that N atoms vibrates in the supercell with the same amplitude. The localization ratio \(\beta_{\nu}\) is defined by taking the ratio of the total number of atoms in the supercell \(N\) and the IPR, \(\beta_{\nu}=N/\text{IPR}_{\nu}\) where \(\beta_{\nu}\approx 1\) represents a bulk-like delocalized mode while \(\beta_{\nu}\gg 1\) corresponds to a quasi-local or local mode. We show in Fig. 4**b**-**c** the color-coded mode localization \(\beta_{\nu}\), which combined with the atom-projected phonon density of states in Fig. 4**a**, shows that the low-energy modes are more localized and dominate the electron-phonon coupling associated with the 5d-4f transition in the case of Eu(Sr1). For Eu(Sr1), the spectral function is dominated by a large peak around 10 meV where low-frequency phonon modes associated to Sr and Eu displacements are involved. They mostly corresponds to long-wavelength collective displacements of the Sr atoms along the Sr chain that couple strongly with the above-described 5d-4f relaxation of the Sr channel containing Eu. We compute a total Huang-Rhys factor \(S=\sum_{\nu}S_{\nu}\) of 2.631 in that case and find that the phonons with high partial HR factor \(S_{\nu}\) can be either relatively delocalized or very localized. For instance, the highest coupling mode at 9.97 meV with \(S_{\nu}\)=0.07 has a \(\beta_{\nu}\) of 23. This mode is associated to the collective displacements of all the Sr atoms in the chain and can be considered as a slightly perturbed bulk mode. In contrast, the third highest coupling mode at 6.6 meV is very localized, with a \(S_{\nu}\)=0.05 and a large \(\beta_{\nu}\) of 115. As illustrated in the inset of Fig. 4**b**, the atomic vibrations associated with this mode (red arrows) are localized around the Eu doping atom. In the case of the second site Eu(Sr2), the spectral function indicates that phonon modes involving Eu and Sr atoms participate but to a lesser extent than in the Eu(Sr1) case. Additionally, a broad peak appears between 20 meV and 40 meV which is associated to modes involving Al and N atoms. The total Huang-Rhys factor of 3.587 indicates a stronger electron-phonon coupling. The highest coupling mode located at 9.2 meV with \(S_{\nu}\)=0.023 has a \(\beta_{\nu}\) of 59. This mode is mostly as Figure 3: The probability density of the highest occupied Kohn-Sham orbitals associated to the 4f\({}^{7}\)(\({}^{8}\)S\({}_{7/2}\)) ground and lowest 4f\({}^{6}\)(\({}^{7}\)F\({}_{7}\))5d\({}^{1}\) excited states, for **a** Eu(Sr1) site and **b** Eu(Sr2) site. 
Norm of the atomic displacements induced by the 5d to 4f transition (upon emission) for **c** Eu(Sr1) site and **d** Eu(Sr2) site as a function of the distance from Eu. The insets show a three-dimensional view of these displacements near the Eu activator, scaled by 20 for clarity. sociated with the Eu atom moving in its cage. Similarly to the Eu(Sr1) case, there is also a number of delocalized bulk modes that contribute. Indicatively, the fifth highest coupling mode at 25.1 meV, with a \(S_{\nu}\)=0.015 has a \(\beta_{\nu}\) of 1.7 and is illustrated in the inset of Fig. 4**c**. The atomic vibrations associated to this mode (blue arrows) are distributed mainly on Al and N atoms in the whole system. We analyze in detail the five phonon modes which contribute the most to the spectral function in Section 7 of the SI [36]. ### Photoluminescence spectrum and its temperature dependence Using the generating function approach described in the theoretical section, we compute the photoluminescence (PL) intensity from the Huang-Rhys spectral function. Fig. 5 compares the PL spectra from both sites at zero temperature with the experimental PL intensities at 10 K from Ref. [26], where the total PL intensity was deconvoluted with time-resolved spectroscopy (red and blue dotted curves). The total spectrum is also in good agreement with prior measurements at 6 K [7]. For definitiveness, we have (i) aligned our computed ZPL energy of Eu(Sr1) with the highest experimental peak [26], (ii) used a constant Lorentzian broadening of 25 meV, and (iii) used the experimental Eu(Sr1) to Eu(Sr2) total intensity ratio (computed as the ratio between the areas under the curves) from Ref. [26]. We do not attempt in this work to compute these weights from first principles, which would require to compare correctly the energetics and the electric dipole matrix elements of both sites. Indeed, the weight of the second PL intensity (ZPL at 1.956 eV) is different between the two experiments, which could be explained by different synthesis conditions. We note also that the energy difference between the experimental ZPL energies is 50 meV while the computed energy difference is 74 meV. This explains the misalignment observed in Fig. 5 between the experimental Eu(Sr2) ZPL and the computed one. The vibronic peaks appearing in the experimental spectra as well as the global shapes of both spectra match our computations which means that the sharper PL spectrum with ZPL at 1.906 eV can be assigned with confidence to the Sr1 site while the broader spectrum with ZPL at 1.956 eV is assigned to the Sr2 site. When looking at the total experimental spectrum, the high-energy peak at 1.956 eV can be assigned to the ZPL of Eu(Sr2) center, the peak at 1.906 eV to the ZPL of Eu(Sr1) center, Figure 5: Computed photoluminescence intensities \(L(\hbar\omega)\) for both Sr sites at 0 K and their weighted sum compared with two low-temperature experiments [7; 26]. The red and blue dotted curves are experimental PL intensities of both sites from Ref. [26], where the total PL intensity (gray dotted curve) was deconvoluted with time-resolved spectroscopy. The computed Eu(Sr1) zero-phonon line energy is aligned with the second highest peak of the experimental data from Ref. [26] and the two weights for the black line are also taken from Ref. [26]. Reproduced with permission from Ref. [7; 26]. Copyright 2014 Springer Nature and Copyright 2016 John Wiley and Sons. 
Figure 4: Spectral decomposition of the Huang-Rhys function obtained from the \(S_{\nu}\) (colored dots), \(S(\hbar\omega)=\sum_{\omega}S_{\nu}\delta(\hbar\omega-\hbar\omega_{\nu})\) (black lines), using a 1 meV Gaussian broadening, for both sites. In **a** we show the atom-projected phonon density-of-states. The height of the dots in **b** -for Eu(Sr1)- and **c** -for Eu(Sr2)- provides the partial Huang-Rhys factor of each phonon mode, while the color indicates the degree of localization of each mode \(\beta_{\nu}\), see text. For each Sr site, two illustrative modes with high Huang-Rhys factor but different degree of localization are shown. and the two next peaks to the one- and two-phonon contributions of strontium based phonon modes in Eu(Sr1). We still note that our computed total Huang-Rhys factor for Eu(Sr1) S\({}_{1}\) seems slightly overestimated. In order to estimate the experimental S\({}_{1}\), the 5d-4f forces can be scaled in order to fit the experimental spectrum. Scaling the forces by 90% (giving S\({}_{1}^{\rm exp}\approx\) 2.13) allows one to obtain an excellent reproduction of the experimental phonon side-band (see figure S11). Finally, all the elements can be placed together to compute the temperature-dependent PL spectrum of SLA using Eq. (2) which we present in Fig. 6**a**. This achievement represents the main result of our work, with excellent experimental agreement. The effect of thermal expansion on the ZPL energy is quite similar for both sites and leads to a blue-shift of 29 and 32 meV from 0 to 573K, see Fig. 6**b**. The ZPL shifts associated to the zero-point volume are estimated to be 21 and 25 meV. With increasing temperature, the spectrum broadens, with a progressive loss of clear vibronic peaks. Fig. 6**c** presents the position of the emission maximum as a function of temperature, both with and without accounting for thermal expansion. When thermal expansion is not considered, a red-shift between 10 K and 573 K is computed. This is attributed to the positioning of the vibronic transitions that are activated by temperature relative to the emission maximum. Above 200K, where the position of the emission maximum is determined by the envelope of the vibronic transitions, accounting for thermal expansion yields the temperature dependence observed experimentally. We notice that the \(\omega^{3}\) dependence of \(L(\hbar\omega)\propto\omega^{3}A(\hbar\omega)\) causes a blue shift of the emission maximum due to the broadening of the lineshape function \(A(\hbar\omega)\) with temperature, even if \(A(\hbar\omega)\) does not shift. This leads to a blue shift of +11 meV between 0K and 573K (see Section 8.2 of the SI). We provide the site decomposition of this emission maximum shift and further details in the SI [36]. Our analysis sheds new light on the puzzling phenomenon of temperature-induced shift in phosphor materials, which remains theoretically underexplored [53; 54]. It highlights the importance of both the \(\omega^{3}\) dependence of the PL intensity and the effect of thermal expansion. ## IV Conclusions In this work, the vibronic processes occurring in the narrow-band emission SrLiAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) red phosphor is characterized from first principles. Two Sr sites can host the Eu activator, leading to two emission centers. The Figure 6: **a** Temperature-dependent photoluminescence (PL) spectrum computed with Eq. (2) (plain curves) and compared with experimental data [26] (dashed curves). 
The zero-phonon line (ZPL) energies at 0 K are indicated by gray vertical dashed lines and the ZPL including thermal expansion effects are indicated with squares. The PL spectra for each temperature is shifted vertically for clarity. **b** Effect of thermal expansion on the ZPL energies for both sites where the DFT values at 0 K are shown with horizontal orange dashed lines. The shift associated with the zero-point volume are 21 and 25 meV, respectively. Between 0 K and 573 K, thermal expansion leads to a blue shift of 29 meV and 32 meV, respectively. **c** Photon energy of the emission maximum as a function of temperature. Ignoring thermal expansion leads to a red shift of the emission maximum. Adding the thermal expansion effect allows to obtain a better agreement with the experiment. first, denoted as Sr1, is surrounded by 3 [LiN\({}_{4}\)]\({}^{11-}\) and 5 [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra, while the second, denoted as Sr2, is surrounded by 1 [LiN\({}_{4}\)]\({}^{11-}\) and 7 [AlN\({}_{4}\)]\({}^{9-}\) tetrahedra. We hence apply the constrained-\(\Delta\)SCF method to optimize Eu-5d excited state structures for both sites, independently. Different 5d orbital are stabilized: a 5d\({}_{z^{2}}\)-like state aligned along the Sr channel for Eu(Sr1), and a 5d\({}_{x^{2}-y^{2}}\)-like state pointing along Al/Li atoms for Eu(Sr2). The 5d-4f atomic relaxation is dominated by a long-range displacement of the Sr chain for Eu(Sr1), as a consequence of the 5d\({}_{z^{2}}\) orientation. By projecting the atomic relaxation onto the phonon modes of a large doped supercell, we identify the phonon that couples the most with the 5d-4f transition. For Eu(Sr1), low-frequency phonon modes associated to Sr and Eu displacements are involved. They correspond either to bulk-like collective displacements of the Sr chains or to very localized modes concentrated on the Sr chain around the Eu activator. We computed a total Huang-Rhys factor of \(S=2.631\) resulting in a small coupling with phonons of low frequency which yields a small bandwidth. The same situation was found in Sr[Li\({}_{2}\)Al\({}_{2}\)O\({}_{2}\)N\({}_{2}\)]:Eu\({}^{2+}\) suggesting that the small bandwidth of other UCr\({}_{4}\)C\({}_{4}\)-type phosphors with Ca/Sr/Ba channel can be explained in a similar way. For Eu(Sr2), as a consequence of a different 5d orbital orientation, modes associated to Sr and Eu displacements are less involved and delocalized bulk modes involving Al and N atoms contribute as well, leading to a higher Huang-Rhys factor \(S=3.587\). This larger coupling with phonons of higher frequencies yields a broader lineshape. By comparing our computations with experimental low-temperature photo-luminescent spectra, we are able to assign the peak at 1.906 eV to the ZPL of Eu(Sr1) center, the two next peaks at 1.89 and 1.88 eV being the one and two-phonon contributions of strontium-based phonon modes, respectively. The peak at 1.956 eV is assigned to the ZPL of Eu(Sr2) center. Finally, we show the importance of thermal expansion effect on the ZPL energies which induces a blue-shift of around 30 meV for both sites between 10 K and 573 K. This results in a good agreement between our simulated temperature dependent PL spectra and experimental data, for both the broadening and the shift of the spectrum with temperature. Overall, this work offers a direct theoretical understanding of the observed spectrum of SrAl\({}_{3}\)N\({}_{4}\):Eu\({}^{2+}\) which highlights the importance of the Sr channels in shaping the small bandwidth. 
These findings are general and should apply to any UCr\({}_{4}\)C\({}_{4}\)-type phosphor. **Supporting information** Details on the embedding approach, influence of the Hubbard U, phonon band structure of pristine SLA, additional details on thermal expansion, additional details on structural parameters, Kohn-Sham levels, dominant phonon modes, additional details on the emission shift with temperature. **Acknowledgments** Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie-Bruxelles (CECI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region, as well as the Tier-1 supercomputer of the Fédération Wallonie-Bruxelles, infrastructure funded by the Walloon Region under grant agreement No. 1117545. We thank the Consortium des Équipements de Calcul Intensif, Belgium, for awarding this project access to the LUMI supercomputer, owned by the EuroHPC Joint Undertaking, hosted by CSC (Finland) and the LUMI consortium, through project 465000061. J.B. and S.P. acknowledge support from the F.R.S.-FNRS. X.G. also acknowledges support from the F.R.S.-FNRS, under grant n\({}^{\circ}\) T.0103.19 (PDR - ALPS). This work was supported by the Communauté française de Belgique through the SURFASCOPE project (ARC 19/24-102). Y.J. acknowledges funding support from the National Key R&D Program (No. 2022YFB3503800), the Natural Science Foundation of Hebei Province (No. E2021203126) and the Cultivation Project for Basic Research and Innovation of Yanshan University (No. 2021LGQN033).
2304.07736
Charged and rotating boson stars in 5-dimensional Einstein-Maxwell(-Chern-Simons) theory
We study charged and rotating boson stars in 5-dimensional Einstein-Maxwell(-Chern-Simons) theory assuming the two angular momenta associated to the two orthogonal planes of rotation to be equal. Next to the angular momenta, the boson stars carry electric charge and magnetic moment. Interestingly, we find new branches of Einstein-Maxwell-Chern-Simons solutions for which the spatial part of the gauge potential possesses nodes. Consequently, the magnetic moment and the gyromagnetic ratio have opposite sign as compared to the solutions on the main branch. For sufficiently large energy density we find that the solutions possess ergoregions.
Yves Brihaye, Betti Hartmann
2023-04-16T09:39:20Z
http://arxiv.org/abs/2304.07736v1
# Charged and rotating boson stars in 5-dimensional Einstein-Maxwell(-Chern-Simons) theory ###### Abstract We study charged and rotating boson stars in 5-dimensional Einstein-Maxwell(-Chern-Simons) theory assuming the two angular momenta associated to the two orthogonal planes of rotation to be equal. Next to the angular momenta, the boson stars carry electric charge and magnetic moment. Interestingly, we find new branches of Einstein-Maxwell-Chern-Simons solutions for which the spatial part of the gauge potential possesses nodes. Consequently, the magnetic moment and the gyromagnetic ratio have opposite sign as compared to the solutions on the main branch. For sufficiently large energy density we find that the solutions possess ergoregions. ## 1 Introduction With General Relativity now the accepted and experimentally well confirmed paradigm to describe the gravitational interaction for a wide range of masses and sizes of objects, it remains to be understood how strong gravity acts on scales where quantum effects play an important role. While a consistent theory of Quantum Gravity that would also be able to explain a number of puzzles such as that of dark energy has not been formulated to this day, there are possibilities to test strong gravity in settings that could be connected to Quantum Theory. One such possibility is the boson star [1, 2, 3, 4, 5, 6], which is made of a scalar field that is essentially quantum in nature as its collapse is prevented by Heisenberg's uncertainty relation. One could think of such a star as a "macroscopic Bose-Einstein condensate" that is self-gravitating. These solutions exist due to a global U(1) symmetry of the model that leads to a conserved Noether charge that can be interpreted as the number of scalar bosonic particles making up the star. These solitonic objects are stationary as they possess a harmonic time-dependence; in their simplest version, however, they have a static energy density that leads to a static space-time. Boson stars can also rotate (with resulting stationary space-time) [2, 3, 4, 5, 6] and, interestingly, the resulting angular momentum is given as an integer multiple of the Noether charge. Hence, the angular momentum is quantized - a feature that is very common in quantum physics - and is proportional to the total number of scalar bosonic particles that make up the star. It has been argued in [7] that boson stars with large angular momentum possess an ergoregion which would eventually make them unstable. Gauging the U(1) symmetry leads to charged boson stars [8, 9, 10]. The non-rotating boson stars possess electric charge proportional to the Noether charge with the proportionality constant equal to the gauge coupling. These solutions exist as long as the electromagnetic repulsion does not overcome the gravitational attraction [8], i.e. a critical value of the gauge coupling exists at fixed gravitational coupling. Adding rotation leads to solutions with electric charge and magnetic moment [12, 13]. It was shown in [13] that the relation between angular momentum and Noether charge present in the uncharged case also holds in the presence of a U(1) gauge field. Boson stars can also be constructed in higher space-time dimensions, which requires a complex scalar field doublet [11]. In 5 space-time dimensions, rotating stars can possess two angular momenta. Choosing these two angular momenta equal, the symmetry of the system can be enhanced and the space-time possesses hyper-spherical symmetry.
As for boson stars in 4 space-time dimensions, the sum of the angular momenta is proportional to the Noether charge. One aim of the present paper is to add a U(1) gauge field to the model discussed in [11]. As we will show below, these solutions possess electric charge and magnetic moment. Next to the standard Maxwell term, another possibility exists in odd space-time dimensions: a Chern-Simons gauge field interaction. While the former is a relativistic gauge field model, the Chern-Simons term is topological and does not depend on the metric. The latter is important when building models describing phenomena in non-relativistic physics such as e.g. condensed matter. Charged black holes without scalar fields in Einstein-Maxwell-Chern-Simons theory have been studied in [14, 15, 16, 17], while 5-dimensional charged, rotating black holes with scalar hair have been studied in [18]. Here, we construct the globally regular counterparts to these black holes and extend the results to include a Chern-Simons term. Our paper is organized as follows: in Section 2 we discuss the model, while Section 3 contains our numerical results. We conclude in Section 4. ## 2 The Model The action of the model that we will consider in the following reads : \[S=\int\left[\frac{\mathcal{R}}{16\pi G}-\left(D_{\mu}\Phi\right)^{\dagger}(D^{\mu}\Phi)-V(|\Phi|)-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\alpha\frac{1}{\sqrt{-g}}\epsilon^{\mu\nu\rho\sigma\theta}A_{\mu}F_{\nu\rho}F_{\sigma\theta}\right]\sqrt{-g}\ \mathrm{d}^{5}x. \tag{1}\] This is a U(1) gauge field model coupled minimally to a complex scalar doublet \(\Phi=(\phi_{1},\phi_{2})^{T}\) with potential \(V(|\Phi|)\) as well as Einstein gravity with \(\mathcal{R}\) the Ricci scalar and \(G\) Newton's constant. Note that the scalar sector possesses a global \(U(2)\) symmetry, any \(U(1)\) subgroup of which can be gauged. Here, the diagonal part of the \(U(1)\times U(1)\) maximal Abelian subgroup is gauged. The covariant derivative and U(1) field strength tensor then take the form \[D_{\mu}=(\partial_{\mu}-iqA_{\mu})\ \,\ \ F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} \tag{2}\] and \(q\) denotes the gauge coupling constant. We will assume \(q>0\) without loss of generality since the sign of \(q\) can be absorbed in the gauge fields and the Chern-Simons coupling \(\alpha\). The variation of the action (1) with respect to the metric leads to the Einstein equation: \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G(T^{s}_{\mu\nu}+T^{v}_{\mu\nu}) \tag{3}\] with the stress-energy tensor of the scalar field \[T^{s}_{\mu\nu}=(D_{\mu}\Phi)^{\dagger}(D_{\nu}\Phi)+(D_{\nu}\Phi)^{\dagger}(D_{\mu}\Phi)-\frac{1}{2}g_{\mu\nu}\bigg{[}(D_{\alpha}\Phi)^{\dagger}(D_{\beta}\Phi)+(D_{\beta}\Phi)^{\dagger}(D_{\alpha}\Phi)\bigg{]}g^{\alpha\beta}-g_{\mu\nu}V(|\Phi|), \tag{4}\] and the stress-energy tensor of the gauge field \[T^{v}_{\mu\nu}=-F_{\mu\alpha}F^{\alpha}_{\nu}+\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\ \, \tag{5}\] respectively. The variation with respect to the matter fields leads to the equations for the scalar field and gauge field, respectively : \[\frac{1}{\sqrt{-g}}D_{\mu}\left(\sqrt{-g}D^{\mu}\Phi\right)=\frac{\partial V}{\partial|\Phi|^{2}}\Phi\ \,\ \ \frac{1}{\sqrt{-g}}\partial_{\mu}\sqrt{-g}F^{\mu\nu}=J^{\nu}+3\alpha\epsilon^{\nu\rho\sigma\theta\alpha}F_{\rho\sigma}F_{\theta\alpha} \tag{6}\] with the 5-current given by \[J^{\nu}=iq((D^{\nu}\Phi)^{\dagger}\Phi-\Phi^{\dagger}(D^{\nu}\Phi)).
\tag{7}\] Note that the action (1) is invariant under a local U(1) transformation up to a divergence, i.e. the equations of motion (6) are gauge invariant. ### The Ansatz For vanishing gauge field 1-form \(A_{\mu}\mathrm{d}x^{\mu}=0\) the model with action (1) was studied first in [11]. We will extend these results here to include electric charge and magnetic moment as well as study the influence of the Chern-Simons term. As mentioned above, we assume the solutions to possess bi-azimuthal symmetry, implying the existence of three commuting Killing vectors, \(\xi=\partial_{t}\), \(\eta_{1}=\partial_{\varphi_{1}}\), and \(\eta_{2}=\partial_{\varphi_{2}}\). A suitable metric Ansatz then reads : \[\mathrm{d}s^{2} =-b(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+R(r)\mathrm{d} \theta^{2}+h(r)\sin^{2}\theta\left(\mathrm{d}\varphi_{1}-W(r)\mathrm{d}t \right)^{2}+h(r)\cos^{2}\theta\left(\mathrm{d}\varphi_{2}-W(r)\mathrm{d}t \right)^{2} \tag{8}\] \[\quad+(R(r)-h(r))\sin^{2}\theta\cos^{2}\theta(\mathrm{d}\varphi_ {1}-\mathrm{d}\varphi_{2})^{2}\] where \(\theta\in[0,\pi/2]\), \((\varphi_{1},\varphi_{2})\in[0,2\pi]\), and \(r\) and \(t\) denote the radial and time coordinate, respectively. For such solutions the isometry group is enhanced from \(\mathbb{R}\times U(1)^{2}\) to \(\mathbb{R}\times U(2)\). This is nothing else but stating that the two angular momenta \(J_{1}\), \(J_{2}\) associated to rotations by \(\varphi_{i}\), \(i=1,2\) are equal to each other \(J_{1}=J_{2}\equiv J/2\), where \(J\) is the total angular momentum. The symmetry enhancement mentioned above in particular allows to factorize the angular dependence and thus leads to ordinary differential equations. The Ansatz for the scalar field then reads [11] : \[\Phi=\phi(r)e^{i\omega t}\left(\begin{array}{c}\sin\theta e^{i\varphi_{1}}\\ \cos\theta e^{i\varphi_{2}}\end{array}\right), \tag{9}\] where the frequency \(\omega\) parametrises the harmonic time-dependence. For the scalar field potential we restrict our study to the simplest case of a massive, non self-interacting scalar field, i.e. we set \[V(|\Phi|)=\mu^{2}\Phi^{\dagger}\Phi=\mu^{2}\phi(r)^{2} \tag{10}\] where \(\mu\) corresponds to the scalar field mass. Finally, the Ansatz for the electromagnetic potential is chosen to be : \[A_{\mu}{\rm d}x^{\mu}=V(r){\rm d}t+A(r)(\sin^{2}(\theta){\rm d}\varphi_{1}+\cos ^{2}(\theta){\rm d}\varphi_{2}) \tag{11}\] which turns out to be consistent with the symmetries of the metric and scalar fields. The non-vanishing components of the field strength tensor are then \[F_{rt}=\frac{{\rm d}V(r)}{{\rm d}r}\ \,\ \ F_{r\varphi_{1}}=\frac{{\rm d}A(r)}{{ \rm d}r}\sin^{2}\theta\ \,\ \ F_{r\varphi_{2}}=\frac{{\rm d}A(r)}{{\rm d}r}\cos^{2}\theta\ \,\ \ F_{\theta\varphi_{1}}=-F_{\theta\varphi_{2}}=A(r)\sin(2\theta)\, \tag{12}\] i.e. our solutions possess electric and magnetic fields. 
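As a quick cross-check of (12), the following SymPy sketch (not part of the original paper; the symbol names are ours) computes the field strength components \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) directly from the gauge potential ansatz (11):

```python
import sympy as sp

t, r, th, ph1, ph2 = sp.symbols('t r theta varphi_1 varphi_2')
V = sp.Function('V')(r)   # electric potential V(r)
A = sp.Function('A')(r)   # magnetic potential A(r)
Z = sp.Integer(0)

coords = [t, r, th, ph1, ph2]
# gauge potential ansatz (11): A_mu dx^mu = V dt + A (sin^2(theta) dphi_1 + cos^2(theta) dphi_2)
A_mu = [V, Z, Z, A*sp.sin(th)**2, A*sp.cos(th)**2]

# field strength F_{mu nu} = d_mu A_nu - d_nu A_mu
F = sp.zeros(5, 5)
for m in range(5):
    for n in range(5):
        F[m, n] = sp.simplify(sp.diff(A_mu[n], coords[m]) - sp.diff(A_mu[m], coords[n]))

print('F_rt        =', F[1, 0])               # V'(r)
print('F_rphi1     =', F[1, 3])               # A'(r) sin^2(theta)
print('F_thetaphi1 =', sp.simplify(F[2, 3]))  # A(r) sin(2 theta)
assert sp.simplify(F[2, 3] + F[2, 4]) == 0    # F_{theta phi1} = -F_{theta phi2}, as in (12)
```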
Without fixing a metric gauge, a straightforward computation leads to the following reduced action for the system : \[{\cal S}_{\rm eff}=\int{\rm d}r{\rm d}t\ L_{\rm eff},\ \ \ {\rm with}\ \ \ \ L_{\rm eff}=L_{g}+16\pi G(L_{s}+L_{v}+\alpha L_{CS}), \tag{13}\] \[L_{g} = \sqrt{\frac{fh}{b}}\biggl{(}b^{\prime}R^{\prime}+\frac{R}{2h}b^{\prime}h^{\prime}+\frac{b}{2R}R^{\prime 2}+\frac{b}{h}R^{\prime}h^{\prime}+\frac{1}{2}RhW^{\prime 2}+\frac{2b}{f}\left(4-\frac{h}{R}\right)\biggr{)}, \tag{14}\] \[L_{s} = R\sqrt{\frac{bh}{f}}\left[f\phi^{\prime 2}+\left(\frac{2}{R}+\frac{(1-qA)^{2}}{h}-\frac{(\omega-W+q(V+WA))^{2}}{b}+\mu^{2}\right)\phi^{2}\right]\, \tag{15}\] \[L_{v} = R\sqrt{\frac{bh}{f}}\left(\frac{2A^{2}}{R^{2}}+\frac{f}{2h}(A^{\prime})^{2}-\frac{f}{2b}(V^{\prime}+WA^{\prime})^{2}\right)\, \tag{16}\] \[L_{CS} = 16A(A^{\prime}V-AV^{\prime})\, \tag{17}\] the effective gravity (\(g\)), scalar field (\(s\)), gauge field (\(v\)) and Chern-Simons (CS) Lagrangian density, respectively. The prime now and in the following denotes the derivative with respect to \(r\). The equations of motion can then be consistently obtained from this reduced action by varying with respect to \(h\), \(b\), \(f\), \(R\), \(W\), \(\phi\), \(V\) and \(A\). Note that the effective CS Lagrangian density does not depend on the metric functions and hence will not source the space-time curvature. The metric gauge freedom can be fixed afterwards, leading to a system of seven independent equations plus a constraint which is a consequence of the other equations. For the construction of the solutions, we have fixed the metric gauge by taking \[R(r)=r^{2} \tag{18}\] consistently with the standard analytic form of the Myers-Perry solution [19]. Appropriate combinations of the equations can be used such that the equation for \(f(r)\) is first order while the equations of the six other functions are second order. We hence need a total of thirteen conditions at \(r=0\) and/or at \(r=\infty\) to specify a boundary value problem. ### Asymptotic behaviour and boundary conditions Boson stars are globally regular solutions. At \(r=0\) we impose the following boundary conditions : \[f(0)=1\ \,\ b^{\prime}(0)=0\ \,\ h(0)=0\ \,\ W^{\prime}(0)=0\ \,\ V(0)=0\ \,\ A(0)=0\ \,\ A^{\prime}(0)=0\ \,\ \phi(0)=0. \tag{19}\] Note that the condition \(V(0)=0\) does not result from the requirement of regularity, but is a choice. This can be made without losing generality since the equations depend only on the combination \(qV+\omega\). Moreover, we want the solutions to be asymptotically flat, i.e. we require : \[b(r)=1+\frac{{\cal M}}{r^{2}}+\ldots,\ \ f(r)=1+\frac{{\cal M}}{r^{2}}+\ldots,\ \ h(r)=r^{2}+\frac{{\cal V}}{r^{2}}+\ldots,\ \ W(r)=\frac{{\cal J}}{r^{4}}+\ldots\] \[V(r)=V_{\infty}+\frac{q_{e}}{r^{2}}+\ldots,\ \ A(r)=\frac{q_{m}}{r^{2}}+\ldots,\ \ \phi(r)=c_{0}\frac{e^{-r\sqrt{\mu^{2}-(\omega-qV_{\infty})^{2}}}}{r^{3/2}}+\ldots, \tag{20}\] where \({\cal M}\), \({\cal V}\), \({\cal J}\), \(q_{e}\), \(q_{m}\), \(V_{\infty}\) and \(c_{0}\) are free parameters that can only be computed from the numerical solution. Note that the asymptotic behaviour of the scalar field tells us that it acquires an effective mass \(m_{\rm eff}\) with \[m_{\rm eff}^{2}\equiv\mu^{2}-(\omega-qV_{\infty})^{2}\ =\ (\mu-\omega+qV_{\infty})(\mu+\omega-qV_{\infty})\ . \tag{21}\] The parameter \(V_{\infty}\), i.e. the value of the electric potential \(V(r)\) at \(r\to\infty\), turns out to be negative in our numerical calculations.
Since \(V(0)=0\), the value of \(V_{\infty}\) corresponds to the potential difference between the origin and infinity. With the choice \(q\geq 0\), this tells us that the first factor on the right-hand side of (21), which we define as \[\Omega:=\mu-\omega+qV_{\infty} \tag{22}\] determines whether the boson star is an exponentially localized solution. Obviously, we need to require \(\Omega\geq 0\). For \((\omega-qV_{\infty})^{2}\geq\mu^{2}\) we are above the threshold of producing scalar particles of mass \(\mu\). ### Physical quantities Before we discuss the relevant physical quantities of the solutions and how they can be extracted from the numerical results we obtain, let us remark that although there are _a priori_ three (four) parameters to be varied in the Maxwell (respectively Chern-Simons) case, Newton's constant \(G\) and the mass \(\mu\) of the scalar field can be set to unity without loss of generality. This is achieved by a suitable rescaling of the matter fields and the coordinate \(r\). This means that we are left with the gauge coupling \(q\) in the Maxwell case and additionally with \(\alpha\) in the Maxwell-Chern-Simons case. The mass \(M\) and total angular momentum \(J=J_{1}+J_{2}\) of the solutions have been discussed in [11]. Hence, we just state the expressions here without explicitly deriving them. They read : \[M=-\frac{3\pi}{8G}{\cal M},\ \ J=\frac{\pi}{4G}{\cal J}, \tag{23}\] where \({\cal M}\) and \({\cal J}\) are given in (20). Since the model we are discussing here possesses a global U(1) symmetry, there exists an associated locally conserved Noether current. This is the current given in (7). The globally conserved Noether charge then is : \[Q=\int\sqrt{-g}\ J^{0}{\rm d}^{4}x=qN\ \ {\rm with}\ \ N=2\pi^{2}\int_{0}^{\infty}r^{3}\sqrt{\frac{h}{fb}}(\omega+W-q(V+AW))\phi^{2}{\rm d}r. \tag{24}\] \(N\) can then be interpreted as the total number of bosonic particles making up the boson star and \(Q\) as the total charge of \(N\) individual particles that each carry charge \(q\). Also note that there is a relation between the angular momentum \(J\) and \(N\) given by [11] \[|J|=N. \tag{25}\] We can also define the electric charge \(Q_{e}\) and the magnetic moment \(Q_{m}\), respectively, as follows \[Q_{e}=\frac{\pi}{G}q_{e}\ \ \,\ \ \ Q_{m}=\frac{\pi}{G}q_{m}\ \, \tag{26}\] where \(q_{e}\) and \(q_{m}\) are given in (20). Using equation (6) it can be shown that \(Q=Q_{e}\), as expected. In the numerical calculation the validity of this equality is a good cross-check. Finally, the gyromagnetic ratio \(\gamma\) of our solutions reads \[\gamma=\frac{2MQ_{m}}{Q_{e}J}=\frac{2Mq_{m}}{q_{e}J}. \tag{27}\] We will also need the Ricci scalar \({\cal R}\) in the following. This reads \[{\cal R}(r) = -f\left(\frac{b^{\prime\prime}}{b}+\frac{h^{\prime\prime}}{h}+\frac{2R^{\prime\prime}}{R}\right)+\frac{f}{2}\left(\frac{(b^{\prime})^{2}}{b^{2}}+\frac{(h^{\prime})^{2}}{h^{2}}+\frac{(R^{\prime})^{2}}{R^{2}}+\frac{h}{b}(W^{\prime})^{2}\right)-\frac{R^{\prime}}{Rbh}(fbh)^{\prime}-\frac{1}{2bh}(f^{\prime}bh^{\prime}+f^{\prime}b^{\prime}h+f^{\prime}b^{\prime})+\frac{2}{R^{2}}(4R-h)\ . \tag{28}\] We will see in the discussion of the numerical results that some configurations reach limiting solutions with \(b(0)\to 0\) (while \(b^{\prime\prime}(0)\) is finite) suggesting that the scalar curvature at the origin diverges for these solutions.
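Once the asymptotic coefficients in (20) have been fitted to a numerical solution, the global quantities above follow from elementary algebra. A minimal sketch (the coefficient values in the example call are hypothetical placeholders, not results of the paper; the default \(G\) corresponds to the rescaling \(8\pi G=1\), \(\mu=1\) used below):

```python
import numpy as np

def physical_quantities(calM, calJ, q_e, q_m, omega, q, V_inf,
                        mu=1.0, G=1.0/(8*np.pi)):
    """Mass and angular momentum (23), charges (26), gyromagnetic ratio (27),
    and the localization parameters (21)-(22), from the asymptotic data (20)."""
    M      = -3*np.pi/(8*G) * calM         # eq. (23)
    J      =    np.pi/(4*G) * calJ         # eq. (23)
    Q_e    =    np.pi/G * q_e              # eq. (26)
    Q_m    =    np.pi/G * q_m              # eq. (26)
    gamma  = 2*M*q_m / (q_e*J)             # eq. (27)
    Omega  = mu - omega + q*V_inf          # eq. (22); Omega >= 0 for a localized star
    m_eff2 = mu**2 - (omega - q*V_inf)**2  # eq. (21)
    return dict(M=M, J=J, Q_e=Q_e, Q_m=Q_m, gamma=gamma, Omega=Omega, m_eff2=m_eff2)

# illustrative placeholder coefficients only
print(physical_quantities(calM=-0.8, calJ=0.5, q_e=0.2, q_m=0.05,
                          omega=0.9, q=0.5, V_inf=-0.1))
```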
## 3 Numerical results Due to the non-linearity of the field equations, we have solved the equations numerically using the collocation solver COLSYS [21]. The obtained solutions typically have an accuracy of \(10^{-6}\). With appropriate rescalings of the fields and coordinates, we can set \(8\pi G\equiv 1\), \(\mu\equiv 1\), i.e. the only two parameters to vary in the following are \(q\) and \(\alpha\). ### Einstein-Maxwell (EM) boson stars Let us first discuss the solutions in the absence of the Chern-Simons interaction, i.e. for \(\alpha=0\). As a crosscheck of our numerics and to emphasize the changes that the presence of the gauge field brings to the model, let us briefly discuss the case \(q=0\) that has been studied in detail in [11]. We have constructed uncharged, rotating boson star solutions and studied their properties by varying the parameter \(\phi^{\prime}(0)\). For \(\phi^{\prime}(0)=0\) the scalar field is trivial \(\phi(r)\equiv 0\), \(\omega=1\) and the space-time is simply 5-dimensional Minkowski space-time. Note, however, that the limit \(\phi^{\prime}(0)\to 0\) is subtle : while the scalar function \(\phi(r)\) becomes trivial, the mass \(M\) and Noether charge \(N\) do not approach zero in this limit. In fact, a mass gap forms. This has been discussed in [11]. In Fig. 1 (left) we show the dependence of the mass \(M\) on \(\Omega\) for \(q=0\), \(q=0.25\) and \(q=0.5\), respectively. For all values of \(q\) we observe the typical spiraling behaviour, i.e. the existence of a main branch of solutions which exists between \(\Omega=0\) and a maximal value of \(\Omega=\Omega_{\rm max,1}\). From \(\Omega_{\rm max,1}\) a second branch of solutions extends backwards in \(\Omega\) down to \(\Omega_{\rm min,2}>0\). From \(\Omega_{\rm min,2}\) a third branch exists up to \(\Omega_{\rm max,3}<\Omega_{\rm max,1}\) and bends backwards into a fourth branch. We find that \(\Omega_{\rm max,1}\), \(\Omega_{\rm min,2}\) and \(\Omega_{\rm max,3}\) all decrease with increasing \(q\), i.e. the interval in \(\Omega\) for which charged, rotating boson stars exist in 5 dimensions decreases with increasing \(q\). Moreover, we find that the mass gap described above for uncharged solutions also exists for charged solutions and increases with increasing \(q\). In fact, we observe that the mass range for which charged boson stars exist changes only slightly when increasing \(q\) from zero to \(q=0.25\), while the increase to \(q=0.5\) increases the value of the mass gap considerably. This is related to the increased electromagnetic repulsion. The charge \(Q\) (and with it the angular momentum \(J\)) have a very similar qualitative dependence, which is why we do not show them here. Along the branches, the parameter \(\phi^{\prime}(0)\) is increased. We show the dependence of \(\Omega\) on \(\phi^{\prime}(0)\) for \(q=0\), \(q=0.25\) and \(q=0.5\) in Fig. 1 (right). For \(\phi^{\prime}(0)=0\) we have \(\Omega=0\) independent of the choice of \(q\). Increasing \(\phi^{\prime}(0)\), the value of \(\Omega\) reaches a maximal value, then decreases to a minimal value and for sufficiently large \(\phi^{\prime}(0)\) tends to a constant value of \(\Omega\). The solutions cease to exist when \(\phi^{\prime}(0)\) is too large, where the maximal possible value of \(\phi^{\prime}(0)\) decreases with increasing \(q\). This is related to the formation of a singularity in the Ricci scalar at the origin.
The Ricci scalar at \(r=0\) is given by \({\cal R}(0)=-4\phi^{\prime\prime}(0)/b(0)-6\,f^{\prime\prime\prime}(0)\) (compare (28)). In Fig. 2 (left) we show the value of \(b(0)\) in dependence of \(\Omega\) for \(q=0\), \(q=0.25\) and \(q=0.5\). For \(\Omega=0\) we find \(b(0)=1\) independent of \(q\) as this is the limit of vanishing scalar field \(\phi(r)\equiv 0\). Along the branches, i.e. increasing \(\phi^{\prime}(0)\), the value of \(b(0)\) decreases until it reaches zero, i.e. a solution with diverging Ricci scalar at \(r=0\) is reached. We also observe that \(W(0)\) decreases from zero when moving along the branches, see Fig. 2 (right). In particular, we find that \(g_{tt}=-b+hW^{2}\) can become zero and even positive indicating that an ergoregion exists for solutions with sufficiently large \(\phi^{\prime}(0)\). As has been discussed in the context of boson stars before [7], this would make the solutions unstable. We find that the larger \(q\), the smaller is the value of \(\phi^{\prime}(0)\) at which an ergoregion appears, e.g. for \(q=0.25\), we find solutions with ergoregions for \(\phi^{\prime}(0)>1.2\), while these ergoregions exist for \(\phi^{\prime}(0)>0.9\) when choosing \(q=0.5\). Some data is shown in Table 1, where we give the two values of \(r\) at which \(g_{tt}\) becomes zero. The ergoregion is a hyper-spherical shell of inner radius \(r_{1}\) and outer radius \(r_{2}\). Within this shell, \(g_{tt}\) is positive and attains its maximal value at \(r^{(\rm max)}\). Values of the maximal value of \(g_{tt}\) and the value of \(r^{(\rm max)}\) are also given in Table 1. In Fig. 3 we show the gauge field energy density \(\epsilon_{v}\) and scalar field energy density \(\epsilon_{s}\) given by \[\epsilon_{v}\equiv(T_{0}^{0})_{v}=\frac{f}{2bh}(A^{\prime})^{2}(b-hW^{2})+\frac{2}{r^{4}}A^{2}+\frac{f}{2b}(V^{\prime})^{2} \tag{29}\] and \[\epsilon_{s}\equiv(T_{0}^{0})_{s}=f(\phi^{\prime})^{2}+\mu^{2}\phi^{2}+\phi^{2}\left(\frac{2}{r^{2}}+\frac{(1-qA)^{2}}{h}\right)+\frac{\phi^{2}}{b}\left((qV-\omega)^{2}-W^{2}(qA-1)^{2}\right)\, \tag{30}\] respectively. The sum \(\epsilon_{v}+\epsilon_{s}\) is equivalent to the total energy density of the solution. These profiles are for \(q=0.5\) and \(\phi^{\prime}(0)=0.8\) (left) and \(\phi^{\prime}(0)=1.6\) (right), respectively. We also show the metric tensor component \(-g_{tt}\). We observe that the scalar field energy density \(\epsilon_{s}\) dominates the energy density as it is a factor of 50 larger than the contribution \(\epsilon_{v}\) from the gauge field. The gauge field energy density \(\epsilon_{v}\) is maximal at the center of the boson star, while \(\epsilon_{s}\) has its maximal value at \(r=r_{s,\rm max}>0\). Interestingly, the gauge field energy density as well as \(-g_{tt}\) have a local minimum around \(r_{s,\rm max}\). For sufficiently large scalar field energy density we find that \(-g_{tt}\) becomes negative, i.e. an ergoregion appears. Increasing \(\phi^{\prime}(0)\) from 0.8 to 1.6 leads to an increase of \(\Omega\), i.e. the scalar field falls off quicker. Correspondingly, \(r_{s,\rm max}\) decreases and the maximum of the energy density increases with increasing \(\phi^{\prime}(0)\). We have also studied the gyromagnetic ratio for the solutions. Our results for \(q=0.25\) and \(q=0.5\) are shown in Fig. 4 (left). On the main branch for \(\Omega\to 0\) the gyromagnetic ratio tends to the "classical" value \(\gamma=1\).
Increasing \(\Omega\) from zero increases \(\gamma\) up to a maximal value on the second branch of solutions. The larger \(q\), the larger this maximal value. In order to better understand the dependence of the solutions on \(\phi^{\prime}(0)\) and \(q\), we have also studied the case of fixed \(\phi^{\prime}(0)\) and varying \(q\). Our numerical experiments indicate that localized solutions do not exist for \(q>q_{\rm max}\) with \(q_{\rm max}\approx 0.5775\) more or less independent of the choice of \(\phi^{\prime}(0)>0\). This is shown in Fig. 5 (left) where we give \(\Omega\) as a function of \(q\) for three different values of \(\phi^{\prime}(0)\). Obviously, when \(q\to q_{\rm max}\) the value of \(\Omega\to 0\), i.e. the boson star solution is no longer (exponentially) localized. Accordingly, all physical quantities subject to a Gauss law (mass \(M\), electric charge \(Q_{e}\), magnetic moment \(Q_{m}\), angular momentum \(J\)) will diverge in this limit. However, it is interesting to note that the gyromagnetic ratio \(\gamma\) behaves differently in the limit \(q_{\rm max}\approx 0.5775\) when choosing \(\phi^{\prime}(0)\) small as compared to choosing \(\phi^{\prime}(0)\) large. This is shown in Fig. 5 (right) where we give \(\gamma\) as a function of \(q\) for different values of \(\phi^{\prime}(0)\). We observe that for small values of \(\phi^{\prime}(0)\) (here \(\phi^{\prime}(0)=0.035\)) the gyromagnetic ratio increases strongly for \(q\to q_{\rm max}\), while for large values of \(\phi^{\prime}(0)\) (here \(\phi^{\prime}(0)=1.6\) and \(\phi^{\prime}(0)=3.7\), respectively) \(\gamma\) decreases strongly in this limit. Note that this limit for \(q\) was also observed for the corresponding black hole solutions [18]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\phi^{\prime}(0)\) & \(q\) & \(r_{1}\) & \(r_{2}\) & \(r^{\rm(max)}\) & \(g_{tt}^{\rm(max)}\) \\ \hline \hline 1.6 & 0.25 & 0.29 & 0.97 & 0.75 & 0.038 \\ 1.2 & 0.25 & 0.64 & 0.91 & 0.84 & 0.007 \\ \hline 1.6 & 0.5 & 0.22 & 1.14 & 0.79 & 0.033 \\ 1.0 & 0.5 & 0.66 & 1.18 & 0.90 & 0.008 \\ 0.95 & 0.5 & 0.76 & 1.14 & 0.96 & 0.004 \\ \hline \end{tabular} \end{table} Table 1: The two values of \(r\) for which \(g_{tt}\) becomes zero, i.e. the inner radius \(r_{1}\) and outer radius \(r_{2}\), respectively, of the ergoregion as well as the value of \(r=r^{\rm(max)}\) at which \(g_{tt}\) attains its maximal value \(g_{tt}^{\rm(max)}\) are given for EM boson stars and some values of \(\phi^{\prime}(0)\) and \(q\). Figure 1: The mass \(M\) in dependence of \(\Omega\) (left) and \(\Omega\) in dependence of \(\phi^{\prime}(0)\) (right) for EM boson stars with \(q=0.25\) (purple) and \(q=0.5\) (green), respectively. For comparison we also show the uncharged case, \(q=0\). Figure 2: The value of the metric function \(b(r)\) at the origin, \(b(0)\), in dependence of \(\Omega\) (left) and the value of the metric function \(W(r)\) at the origin, \(W(0)\), in dependence of \(\Omega\) (right) for EM boson stars with \(q=0.25\) (purple) and \(q=0.5\) (green), respectively. For comparison we also show the uncharged case, \(q=0\). Figure 4: _Left:_ The gyromagnetic ratio \(\gamma\) in dependence of \(\Omega\) for EM boson stars with \(q=0.25\) (purple) and \(q=0.5\) (green), respectively. _Right:_ The gyromagnetic ratio \(\gamma\) in dependence of \(\Omega\) for EMCS boson stars with \(\alpha=1\) and \(q=0.25\) (purple) and \(q=0.5\) (black), respectively. In the latter case, we show branch A (solid) and branch B (dashed), respectively.
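The ergoregion parameters listed in Table 1 can be extracted from any radial profile by locating the interval on which \(g_{tt}=-b+hW^{2}\) is positive. A minimal sketch, with purely illustrative toy profiles standing in for an actual numerical solution:

```python
import numpy as np

def ergoregion(r, b, h, W):
    """Inner/outer radius of the ergoregion (g_tt > 0) and the location and value
    of the maximum of g_tt inside it, as listed in Table 1."""
    gtt = -b + h * W**2
    inside = np.where(gtt > 0)[0]
    if inside.size == 0:
        return None                               # no ergoregion
    k = inside[np.argmax(gtt[inside])]
    return dict(r1=float(r[inside[0]]), r2=float(r[inside[-1]]),
                r_max=float(r[k]), gtt_max=float(gtt[k]))

# toy profiles (illustrative only, not a solution of the field equations)
r = np.linspace(0.01, 5.0, 2000)
b = 0.05 + np.tanh(r)**4
h = r**2
W = 0.9 * np.exp(-r/2)
print(ergoregion(r, b, h, W))
```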
Figure 5: _Left:_ The value of \(\Omega\) in dependence of \(q\) for EM boson stars with \(\phi^{\prime}(0)=0.035\) (purple), \(\phi^{\prime}(0)=1.6\) (green) and \(\phi^{\prime}(0)=3.7\) (blue), respectively. _Right:_ The gyromagnetic ratio \(\gamma\) in dependence of \(q\) for the same solutions. Figure 3: The profiles of the gauge field energy density \(\epsilon_{v}\), the scalar field energy density \(\epsilon_{s}\) and the metric function \(-g_{tt}\) for EM boson stars with \(q=0.5\), \(\alpha=0\) and \(\phi^{\prime}(0)=0.8\) (left) and \(\phi^{\prime}(0)=1.6\) (right). ### Einstein-Maxwell-Chern-Simons (EMCS) boson stars In the following, we discuss the influence of the CS term on the properties of the charged boson stars. As expected, the EM boson stars get progressively deformed when choosing \(\alpha\neq 0\). As an example we show the dependence of the mass \(M\) and the angular momentum \(J\) (left) as well as the electric charge \(Q_{e}=Q\), the magnetic moment \(Q_{m}\), and the value of the electric potential at infinity \(V_{\infty}\) (right) on \(\alpha\) for \(\phi^{\prime}(0)=0.35\) and \(q=0.5\) in Fig. 6. As these figures suggest, two branches of solutions exist which we will refer to as 'branch A' and 'branch B', respectively, in the following. Branch A is connected to the EM limit \(\alpha\to 0\) and exists for both positive and negative values of \(\alpha\), while branch B appears only for sufficiently large and positive values of \(\alpha\), i.e. for \(\alpha>\alpha_{\rm cr,B}\), where \(\alpha_{\rm cr,B}\) depends on \(q\) and \(\phi^{\prime}(0)\). For \(q=0.5\) and \(\phi^{\prime}(0)=0.35\) we find that \(\alpha_{\rm cr,B}\approx 0.405\). Solutions on both branches have the feature that \(M>J\) and both \(M\) and \(J\) decrease with increasing \(\alpha\) (except close to \(\alpha_{\rm cr,B}\) on branch B where our numerical results indicate an increase on a small interval of \(\alpha\)). The results suggest that mass and angular momentum of the solutions on branch A change little when increasing \(\alpha\) from negative values to zero. Even for small positive values of \(\alpha\), this seems to be the case. For \(\alpha\) slightly smaller, but close to \(\alpha_{\rm cr,B}\) we find that \(M\) and \(J\) drop sharply and then again on a large interval of (positive) \(\alpha\) remain nearly constant. This suggests that the appearance of branch B seems to be connected to a drop in energy and angular momentum of the boson stars on branch A. All our numerical results indicate that these two branches remain separated and do not merge at sufficiently large \(\alpha\). We also observe that the electric charge \(Q_{e}=Q\) decreases with increasing \(\alpha\) for both branches (see Fig. 6 (right)) with \(Q_{e}=Q\) smaller on branch A than on branch B. \(|V_{\infty}|\) decreases with increasing \(\alpha\) and is - again - larger on branch B. Finally, the magnetic moment \(Q_{m}\) is close to zero for negative \(\alpha\) (branch A) and increases to positive values when increasing \(\alpha\) from zero. On branch B, the magnetic moment is negative and decreases in absolute value when increasing \(\alpha\) and approaches zero for large positive values of \(\alpha\). The solutions on branch B hence have larger electric charge and larger absolute value of the magnetic moment with the latter being negative on branch B. In order to understand the difference between the two branches, we have plotted the profiles of typical boson star solutions. This is shown in Fig.
7 (left) for \(q=0.5\), \(\alpha=0.5\) and \(\phi^{\prime}(0)=0.35\). Clearly, the magnetic potential \(A(r)\) possesses a node for the solutions on branch B. Solutions with nodes in the spatial part of the gauge field have been found before for black holes in EMCS theory (without scalar fields) [15]. These have been interpreted as radial excitations and the fact that solutions on branch B have larger mass than those on branch A suggests that this interpretation is also suitable here. Interestingly, we observe that neither the electric part of the gauge potential (given in terms of the function \(V(r)\)) nor the scalar field function \(\phi(r)\) are strongly changed when radially exciting the magnetic part of the gauge potential. Fixing \(q\) and \(\alpha\) and increasing \(\phi^{\prime}(0)\) we find that the value of \(r=r_{0}\) at which \(A(r)\) becomes zero increases. This is shown in Fig. 7 (right) for \(\phi^{\prime}(0)=1.3\). In this case we find that for \(0\leq r\lesssim r_{0}\) the solutions on the two branches barely differ from each other. This includes the extent and existence of the ergoregion which is slightly more extended for solutions on branch B, see also Table 2 for some more data. This data also suggests that at fixed \(\phi^{\prime}(0)\) and fixed \(q\) the ergoregion has smaller radial thickness when the CS term is present. The solutions on branch B possess negative magnetic moment and hence negative gyromagnetic ratio. Our results for \(\alpha=1\) and \(q=0.25\) as well as \(\alpha=1\), \(q=0.5\) and branch A and branch B are shown in Fig. 4 (right). In comparison to the EM case, the gyromagnetic ratio can become negative (branch B for \(q=0.5\)) and significantly larger in absolute value. The maximal possible value of \(\gamma\) increases with \(q\) and is one order of magnitude larger as compared to the EM boson stars. When plotting the mass \(M\) as a function of \(\Omega\), see Fig. 8 (left), we find that the maximal possible value of \(\Omega\) increases with increasing \(\alpha\) and that on the second branch the solutions exist down to \(\Omega\approx 0\) from where a third branch of solutions emerges that shows a sharp increase in mass \(M\) on a very small interval of \(\Omega\). Reaching a maximal mass, a fourth branch emerges on which the mass decreases again. We find that the larger \(\alpha\), the sharper the increase of the mass on the third branch. Comparing the solutions on branch A and branch B for \(\alpha=1\) we find that the qualitative dependence of the mass on \(\Omega\) is quite different. In particular, we notice that the qualitative dependence of the solutions on branch B for \(\alpha=1\) seems similar to that of the solutions for \(\alpha=0.5\). Finally, we have checked the dependence of the magnetic moment \(|Q_{m}|\) on the angular momentum \(J\) of the solutions. This is shown in Fig. 8 (right). Interestingly, we find a nearly linear relation between \(\log|Q_{m}|\) and \(\log J\) on the first two branches of solutions, where the first branch has a larger slope than the second. The third and fourth branch show more complicated behaviour. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(\phi^{\prime}(0)\) & \(\alpha\) & \(r_{1}\) & \(r_{2}\) & branch \\ \hline \hline 1.6 & 0.5 & 0.33 & 0.87 & A \\ 1.6 & 0.5 & 0.31 & 0.93 & B \\ \hline 1.3 & 1.0 & 0.59 & 0.80 & A \\ 1.2 & 1.0 & 0.60 & 0.87 & B \\ \hline \end{tabular} \end{table} Table 2: The two values of \(r\) for which \(g_{tt}\) becomes zero, i.e.
the inner radius \(r_{1}\) and outer radius \(r_{2}\), respectively, of the ergoregion for EMCS boson stars with \(q=0.5\) and some exemplary values of \(\phi^{\prime}(0)\) and \(\alpha\). This relation has been found before from observations of planets and stars [20]. While boson stars are assumed to be very compact objects and hence rather comparable in density to neutron stars and white dwarfs, our results suggest that (at least in 5 dimensions) EMCS boson stars have the property that \(Q_{m}\) is proportional to a positive power of \(J\), i.e. that \(Q_{m}\) would increase when \(J\) increases. This seems to be different for neutron stars and white dwarfs which have \(Q_{m}\) proportional to a negative power of \(J\) [20] and hence the magnetic moment would decrease with increased angular momentum. Figure 6: The mass \(M\) and angular momentum \(J\) (left) and the electric charge \(Q_{e}=Q\), the magnetic moment \(Q_{m}\) and the value of the electric potential at infinity \(V_{\infty}\) (right) of EMCS boson stars in dependence of \(\alpha\) for \(\phi^{\prime}(0)=0.35\) and \(q=0.5\). We show branch A with no node (black) and branch B with one node (violet) of the gauge field function \(A(r)\), see also Fig. 7. Figure 7: _Left:_ We show the profiles of the gauge potential functions \(A(r)/r\) (solid) and \(V(r)\) (dashed) as well as of the scalar field function \(\phi(r)\) (dotted) for EMCS boson stars on branch A (black) and branch B (violet) for \(q=0.5\), \(\alpha=0.5\) and \(\phi^{\prime}(0)=0.35\). _Right:_ We show the metric tensor component \(-g_{tt}\) (solid), the gauge potential function \(A(r)/r\) (dashed) and the scalar field function \(\phi(r)\) (dotted-dashed) for EMCS boson stars on branch A (black) and branch B (violet) for \(q=0.5\), \(\alpha=0.5\) and \(\phi^{\prime}(0)=1.3\). ## 4 Conclusions In this paper, we have discussed the construction of charged and rotating boson stars in 5 space-time dimensions. The gauge field dynamics is either of Maxwell type or of Maxwell-Chern-Simons type. These solutions possess electric charge \(Q\) equal to \(q\) times the Noether charge \(N\), where \(q\) is the gauge coupling, and the sum of the two angular momenta \(J\) equal to the Noether charge, i.e. \(Q/q=N=J\). The gyromagnetic ratio of the solutions is on the order of unity for boson stars in standard Maxwell gauge field theory, while it can become one order of magnitude larger when the Chern-Simons interaction is added. Moreover, we observe that the presence of the Chern-Simons term leads to the existence of solutions with a radially excited magnetic gauge field component. This leads to the reversal of the sign of the magnetic moment and the gyromagnetic ratio, i.e. we find solutions with positive and negative gyromagnetic ratio in the presence of the Chern-Simons term. For sufficiently compact boson stars we find that the space-time possesses an ergoregion which suggests that these solutions eventually become unstable. The presence of the CS term decreases the radial extension of the ergoregion at fixed \(q\) and \(\phi^{\prime}(0)\). When considering the relation between the magnetic moment and the angular momentum we find a positive correlation for the solutions on the first and second branch of solutions, i.e. the absolute value of the magnetic moment increases with angular momentum. This is different for neutron stars and white dwarfs for which a negative correlation seems to exist, see e.g. [20]. Positive correlations are typical for planets and ordinary stars.
It would be interesting to investigate this question further in other space-time dimensions.
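The near-linear relation between \(\log|Q_{m}|\) and \(\log J\) discussed above can be quantified by a least-squares fit of the exponent \(k\) in \(|Q_{m}|\sim J^{k}\). A small sketch on synthetic data (both the data and the exponent used to generate it are illustrative, not results of the paper):

```python
import numpy as np

def power_law_exponent(J, Qm):
    """Least-squares slope of log|Q_m| vs log J, i.e. the exponent k in |Q_m| ~ J^k."""
    k, logc = np.polyfit(np.log(np.abs(J)), np.log(np.abs(Qm)), 1)
    return k

# synthetic data obeying |Q_m| ~ J^1.5 with a little multiplicative noise
rng = np.random.default_rng(1)
J = np.linspace(1.0, 20.0, 40)
Qm = 0.03 * J**1.5 * np.exp(0.02 * rng.standard_normal(J.size))
print(power_law_exponent(J, Qm))   # close to 1.5
```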
2307.01617
Application of the Fermi function in money exchange
In a money exchange process involving a seller and a buyer, we develop a straightforward model encompassing conservative and non-conservative systems, with or without debt. Our model integrates the Fermi function to capture the behavior of buyers and sellers. Under certain circumstances, we identify an equation that marks the phase transition between a stable equal wealth state across all connected social graphs and an unstable equal wealth state in some connected social graph.
Hsin-Lun Li
2023-07-04T10:11:50Z
http://arxiv.org/abs/2307.01617v2
# Application of the Fermi function in Money Exchange ###### Abstract. Money exchange involves a buyer and a seller. A money exchange model is designed as follows: at each time step, a pair of socially connected agents are selected to transact. A Fermi function defines the probability of one of the selected pair being a buyer. Their transaction amount is an integral random variable with finite states. We argue that achieving equal wealth is unattainable. Additionally, we demonstrate conditions that result in the money distribution forming two or more clusters, and we explore circumstances in which a connected social graph with more links leads to a reduced number of states in the money distribution. Key words and phrases: Money exchange, social network, equal wealth, Fermi function 2020 Mathematics Subject Classification: 91C20, 91D25, 91D30, 94C15, 60G48 ## 1. Introduction Money exchange consists of two roles: buyer and seller. We propose a money exchange model that involves a finite number of agents whose initial money is in \((\ell,h)\) for \(\ell\) and \(h\) integers. Say \([n]=\{1,2,\ldots,n\}\) is the collection of all agents. At each time step, a pair of socially connected agents are selected to transact. * A transaction occurs if and only if their money falls in \((\ell,h)\). The social relationships among agents are characterized by a social graph \(G\), where each vertex represents an agent and each edge indicates a social connection between the corresponding agents. Let \(X_{ij}(t)\), \(i,j\in[n],t\geq 0\) be independent and identically distributed random variables with a finite state space \(\{1,2,\ldots,d\}\subset\mathbb{Z}^{+}\). If agents \(i\) and \(j\) engage in a transaction at time \(t\), agent \(i\) has a probability \(Q_{ij}(t)\) of paying agent \(j\) a certain amount \(X_{ij}(t)\) of money for goods or services. \(Q_{ij}(t)\) is defined by a Fermi function, \[Q_{ij}(t)=\frac{1}{1+\exp\left[-\eta_{t}(m_{i}(t)-m_{j}(t))\right]},\] where \(m_{i}(t)\) denotes the money of agent \(i\) at time \(t\) and \(\eta_{t}\) the inverse temperature at time \(t\). Observe that \((\ell-d,\ell]\) and \([h,h+d)\) are absorbing states. The parameter \(\eta_{t}\) determines the degree of randomness in the dynamics. A wealthier agent is more likely to act as a buyer when \(\eta_{t}>0\), and as a seller when \(\eta_{t}<0\). However, if \(\eta_{t}=0\), a richer agent has an equal chance of being a buyer or a seller compared to a poorer agent. * We set \(\eta_{t}\leq 0\) at all times so that a wealthier agent is more likely or has an equal chance to be a seller. We can set \(h-\ell\) to be significantly larger than \(d\) and initialize the money distribution at the center of \((\ell,h)\). This ensures that agents are less likely to end up in the absorbing states after a transaction. Agents with a money amount greater than or equal to \(h\) can be considered quite rich, while agents with a money amount less than or equal to \(\ell\) can be considered quite poor. By shifting, the initial money distribution is at state zero in \((-(h-\ell)/2,(h-\ell)/2)\). Let \(X_{ij}(t_{1})\), \(m_{k}(t_{2})\) and \(\eta_{t_{3}}\) be independent for all \(i,j,k\in[n]\), and \(t_{1},t_{2},t_{3}\geq 0.\) This implies that an agent spends money without considering the amount of money remaining. Let \(|H|\) denote the order of the graph \(H\), which represents the number of vertices in \(H\).
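The following minimal sketch (not part of the paper; the graph, parameter values and helper names are illustrative choices) simulates the exchange dynamics defined above, running until no socially connected pair can transact any more:

```python
import math
import random

def fermi(mi, mj, eta):
    """Q_ij(t): probability that agent i (with money mi) is the buyer, i.e. pays agent j."""
    return 1.0 / (1.0 + math.exp(-eta * (mi - mj)))

def simulate(edges, m0, ell, h, d, eta=-0.5, max_steps=10**6, seed=0):
    """One run of the money-exchange model on the social graph given by `edges`.
    A selected pair transacts only if both hold money in (ell, h); the payer is chosen
    via the Fermi function and pays a uniform amount in {1, ..., d}."""
    rng = random.Random(seed)
    m = list(m0)
    for _ in range(max_steps):
        # time T: stop once no socially connected pair can transact any more
        if not any(ell < m[i] < h and ell < m[j] < h for i, j in edges):
            break
        i, j = rng.choice(edges)
        if not (ell < m[i] < h and ell < m[j] < h):
            continue
        buyer, seller = (i, j) if rng.random() < fermi(m[i], m[j], eta) else (j, i)
        x = rng.randint(1, d)
        m[buyer] -= x
        m[seller] += x
    return m

# path graph with three agents, everyone starting at the centre of (ell, h)
print(simulate(edges=[(0, 1), (1, 2)], m0=[0, 0, 0], ell=-50, h=50, d=3))
```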
We use \(H-v\) to denote the graph obtained by removing vertex \(v\) from \(H\), and \(H-e\) to denote the graph obtained by removing edge \(e\) from \(H\). An application of the Fermi function can be observed in the Bonabeau model presented in [1]. The Bonabeau model describes a system of finite agents positioned on sites within a two-dimensional grid. At each time step, an agent is selected along with one of its neighboring sites. If the neighboring site is unoccupied, the agent moves to that site. However, if the neighboring site is occupied by another agent, a fight between the two agents occurs. The Fermi function represents the probability of the attacking agent winning the fight. Unlike certain models in opinion dynamics, such as the Deffuant model discussed in [3, 2], the Hegselmann-Krause model presented in [4], and the mixed Hegselmann-Krause model described in [5, 6], the money exchange model ensures the conservation of total money during a transaction between two agents. ## 2. Main results We derive the following results under the proposed money exchange model. **Theorem 1**.: All transactions end almost surely in finite time. Let \(T\) be the earliest time that all transactions end, i.e., \[T=\inf\{t\geq 0:m_{i}(s)=m_{i}(t)\text{ for all }s\geq t\}.\] Then, \(T\) is almost surely finite. Corollary 2 states that agents are not social neighbors if their money is in a non-absorbing state at time \(T\). **Corollary 2**.: Agents whose money is in \((\ell,h)\) at time \(T\) are not social neighbors. Namely, if one of the components of the social graph is of order greater than one, then one of the clusters in the money distribution at time \(T\) will be in an absorbing state. Corollary 3 states that equal wealth is unattainable on a connected social graph of order greater than one. **Corollary 3**.: It is almost surely impossible to achieve equal wealth at time \(T\) when \(G\) is connected and of order more than one. It turns out that a social graph with the most links engenders at most one agent out of the absorbing states at time \(T\). **Corollary 4**.: Assume that \(G\) is complete. Then, at most one agent is out of the absorbing states at time \(T\). Corollary 5 demonstrates situations where a connected social graph with an increased number of links results in a reduction in the number of states in the money distribution at time \(T\). **Corollary 5**.: Assuming that \(G\) is connected and cyclic and edge \(e=(i,j)\) belongs to a cycle in \(G\), let \(n_{H}\) represent the number of states in the money distribution at time \(T\) under social graph \(H\). Then, we have \(n_{G}\leq n_{G-e}\) when the same pair of agents is selected at each time step, no two agents are in the same non-absorbing state for all \(t\geq 1\), and an absorbing state occurs before agents \(i\) and \(j\) engage in a transaction with each other. Although achieving equal wealth is not possible, having more social connections does not necessarily lead to a greater number of states in the money distribution at time \(T\). ## 3. The model The key aspect in obtaining the main results is to identify a suitable supermartingale or submartingale. **Lemma 6**.: Let \(Z_{t}=\sum_{i,j\in[n]}(m_{i}(t)-m_{j}(t))^{2}.\) Then, \(Z_{t}\) is a submartingale. 
In particular, \(\mathbf{E}[Z_{t}-Z_{t+1}|\)a transaction at time \(t]\leq-4n\) if there is a transaction at time \(t.\) Proof.: Let \(X_{ij}=X_{ij}(t)\), \(m_{i}=m_{i}(t)\) and \(m_{i}^{\star}=m_{i}(t+1)\) for all \(i,j\in[n]\) and \(t\geq 0.\) Say agents \(p\) and \(q\) transact at time \(t\). Then, \[(m_{i}-m_{p})^{2}-(m_{i}-m_{p}^{\star})^{2}=2(m_{i}-m_{p}^{\star})(m_{p}^{\star}-m_{p})+(m_{p}^{\star}-m_{p})^{2},\] \[(m_{i}-m_{q})^{2}-(m_{i}-m_{q}^{\star})^{2}=2(m_{i}-m_{q}^{\star})(m_{q}^{\star}-m_{q})+(m_{q}^{\star}-m_{q})^{2},\] \[(m_{i}-m_{p})^{2}-(m_{i}-m_{p}^{\star})^{2}+(m_{i}-m_{q})^{2}-(m_{i}-m_{q}^{\star})^{2}\] \[=2(m_{p}^{\star}-m_{p})(m_{q}^{\star}-m_{p}^{\star})+(m_{p}^{\star}-m_{p})^{2}+(m_{q}^{\star}-m_{q})^{2},\] \[(m_{p}-m_{q})^{2}-(m_{p}^{\star}-m_{q}^{\star})^{2}=4X_{pq}(X_{pq}-m_{p}+m_{q}).\] Observe that \[Z_{t}-Z_{t+1}=2\bigg{\{}\sum_{i\in[n]-\{p,q\}}\big{[}(m_{i}-m_{p})^{2}-(m_{i}-m_{p}^{\star})^{2}\\ +(m_{i}-m_{q})^{2}-(m_{i}-m_{q}^{\star})^{2}\big{]}+(m_{p}-m_{q})^{2}\bigg{\}}.\] Let \(F_{t}\) be the event of a transaction at time \(t\). Then, \[\mathbf{E}[Z_{t}-Z_{t+1}|F_{t}]\] \[=2\bigg{\{}(n-2)\mathbf{E}\big{[}-2X_{pq}(X_{pq}+m_{q}-m_{p})Q_{pq}-2X_{qp}(X_{qp}+m_{p}-m_{q})Q_{qp}|F_{t}\big{]}\\ +\mathbf{E}\big{[}-4X_{pq}(X_{pq}-m_{p}+m_{q})Q_{pq}-4X_{qp}(X_{qp}-m_{q}+m_{p})Q_{qp}|F_{t}\big{]}\bigg{\}}\] \[=2\bigg{[}[-2(n-2)-4]\bigg{(}\mathbf{E}[X_{12}^{2}|F_{t}]+\mathbf{E}[X_{12}|F_{t}]\mathbf{E}[(m_{q}-m_{p})(Q_{pq}-Q_{qp})|F_{t}]\bigg{)}\bigg{]}\] \[=-4n\bigg{(}\mathbf{E}[X_{12}^{2}|F_{t}]+\mathbf{E}[X_{12}|F_{t}]\mathbf{E}[(m_{q}-m_{p})(Q_{pq}-Q_{qp})|F_{t}]\bigg{)}\leq-4n.\] **Proof of Theorem 1**.: It follows from Lemma 6 that \(Z_{t}\) is a submartingale bounded in \(L^{1}\) by \(n^{2}(h-\ell+2d)^{2}\). By the martingale convergence theorem, \(Z_{t}\) converges almost surely to a random variable \(Z_{\infty}\) with finite expectation. Therefore, \[\lim_{t\to\infty}(Z_{t}-Z_{t+1})=Z_{\infty}-Z_{\infty}=0.\] So \(|Z_{t}-Z_{t+1}|<1/2\) for some \(N\geq 0\) and for all \(t\geq N\). Since \(Z_{t}-Z_{t+1}\in\mathbb{Z}\), \(Z_{t}-Z_{t+1}=0\) for all \(t\geq N.\) If there is a transaction at some time \(t\geq N\), then by Lemma 6, \[\mathbf{E}[Z_{t}-Z_{t+1}|\text{a transaction at time }t]\leq-4n,\text{ a contradiction.}\] Hence, there is no transaction from time \(N\) on. Proof of Corollary 2.: Assuming the opposite of the statement, let \(E_{t}\) be the event that two agents whose money is in \((\ell,h)\) at time \(T\) are social neighbors, and let \(F_{t}\) be the event that the two agents transact at some time \(t\geq T\). Then, \[P(F_{t})\geq P(F_{t}|E_{t})P(E_{t})>\frac{1}{n^{2}}P(E_{t})>0,\text{ contradicting }P(F_{t})=0.\] Proof of Corollary 3.: Assuming that this is not the case, since the total money of all agents is conserved over time, the money of all agents would be the average initial money of all agents, which falls in \((\ell,h)\). This contradicts Corollary 2. Proof of Corollary 4.: Assume that this is not the case. Then, there are two agents whose money is in \((\ell,h)\) at time \(T\). \(G\) complete implies they are social neighbors, contradicting Corollary 2. Proof of Corollary 5.: We prove by induction on \(|G|\). For \(|G|=3\), we consider \(K_{3}\) the complete graph of order \(3\) and \(P_{3}\) the path of order \(3\). Say \(1-2-3\) is the path. If more than one agent is in an absorbing state before agents \(1\) and \(3\) are selected, we are done.
Else, only one agent is in an absorbing state before agents \(1\) and \(3\) are selected. By symmetry, we consider two cases: 1. agent \(1\) in an absorbing state and 2. agent \(2\) in an absorbing state. Under case (1), a transaction occurs only when agents \(2\) and \(3\) are selected before time \(T\) for \(K_{3}\) and \(P_{3}\), therefore \(n_{K_{3}}=n_{P_{3}}\). Since all agents in non-absorbing states are distinct for all \(t\geq 1\), \(n_{P_{3}}=3\geq n_{K_{3}}\) under case (2). Thus, it is true for \(|G|=3\). For \(|G|>3\), if agent \(i\) or agent \(j\) is in an absorbing state before agents \(i\) and \(j\) are selected, then the states of all agents under \(G\) are the same as the states of all agents under \(G-e\) all the time. Else, let vertex \(v\) be in an absorbing state before edge \(e=(i,j)\) is selected and \(H\) be the component of \(G-v\) containing \(e\). Since agents in distinct components of \(G-v\) can not transact with each other, by induction, \(n_{H-e}\geq n_{H}\). Hence, \(n_{G-e}\geq n_{G}\). ## 4. Statements and Declarations ### Competing Interests The author is funded by the National Science and Technology Council in Taiwan. ### Data availability No associated data was used.
2303.04298
Classical vs Quantum Advice and Proofs under Classically-Accessible Oracle
It is a long-standing open question to construct a classical oracle relative to which BQP/qpoly $\neq$ BQP/poly or QMA $\neq$ QCMA. In this paper, we construct classically-accessible classical oracles relative to which BQP/qpoly $\neq$ BQP/poly and QMA $\neq$ QCMA. Here, classically-accessible classical oracles are oracles that can be accessed only classically even for quantum algorithms. Based on a similar technique, we also show an alternative proof for the separation of QMA and QCMA relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe.
Xingjian Li, Qipeng Liu, Angelos Pelecanos, Takashi Yamakawa
2023-03-08T00:30:07Z
http://arxiv.org/abs/2303.04298v4
# Classical vs Quantum Advice and Proofs under Classically-Accessible Oracle ###### Abstract It is a long-standing open question to construct a classical oracle relative to which \(\mathsf{BQP/qpoly}\neq\mathsf{BQP/poly}\) or \(\mathsf{QMA}\neq\mathsf{QCMA}\). In this paper, we construct _classically-accessible classical oracles_ relative to which \(\mathsf{BQP/qpoly}\neq\mathsf{BQP/poly}\) and \(\mathsf{QMA}\neq\mathsf{QCMA}\). Here, classically-accessible classical oracles are oracles that can be accessed only classically even for quantum algorithms. Based on a similar technique, we also show an alternative proof for the separation of \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe. ## 1 Introduction Quantum information often possesses richer structure than classical information, at least intuitively. The first (but often false) thought is that phases and magnitudes are continuous, and a piece of quantum information may be able to store exponentially or infinitely more information than classical ones, which is not true1. Since classical and quantum information present distinct and unique natures, the community studies their differences under different contexts and directions, including advice-aided quantum computation [15, 1, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], \(\mathsf{QMA}\) vs. \(\mathsf{QCMA}\) (i.e., quantum \(\mathsf{NP}\) with either quantum or classical witness) [1, 1, 2], quantum vs. classical communication complexity [13, 1, 1, 14, 15] and many others. Footnote 1: As storing and extracting information takes resources that scale with accuracy. One way to understand their differences is by studying one-way communication complexity: i.e., Alice and Bob want to jointly compute a function with their private inputs, but only one-time quantum/classical communication from Alice to Bob is allowed. Among many works, Bar-Yossef, Jayram, and Kerenidis [1] showed an exponential separation between quantum and classical one-way communication complexity, for the so-called hidden matching problem. The other approach is by looking at \(\mathsf{QMA}\) vs. \(\mathsf{QCMA}\). In 2007, Aaronson and Kuperberg [1] showed a black-box separation with respect to a black-box quantum unitary and left the same separation with respect to a classical oracle as an open question. More than a decade later, Fefferman and Kimmel [17] proved a second black-box separation using a distributional in-place oracle, which is a non-standard type of oracle. Recently, Natarajan and Nirkhe [20] moved a step closer to the goal by presenting a black-box separation with respect to a distributional oracle2. Therefore, we would like to further investigate the difference between quantum and classical proofs, i.e., the separation between QMA and QCMA. In this work, we address the question by demonstrating a separation relative to a _classically-accessible classical_ oracle. Footnote 2: In this case, the witness only depends on the distribution. The oracle is later picked from the distribution, but independent of the witness.
**Definition 1.1** (QMA).: _A language \(\mathcal{L}\) is said to be in_ QMA _if there exists a quantum polynomial-time machine \(\mathcal{V}\) together with a polynomial \(p(\cdot)\) such that,_ * _For all_ \(x\in\mathcal{L}\)_, there exists a quantum state_ \(\ket{\psi_{x}}\) _of at most_ \(p(|x|)\) _qubits, such that_ \(\mathcal{V}\) _accepts on_ \(\ket{x},\ket{\psi_{x}}\) _with a probability at least_ \(2/3\)_._ * _For all_ \(x\not\in\mathcal{L}\)_, for all quantum states_ \(\ket{\psi_{x}}\) _of at most_ \(p(|x|)\) _qubits,_ \(\mathcal{V}\) _accepts on_ \(\ket{x},\ket{\psi_{x}}\) _with a probability at most_ \(1/3\)_._ One can similarly define the class QCMA except that \(\psi_{x}\) is a classical string of at most \(p(|x|)\) bits. We also aim to understand the difference between quantum and classical information through advice-aided quantum computation. Classically, a piece of advice can significantly speed up classical computation, from speeding up exhaustive search [14] to deciding the unary Halting Problem. In a quantum world, advice can be either a piece of classical or quantum information. It is very natural to ask the question: does quantum advice "outperform" classical advice? Among many questions, one of the most fundamental is to understand the power of \(\mathsf{BQP/qpoly}\) and \(\mathsf{BQP/poly}\): i.e., the classes of languages that can be decided by bounded-error quantum machines with arbitrary bounded-length quantum/classical advice and polynomial time. **Definition 1.2** (\(\mathsf{BQP/qpoly}\)).: _A language \(\mathcal{L}\) is said to be in \(\mathsf{BQP/qpoly}\) if and only if there exists a quantum polynomial-time machine \(\mathcal{A}\) together with a collection of polynomial-sized quantum states \(\{\ket{z_{n}}\}_{n\in\mathbb{N}}\) such that,_ * _For all_ \(x\in\mathcal{L}\)_,_ \(\Pr\left[\mathcal{A}(x,\ket{z_{|x|}})=1\right]\geq 2/3\)_._ * _For all_ \(x\not\in\mathcal{L}\)_,_ \(\Pr\left[\mathcal{A}(x,\ket{z_{|x|}})=0\right]\geq 2/3\)_._ One can similarly define the class \(\mathsf{BQP/poly}\) except that \(z_{n}\) are poly-sized classical strings. Similar to the case of QMA vs. QCMA, Aaronson and Kuperberg [1] in the same paper showed an oracle separation between these two classes, leaving the separation with respect to a classical oracle as an open question. Recently, Liu [15] showed the separation for its relational variants (i.e., \(\mathsf{FBQP/qpoly}\) and \(\mathsf{FBQP/poly}\)) under a special case, where the oracle is never given to the algorithms3. Despite all the efforts, the separation between \(\mathsf{BQP/poly}\) and \(\mathsf{BQP/qpoly}\) relative to a classical oracle remains open. Footnote 3: As mentioned in Section 1.3, a concurrent work by Aaronson, Buhrman, and Kretschmer [1] proves their relational variants \(\mathsf{FBQP/qpoly}\neq\mathsf{FBQP/poly}\)_unconditionally_. In this work, we proceed with the question by showing a _full separation_ relative to a _classically accessible classical_ oracle. Along the way, we adapt our techniques and give an alternative proof for the separation between QMA and QCMA relative to a _quantumly accessible classical_ distributional oracle, which was recently established by Natarajan and Nirkhe [20]. ### Our Results Our first result is a black-box separation between \(\mathsf{BQP/qpoly}\) and \(\mathsf{BQP/poly}\) with respect to a classically-accessible classical oracle.
_Classically-accessible classical oracles._ A classical oracle \(\mathcal{O}\) is said to be classically accessible if a quantum algorithm can only query the oracle classically; in other words, the only interface of \(\mathcal{O}\) to quantum algorithms is classical: given an input \(x\), it outputs \(y=\mathcal{O}(x)\). **Theorem 1.3** (Informal).: _There exists a language \(\mathcal{L}\) in \(\mathsf{BQP/qpoly}\) but not in \(\mathsf{BQP/poly}\), relative to a classically accessible classical oracle \(\mathcal{O}\)._ Our work is based on the previous works on quantum advantages with unstructured oracles by Yamakawa and Zhandry [22] and the recent separation by Liu [19]. Our work improves [19] in two aspects: first, Liu only proved the separation of their relational variants, instead of the original decision classes; second, the separation by Liu does not allow algorithms to have access to the oracle (either classically, or quantumly), but only the advice can depend on the oracle. **Remark 1**.: _Although the long-standing open question is to understand the separation relative to a quantumly accessible classical oracle, our theorem is not weaker but incomparable. Since classical access is not stronger than quantum access, it limits the computational power of quantum machines with both quantum and classical advice: intuitively, classical access makes it easier to prove a language \(\mathcal{L}\) is not in \(\mathsf{BQP^{O}/poly}\) but harder to prove \(\mathcal{L}\in\mathsf{BQP^{O}/qpoly}\). A similar observation is applicable to the theorem concerning the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\)._ Our second result is about the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\). **Theorem 1.4** (Informal).: _There exists a language \(\mathcal{L}\) in \(\mathsf{QMA}\) but not in \(\mathsf{QCMA}\), relative to a classically accessible classical oracle \(\mathcal{O}\)._ Inspired by the techniques used in our Theorem 1.3 and Theorem 1.4, we give an alternative proof for the result by Natarajan and Nirkhe [20]. Note that in the following result, the classical oracle is _quantumly accessible_. **Theorem 1.5** (Informal).: _There exists a language \(\mathcal{L}\) in \(\mathsf{QMA}\) but not in \(\mathsf{QCMA}\), relative to a distributional quantumly-accessible classical oracle \(\mathcal{O}\)._ Finally, we observe that the problem considered in [22] gives a new superpolynomial separation between classical and quantum one-way communication complexity. Though such a separation has been known since 2004 [1], the new separation has two interesting features: Bob's input length is short, and the classical lower bound holds even if Bob can classically access Alice's input as an oracle (see Section 7 for more detail).4 Footnote 4: As mentioned in Section 1.3, a concurrent work by Aaronson, Buhrman, and Kretschmer [1] observes that there is a variant of [1] that satisfies the former (but not the latter). ### Overview Quantum Advantages and Separation for a Special Case. We will set up the languages based on the recent work on quantum advantages from unstructured oracles by Yamakawa and Zhandry [22], as it will be used for both of our results in this work. They proved that there exists a code \(C\subseteq\Sigma^{n}\) and an oracle-aided function \(f_{C}\) such that, relative to a random oracle \(H:[n]\times\Sigma\to\{0,1\}\), the function \(f_{C}^{H}\) is easy to invert on any image with quantum access but inversion is hard with only classical access.
The function is defined as the following: \[f_{C}^{H}(v_{1},\cdots,v_{n})=\begin{cases}H(1,v_{1})||H(2,v_{2})||\cdots||H(n,v _{n})&\text{ if }(v_{1},\cdots,v_{n})\in C\\ \bot&\text{ if }(v_{1},\cdots,v_{n})\not\in C\end{cases}.\] Intuitively, although the function computes an entry-by-entry hash, the requirement that \((v_{1},\cdots,v_{n})\) must be a codeword enforces the hardness of inversion when only classical queries are allowed. More precisely, the underlying code \(C\) satisfies a property called list-recoverability; even if a classical algorithm learns hash values of a subset \(E_{i}\subseteq\Sigma\) for each of \(H(i,\cdot)\), only a polynomial number of codewords can be found in \(E_{1}\times E_{2}\times\cdots\times E_{n}\), which does not help invert random images5. On the other hand, they showed a quantum algorithm that uses quantum queries to invert images. Footnote 5: When all classical queries are non-adaptive, this is clearly true: as only polynomially many \(f_{c}^{H}\) are known for codewords in \(C\). The idea can be adapted to adaptive queries as well; we do not elaborate on it here. Liu [14] observed that the inversion quantum algorithm only needs to make non-adaptive quantum queries that are independent of the image \(y\). Therefore, the algorithm with quantum access can be easily cast into a quantum algorithm with quantum advice but no queries; on the other hand, he showed that, if an algorithm has no access to the random oracle, it can not invert even with a piece of exponentially large classical advice. Since given an image \(y\) there are multiple pre-images of \(y\), the above two statements lead to the separation between \(\mathsf{FBQP/qpoly}\) and \(\mathsf{FBQP/poly}\) when an algorithm has no access to the oracle. Allowing classical queries.Our first result is to extend the previous separation of \(\mathsf{FBQP/qpoly}\) and \(\mathsf{FBQP/poly}\) by Liu, by allowing quantum algorithms to make online _classical queries_ to \(H\). Since the algorithm with quantum advice makes no queries, it also works in this setting of classical access. We only need to prove the hardness with classical advice: i.e., no quantum algorithms can invert with classical queries and bounded classical advice. Following the framework by Guo et al. [13], when only making classical queries (say, at most \(T\)), a piece of \(S\) bits of classical advice is equivalent to the so-called "ordinary advice": the advice consists of only \(ST\) coordinates, or \(ST\) pairs of inputs and outputs. More precisely, the information the algorithm can learn from \(S\) bits of advice and \(T\) classical queries is roughly the same as that from \(ST\) bits of ordinary advice and \(T\) classical queries. Thus, the first step is to replace classical advice with ordinary classical advice. It now remains to show that a quantum algorithm with classical access to \(H\) and short ordinary advice can not invert random images. As ordinary advice only gives information on at most polynomially many pairs of inputs and outputs, let \(E_{i}\) denote the subset of inputs whose hash values under \(H(i,\cdot)\) are in the ordinary advice; since the ordinary advice is of length polynomial, \(|E_{i}|\) is polynomial for each \(i\in[n]\). Now let us assume the algorithm makes non-adaptive queries that are independent of a challenge. In this case, we can further define \(E_{i}^{\prime}\) that consists of all \(x\) inputs whose values \(H(i,x)\) are known from these classical queries. 
The algorithm in total learns hashes of \(H(i,\cdot)\) for the inputs in \(E_{i}\cup E_{i}^{\prime}\). Observing that \(|E_{i}\cup E_{i}^{\prime}|\) is polynomial for each \(i\), by the list-recoverability of the underlying code \(C\), the algorithm only learns values of \(f_{C}^{H}\) for a polynomial number of codewords, which almost never hit a random challenge. Lastly, we extend the proof to the adaptive case (for more details, please refer to Section 3). Upgrading to \(\mathsf{BQP/qpoly}\) vs. \(\mathsf{BQP/poly}\).Next, we turn the above separation of relational classes into a separation of \(\mathsf{BQP/qpoly}\) vs. \(\mathsf{BQP/poly}\). Our idea is to define a language \(\mathcal{L}\) through a random function \(G\colon\{0,1\}^{n}\to\{0,1\}\), i.e., \(x\in\mathcal{L}\) if and only if \(G(x)=1\). We use an oracle \(\mathcal{O}\) to hide the evaluation of \(G\) on \(x\), by requiring the algorithm to invert the function \(f_{C}^{H}\) on the image \(x\). More precisely, we define \(\mathcal{O}\) as follows: \[\mathcal{O}(\mathbf{v},x)=\begin{cases}G(x)&\text{ if }f_{C}^{H}(\mathbf{v})=x,\\ \bot&\text{ if }(v_{1},\cdots,v_{n})\not\in C\text{ or }f_{C}^{H}(\mathbf{v})\neq x.\end{cases}\] Then an algorithm only gets oracle access to \(\mathcal{O}\), but not to \(H\) or \(G\). To see that \(\mathcal{L}\) is decidable by a \(\mathsf{BQP/qpoly}\) machine, we can just take the quantum advice as in [11]. On an input \(x\), an algorithm generates \(\mathbf{v}\) such that \(f_{C}^{H}(\mathbf{v})=x\) using the quantum advice. We can then evaluate \(G(x)\) by querying \(\mathcal{O}\) at \((\mathbf{v},x)\) and decide if \(x\in\mathcal{L}\). To prove that \(\mathcal{L}\) cannot be decided by any \(\mathsf{BQP/poly}\) machine, we leverage the statement proved above: given classical advice, by only querying a classical oracle \(H\), it is hard for an efficient algorithm to invert \(f_{C}^{H}\). The above result implies that the algorithm should only have negligible query weight on inputs \(\mathbf{v}\) under the classical oracle \(\mathcal{O}\) such that \(f_{C}^{H}(\mathbf{v})=x\); otherwise the algorithm can be turned into another one that inverts \(f_{C}^{H}\). Therefore, we can reduce the problem to the case where an algorithm has access only to \(G(y)\) for all \(y\neq x\), but no access to \(G(x)\) for the challenge \(x\); the goal is still to learn \(G(x)\). This is exactly the famous Yao's box problem [10]: a piece of advice is allowed to depend on the whole oracle \(G\), but then an online algorithm uses the advice to find \(G(x)\) for a random \(x\), with no access to \(G(x)\). By adapting the ideas in [10, 12] and combining all the previous ideas with a standard diagonalization argument, we prove the separation \(\mathsf{BQP^{O}/qpoly}\neq\mathsf{BQP^{O}/poly}\) relative to a classically accessible classical oracle. Separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a classically accessible classical oracle.We first construct a problem that has a short quantum proof for \(\mathsf{YES}\) instances, no quantum proof for \(\mathsf{NO}\) instances, and no classical proof that can distinguish between \(\mathsf{YES}\) and \(\mathsf{NO}\) instances. Given random functions \(H:[n]\times\Sigma\to\{0,1\}\) and \(G:\{0,1\}^{n}\to\{0,1\}^{n}\), as well as a subset \(S\subseteq\{0,1\}^{n}\) of size at least \(2/3\cdot 2^{n}\), we create two pairs of oracles: * For a \(\mathsf{YES}\) instance, the oracle \(G\) and an oracle \(\mathcal{O}[G,H,\emptyset]\) are provided.
The latter takes an input \(t\in\{0,1\}^{n}\) and a vector \(\mathbf{v}\in C\) and outputs \(1\) if and only if \(f^{H}(\mathbf{v})=G(t)\). * If it is a \(\mathsf{NO}\) instance, the oracle \(G\) and an oracle \(\mathcal{O}[G,H,S]\) are given. The latter takes as input \(t\in\{0,1\}^{n}\) and a vector \(\mathbf{v}\in C\), and it outputs \(1\) if and only if \(f^{H}(\mathbf{v})=G(t)\) and \(t\not\in S\). A quantum algorithm \(\mathcal{A}\), with the same advice as in [11], achieves the following: * The algorithm \(\mathcal{A}\) on oracles \(G,\mathcal{O}\) (which will be either \(\mathcal{O}[G,H,\emptyset]\) or \(\mathcal{O}[G,H,S]\)) samples \(t\in\{0,1\}^{n}\) uniformly at random. It then uses the quantum advice to compute a vector \(\mathbf{v}\) such that \(f^{H}(\mathbf{v})=G(t)\) and outputs \(\mathcal{O}(t,\mathbf{v})\). * When \(\mathcal{O}[G,H,\emptyset]\) is given, by the correctness of the YZ algorithm, \(\mathcal{A}\) will output \(1\) with overwhelming probability. * When \(\mathcal{O}[G,H,S]\) is given, by the definition and the condition \(|S|\geq 2/3\cdot 2^{n}\), \(\mathcal{A}\) will output \(1\) with probability at most \(1/3\). On the other hand, any quantum algorithm \(\mathcal{B}\) with classical queries and a bounded-size classical proof cannot distinguish between having access to \(G,\mathcal{O}[G,H,\emptyset]\) and having access to \(G,\mathcal{O}[G,H,S]\). On a high level, the only way to tell the difference is by finding an input \((t,\mathbf{v})\) such that \(\mathcal{O}[G,H,\emptyset](t,\mathbf{v})\neq\mathcal{O}[G,H,S](t,\mathbf{v})\). This will require \(\mathcal{B}\) to query an input \((t,\mathbf{v})\) such that \(f^{H}(\mathbf{v})=G(t)\), which is difficult for \(\mathcal{B}\) with only classical advice. Finally, we mount our new separation result on the diagonalization argument and construct a language that shows the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a classically accessible classical oracle. Please refer to Section 5 for full details. Separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a distributional oracle.To separate \(\mathsf{QMA}\) from \(\mathsf{QCMA}\), we try to separate two distributions of oracles, namely the \(\mathsf{YES}\) and \(\mathsf{NO}\) distributions, defined below: For a random function \(H\colon[n]\times\Sigma\to\{0,1\}\), * If it is a \(\mathsf{YES}\) instance, an oracle distribution \(\{\mathcal{O}[r]\}_{r}\), with the index \(r\) drawn uniformly at random, is given, such that: \(\mathcal{O}[r]\) takes \(\mathbf{v}\in\Sigma^{n}\) and evaluates \(f^{H}_{C}(\mathbf{v})\), and outputs \(1\) iff \(f^{H}_{C}(\mathbf{v})=r\); * If it is a \(\mathsf{NO}\) instance, the oracle always outputs \(0\). To see that the two distributions can be distinguished using a \(\mathsf{QMA}\) machine, notice that we can take the quantum advice as in [10], and generate \(\mathbf{v}\) such that \(f^{H}_{C}(\mathbf{v})=r\). By querying \(\mathcal{O}\) at \(\mathbf{v}\), we can distinguish whether the oracle belongs to the \(\mathsf{YES}\) distribution or the \(\mathsf{NO}\) distribution. To prove that the two distributions cannot be distinguished by any \(\mathsf{QCMA}\) machine, we notice that the difference between \(\mathsf{YES}\) and \(\mathsf{NO}\) oracles is only on inputs \(\mathbf{v}\) such that \(f^{H}_{C}(\mathbf{v})=r\). Therefore, we reduce it to the hardness of finding \(\mathbf{v}\) for random \(r\) such that \(f^{H}_{C}(\mathbf{v})=r\), even when given classical advice.
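To make the two distributions concrete, here is a minimal Python sketch of the \(\mathsf{YES}\) oracle \(\mathcal{O}[r]\) and the all-zero \(\mathsf{NO}\) oracle for a toy parameter setting; the helper names (`yz_function`, `make_yes_oracle`, `in_code`) are ours and purely illustrative, and the actual construction uses the list-recoverable code \(C\) of Yamakawa and Zhandry rather than all of \(\Sigma^{n}\).

```python
import secrets

def yz_function(H, v, in_code):
    """Entry-wise hash f_C^H(v); defined only when v is a codeword of C."""
    if not in_code(v):
        return None  # corresponds to inputs outside the code
    # concatenate the bits H(1, v_1), ..., H(n, v_n) (positions are 1-indexed)
    return tuple(H[(i + 1, vi)] for i, vi in enumerate(v))

def make_yes_oracle(H, r, in_code):
    """YES oracle O[r]: accepts exactly the preimages of r under f_C^H."""
    return lambda v: int(yz_function(H, v, in_code) == r)

def make_no_oracle():
    """NO oracle: always outputs 0."""
    return lambda v: 0

# Toy instance with n = 3, Sigma = {0, 1}, and C = Sigma^n (illustration only).
n, Sigma = 3, (0, 1)
H = {(i, s): secrets.randbelow(2) for i in range(1, n + 1) for s in Sigma}
r = tuple(secrets.randbelow(2) for _ in range(n))
O_yes = make_yes_oracle(H, r, in_code=lambda v: True)
O_no = make_no_oracle()
print(O_yes((0, 1, 1)), O_no((0, 1, 1)))
```

A \(\mathsf{QMA}\) witness is simply a preimage \(\mathbf{v}\) on which `O_yes` outputs \(1\), while no witness can make `O_no` output \(1\); the difficulty discussed above is that a classical witness does not help a classically-querying verifier find such a \(\mathbf{v}\).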
In Section 6, we define unary languages \(\mathcal{L}_{i}\) and their related oracle distributions for each \(n\). By a standard diagonalization argument, we can argue that there exists some language \(\mathcal{L}\) that is in \(\mathsf{QMA}^{\mathcal{O}}\) but not in \(\mathsf{QCMA}^{\mathcal{O}}\).

### Concurrent Work

A concurrent work by Aaronson, Buhrman, and Kretschmer [1], among many results, proves \(\mathsf{FBQP/poly}\neq\mathsf{FBQP/qpoly}\)_unconditionally_, where \(\mathsf{FBQP/poly}\) and \(\mathsf{FBQP/qpoly}\) are the relational variants of \(\mathsf{BQP/poly}\) and \(\mathsf{BQP/qpoly}\), respectively. The key insight behind the result is an observation that a variant of the hidden matching problem [1] gives an exponential separation between classical and quantum one-way communication complexity where Bob's input length is short. They essentially prove that any such separation with an efficient Bob can be used to prove \(\mathsf{FBQP/poly}\neq\mathsf{FBQP/qpoly}\). We independently observed that [13] gives such a separation of classical and quantum one-way communication complexity where Bob's input length is short (see Section 7). By relying on their proof, which is fairly easy in hindsight, it seems possible to prove \(\mathsf{FBQP/poly}\neq\mathsf{FBQP/qpoly}\) by using [13] instead of the hidden matching problem. A crucial difference between the one-way communication variant of [13] and the hidden matching problem is that the hardness of the former with classical communication holds even if Bob can classically query Alice's input. Due to this difference, one cannot reprove \(\mathsf{BQP/poly}\neq\mathsf{BQP/qpoly}\) and \(\mathsf{QMA}\neq\mathsf{QCMA}\) relative to a classically-accessible classical oracle by using the hidden matching problem instead of [13]. On the other hand, it seems possible to prove \(\mathsf{QMA}\neq\mathsf{QCMA}\) relative to a distributional quantumly-accessible classical oracle by using (a parallel repetition variant of) their variant of the hidden matching problem instead of [13].

## 2 Preliminaries

Basic notations.For a set \(X\), \(\left|X\right|\) denotes the cardinality of \(X\). We write \(x\gets X\) to mean that \(x\) is taken uniformly from \(X\). For a distribution \(\mathcal{D}\) over classical strings, we write \(x\leftarrow\mathcal{D}\) to mean that \(x\) is sampled from the distribution \(\mathcal{D}\). For sets \(X\) and \(Y\), \(\mathsf{Func}(X,Y)\) denotes the set of all functions from \(X\) to \(Y\). For a positive integer \(n\), \([n]\) denotes the set \(\{1,2,...,n\}\). QPT stands for "Quantum Polynomial-Time". We use \(\mathsf{poly}\) to mean a polynomial and \(\mathsf{negl}\) to mean a negligible function. Oracle variations.In the literature on quantum computation, when we say that a quantum algorithm has oracle access to \(f:X\to Y\), it usually means that it is given access to an oracle that applies the unitary \(\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y\oplus f(x)\right\rangle\). We refer to such standard oracles as **quantumly-accessible classical oracles**. In this paper, we consider the following two types of non-standard oracles. The first is **classically-accessible classical oracles**. A classically-accessible classical oracle for a classical function \(f:X\to Y\) takes a _classical string_ \(x\in X\) as input and outputs \(f(x)\).
In other words, when an algorithm sends \(\sum_{x,y}\alpha_{x,y}\left|x\right\rangle\left|y\right\rangle\) to the oracle, the oracle first _measures_ the first register and then applies the unitary \(\left|x\right\rangle\left|y\right\rangle\mapsto\left|x\right\rangle\left|y \oplus f(x)\right\rangle\). Note that classically-accessible classical oracles apply non-unitary operations. The second is **distributional quantumly-accessible classical oracles**. They are specified by a distribution \(\mathcal{F}\) over classical functions \(f\) rather than by a single function \(f\). When we consider an algorithm that is given oracle access to a distributional quantumly-accessible classical oracle, it works as follows: At the beginning of an execution of the algorithm, a function \(f\) is chosen according to the distribution \(\mathcal{F}\), and then the algorithm has access to a quantumly-accessible classical oracle that computes \(f\). Note that \(f\) is sampled at the beginning and then the same \(f\) is used throughout the execution. Complexity classes.We define the complexity classes which we consider in this paper. Specifically, we define \(\mathsf{BQP}/\mathsf{qpoly}\), \(\mathsf{BQP}/\mathsf{poly}\), \(\mathsf{QMA}\), and \(\mathsf{QCMA}\) relative to classically-accessible classical oracles and \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to distributional quantumly-accessible classical oracles. **Definition 2.1** (\(\mathsf{BQP}/\mathsf{qpoly}\) and \(\mathsf{BQP}/\mathsf{poly}\) languages relative to classically-accessible classical oracles.).: _Let \(\mathcal{O}\) be a classically-accessible classical oracle. A language \(\mathcal{L}\subseteq\{0,1\}^{*}\) belongs to \(\mathsf{BQP}/\mathsf{qpoly}\) relative to \(\mathcal{O}\) if there is a QPT machine \(\mathcal{A}\) and a polynomial-size family \(\{|z_{n}\rangle\}_{n\in\mathbb{N}}\) of quantum advice such that for any \(x\in\{0,1\}^{*}\),_ \[\Pr[\mathcal{A}^{\mathcal{O}}(x,|z_{|x|}))=\mathcal{L}(x)]\geq 2/3\] _where \(\mathcal{L}(x):=1\) if \(x\in\mathcal{L}\) and otherwise \(\mathcal{L}(x):=0\)._ \(\mathsf{BQP}/\mathsf{poly}\) _is defined similarly except that the advice is required to be classical._ **Definition 2.2** (\(\mathsf{QMA}\) and \(\mathsf{QCMA}\) languages relative to classically-accessible classical oracles.).: _Let \(\mathcal{O}\) be a classically-accessible classical oracle. A language \(\mathcal{L}\subseteq\{0,1\}^{*}\) belongs to \(\mathsf{QMA}\) relative to \(\mathcal{O}\) if there is a QPT machine \(V\) with classical access to its oracle and a polynomial \(p\) such that the following hold:_ **Completeness**: _For any_ \(x\in\mathcal{L}\)_, there is a_ \(p(|x|)\)_-qubit witness_ \(\left|w\right\rangle\) _such that_ \[\Pr[V^{\mathcal{O}}(x,\left|w\right\rangle)=1]\geq 2/3.\] **Soundness**: _For any \(x\notin\mathcal{L}\) and \(p(|x|)\)-qubit witness \(\left|w\right>\),_ \[\Pr[V^{\mathcal{O}}(x,\left|w\right>)=1]\leq 1/3.\] \(\mathsf{QCMA}\) _is defined similarly except that the witness is required to be classical._ **Definition 2.3** (\(\mathsf{QMA}\) and \(\mathsf{QCMA}\) languages relative to distributional quantumly-accessible classical oracles.).: _Let \(\mathcal{F}\) be a distributional quantumly-accessible classical oracle, i.e., it specifies a distribution over classical functions \(f\). 
A language \(\mathcal{L}\subseteq\{0,1\}^{*}\) belongs to \(\mathsf{QMA}\) relative to \(\mathcal{F}\) if there is a QPT machine \(V\) with quantum access to its oracle and a polynomial \(p\) such that the following hold:_ **Completeness**: _For any_ \(x\in\mathcal{L}\)_, there is a_ \(p(|x|)\)_-qubit witness_ \(\left|w\right>\) _such that_ \[\Pr_{f\leftarrow\mathcal{F}}[V^{f}(x,\left|w\right>)=1]\geq 2/3.\] **Soundness**: _For any_ \(x\notin\mathcal{L}\) _and_ \(p(|x|)\)_-qubit witness_ \(\left|w\right>\)_,_ \[\Pr_{f\leftarrow\mathcal{F}}[V^{f}(x,\left|w\right>)=1]\leq 1/3.\] \(\mathsf{QCMA}\) _is defined similarly except that the witness is required to be classical._ **Remark 2**.: _Notice that the quantum/classical witness \(\left|w\right>/w\) can only depend on the distribution \(\mathcal{F}\) rather than a specific oracle \(f\)._ Non-Uniformity in the ROM.Prior work has developed a number of tools to characterize the power of a non-uniform adversary in the random oracle model (ROM). Note that we are considering the ROM where we only allow classical access to random oracles unlike the quantum ROM [1]. This is sufficient for our purpose because we use these tools only for proving \(\mathsf{BQP}/\mathsf{qpoly}\neq\mathsf{BQP}/\mathsf{poly}\) and \(\mathsf{QMA}\neq\mathsf{QCMA}\) relative to _classically-accessible_ classical oracles. Similar to [14], we will be using the presampling technique, introduced by [13] and further developed by [10, 1]. First, we define games in the ROM. **Definition 2.4** (Games in the ROM).: _A game \(\mathcal{G}\) in the ROM is specified by three classical algorithms \(\mathsf{Samp}^{H},\mathsf{Query}^{H}\), and \(\mathsf{Verify}^{H}\) where \(H\leftarrow\mathsf{Func}(X,Y)\) for some sets \(X,Y\):_ * \(\mathsf{Samp}^{H}(r)\)_: it is a deterministic algorithm that takes uniformly random coins_ \(r\in\mathcal{R}\) _as input, and outputs a challenge_ \(\mathsf{CH}\)_._ * \(\mathsf{Query}^{H}(r,\cdot)\)_: it is a deterministic classical algorithm that hardcodes the randomness_ \(r\) _and provides the adversary's online queries._ * \(\mathsf{Verify}^{H}(r,\mathsf{ans})\)_: it is a deterministic algorithm that takes the same random coins for generating a challenge and an alleged answer_ \(\mathsf{ans}\)_, and outputs_ \(b\) _indicating whether the game is won (_\(b=1\) _for winning)._ _Let \(T_{\mathsf{Samp}}\) be the number of queries made by \(\mathsf{Samp}\) and \(T_{\mathsf{Verify}}\) be the number of queries made by \(\mathsf{Verify}\)._ _For a fixed \(H\in\mathsf{Func}(X,Y)\) and a quantum algorithm \(\mathcal{A}\), the game \(\mathcal{G}^{H}_{\mathcal{A}}\) is executed as follows:_ * _A challenger_ \(\mathcal{C}\) _samples_ \(\mathsf{CH}\leftarrow\mathsf{Samp}^{H}(r)\) _using uniformly random coins_ \(r\)_._ * _A (uniform or non-uniform) quantum algorithm_ \(\mathcal{A}\)_, that has classical oracle access to_ \(\mathsf{Query}^{H}(r,\cdot)\)_, takes_ \(\mathsf{CH}\) _as input and outputs_ \(\mathsf{ans}\)_. 
We call_ \(\mathcal{A}\) _an online adversary/algorithm._ * \(b\leftarrow\mathsf{Verify}^{H}(r,\mathsf{ans})\) _is the game's outcome._ **Definition 2.5**.: _We say that a game \(\mathcal{G}\) in the ROM has security \(\delta(Z,Q):=\delta\) if_ \[\max_{\mathcal{A}}\Pr_{H}[\mathcal{G}^{H}_{\mathcal{A}}=1]\leq\delta\] _where \(H\leftarrow\mathsf{Func}(X,Y)\) and \(\max\) is taken over all \(\mathcal{A}\) with \(Z\)-bit classical advice that makes \(Q\) classical queries._ The presampling technique relates the probability of success of a non-uniform algorithm with classical queries to a random oracle with the success probability of a uniform algorithm in the \(P\) bit-fixing game, as defined below. **Definition 2.6** (Games in the \(P\)-BF-ROM).: _A game \(\mathcal{G}\) in the \(P\)-BF-ROM is specified by two classical algorithms \(\mathsf{Samp}^{H}\) and \(\mathsf{Verify}^{H}\) that work similarly to those in Definition 2.4._ _For a fixed \(H\in\mathsf{Func}(X,Y)\) and a quantum algorithm \((f,\mathcal{A})\), the game \(\mathcal{G}^{H}_{f,\mathcal{A}}\) is executed as follows:_ * **Offline Stage:** _Before a game starts, an offline algorithm_ \(f\) _(having no input) generates a list_ \(\mathcal{L}=\{(x_{i},y_{i})\}_{i\in[P]}\in(X\times Y)^{P}\) _containing at most_ \(P\) _input-output pairs (all_ \(x_{i}\)_'s are distinct)._ * **Online Stage:** _The game is then executed with oracle access to_ \(H\) _similarly to Definition_ 2.4_._ \(H\) _is a function drawn at random such that it satisfies_ \(\mathcal{L}\)_._ **Definition 2.7**.: _We say that a game \(\mathcal{G}\) in the \(P\)-BF-ROM has security \(\nu(P,Q):=\nu\) if_ \[\max_{f,\mathcal{A}}\Pr_{H}[\mathcal{G}^{H}_{f,\mathcal{A}}=1]\leq\nu\] _where \(H\leftarrow\mathsf{Func}(X,Y)\) and \(\max\) is taken over all \(f\) that outputs \(P\) input-output pairs and \(\mathcal{A}\) that makes \(Q\) classical queries._ **Theorem 2.8** ([16, Theorem A.1]6).: _Let \(\mathcal{G}\) be any game with \(Q_{\mathsf{Samp}},Q_{\mathsf{Verify}}\) being the number of queries made by \(\mathsf{Samp}\) and \(\mathsf{Verify}\). For any classical advice length \(Z\), and number of online queries \(Q\):_ Footnote 6: Similar theorems with slightly worse bounds are presented in [11, Theorem 5 and 6] and [16, Theorem 3]. 1. _For_ \(P=Z(Q+Q_{\mathsf{Samp}}+Q_{\mathsf{Verify}})\)_, if_ \(\mathcal{G}\) _has security_ \(\nu(P,Q)\) _in the_ \(P\)_-BF-ROM, then it has security_ \(\delta(Z,Q)\leq 2\cdot\nu(P,Q)\) _against non-uniform unbounded-time algorithms with_ \(Z\) _bits of classical advice and_ \(Q\) _classical queries._ 2. _For any_ \(P>0\)_, if_ \(\mathcal{G}\) _has security_ \(1/2+\nu(P,Q)\) _in the_ \(P\)_-BF-ROM, then it has security_ \(\delta(Z,Q)\leq 1/2+\nu(P,Q)+Z(Q+Q_{\mathsf{Verify}}+Q_{\mathsf{Samp}})/P\) _against non-uniform unbounded-time algorithms with_ \(Z\) _bits of classical advice and_ \(Q\) _classical queries._ We can use the above correspondence to bound the success probability of a non-uniform quantum algorithm with classical queries to the Yao's Box game [14]. **Lemma 2.9** (Yao's Box with Classical Queries [20, 10]).: _Let \(G:[N]\to\{0,1\}\) be a random function. Let \(\mathcal{A}\) be an unbounded-time algorithm, with \(Z\) bits of (classical) advice \(z_{G}\) and \(Q\) classical queries to \(G\). 
The probability that \(\mathcal{A}\) computes \(G(x)\) without querying \(G\) at random index \(x\) is at most_ \[\Pr_{x}[\mathcal{A}^{G}(z_{G},x)=G(x)]\leq\frac{1}{2}+2\sqrt{\frac{Z(Q+1)}{N}}.\] The above lemma was essentially proved in [10], but we offer here an alternative proof using the presampling technique of Theorem 2.8. Proof of Lemma 2.9.: We consider the bit-fixing game in the \(P\)-BF-ROM, where we fix the value of \(G\) at \(P\) positions in the offline phase. Since \(\mathcal{A}\) is not allowed to query \(G\) at position \(x\) even in the \(P\)-BF-ROM (because it queries \(G\) via the original \(\mathsf{Query}^{G}\)), the only way for \(\mathcal{A}\) to successfully compute \(G(x)\) is if \(x\) is included in the set of fixed positions during the offline phase. This happens with probability \(\frac{P}{N}\) and thus \[\nu(P,T)\leq\frac{P}{N}.\] In this game \(Q_{\mathsf{Samp}}=0\), since the Sampler outputs a challenge \(x\in[N]\) uniformly at random, without the need to perform any queries. The Verifier only needs to query \(G\) at position \(x\), thus \(Q_{\mathsf{Verify}}=1\). The statement of Item 2 of Theorem 2.8 with \(P=\sqrt{Z(Q+1)N}\) implies that the advantage of a non-uniform algorithm with advice \(z_{G}\) and \(Q\) queries is at most \[\delta(Z,Q)\leq\frac{1}{2}+2\sqrt{\frac{Z(Q+1)}{N}}\] The following lemma also relates the ROM with auxiliary input and \(P\)-BF-ROM. This lemma was originally used to prove Theorem 2.8 with a slightly worse bound, but it is applicable in a more general setting. Looking ahead, this is used in the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a classically-accessible oracle in Section 5. **Definition 2.10**.: _For \(P\in\mathbb{N}\), we say that a distribution \(\mathcal{D}\) supported by functions \(G\colon X\to Y\) is a \(P\)-bit-fixing distribution if there is a subset \(S\subseteq X\) such that \(|S|\leq P\) and_ * _for all_ \(x\in S\)_,_ \(G(x)\) _takes the same value for all_ \(G\) _in the support of_ \(\mathcal{D}\)_, and_ * _for all_ \(x\notin S\)_,_ \(G(x)\) _is uniformly distributed over_ \(Y\) _when we sample_ \(G\leftarrow\mathcal{D}\)_._ _We say that \(\mathcal{D}\) is fixed on \(x\) if and only if \(x\in S\)._ **Lemma 2.11** ([1, Lemma 1]).: _Let \(G:X\to Y\) be a random oracle. For any \(\gamma>0\), \(P\in\mathbb{N}\), and any family \(\{z_{G}\}_{G}\) of \(Z\)-bit classical advice, there exists a family \(\{\mathcal{D}_{G}\}_{G}\) of convex combinations \(\mathcal{D}_{G}\) of \(P\)-bit-fixing distributions over \(\mathsf{Func}(X,Y)\) such that for any distinguishing algorithm \(\mathcal{B}\) that makes \(Q<P\) classical queries,_ \[\left|\Pr_{G\leftarrow\mathsf{Func}(X,Y)}\left[\mathcal{B}^{G}(z_{G})=1 \right]-\Pr_{\begin{subarray}{c}G\leftarrow\mathsf{Func}(X,Y)\\ G^{\prime}\leftarrow\mathcal{D}_{G}\end{subarray}}\left[\mathcal{B}^{G^{ \prime}}(z_{G})=1\right]\right|\leq\frac{(Z+\log 1/\gamma)\cdot Q}{P}+\gamma.\] **Remark 3**.: _In the [1, Lemma 1], they prove the existence of a family of \(P\)-bit-fixing distributions \(\{\mathcal{D}_{z_{G}}\}_{z_{G}}\), where the elements of the family are parameterized by the advice \(z_{G}\). In contrast, our formulation above defines a convex combination of \(P\)-bit-fixing distributions for each \(G\). 
This is without loss of generality, since we can assign to each \(G\) the convex combination \(\mathcal{D}_{z_{G}}\) that corresponds to its advice \(z_{G}\)._ One-way to hiding lemma.We review a lemma called the one-way to hiding lemma originally proven by Unruh [20]. The following is a variant proven in [1]. **Lemma 2.12** (One-Way to Hiding Lemma [1, Theorem 3]).: _Let \(S\subseteq\mathcal{X}\) be random. Let \(G,H\colon\mathcal{X}\to\mathcal{Y}\) be random functions satisfying \(G(x)=H(x)\) for all \(x\not\in S\). Let \(z\) be a random classical bit string or quantum state. (\(S,G,H,z\) may have an arbitrary joint distribution.) Let \(\mathcal{A}\) be an oracle-aided quantum algorithm that makes at most \(Q\) quantum queries. Let \(\mathcal{B}\) be an algorithm that on input \(z\) chooses \(i\leftarrow[Q]\), runs \(\mathcal{A}^{H}(z)\), measures \(\mathcal{A}\)'s \(i\)-th query, and outputs the measurement outcome. Then we have_ \[|\Pr[\mathcal{A}^{G}(z)=1]-\Pr[\mathcal{A}^{H}(z)=1]|\leq 2Q\sqrt{\Pr[\mathcal{B}^{H}(z)\in S]}.\] We remark that we consider quantum access to the oracles in the above lemma since we use it in the proof of the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a distributional _quantumly-accessible_ oracle in Section 6.

## 3 Quantum vs Classical Advice for [21]

We review the result of [21], which gives an \(\mathsf{NP}\)-search problem that is easy for \(\mathsf{BQP}\) machines but hard for \(\mathsf{BPP}\) machines relative to a random oracle. Then we show that the problem is easy for \(\mathsf{BQP}\) machines with quantum advice and no online query, but hard for unbounded-time machines with polynomial-size classical advice and polynomially many classical online queries, relying on an observation of [14]. **Definition 3.1** ([21]).: _Let \(C\subseteq\Sigma^{n}\) be a code over an alphabet \(\Sigma\) and \(H:[n]\times\Sigma\to\{0,1\}\) be a function. The following function is called a YZ function with respect to \(C\) and \(H\):_ \[f_{C}^{H}:C\to\{0,1\}^{n}\] \[f_{C}^{H}(v_{1},v_{2},...,v_{n})=H(1,v_{1})||H(2,v_{2})||...||H(n,v_{n}).\] **Remark 4**.: _When we refer to a code \(C\subseteq\Sigma^{n}\), it actually means a family \(\{C_{n}\}_{n\in\mathbb{N}}\) of codes \(C_{n}\subseteq\Sigma_{n}^{n}\) over an alphabet \(\Sigma_{n}\). We often omit the dependence on \(n\) for notational simplicity._ [21] shows that there is an error-correcting code \(C\), which satisfies a property called _list-recoverability_, such that \(f_{C}^{H}\) is easy to invert with quantum queries to \(H\) but hard to invert with classical queries to \(H\), where \(H\) is modeled as a random oracle. Liu [14] observed that the quantum inversion algorithm of [21] does not need to make adaptive queries, and that having polynomial-size quantum advice on \(H\) that is independent of the target \(r\) suffices. In addition, he proved that classical advice does not suffice for the task if no online query is allowed. Combining the above, we have the following theorem. **Theorem 3.2** ([21, 14]).: _There is a code \(C\subseteq\Sigma^{n}\) satisfying the following:_ 1. **(Easiness with Quantum Advice)** _There is a QPT algorithm_ \(\mathcal{A}\) _and a family of_ \(\mathsf{poly}(n)\)_-qubit quantum advice_ \(\{|z_{H}\rangle\}_{H}\) _such that for any_ \(r\in\{0,1\}^{n}\)_,_ \[\Pr_{H}[f_{C}^{H}(\mathcal{A}(|z_{H}\rangle\,,r))=r]\geq 1-\mathsf{negl}(n)\] _where_ \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\)_._ 2.
**(Hardness with Classical Advice)** _For any unbounded-time algorithm_ \(\mathcal{B}\) _and polynomial_ \(s\)_, there is a negligible function_ \(\mu\) _such that for any family of_ \(s(n)\)_-bit classical advice_ \(\{z_{H}\}_{H}\)_,_ \[\Pr_{H,r}[f^{H}_{C}(\mathcal{B}(z_{H},r))=r]\leq\mu(n)\] _where_ \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\) _and_ \(r\leftarrow\{0,1\}^{n}\)_._ _Moreover, \(C\) satisfies \((\zeta,\ell,L)\)-list-recoverability, i.e., for any subset \(T_{i}\subseteq\Sigma\) such that \(|T_{i}|\leq\ell\) for every \(i\in[n]\),_ \[|\{(v_{1},...,v_{n})\in C:|\{i\in[n]:x_{i}\in T_{i}\}|\geq(1-\zeta)n\}|\leq L\] _where \(\zeta=\Omega(1)\), \(\ell=2^{n^{c}}\), and \(L=2^{\tilde{O}(n^{c^{\prime}})}\) for some constants \(0<c<c^{\prime}<1\)._ We extend Theorem 3.2 to require that the hardness with classical advice holds even if \(\mathcal{A}\) is given a classical access to \(H\). This can be seen as a unification of [11] and [14]. **Theorem 3.3**.: _Let \(C\subseteq\Sigma^{n}\) be the code in Theorem 3.2, the following holds:_ **Hardness with Classical Advice and Classical Queries**: _For any unbounded-time algorithm_ \(\mathcal{B}\) _that makes polynomially many classical queries to_ \(H\) _and polynomial_ \(s\)_, there is a negligible function_ \(\mu\) _such that for any family of_ \(s(n)\)_-bit classical advice_ \(\{z_{H}\}_{H}\)_,_ \[\Pr_{H,r}[f^{H}_{C}(\mathcal{B}^{H}(z_{H},r))=r]\leq\mu(n)\] _where_ \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\) _and_ \(r\leftarrow\{0,1\}^{n}\)_._ Proof.: The proof is obtained by combining the proofs of [11] and [14]. We give a full proof for completeness. By Item 1 of Theorem 2.8, we only have to prove that the winning probability of the following game in the \(P\)-BF-ROM is \(\mathsf{negl}(n)\) for any \(P=\mathsf{poly}(n)\) and an unbounded-time algorithm \(\mathcal{B}\) that makes \(Q=\mathsf{poly}(n)\) online classical queries. **Offline Stage**\(\mathcal{B}\) chooses list \(L=\{x_{k},y_{k}\}_{k\in[P]}\) of \(P\) input-output pairs where \(x_{k}\in[n]\times\Sigma\) and \(y_{k}\in\{0,1\}\) for each \(k\in[P]\) and \(x_{k}\neq x_{k^{\prime}}\) for all \(k\neq k^{\prime}\). Then the random oracle \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\) is uniformly chosen under the constraint that \(H(x_{k})=y_{k}\) for all \(i\in[P]\). **Online Stage**\(\mathcal{B}\) takes \(r\leftarrow\{0,1\}^{n}\) as input, makes \(Q\) classical queries to \(H\), and outputs \(\mathbf{v}^{*}=(v_{1}^{*},...,v_{n}^{*})\in\Sigma^{n}\). **Decision**\(\mathcal{B}\) wins if \(\mathbf{v}^{*}\in C\) and \(f^{H}_{C}(\mathbf{v}^{*})=r\). For \(i\in[n]\), let \(T_{i}:=\{v_{i}\in\Sigma:x_{k}=(i,v_{i})\text{ for some }k\in[P]\}\). Let \[\mathsf{Good}:=\{(v_{1},...,v_{n})\in C:|\{i\in[n]:x_{i}\in T_{i}\}|\geq(1- \zeta)n\}.\] By the \((\zeta,\ell,L)\)-list-recoverability and \(|T_{i}|\leq P\leq 2^{n^{c}}\leq\ell\) (for sufficiently large \(n\)), we have \(|\mathsf{Good}|\leq L\). We consider the following two cases: Case 1: \(\mathbf{v}^{*}\in\mathsf{Good}\).For each element \(\mathbf{v}\in\mathsf{Good}\), \(\Pr_{r\leftarrow\{0,1\}^{n}}[f^{H}_{C}(\mathbf{v})=r]=2^{-n}\). Thus, by the union bound, the probability that \(\mathcal{B}\) wins is at most \(|\mathsf{Good}|\cdot 2^{-n}=2^{-\Omega(n)}\) by \(|\mathsf{Good}|\leq L=2^{\tilde{O}(n^{c^{\prime}})}\) and \(c^{\prime}<1\). 
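Spelling out this last estimate (a routine computation with the parameters \(L=2^{\tilde{O}(n^{c^{\prime}})}\) and \(c^{\prime}<1\) from Theorem 3.2): \[|\mathsf{Good}|\cdot 2^{-n}\leq L\cdot 2^{-n}=2^{\tilde{O}(n^{c^{\prime}})-n}=2^{-\Omega(n)},\] since \(\tilde{O}(n^{c^{\prime}})=o(n)\) for any constant \(c^{\prime}<1\).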
Case 2: \(\mathbf{v}^{*}\notin\mathsf{Good}\).The analysis of this case is very similar to the proof of soundness in [13] and the following proof is partially taken verbatim from theirs. For each \(i\in[n]\) and \(j\in[Q]\), let \(S_{i}^{j}\subseteq\Sigma\) be the set of elements \(v_{i}\) such that \(\mathcal{B}\) ever queried \((i,v_{i})\) by the point when it has just made the \(j\)-th query. Let \(\hat{S}_{i}^{j}:=S_{i}^{j}\cup T_{i}\) for each \(i\in[n]\) and \(j\in\{0,1,...,Q\}\). Without loss of generality, we assume that \(v_{i}^{*}\in\hat{S}_{i}^{Q}\) for all \(i\in[n]\).7 After the \(j\)-th query, we say that a codeword \(\mathbf{v}=(v_{1},...,v_{n})\in C\) is _\(K\)-queried_ if there is a subset \(I\in[n]\) such that \(|I|=K\), \(v_{i}\in\hat{S}_{i}^{j}\) for all \(i\in I\), and \(v_{i}\notin\hat{S}_{i}^{j}\) for all \(i\notin I\). By the assumption that \(\mathbf{v}^{*}\notin\mathsf{Good}\), \(\mathbf{v}^{*}\) is \(K_{0}\)-queried for some \(K_{0}\leq\lceil(1-\zeta)n\rceil\) right after the offline stage. By the assumption that \(v_{i}^{*}\in\hat{S}_{i}^{Q}\) for all \(i\in[n]\), \(\mathbf{v}^{*}\) is \(n\)-queried after the \(Q\)-th query. Since a \(K\)-queried codeword either becomes \((K+1)\)-queried or remains \(K\)-queried by a single query, \(\mathbf{v}^{*}\) must be \(K\)-queried at some point of the execution of \(\mathcal{B}\) for all \(K=K_{0},K_{0}+1,...,n\). Footnote 7: This might increase \(Q\) by at most \(n\), but \(T\) is still \(\mathsf{poly}(n)\) anyway. We consider the number of codewords that ever become \(K\)-queried for \(K=\lceil(1-\zeta)n\rceil\). If \(\mathbf{v}=(v_{1},...,v_{n})\in C\) is \(\lceil(1-\zeta)n\rceil\)-queried at some point, the number of \(i\) such that \(v_{i}\in\hat{S}_{i}^{Q}\) is at least \(\lceil(1-\zeta)n\rceil\) since \(\hat{S}_{i}^{j}\subseteq\hat{S}_{i}^{Q}\) for all \(i,j\). We have \(|\hat{S}_{i}^{Q}|\leq Q+P=\mathsf{poly}(n)<2^{n^{c}}\) for sufficiently large \(n\). On the other hand, \(C\) is \((\zeta,\ell,L)\)-list recoverable for \(\ell=2^{n^{c}}\) and \(L=2^{\tilde{O}(n^{c^{\prime}})}\). Thus, the number of codewords that ever become \(\lceil(1-\zeta)n\rceil\)-queried is at most \(L=2^{\tilde{O}(n^{c^{\prime}})}\). Suppose that we simulate the oracle \(H\) for \(\mathcal{B}\) via lazy sampling, that is, instead of uniformly choosing random functions at first, we sample function values whenever they are specified in the list sent in the offline stage or queried in the online stage. Suppose that a codeword \(\mathbf{v}\) becomes \(\lceil(1-\zeta)n\rceil\)-queried at some point of the execution of the experiment. Since the function values on the unqueried \(\lfloor\zeta n\rfloor\) positions are not sampled yet, \(\mathbf{v}\) can become a valid proof only if all those values happen to be consistent to \(r\), which occurs with probability \(\big{(}\frac{1}{2}\big{)}^{\lfloor\zeta n\rfloor}=2^{-\Omega(n)}\) by \(\zeta=\Omega(1)\). Since one of them is the final output \(\mathbf{v}^{*}\), by the union bound, the probability that \(\mathbf{v}^{*}\) is a valid proof is at most \(L\cdot\big{(}\frac{1}{2}\big{)}^{\lfloor\zeta n\rfloor}=2^{-\Omega(n)}\) by \(L=2^{\tilde{O}(n^{c^{\prime}})}\) and \(c^{\prime}<1\). We have the following corollary. The motivation of showing this corollary is to ensure that "a large fraction of \(H\) works for all \(r\)" rather than that "for all \(r\), a large fraction of \(H\) works". 
Looking ahead, this is needed for proving an oracle separation for \(\mathsf{BQP}/\mathsf{poly}\) and \(\mathsf{BQP}/\mathsf{opoly}\) (but not for \(\mathsf{QMA}\) and \(\mathsf{QCMA}\)). **Corollary 3.4**.: _Let \(C\) be the code in Theorem 3.2. Then the following hold:_ 1. **(Easiness with Quantum Advice)** _There is a QPT algorithm_ \(\mathcal{A}\) _and a family of_ \(\mathsf{poly}(n)\)_-qubit quantum advice_ \(\{|z_{\mathcal{H}}\rangle\}_{\mathcal{H}}\) _such that_ \[\Pr_{\mathcal{H}}[\forall r\in\{0,1\}^{n} \Pr[\exists j\in[n]\text{ s.t. }f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=r:(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)}) \leftarrow\mathcal{A}(|z_{\mathcal{H}}\rangle\,,r)]\geq 1-\mathsf{negl}(n)]\] \[\geq 1-\mathsf{negl}(n)\] _where_ \(\mathcal{H}=(H^{(1)},...,H^{(n)})\leftarrow\left(\mathsf{Func}([n]\times \Sigma,\{0,1\})\right)^{n}\)_._ 2. **(Hardness with Classical Advice and Classical Queries)** _For any unbounded-time algorithm_ \(\mathcal{B}\) _that makes_ \(\mathsf{poly}(n)\) _classical queries to_ \(\mathcal{H}=(H^{(1)},...,H^{(n)})\) _and polynomial_ \(s\)_, there is a negligible function_ \(\mu\) _such that for any family of_ \(s(n)\)_-bit classical advice_ \(\{z_{\mathcal{H}}\}_{\mathcal{H}}\)_,_ \[\Pr_{\mathcal{H},r}[\exists j\in[n]\text{ s.t. }f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=r:( \mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow\mathcal{B}^{\mathcal{H}}(z_{ \mathcal{H}},r)]\leq\mu(n)\] _where_ \(\mathcal{H}=(H^{(1)},...,H^{(n)})\leftarrow(\mathsf{Func}([n]\times\Sigma,\{0,1 \}))^{n}\) _and_ \(r\leftarrow\{0,1\}^{n}\)_._ Proof.: For proving Item 1, we can set the advice as \(|z_{\mathcal{H}}\rangle=|z_{H^{(1)}}\rangle\otimes|z_{H^{(2)}}\rangle\otimes\cdots \otimes|z_{H^{(n)}}\rangle\), and the algorithm just parallel runs the algorithm in Theorem 3.2 for different \(H^{(i)}\). Assume the algorithm in Theorem 3.2 satisfies: \[\Pr_{H}[f_{C}^{H}(\mathcal{A}(\left|z_{H}\right\rangle,r))=r)]\geq 1-\eta(n),\] for some negligible function \(\eta(n)\). For any \(r\in\{0,1\}^{n}\), \[\Pr_{\mathcal{H}}[\forall j\in[n],f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})\neq r:( \mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow\mathcal{A}(\left|z_{ \mathcal{H}}\right\rangle,r)]\leq\eta(n)^{n}.\] By an averaging argument, at most \(\eta(n)^{n/2}\) fraction of \(\mathcal{H}\) will satisfy \[\Pr[\forall j\in[n],f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})\neq r:(\mathbf{v}^{(1) },...,\mathbf{v}^{(n)})\leftarrow\mathcal{A}(\left|z_{\mathcal{H}}\right\rangle,r)]\geq\eta(n)^{n/2}\] By applying a union bound over all \(r\in\{0,1\}^{n}\), we obtain that \[\Pr_{\mathcal{H}}[\exists r\in\{0,1\}^{n}\Pr[\forall j\in[n],f_{C}^{H^{(j)}}( \mathbf{v}^{(j)})\neq r:(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow \mathcal{A}(\left|z_{\mathcal{H}}\right\rangle,r)]\geq\eta(n)]\leq(4\eta(n))^{ \frac{n}{2}}\] Applying negation, we obtain the bound above. For proving Item 2, we will show how an adversary \(\mathcal{B}\) that breaks hardness with classical advice and classical queries, can be used to construct an adversary \(\mathcal{B}^{\prime}\) that breaks hardness with classical advice and classical queries of Theorem 3.3. To show this, we go through the following steps: 1. For each fixed \(j\in[n],\mathcal{H}_{\bar{j}}=(H^{(1)},\ldots,H^{(j-1)},H^{(j+1)},\ldots,H^{(n)})\), we define a pair \((\mathcal{B}^{\prime}[j,\mathcal{H}_{\bar{j}}],\{z_{H}^{\prime}[j,\mathcal{H}_ {\bar{j}}]\}_{H})\) of an adversary and advice in which \(j\) and \(\mathcal{H}_{\bar{j}}\) is hardwired. 2. 
Show that \((\mathcal{B}^{\prime}[j,\mathcal{H}_{\bar{j}}],\{z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}]\}_{H})\) breaks Theorem 3.3 on average over the choice of \(j\). 3. Fix the "best" \(j\) and \(\mathcal{H}_{\bar{j}}\) (w.r.t. random \(H\)) to get a fixed pair of algorithm and advice that breaks the hardness of Theorem 3.3. Specifically, it works as follows. Suppose there exists some adversary \(\mathcal{B}\) and some inverse-polynomial function \(q(n)\) such that for sufficiently large \(n\), \[\Pr_{\mathcal{H},r}[\exists j\in[n]\text{ s.t. }f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=r:(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow\mathcal{B}^{\mathcal{H}}(z_{\mathcal{H}},r)]\geq q(n)\] For each \(j\) and \(\mathcal{H}_{\bar{j}}\), we define \((\mathcal{B}^{\prime}[j,\mathcal{H}_{\bar{j}}],\{z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}]\}_{H})\) as follows: \(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}]\)**:**: Given \(\mathcal{H}_{\bar{j}}\), set \[\mathcal{H}=(H^{(1)},\ldots,H^{(j-1)},H,H^{(j+1)},\ldots,H^{(n)}).\] Set \(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}]\) to the advice of \(\mathcal{B}\) for \(\mathcal{H}\), i.e. \(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}]:=z_{\mathcal{H}}\). \(\mathcal{B}^{\prime}[j,\mathcal{H}_{\bar{j}}]^{H}(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}],r)\)**:**: It runs \(\mathcal{B}(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}],r)\), where \(\mathcal{B}^{\prime}\) has \(\mathcal{H}_{\bar{j}}\) hardwired into its algorithm. It simulates the oracle \(\mathcal{H}\) for \(\mathcal{B}\) by querying its own oracle \(H\) as \(H^{(j)}\) and simulates the other \(H^{(i)}\) (\(i\neq j\)) by querying the hardwired \(\mathcal{H}_{\bar{j}}\). By the above argument, we have \[\Pr_{H,r,j,\mathcal{H}_{\bar{j}}}[f_{C}^{H}(\mathcal{B}^{\prime}[j,\mathcal{H}_{\bar{j}}]^{H}(z_{H}^{\prime}[j,\mathcal{H}_{\bar{j}}],r))=r]\] \[=\Pr_{\mathcal{H},r,j}[f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=r:(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow\mathcal{B}^{\mathcal{H}}(z_{\mathcal{H}},r)]\] \[\geq\frac{q(n)}{n}.\] Thus, by taking \(j=j^{*}\) and \(\mathcal{H}_{\bar{j}^{*}}^{*}\) that make the above probability the largest for random \(\mathcal{H}\), \((\mathcal{B}^{\prime},\{z_{H}^{\prime}\}_{H}):=(\mathcal{B}^{\prime}[j^{*},\mathcal{H}_{\bar{j}^{*}}^{*}],\{z_{H}^{\prime}[j^{*},\mathcal{H}_{\bar{j}^{*}}^{*}]\}_{H})\) breaks the hardness of Theorem 3.3.

## 4 BQP/poly vs BQP/qpoly under Classically-Accessible Oracle

In this section, we demonstrate a BQP/qpoly and BQP/poly separation relative to a classically-accessible classical oracle. The main technical lemma needed for proving the separation is the following. **Lemma 4.1**.: _There is a family of distributions \(\{\mathcal{D}_{n}\}_{n\in\mathbb{N}}\), where \(\mathcal{D}_{n}\) is supported on tuples \((G,\mathcal{O})\) of functions \(G:\{0,1\}^{n}\to\{0,1\}\) and \(\mathcal{O}:\{0,1\}^{p(n)}\to\{0,1\}^{q(n)}\) for some polynomials \(p\) and \(q\), satisfying the following:_ 1. **(Easiness with Quantum Advice.)** _There is a QPT algorithm_ \(\mathcal{A}\) _with classical access to_ \(\mathcal{O}\) _and a family of_ \(\mathsf{poly}(n)\)_-qubit quantum advice_ \(\{\left|z_{\mathcal{O}}\right\rangle\}_{\mathcal{O}}\) _such that_ \[\Pr_{(G,\mathcal{O})\leftarrow\mathcal{D}_{n}}[\forall x\in\{0,1\}^{n}\ \Pr[\mathcal{A}^{\mathcal{O}}(\left|z_{\mathcal{O}}\right\rangle,x)=G(x)]\geq 1-\mathsf{negl}(n)]\geq 1-\mathsf{negl}(n).\] 2.
**(Hardness with Classical Advice.)** _For any unbounded-time algorithm_ \(\mathcal{B}\) _that makes_ \(\mathsf{poly}(n)\) _classical queries to_ \(\mathcal{O}\) _and a family of_ \(\mathsf{poly}(n)\)_-bit classical advice_ \(\{z_{\mathcal{O}}\}_{\mathcal{O}}\)_,_ \[\Pr_{\begin{subarray}{c}(G,\mathcal{O})\leftarrow\mathcal{D}_{n}\\ x\leftarrow\{0,1\}^{n}\end{subarray}}[\mathcal{B}^{\mathcal{O}}(z_{\mathcal{O }},x)=G(x)]\leq\frac{3}{5}\] _for all sufficiently large_ \(n\)_._ For proving Lemma 4.1, we prepare the following lemma. **Lemma 4.2**.: _Let \(G:\{0,1\}^{n}\to\{0,1\}\) be a uniformly random function. For an unbounded-time algorithm \(\mathcal{A}\) that makes \(\mathsf{poly}(n)\) classical queries to \(G\) and a family of \(\mathsf{poly}(n)\)-bit classical advice \(\{z_{G}\}_{G}\), suppose that the following holds:_ \[\Pr_{G,x\leftarrow\{0,1\}^{n}}[\mathcal{A}^{G}(z_{G},x)=G(x)]> \frac{3}{5}.\] _Then the probability that \(x\) is contained in the query list is at least \(\frac{1}{20}\) for \(\frac{1}{30}\) fraction of \(x\in\{0,1\}^{n}\) for sufficiently large \(n\)._ Proof.: For each \(x\in\{0,1\}^{n}\), we define \(G_{x}\) as the random function \(G\) with its input on \(x\) removed, i.e. \(G_{x}(x^{\prime})=G(x^{\prime})\) for \(x^{\prime}\neq x\) and \(G_{x}(x)=0\). Since \(\mathcal{A}\) only makes classical queries to \(G\), the only way for it to distinguish \(G\) from \(G_{x}\) is to query the oracle at \(x\). Denote by \(\delta_{x}\) the probability that \(x\) is in the query list of \(\mathcal{A}^{G_{x}}\), where the probability is over the randomness of \(\mathcal{A}\) and \(G_{x}\). We obtain that, \[|\Pr_{G}[\mathcal{A}^{G}(z_{G},x)=G(x)]-\Pr_{G}[\mathcal{A}^{G_{ x}}(z_{G},x)=G(x)]|\leq\delta_{x}.\] Now we consider the case when we uniform randomly choose \(x\leftarrow\{0,1\}^{n}\), and require \(\mathcal{A}^{G_{x}}(z_{G},x)\) to output \(G(x)\). This is exactly Yao's box problem, where the adversary is required to output \(G(x)\) without querying \(x\). By Lemma 2.9, we have the following bound for Yao's box with classical queries and classical advice: \[\Pr_{G,x}[\mathcal{A}^{G_{x}}(z_{G},x)=G(x)]\leq\frac{1}{2}+2 \sqrt{\frac{|z_{G}|(Q+1)}{2^{n}}}=\frac{1}{2}+\mathsf{negl}(n)\] where we assume that \(\mathcal{A}\) makes \(Q\) queries. Thus we have that \[\Pr_{G,x}[\mathcal{A}^{G}(z_{G},x)=G(x)]-\Pr_{G,x}[\mathcal{A}^{G_{x}}(z_{G},x)= G(x)]\geq\frac{1}{10}-\mathsf{negl}(n),\] Therefore, we have \[\operatorname*{\mathbb{E}}_{x}[\delta_{x}]\geq\operatorname*{\mathbb{E}}_{x} \left[\left|\Pr_{G}[\mathcal{A}^{G}(z_{G},x)=G(x)]-\Pr_{G}[\mathcal{A}^{G_{x} }(z_{G},x)=G(x)]\right|\right]\geq\frac{1}{10}-\mathsf{negl}(n)\] We now show that \(\delta_{x}\) is at least \(\frac{1}{20}\) with probability \(\frac{1}{30}\) for sufficiently large \(n\). \[\Pr_{x}\left[\delta_{x}\geq\frac{1}{20}\right]+\left(1-\Pr_{x}\left[\delta_{x }\geq\frac{1}{20}\right]\right)\cdot\frac{1}{20}\geq\operatorname*{\mathbb{E} }_{x}[\delta_{x}]\geq\frac{1}{10}-\mathsf{negl}(n)\] \[\implies\Pr_{x}\left[\delta_{x}\geq\frac{1}{20}\right]\geq\frac{1}{20}- \mathsf{negl}(n)\] Thus for sufficiently large \(n\), for a \(\frac{1}{20}-\mathsf{negl}(n)\geq\frac{1}{30}\) fraction of \(x\in\{0,1\}^{n}\), \(\mathcal{A}\) will query \(x\) with probability at least \(\frac{1}{20}\). Then we prove Lemma 4.1. Proof of Lemma 4.1.: We define \(\mathcal{D}_{n}\) to be the distribution that samples \(G\) and \(\mathcal{O}\) as follows: * Let \(C\subseteq\Sigma^{n}\) be the code in Corollary 3.4. 
It samples random functions \(G:\{0,1\}^{n}\to\{0,1\}\) and \(H^{(j)}:\{0,1\}^{\log n}\times\Sigma\to\{0,1\}\) for \(j\in[n]\) and defines \(\mathcal{O}\) as follows: \(\mathcal{O}\) takes \(x\in\{0,1\}^{n}\) and \((\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\in C^{n}\) as input, and outputs \(G(x)\) if there is \(j\in[n]\) such that \(f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=x\) and outputs \(\bot\) otherwise.8 For simplicity we will denote by \(\mathcal{H}=(H^{(1)},\ldots,H^{(n)})\). Footnote 8: Note that \(x\) here plays the role of \(r\) in Corollary 3.4. First, we show the easiness with quantum advice (Item 1). Let \((\mathcal{A}^{\prime},\{|z^{\prime}_{\mathcal{H}}\rangle\}_{\mathcal{H}})\) be the tuple of an algorithm and family of quantum advice in Item 1 of Corollary 3.4. We construct \((\mathcal{A},\{|z_{\mathcal{O}}\rangle\}_{\mathcal{O}})\) that satisfies Item 1 of Lemma 4.1. In fact, we allow the advice \(|z_{\mathcal{O}}\rangle\) to be a mixed state and write it by \(\rho_{\mathcal{O}}\). Note that this does not weaken the statement since any mixed state can be considered as a distribution over pure states and thus there must exist a pure state advice \(|z_{\mathcal{O}}\rangle\) that is at least as good as \(\rho_{\mathcal{O}}\). The algorithm \(\mathcal{A}\) and quantum advice \(\rho_{\mathcal{O}}\) is described as follows: * We describe a randomized procedure to set \(\rho_{\mathcal{O}}\) given an oracle \(\mathcal{O}\). This should be understood as setting \(\rho_{\mathcal{O}}\) to be the mixed state corresponding to the output of the procedure. Sample \((G,\mathcal{H})\) from the conditional distribution of \(\mathcal{D}_{n}\) conditioned on the given \(\mathcal{O}\). Note that then the joint distribution of \((G,\mathcal{H},\mathcal{O})\) is identical to the real one. Then \(\rho_{\mathcal{O}}\) is set to be \(\rho_{\mathcal{H}}\). * It runs \(\mathbf{v}\leftarrow\mathcal{A}^{\prime}(\rho_{\mathcal{O}},x)\), queries \((x,\mathbf{v})\) to \(\mathcal{O}\), and outputs whatever \(\mathcal{O}\) returns. Then Item 1 of Corollary 3.4 implies \[\Pr_{(G,\mathcal{O})\leftarrow\mathcal{D}_{n}}[\forall x\in\{0,1\}^{n}\ \Pr[\mathcal{A}^{\mathcal{O}}(\rho_{\mathcal{O}},x)=G(x)]\geq 1- \mathsf{negl}(n)]\geq 1-\mathsf{negl}(n).\] Thus, Item 1 of Lemma 4.1 holds. Next, we show the hardness with classical advice (Item 2). Suppose that there is \((\mathcal{B},\{z_{\mathcal{O}}\}_{\mathcal{O}})\) that breaks it. Then we have \[\Pr_{\begin{subarray}{c}(G,\mathcal{O})\leftarrow\mathcal{D}_{n}\\ x\leftarrow\{0,1\}^{n}\end{subarray}}[\mathcal{B}^{\mathcal{O}}(z_{\mathcal{O }},x)=G(x)]>\frac{3}{5}\] for infinitely many \(n\in\mathbb{N}\). Recall that \(\mathcal{O}\) returns \(G(x)\) only if the query \((x,(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)}))\) satisfies \(f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=x\) for some \(j\in[n]\). Thus, by a direct reduction to Lemma 4.2, for a \(\frac{1}{30}\) fraction of \(x\in\{0,1\}^{n}\), the query list of \(\mathcal{B}\) to a randomly chosen \(\mathcal{O}\) according to \(\mathcal{D}_{n}\) will contain a query of the form \((x,(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)}))\) such that there is \(j\in[n]\) such that \(f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=x\) with probability at least \(\frac{1}{20}\). Also note that classical access to \(\mathcal{O}\) can be simulated by classical access to \(G\) and \(\mathcal{H}=(H^{(1)},\ldots,H^{(n)})\). 
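As a schematic illustration of this simulation step (the function names `simulate_O` and `f_CH` are ours and purely illustrative), one classical query to \(\mathcal{O}\) is answered using at most \(n\) evaluations of the YZ function, each requiring only classical queries to the corresponding \(H^{(j)}\), plus at most one classical query to \(G\):

```python
def simulate_O(query, G, H_list, f_CH):
    """Answer one classical query to the oracle O of Lemma 4.1 using only
    classical access to G and to H = (H^(1), ..., H^(n)).

    query = (x, (v^(1), ..., v^(n)));
    f_CH(H_j, v) evaluates the YZ function f_C^{H^(j)}(v) via classical queries.
    """
    x, vs = query
    for j, v in enumerate(vs):
        # each evaluation costs |v| classical queries to H^(j)
        if f_CH(H_list[j], v) == x:
            return G(x)  # one classical query to G
    return None          # plays the role of the symbol "bot"
```

In particular, any query bound for \(\mathcal{B}\) against \(\mathcal{O}\) translates into a polynomially related query bound against \((G,\mathcal{H})\), which is what the reduction below exploits.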
Thus, the above directly gives an algorithm that violates the hardness with classical advice and classical access to \(\mathcal{H}\) (Item 2 of Corollary 3.4). To show this, we go through the following steps: 1. For each fixed \(G\), we define a pair \((\mathcal{B}^{\prime}[G],\{z^{\prime}_{\mathcal{H}}[G]\}_{\mathcal{H}})\) of an adversary and advice in which \(G\) is hardwired. 2. Show that \((\mathcal{B}^{\prime}[G],\{z^{\prime}_{\mathcal{H}}[G]\}_{\mathcal{H}})\) breaks Item 2 of Corollary 3.4 on average over the choice of \(G\). 3. Fix the "best" \(G\) (w.r.t. random \(\mathcal{H}\)) to get a fixed pair of algorithm and advice that breaks Item 2 of Corollary 3.4. Specifically, it works as follows. For each \(G\), we define \((\mathcal{B}^{\prime}[G],\{z^{\prime}_{\mathcal{H}}[G]\}_{\mathcal{H}})\) as follows: \(z^{\prime}_{\mathcal{H}}[G]\)**:**: Construct \(\mathcal{O}\) from \((G,\mathcal{H})\). Set \(z^{\prime}_{\mathcal{H}}[G]:=z_{\mathcal{O}}\). \(\mathcal{B}^{\prime}[G]^{\mathcal{H}}(z^{\prime}_{\mathcal{H}}[G],x)\)**:**: It runs \(\mathcal{B}^{\prime}(z^{\prime}_{\mathcal{H}}[G],x)\) where \(\mathcal{B}^{\prime}\) simulates the oracle \(\mathcal{O}\) for \(\mathcal{B}\) by using its own oracle \(\mathcal{H}\) and the hardwired oracle \(G\) and outputs a uniformly chosen query by \(\mathcal{B}\). By the above argument, we have \[\Pr_{G,\mathcal{H},x}[\exists j\in[n]\text{ s.t. }f_{C}^{H^{(j)}}(\mathbf{v}^{(j)} )=x:(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\leftarrow\mathcal{B}^{\prime}[G] ^{\mathcal{H}}(z^{\prime}_{\mathcal{H}}[G],x)]\geq\frac{1}{600Q}\] where \(Q\) is the number of queries by \(\mathcal{B}\). Thus, by taking \(G=G^{*}\) that makes the above probability the largest, \((\mathcal{B}^{\prime},\{z^{\prime}_{\mathcal{H}}\}_{\mathcal{H}}):=(\mathcal{B }^{\prime}[G^{*}],\{z^{\prime}_{\mathcal{H}}[G^{*}]\}_{\mathcal{H}})\) breaks Item 2 of Corollary 3.4.9 Footnote 9: At first glance, this argument seems to allow \(\mathcal{B}^{\prime}\) to be a non-uniform machine that takes \(G\) as advice. However, this is not needed since \(\mathcal{B}^{\prime}\) can find the best \(G\) by itself by using its unbounded-time computational power. Given Lemma 4.1, it is straightforward to prove a separation between \(\mathsf{BQP/qpoly}\) and \(\mathsf{BQP/poly}\) relative to a classically-accessible classical oracle by the standard diagonalization argument. **Theorem 4.3**.: _There is a classically-accessible classical oracle \(\mathcal{O}\) relative to which \(\mathsf{BQP/poly}\neq\mathsf{BQP/qpoly}\)._ Proof.: Suppose that we generate \((G,\mathcal{O})\leftarrow\mathcal{D}_{n}\) and define a language \(\mathcal{L}:=\bigsqcup_{n\in\mathbb{N}}G_{n}^{-1}(1)\) and an oracle \(\mathcal{O}\) that returns \(\mathcal{O}_{|x|}(x)\) on a query \(x\in\{0,1\}^{*}\). We claim that \(\mathcal{L}\in\mathsf{BQP^{\mathcal{O}}/qpoly}\) and \(\mathcal{L}\notin\mathsf{BQP^{\mathcal{O}}/poly}\) with probability \(1\). To see \(\mathcal{L}\in\mathsf{BQP^{\mathcal{O}}/qpoly}\) with probability \(1\), Item 1 of Lemma 4.1 implies that there is a \(\mathsf{BQP}\) machine \(\mathcal{A}\) with polynomial-size quantum advice that decides \(\mathcal{L}\) on all \(x\) of length \(n\) with probability at least \(1-\frac{1}{n^{2}}\) for all sufficiently large \(n\). Since \(\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\pi^{2}/6\) is finite, the Borel-Cantelli lemma implies that the probability that \(\mathcal{A}\) fails on infinitely many \(n\) is \(0\). 
In other words, the probability that there is \(N\) such that \(\mathcal{A}\) succeeds in deciding \(\mathcal{L}\) on all \(x\) such that \(|x|\geq N\) is \(1\). By augmenting \(\mathcal{A}\) to decide \(\mathcal{L}\) by brute-force when the instance has length smaller than \(N\), we can conclude that there is a BQP machine with polynomial-size quantum advice that decides \(\mathcal{L}\) on all \(x\in\{0,1\}^{*}\) with probability \(1\) over the random choice of \((G,\mathcal{O})\) for \(n\in\mathbb{N}\). Next, we prove \(\mathcal{L}\notin\mathsf{BQP}^{\mathcal{O}}/\mathsf{poly}\) with probability \(1\). For a BQP machine \(\mathcal{B}\) that takes \(\ell(n)\)-bit classical advice for a polynomial \(\ell\), we define \(S_{\mathcal{B}}(n)\) to be the event over the choice of \((G,\mathcal{O})\) that there is a \(\ell(n)\)-bit classical advice \(z_{\mathcal{O}}\) such that \[\Pr[\forall x\in\{0,1\}^{n}\ \mathcal{B}^{\mathcal{O}}(z_{\mathcal{O}},x)=G(x )]\geq\frac{2}{3}.\] Item 2 of Lemma 4.1 implies that there is an integer \(N\) such that for any BQP machine \(\mathcal{B}\) with classical access to \(\mathcal{O}\), we have \(\Pr_{G,\mathcal{O}}[S_{\mathcal{B}}(n)]\leq c\) for all \(n\geq N\) where \(c:=9/10\). We now show that \[\Pr_{G,\mathcal{O}}[S_{\mathcal{B}}(1)\wedge S_{\mathcal{B}}(2)\wedge\dots]=0\] * We will consider a sequence of input lengths \(n_{1},n_{2},\dots\) defined by \(n_{1}:=N\) and \(n_{i}:=T(n_{i-1})+1\), where \(T(n)\) is the running time of \(\mathcal{B}\) on input of length \(n\). This means that when \(\mathcal{B}\)'s input length is \(n_{i-1}\), it cannot touch the oracle on input length \(\geq n_{i}\). This guarantees that \(\Pr[S_{\mathcal{B}}(n_{i})\mid S_{\mathcal{B}}(n_{j})]=\Pr[S_{\mathcal{B}}(n_ {i})]\) for all \(i>j\). * We can now show that the probability that \(\mathcal{B}\) succeeds on all inputs is equal to \(0\) over the choices of \(G,\mathcal{O}\). \[\Pr[S_{\mathcal{B}}(1)\wedge S_{\mathcal{B}}(2)\wedge\dots]\] \[\leq\Pr\left[\bigwedge_{i}S_{\mathcal{B}}(n_{i})\right]\] \[=\Pr[S_{\mathcal{B}}(n_{1})]\cdot\Pr[S_{\mathcal{B}}(n_{2})\mid S _{\mathcal{B}}(n_{1})]\cdot\dots\] \[\leq c\cdot c\cdot\dots\] \[=0\] Since there are countably many QPT machines, \[\Pr_{G,\mathcal{O}}[\exists\mathcal{B}\ S_{\mathcal{B}}(1)\wedge S_{ \mathcal{B}}(2)\wedge\dots]=0.\] This means that \(\mathcal{L}\not\in\mathsf{BQP}^{\mathcal{O}}/\mathsf{poly}\) with probability \(1\) over the choice of \((G,\mathcal{O})\). Stronger separations.We can easily extend our proof to show \(\mathsf{BQP}/\mathsf{qpoly}\cap\mathsf{NP}\cap\mathsf{coNP}\not\subseteq \mathsf{BQP}/\mathsf{poly}\) relative to a classically-accessible classical oracle. This can be seen by observing that for any \(x\in\{0,1\}^{n}\), we can use \((\mathbf{v}^{(1)},...,\mathbf{v}^{(n)})\in C^{n}\) such that \(f_{C}^{H^{(j)}}(\mathbf{v}^{(j)})=x\) for some \(j\in[n]\) as a witness that certifies \(G(x)\). This means that the language \(\mathcal{L}\) in the proof of Theorem 4.3 is in \(\mathsf{NP}\cap\mathsf{coNP}\). Moreover, we can further strengthen the separation to show \(\mathsf{YQP}\cap\mathsf{NP}\cap\mathsf{coNP}\not\subseteq\mathsf{BQP}/ \mathsf{poly}\) relative to a classically-accessible classical oracle. Here, \(\mathsf{YQP}\) is the class of problems that can be decided by a BQP machine with _untrusted_ quantum advice, which was introduced in [1] followed by a minor definitional modification in [1]. 
This can be seen by observing that the BQP/\(\mathsf{qpoly}\) algorithm for deciding the language \(\mathcal{L}\) works even if the advice is untrusted because its guess of \(G(x)\) is correct whenever the oracle \(\mathcal{O}\) does not return \(\bot\) on \((x,(\mathbf{v}^{(1)},...,\mathbf{v}^{(n)}))\) where \(\mathbf{v}^{(j)}\) are candidate solutions to the YZ problem w.r.t. \(H^{(j)}\) and \(x\) generated by using the given (potentially ill-formed) quantum advice. QMA vs QCMA under Classically-Accessible Oracle In this section, we demonstrate a QMA and QCMA separation relative to a classically-accessible classical oracle. Notation.Let \(C\) be the code in Theorem 3.2. (Remark that \(C\) and \(\Sigma\) are actually indexed by \(n\), but we omit it for notational simplicity. See Remark 4.) For \(n\in\mathbb{N}\), functions \(G:\{0,1\}^{n}\to\{0,1\}^{n}\) and \(H:[n]\times\Sigma\to\{0,1\}\), and a subset \(S\subseteq\{0,1\}^{n}\), we define the following oracle: \(\mathcal{O}_{n}[G,H,S]\)**:**: It takes \(t\in\{0,1\}^{n}\) and \(\mathbf{v}\in C\) as input and outputs \(1\) if \(f_{C}^{H}(\mathbf{v})=G(t)\) and \(t\notin S\) and otherwise outputs \(0\). The following is the key technical lemma for the separation between QMA and QCMA relative to a classically-accessible oracle. **Lemma 5.1**.: _The following hold:_ 1. **(Distinguishability with Quantum Witness)** _There is a QPT algorithm_ \(\mathcal{A}\) _that makes polynomially many classical queries and a polynomial_ \(\ell\) _such that the following hold:_ 1. _There is an_ \(\ell(n)\)_-qubit state_ \(|w\rangle\) _such that_ \[\Pr_{G,H}[\mathcal{A}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},|w\rangle)=1] \geq 1-\mathsf{negl}(n)\] _where_ \(G\leftarrow\mathsf{Func}(\{0,1\}^{n},\{0,1\}^{n})\) _and_ \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\)_._ 2. _For any_ \(G\)_,_ \(H\)_,_ \(S\subseteq\{0,1\}^{n}\)_, and_ \(\ell(n)\)_-qubit state_ \(|w\rangle\)_,_ \[\Pr[\mathcal{A}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},|w\rangle)=1]\leq 1-\frac{|S|}{2^{n}}\] _for all_ \(n\in\mathbb{N}\)_._ 2. **(Indistinguishability with Classical Witness)** _For any unbounded-time algorithm_ \(\mathcal{B}\) _that makes polynomially many classical queries and polynomial_ \(s\)_, there is a negligible function_ \(\mu\) _such that for any family of_ \(s(n)\)_-bit classical witness_ \(\{w_{G,H}\}_{G,H}\)_, there is a subset_ \(S\subseteq\{0,1\}^{n}\) _such that_ \(|S|\geq 2^{n}\cdot 2/3\) _and_ \[\left|\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w_{G,H}) =1]-\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w_{G,H})=1]\right| \leq\mu(n)\] _where_ \(G\leftarrow\mathsf{Func}(\{0,1\}^{n},\{0,1\}^{n})\) _and_ \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\)_._ Proof.: We start with the distinguishability with quantum witness. **Distinguishability with quantum witness.** We use the quantum advice of Theorem 3.2 as a witness. The algorithm \(\mathcal{A}\) proceeds as follows: Sample a random \(t\in\{0,1\}^{n}\) and compute \(r=G(t)\). Note that \(r\) is also uniformly random over \(\{0,1\}^{n}\). Since we can generate \(\mathbf{v}\in C\) such that \(H(\mathbf{v})=r\) using the algorithm in Theorem 3.2 with probability \(1-\mathsf{negl}(n)\) over random \(H,r\), \(\mathcal{A}\) queries \(\mathcal{O}_{n}[G,H,S]\) with the generated \((t,\mathbf{v})\), and output the query result. 
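For concreteness, the following minimal Python sketch spells out the input-output behaviour of \(\mathcal{O}_{n}[G,H,S]\) and the single classical query made by \(\mathcal{A}\). The dictionary encoding of \(G\) and \(H\), the stand-in `f_C` for \(f_{C}^{H}\), and the stub `solve_yz` (which in the actual argument is the quantum procedure of Theorem 3.2 run on the witness state) are illustrative assumptions, not part of the construction.

```python
import random

def f_C(H, v):
    """Stand-in for f_C^H: map a codeword v = (v_1, ..., v_n) to an n-bit string.
    H is modelled as a dictionary indexed by (position, symbol); indices are 0-based."""
    return tuple(H[(i, v_i)] for i, v_i in enumerate(v))

def make_oracle(G, H, S):
    """O_n[G, H, S]: accept (t, v) iff f_C^H(v) = G(t) and t is not in S."""
    def oracle(t, v):
        return int(f_C(H, v) == G[t] and t not in S)
    return oracle

def verifier_A(oracle, G, n, solve_yz):
    """A's single classical query: sample a random t, form the instance r = G(t)
    (a lookup here models a classical query to G), produce a candidate v from the
    witness via the stub solve_yz, and output the oracle's answer."""
    t = tuple(random.randint(0, 1) for _ in range(n))
    v = solve_yz(G[t])          # assumed stub for the Theorem 3.2 solver
    return oracle(t, v)
```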
If \(S=\emptyset\), the oracle should return \(1\) with probability \(1-\mathsf{negl}(n)\), else returns \(0\) if the random \(t\in S\), which happens with probability \(\frac{|S|}{2^{n}}\). **Indistinguishability with classical witness.** In the following proof, unless specified otherwise, we assume \(G,H\) are uniformly sampled when using the notation \(\Pr_{G,H}[X]\). The advantage for an unbounded-time algorithm \(\mathcal{B}\) with classical oracle queries to distinguish \((G,\mathcal{O}_{n}[G,H,\emptyset])\) from \((G,\mathcal{O}_{n}[G,H,S])\) is at most the probability the queries of \(\mathcal{B}\) include an input \((t,\mathbf{v})\) on which the two oracles differ. These inputs are precisely the ones that satisfy \(t\in S\) and \(f_{C}^{H}(\mathbf{v})=G(t)\). Thus we define \(\tilde{\mathcal{B}}\) to be the adversary that is given oracle access to \((G,\mathcal{O}_{n}[G,H,\emptyset])\), the same input as \(\mathcal{B}\), and the set \(S\) and outputs the first query \((t,\mathbf{v})\) of \(\mathcal{B}\) on which the two oracles differ, i.e. \(t\in S\) and \(f_{C}^{H}(\mathbf{v})=G(t)\). (If there is no such query, \(\tilde{\mathcal{B}}\) aborts.) Then we have the following inequality: \[\left|\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1 ^{n},w_{G,H})=1]-\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w_{G, H})=1]\right|\] \[\leq\Pr_{G,H}\left[t\in S\wedge f_{C}^{H}(\mathbf{v})=G(t):(t, \mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G,\mathcal{O}_{n}[G,H,\emptyset]}( 1^{n},w_{G,H},S)\right]\] Note that one can simulate the oracle \((G,\mathcal{O}_{n}[G,H,\emptyset])\) using only access to \((G,H)\). Thus what we want to prove is equivalent to proving the following: **Claim 5.2**.: _For any unbounded-time \(\tilde{\mathcal{B}}\) that makes polynomially many classical queries and polynomial \(s\), there is a negligible function \(\mu\) such that for any family of \(s(n)\)-bit classical advice \(\{w_{G,H}\}_{G,H}\), there is a subset \(S\subseteq\{0,1\}^{n}\) such that \(|S|\geq 2^{n}\cdot 2/3\) such that_ \[\Pr_{G,H}[t\in S\wedge f_{C}^{H}(\mathbf{v})=G(t):(t,\mathbf{v})\leftarrow \tilde{\mathcal{B}}^{G,H}(1^{n},w_{G,H},S)]\leq\mu(n)\] _where \(G\leftarrow\mathsf{Func}(\{0,1\}^{n},\{0,1\}^{n})\) and \(H\leftarrow\mathsf{Func}([n]\times\Sigma,\{0,1\})\)._ We prove Claim 5.2 by reducing it to a similar statement in the \(P\)-BF-ROM by using Lemma 2.11.10 Footnote 10: We cannot simply apply Theorem 2.8 here because the probability considered in Claim 5.2 is not captured by security of a game in the ROM as defined in Definition 2.5. Let \(Q\) be the number of queries by \(\tilde{\mathcal{B}}\). Applying Lemma 2.11, for any \(P>Q\), there is a family \(\{\mathcal{D}_{G,H}\}_{G,H}\) of convex combinations of \(P\)-bit-fixing distributions \(\mathcal{D}_{G,H}\) such that for all \(S\subseteq\{0,1\}^{n}\), \[\left|\Pr_{G,H}[t\in S\wedge f_{C}^{H}(\mathbf{v})=G(t):(t, \mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G,H}(1^{n},w_{G,H},S)]\right.\] \[-\left.\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}\end{subarray}}[t\in S\wedge f _{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t):(t,\mathbf{v})\leftarrow\tilde{ \mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)]\right|\leq\frac{(s+\log 1/ \gamma)\cdot Q}{P}+\gamma. 
\tag{1}\] For each \((G,H)\), since \(\mathcal{D}_{G,H}\) is a convex combination of \(P\)-bit fixing distributions, it can be written as \[\mathcal{D}_{G,H}=\sum_{i}p_{G,H,i}\mathcal{D}_{G,H,i}\] where \(0<p_{G,H,i}\leq 1\), \(\sum_{i}p_{G,H,i}=1\) and each \(\mathcal{D}_{G,H,i}\) is a \(P\)-bit fixing distribution. Let \(\mathcal{I}_{G,H}\) be a distribution that samples \(i\) with probability \(p_{G,H,i}\). For each \(t\in\{0,1\}^{n}\), define \[\delta_{G,H,t}:=\Pr_{i\leftarrow\mathcal{I}_{G,H}}[\mathcal{D}_{G,H,i}\text{ is fixed on }t]\] and \[\delta_{t}:=\underset{G,H}{\mathbb{E}}[\delta_{G,H,t}].\] Since each \(\mathcal{D}_{G,H,i}\) is a \(P\)-bit fixing distribution, we have \(\mathbb{E}_{t}[\delta_{t}]\leq P/2^{n}\). We define \[S:=\{t:\delta_{t}\leq 3P/2^{n}\}.\] By Markov's inequality, we have that \(|S|\geq 2/3\cdot 2^{n}\). Define \(\mathcal{D}_{G,H}|\mathcal{G}(S)\) to be the distribution that samples \((G^{\prime},H^{\prime})\) by first sampling \((G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}\) and then replacing \(G^{\prime}(t)\) on \(t\in S\) with a uniformly random value. By the definition of \(\delta_{t}\), for each \(t\in S\), the statistical distance between the distribution of \(G^{\prime}(t)\) for \((G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}\) and that for \((G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\) is at most \(\delta_{t}\leq 3P/2^{n}\). Thus, we have \[\begin{array}{l}\left|\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}\end{subarray}}\left[t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]-\\ \Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}}\left[t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]\Bigg{|}\leq 3P(Q+1)/2^{n}.\end{array} \tag{2}\] This is proved via a hybrid argument. Let \(\alpha_{t}\) be the probability that \(\tilde{\mathcal{B}}\) queries \(t\) to \(G^{\prime}\) or the first component of its output is \(t\). By the operational meaning of statistical distance, replacing the distributions as described above will change the final output probability at most \[\sum_{t\in S}\alpha_{t}\delta_{t}\leq\sum_{t\in S}3P\alpha_{t}/2^{n}\leq 3P(Q+1)/2^{n}.\] Now we prove that \[\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}}\left[t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]\leq 2^{-\Omega(n)}. \tag{3}\] Note that \[\begin{array}{l}\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}}\left[t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]\\ \leq\sum_{i\in[Q_{G^{\prime}}]}\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}}\left[t=t_{i}\wedge t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]\\ \quad+\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}}\left[t\notin\{t_{1},...,t_{Q_{G^{\prime}}}\}\wedge t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right],\end{array}\] where \(Q_{G^{\prime}}\) is the number of queries to \(G^{\prime}\) and \(t_{i}\) is the \(i\)-th query to \(G^{\prime}\).
For each \(i\), we can bound the first term as follows: \[\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}} \left[t=t_{i}\wedge t\in S\wedge f_{C}^{H^{\prime}}(\mathbf{v})=G^{\prime}(t) \colon(t,\mathbf{v})\leftarrow\tilde{\mathcal{B}}^{G^{\prime},H^{\prime}}(1^{ n},w_{G,H},S)\right]\leq 2^{-\Omega(n)}.\] This can be shown by repeating exactly the same proof as that of Theorem 3.3 noting that \(G^{\prime}(t_{i})\) is uniformly random for \(\tilde{\mathcal{B}}\) before querying it when \(t\in S\) and thus we can embed a fresh problem instance \(r\leftarrow\{0,1\}^{n}\) into \(G^{\prime}(t_{i})\). The second term \[\Pr_{\begin{subarray}{c}G,H\\ (G^{\prime},H^{\prime})\leftarrow\mathcal{D}_{G,H}|\mathcal{G}(S)\end{subarray}} \left[t\notin\{t_{1},...,t_{Q_{G^{\prime}}}\}\wedge t\in S\wedge f_{C}^{H^{ \prime}}(\mathbf{v})=G^{\prime}(t)\colon(t,\mathbf{v})\leftarrow\tilde{ \mathcal{B}}^{G^{\prime},H^{\prime}}(1^{n},w_{G,H},S)\right]\] is bounded by \(2^{-n}\), because \(G^{\prime}(t)\) is uniformly random for \(\tilde{\mathcal{B}}\) and the probability that \(f_{C}^{H^{\prime}}(\mathbf{v})\) is equal to a uniformly random value is \(2^{-n}\). Combining the above and \(Q_{G^{\prime}}=\mathsf{poly}(n)\), we get Equation (3). Now we combine Equations (1) to (3) where we set \(P=2^{\sqrt{n}}\), \(s=\mathsf{poly}(n)\), \(Q=\mathsf{poly}(n)\), and \(\gamma=2^{-n}\), we can obtain that \[\Pr_{G,H}[t\in S\wedge f_{C}^{H}(\mathbf{v})=G(t):(t,\mathbf{v})\leftarrow \tilde{\mathcal{B}}^{G,H}(1^{n},w_{G,H},S)]\leq 2^{-\Omega(\sqrt{n})}=\mathsf{ neg}(n)\] This completes the proof of Claim 5.2, which in turn completes the proof of Lemma 5.1. Next, we give a corollary of Lemma 5.1, which is useful for showing the separation between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a classically-accessible oracle. **Corollary 5.3**.: _Let \(\mathcal{A}\) and \(\ell\) be as in Lemma 5.1. For any unbounded-time algorithm \(\mathcal{B}\) that makes polynomially many classical queries and polynomial \(s\), there is an integer \(N_{\mathcal{B},s}\) such that either of the following holds for all \(n\geq N_{\mathcal{B},s}\):_ 1. _There are_ \(G\in\mathsf{Func}(\{0,1\}^{n},\{0,1\}^{n})\) _and_ \(H\in\mathsf{Func}([n]\times\Sigma,\{0,1\})\) _such that_ 1. _There is an_ \(\ell(n)\)_-qubit state_ \(|w\rangle\) _such that_ 2. \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]\leq 2/3\)_._ 2. \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]<2/3\) _for any_ \(w\in\{0,1\}^{s(n)}\)_._ 3. _There are_ \(G\in\mathsf{Func}(\{0,1\}^{n},\{0,1\}^{n})\)_,_ \(H\in\mathsf{Func}([n]\times\Sigma,\{0,1\})\)_, and_ \(S\subseteq\{0,1\}^{n}\) _such that_ 1. \(\Pr[\mathcal{A}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},|w\rangle)=1]\leq 1/3\) _for any_ \(\ell(n)\)_-qubit state_ \(|w\rangle\)_._ 2. _There is_ \(w\in\{0,1\}^{s(n)}\) _such that_ \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w)=1]>1/3\)_._ Proof.: By Item 1a of Lemma 5.1 and a standard averaging argument, for \(1-\mathsf{negl}(n)\)-fraction of \((G,H)\), there is an \(\ell(n)\)-qubit state \(|w\rangle\) such that \[\Pr[\mathcal{A}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},|w\rangle)=1]\geq \frac{2}{3}\] for sufficiently large \(n\). Let \(\mathsf{Good}_{n}\) be the set of all such \((G,H)\). Suppose that there is \((G,H)\in\mathsf{Good}_{n}\) such that \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]<2/3\) for any \(w\in\{0,1\}^{s(n)}\). 
Then it implies that Item 1 of Corollary 5.3 is satisfied. Thus, it suffices to prove that Item 2 of Corollary 5.3 is satisfied assuming that for all \((G,H)\in\mathsf{Good}_{n}\), there is \(w\in\{0,1\}^{s(n)}\) such that \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]\geq 2/3\). We prove it below. Since \(\mathsf{Good}_{n}\) consists of \(1-\mathsf{negl}(n)\)-fraction of \((G,H)\), a similar inequality to Item 2 of Lemma 5.1 holds even if we sample \((G,H)\) from \(\mathsf{Good}_{n}\), i.e., there is a negligible function \(\mu^{\prime}\) such that for any family of \(s(n)\)-bit classical advice \(\{w_{G,H}\}_{G,H}\), there is a subset \(S\subseteq\{0,1\}^{n}\) such that \(|S|\geq 2^{n}\cdot 2/3\) and \[\left|\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w_{G,H})= 1]-\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w_{G,H})=1]\right| \leq\mu^{\prime}(n)\] where \((G,H)\leftarrow\mathsf{Good}_{n}\). In particular, there is \((G,H)\in\mathsf{Good}_{n}\) such that for any \(s(n)\)-bit classical advice \(w\), there is a subset \(S\subseteq\{0,1\}^{n}\) such that \(|S|\geq 2^{n}\cdot 2/3\) and \[\left|\Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]- \Pr_{G,H}[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w)=1]\right|<\frac{1}{3}\] for sufficiently large \(n\). We fix such \((G,H)\). Since \((G,H)\in\mathsf{Good}_{n}\), by our assumption, there is \(w\in\{0,1\}^{s(n)}\) such that \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,\emptyset]}(1^{n},w)=1]\geq 2/3\). Combined with the above inequality, there are \(w\in\{0,1\}^{s(n)}\) and a subset \(S\subseteq\{0,1\}^{n}\) such that \(|S|\geq 2^{n}\cdot 2/3\) and \(\Pr[\mathcal{B}^{G,\mathcal{O}_{n}[G,H,S]}(1^{n},w)=1]>1/3\). This means that Item 2b of Corollary 5.3 is satisfied. Moreover, since \(|S|\geq 2^{n}\cdot 2/3\), Item 1b implies Item 2a of Corollary 5.3. Thus, Item 2 of Corollary 5.3 is satisfied. This completes the proof of Corollary 5.3. Given Corollary 5.3, it is straightforward to prove the separation of \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) relative to a classically-accessible oracle using the standard diagonalization argument. **Theorem 5.4**.: _There is a classically-accessible classical oracle relative to which \(\mathsf{QMA}\neq\mathsf{QCMA}\)._ Proof of Theorem 5.4.: We enumerate all tuples \((\mathcal{B}_{1},s_{1}),(\mathcal{B}_{2},s_{2}),...\) where \(\mathcal{B}_{j}\) is a QPT machine that makes polynomially many classical queries and \(s_{j}\) is a polynomial for \(j\in\mathbb{N}\). Let \(T_{j}\) be a polynomial such that \(\mathcal{B}_{j}\) runs in time \(T_{j}(n)\) when it takes \(1^{n}\) and \(y\in\{0,1\}^{s_{j}(n)}\) as input. For a sequence \(\mathcal{O}=(\mathcal{O}_{1},\mathcal{O}_{2},...)\) of oracles \(\mathcal{O}_{n}:\{0,1\}^{n}\times C_{n}\to\{0,1\}\), \(n^{*}\in\mathbb{N}\), \(G:\{0,1\}^{n^{*}}\to\{0,1\}^{n^{*}}\), \(H:[n^{*}]\times\Sigma^{n^{*}}\to\{0,1\}\), and \(S\subseteq\{0,1\}^{n^{*}}\), let \(\tilde{\mathcal{O}}[G,H,S]\) be the same as \(\mathcal{O}\) except that the \(n^{*}\)-th oracle is replaced with \(\mathcal{O}_{n^{*}}[G,H,S]\). That is, we define \(\tilde{\mathcal{O}}[G,H,S]:=(\tilde{\mathcal{O}}_{1}[G,H,S],\tilde{\mathcal{ O}}_{2}[G,H,S],...)\) where \(\tilde{\mathcal{O}}_{n^{*}}[G,H,S]:=\mathcal{O}_{n^{*}}[G,H,S]\) and \(\tilde{\mathcal{O}}_{n}[G,H,S]:=\mathcal{O}_{n}\) for all \(n\neq n^{*}\). 
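The per-input-length patching used in this notation (and repeatedly in the procedure below) is mechanically simple. The following Python fragment is a schematic illustration only; representing an oracle family as a single function of the input length is an assumption of the sketch, not how the oracles are formally defined.

```python
def patch_family(base_family, n_star, patched_oracle):
    """Return the family that equals base_family on every input length
    except n_star, where queries are answered by patched_oracle instead."""
    def family(n, *query):
        if n == n_star:
            return patched_oracle(*query)
        return base_family(n, *query)
    return family
```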
We define sequences of oracles \(G^{(0)},G^{(1)},...\) and \(\mathcal{O}^{(0)},\mathcal{O}^{(1)},...\) and a sequence of bits \(\mathsf{flag}_{1},\mathsf{flag}_{2},...\) by the following procedure, where for each \(i\), \(G^{(i)}\) and \(\mathcal{O}^{(i)}\) themselves are also sequences of oracles \(G^{(i)}_{1},G^{(i)}_{2},...\) and \(\mathcal{O}^{(i)}_{1},\mathcal{O}^{(i)}_{2},...\), respectively, such that \(G^{(i)}_{n}:\{0,1\}^{n}\to\{0,1\}^{n}\) and \(\mathcal{O}^{(i)}_{n}:\{0,1\}^{n}\times C\to\{0,1\}\). 1. Take \((G^{(0)}_{1},G^{(0)}_{2},...)\) and \((H^{(0)}_{1},H^{(0)}_{2},...)\) in such a way that there is a \(\ell(n)\)-qubit state \(|w\rangle\) such that \[\Pr[\mathcal{A}^{G_{n},\mathcal{O}_{n}[G^{(0)}_{n},H^{(0)}_{n},\emptyset]}(1^ {n},|w\rangle)=1]\geq 2/3\] for all \(n\in\mathbb{N}\) where \(\mathcal{A}\) and \(\ell\) are as in Item 1 of Lemma 5.1. Note that such \((G^{(0)}_{1},G^{(0)}_{2},...)\) and \((H^{(0)}_{1},H^{(0)}_{2},...)\) exist by Item 1 of Lemma 5.1. Let \(\mathcal{O}^{(0)}:=(\mathcal{O}^{(0)}_{1},\mathcal{O}^{(0)}_{2},...)\) where \(\mathcal{O}^{(0)}_{n}:=\mathcal{O}_{n}[G^{(0)}_{n},H^{(0)}_{n},\emptyset]\) for \(n\in\mathbb{N}\). Set \(n_{0}:=1\) and \(\mathsf{flag}_{n}:=1\) for all \(n\in\mathbb{N}\). 2. For \(i=1,2,...\), 1. Let \(n_{i}:=\max\{T_{i-1}(n_{i-1})+1,N_{\mathcal{B}_{i},s_{i}}\}\) where \(N_{\mathcal{B}_{i},s_{i}}\) is as in Corollary 5.3 where we define \(T_{0}(n_{0}):=1\) for convenience. 2. Do either of the following: 1. If there are \(G^{(i)}_{n_{i}}\) and \(H^{(i)}_{n_{i}}\) such that * there is an \(\ell(n_{i})\)-qubit state \(|w\rangle\) such that \(\Pr[\mathcal{A}^{G^{(i)},\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}},H^{(i)}_ {n_{i}},\emptyset]}(1^{n_{i}},|w\rangle)=1]\geq 2/3\), and * \(\Pr[\mathcal{B}^{G^{(i)},\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}}H^{(i)}_ {n_{i}},\emptyset]}_{i}(1^{n_{i}},w)=1]<2/3\) for any \(w\in\{0,1\}^{s_{i}(n_{i})}\), where \(G^{(i)}_{n}:=G^{(i-1)}_{n}\) for all \(n\neq n_{i}\) (which define \(G^{(i)}:=(G^{(i)}_{1},G^{(i)}_{2},...)\)), then set \(\mathcal{O}^{(i)}:=\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}},H^{(i)}_{n_{i}},\emptyset]\). 2. Otherwise, by Corollary 5.3, there are \(G^{(i)}_{n_{i}}\), \(H^{(i)}_{n_{i}}\), and \(S^{(i)}_{n_{i}}\) such that * \(\Pr[\mathcal{A}^{G^{(i)},\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}},H^{(i)}_ {n_{i}},S^{(i)}_{n_{i}}]}(1^{n_{i}},|w\rangle)=1]\leq 1/3\) for any \(\ell(n_{i})\)-qubit state \(|w\rangle\). * There is \(w\in\{0,1\}^{s_{i}(n_{i})}\) such that \(\Pr[\mathcal{B}^{G^{(i)},\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}},H^{(i)}_ {n_{i}},S^{(i)}_{n_{i}}]}_{i}(1^{n_{i}},w)=1]>1/3\). where \(G^{(i)}_{n}:=G^{(i-1)}_{n}\) for all \(n\neq n_{i}\) (which define \(G^{(i)}:=(G^{(i)}_{1},G^{(i)}_{2},...)\)). Set \(\mathcal{O}^{(i)}:=\tilde{\mathcal{O}}^{(i-1)}[G^{(i)}_{n_{i}},H^{(i)}_{n_{i}},S ^{(i)}_{n_{i}}]\) and overwrite \(\mathsf{flag}_{n_{i}}:=0\). Let \(G\) and \(\mathcal{O}\) be oracles that are consistent to \(G^{(i)}\) and \(\mathcal{O}^{(i)}\), respectively, on all \(n\in[n_{i+1}-1]\) for all \(i\in\{0\}\cup\mathbb{N}\). (That is, for any \(i\in\{0\}\cup\mathbb{N}\), \(n\in[n_{i+1}-1]\), \(t\in\{0,1\}^{n}\), and \(\mathbf{v}\in C\), \(G(t)=G_{n}^{(i)}(t)\) and \(\mathcal{O}(t,\mathbf{v})=\mathcal{O}_{n}^{(i)}(t,\mathbf{v})\).) They are well-defined since we have \(G_{n}^{(i+1)}=G_{n}^{(i)}\) and \(\mathcal{O}_{n}^{(i+1)}=\mathcal{O}_{n}^{(i)}\) for all \(n\leq n_{i+1}-1\) by the definitions of \(G^{(i)}\) and \(\mathcal{O}^{(i)}\). 
Let \(\mathcal{L}\) be a unary language defined as \(\mathcal{L}:=\{1^{n}:\mathsf{flag}_{n}=1\}\). Then by the definitions of \(G\) and \(\mathcal{O}\), \(\mathcal{A}\) is a valid \(\mathsf{QMA}\) verification algorithm for \(\mathcal{L}\) i.e., \(\mathcal{L}\in\mathsf{QMA}^{G,\mathcal{O}}\).11 Moreover, for any QPT machine \(\mathcal{B}_{i}\) with classical witness length \(s_{i}\), it fails to be a valid \(\mathsf{QCMA}\) verification algorithm for \(\mathcal{L}\) on input length \(n_{i}\). Thus, we have \(\mathcal{L}\notin\mathsf{QCMA}^{G,\mathcal{O}}\). This completes the proof of Theorem 5.4. Footnote 11: Strictly speaking, \(\mathcal{A}\) may not work as a \(\mathsf{QMA}\) verifier for small \(n\) on which \(\Pr[\mathcal{A}^{G_{n},\mathcal{O}_{n}[G_{n}^{(0)},H_{n}^{(0)},\emptyset]}(1^{ n},|w))=1]\geq 2/3\) does not hold. However, since there are only finitely many such \(n\), we can augment \(\mathcal{A}\) to work on all \(n\in\mathbb{N}\) by hardwiring the correct outputs on all such \(n\). ## 6 \(\mathsf{QMA}\) vs \(\mathsf{QCMA}\) under Distributional Oracle In this section, we demonstrate a \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) separation relative to a distributional quantumly-accessible classical oracle. **Lemma 6.1**.: _There exists a family of oracles \(\{\mathcal{O}_{n}^{b}[H,r]\}_{n\in\mathbb{N},b\in\{0,1\},H\in\mathcal{H}_{n}, r\in\{0,1\}^{n}}\), and a distribution \(\mathcal{D}_{n}\) over \(\mathcal{H}_{n}\times\{0,1\}^{n}\) such that the following hold:_ 1. **(Distinguishability with Quantum Witness.)** _There exists QPT algorithm_ \(\mathcal{A}\) _and_ \(\mathsf{poly}(n)\)_-qubit quantum witness_ \(\{|z_{H}\rangle\}_{H}\) _such that_ \[\Pr_{\begin{subarray}{c}(H,r)\leftarrow\mathcal{D}_{n}\\ b\leftarrow\{0,1\}\end{subarray}}[\mathcal{A}^{\mathcal{O}_{n}^{b}[H,r]}(|z_{ H}\rangle\,,r)=b]\geq 1-\mathsf{negl}(n).\] 2. **(Indistinguishability with Classical Witness.)** _For any QPT algorithm_ \(\mathcal{B}\) _and any polynomial_ \(s\)_, there is a negligible function_ \(\mu\) _such that for any_ \(s(n)\)_-bit classical witness_ \(\{z_{H}\}_{H}\)__ \[\left|\Pr_{\begin{subarray}{c}(H,r)\leftarrow\mathcal{D}_{n}\\ b\leftarrow\{0,1\}\end{subarray}}[\mathcal{B}^{\mathcal{O}_{n}^{b}[H,r]}(z_{ H},r)=b]-\frac{1}{2}\right|\leq\mu(n).\] Proof.: Let \(C\subseteq\Sigma^{n}\) be the code in Theorem 3.2. Let \(\mathcal{H}_{n}:=\mathsf{Func}([n]\times\Sigma,\{0,1\})\). For a function \(H_{n}:[n]\times\Sigma\rightarrow\{0,1\}\) and \(r_{n}\in\{0,1\}^{n}\), let \(\mathcal{O}_{n}^{b}[H,r]\) be an oracle that works as follows: if \(b=1\), \(\mathcal{O}_{n}^{1}[H,r]\) takes \(\mathbf{v}\in\Sigma^{n}\) as input and outputs \(1\) if \(f_{C}^{H}(\mathbf{v})=r_{n}\) and outputs \(0\) otherwise; if \(b=0\), \(\mathcal{O}_{n}^{0}[H,r]\) always returns \(0\) for all inputs \(\mathbf{v}\in\Sigma^{n}\). Note that \(\mathcal{O}_{n}^{0}[H,r]\) does not depend on \((H,r)\) at all, but we use this notation for convenience. Distinguishability with quantum witness.We use the quantum advice of Theorem 3.2 as a witness. Since we can generate a \(\mathbf{v}\in C\) such that \(H(\mathbf{v})=r\) using the algorithm in Theorem 3.2 with probability \(1-\mathsf{negl}(n)\) over random \(H,r\), we query \(\mathcal{O}_{n}^{b}[H,r]\) with the generated \(\mathbf{v}\), and outputs the query result. If \(b=1\), the oracle should return \(1\) with probability \(1-\mathsf{negl}(n)\), else it will always return \(0\). 
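As a companion to this definition, a minimal Python sketch of \(\mathcal{O}_{n}^{b}[H,r]\) and of \(\mathcal{A}\)'s single classical query is given below. As before, `f_C` and the witness-based solver `solve_yz` are stand-ins (assumptions); in the actual argument the candidate solution is produced from the quantum advice of Theorem 3.2.

```python
def make_oracle_b(b, H, r, f_C):
    """O_n^b[H, r]: for b = 1, accept exactly the v with f_C^H(v) = r;
    for b = 0, ignore (H, r) and reject every input."""
    def oracle(v):
        return int(b == 1 and f_C(H, v) == r)
    return oracle

def distinguisher_A(oracle, r, solve_yz):
    """Single classical query: turn the witness into a candidate solution v
    for the instance r and guess b as the oracle's answer on v."""
    v = solve_yz(r)             # assumed stub for the Theorem 3.2 procedure
    return oracle(v)
```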
Indistinguishability with classical witness.By the one-way to hiding lemma (Lemma 2.12), the probability that an unbounded-time algorithm \(\mathcal{B}\) can distinguish \(\mathcal{O}^{1}_{n}[H,r]\) from \(\mathcal{O}^{0}_{n}[H,r]\) is related to the probability that measuring a random query of \(\mathcal{B}\) collapses to an input on which the two oracles differ. The inputs \(\mathbf{v}\) for which \(\mathcal{O}^{1}_{n}[H,r]\) and \(\mathcal{O}^{0}_{n}[H,r]\) differ are precisely the ones that satisfy \(f^{H}_{C}(\mathbf{v})=r\). Thus if we define \(\mathcal{M}\) to be the adversary that outputs the measurement of a random query of \(\mathcal{B}\) we have \[\left|\Pr_{H,r}[\mathcal{B}^{\mathcal{O}^{1}_{n}[H,r]}(z_{H},r)=1 ]-\Pr_{H,r}[\mathcal{B}^{\mathcal{O}^{0}_{n}[H,r]}(z_{H},r)=1]\right|\] \[\qquad\leq 2q\sqrt{\Pr\left[f^{H}_{C}(\mathbf{v})=r\mid \mathbf{v}\leftarrow\mathcal{M}^{\mathcal{O}_{n}[H,r]}(z_{H},r)\right]}\] where \(q\) is the number of \(\mathcal{B}\)'s queries. Since \(\mathcal{O}^{0}_{n}[H,r]\) is an oracle with all-zeros, \(\mathcal{M}\) can simulate it without access to \(H\). Thus from Item 1 of Theorem 3.2, the RHS is \(\mathsf{negl}(n)\). This implies that for any unbounded-time algorithm \(\mathcal{B}\) that makes \(\mathsf{poly}(n)\) quantum queries and a family of \(\mathsf{poly}(n)\)-bit classical witness \(\{z_{H}\}_{H}\), \[\left|\Pr_{H,r}[\mathcal{B}^{\mathcal{O}^{1}_{n}[H,r]}(z_{H},r)=1]-\Pr_{H,r}[ \mathcal{B}^{\mathcal{O}^{0}_{n}[H,r]}(z_{H},r)=1]\right|\leq\mathsf{negl}(n).\] Equivalently, we have \[\left|\Pr_{H,r,b\leftarrow\{0,1\}}[\mathcal{B}^{\mathcal{O}^{b}_{n}[H,r]}(z_ {H},r)=b]-\frac{1}{2}\right|\leq\mathsf{negl}(n).\] We now use the standard diagonalization argument to translate the indistinguishability result of Lemma 6.1 to a \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) separation with respect to a distributional quantumly-accessible classical oracle \(\mathcal{O}\). **Theorem 6.2**.: _There is a distributional quantumly-accessible classical oracle \(\mathcal{O}\) relative to which \(\mathsf{QMA}\neq\mathsf{QCMA}\)._ Proof.: Let \(\mathcal{L}\) be a unary language chosen uniformly at random, that is for each \(n\), we choose \(b_{n}\leftarrow\{0,1\}\) independently and put \(1^{n}\) into \(\mathcal{L}\) if and only if \(b_{n}=1\). The oracle \(\mathcal{O}=\{\mathcal{O}_{n}\}_{n\in\mathbb{N}}\) is chosen as follows where we abuse the notation to write \(r_{n}\) to mean an oracle that takes a null string as input and outputs \(r_{n}\): * If \(1^{n}\in\mathcal{L}\) (i.e., \(b_{n}=1\)), then \(\mathcal{O}_{n}:=(\mathcal{O}^{1}_{n}[H_{n},r_{n}],r_{n})\) where \((H_{n},r_{n})\leftarrow\mathcal{D}_{n}\). * If \(1^{n}\not\in\mathcal{L}\) (i.e., \(b_{n}=0\)), then \(\mathcal{O}_{n}:=(\mathcal{O}^{0}_{n}[H_{n},r_{n}],r_{n})\) where \((H_{n},r_{n})\leftarrow\mathcal{D}_{n}\). We start with proving that \(\mathcal{L}\in\mathsf{QMA}^{\mathcal{O}}\) with probability \(1\) over the choice of \(\{(b_{n},H_{n},r_{n})\}_{n}\). The verifier \(V\) works as follows: It first queries \(\mathcal{O}\) at point \(0^{n}\), obtains the random string \(r_{n}\), then calls the algorithm in Lemma 6.1. Since for each \(n\), if \(b_{n}=1\), the algorithm \(\mathcal{A}^{\mathcal{O}^{1}_{n}[H_{n},r_{n}]}(\left|z_{H_{n}}\right.,r_{n})\) will return \(1\) with probability \(1-\mathsf{negl}(n)\), and if \(b_{n}=0\), the \(\mathcal{A}^{\mathcal{O}^{0}_{n}[H_{n},r_{n}]}(\left|z\right.,r_{n})\) will always return \(0\) for all witness \(\left|z\right.\). 
Thus for each \(n\) the verifier will fail with probability \(\mathsf{negl}(n)\). By applying the Borel-Cantelli lemma and following the same arguments as in Theorem 4.3, we can prove that \(\mathcal{L}\in\mathsf{QMA}^{\mathcal{O}}\). Note that our final oracle distribution \(\mathcal{F}\) will fix \(H_{n}\) for each \(n\), thus here we allow the witness \(\left|z_{H_{n}}\right.\) to depend on \(H_{n}\). Next, we prove \(\mathcal{L}\not\in\mathsf{QCMA}^{\mathcal{O}}\) with probability \(1\) over the choice of \(\{(b_{n},H_{n},r_{n})\}_{n}\). Fix a QPT machine \(V\) that takes \(\ell(n)\)-bit classical witness for some polynomial \(\ell\) and let \(S_{V}(n)\) be the event that \(V^{\mathcal{O}}\) succeeds on \(1^{n}\), that is either * \(1^{n}\in\mathcal{L}\) and there exists classical witness \(w_{\mathcal{O}}\in\{0,1\}^{\ell(n)}\) such that \(V^{\mathcal{O}}\) accepts \((1^{n},w_{\mathcal{O}})\) with probability at least \(\frac{2}{3}\), or * \(1^{n}\not\in\mathcal{L}\) and \(V^{\mathcal{O}}\) accepts \((1^{n},w)\) with probability at most \(\frac{1}{3}\) for all \(w\in\{0,1\}^{\ell(n)}\). To be precise, \[S_{V}(n) =[\exists w\in\{0,1\}^{\ell(n)}:\Pr[V^{\mathcal{O}^{b_{n}-1}}(1^{ n},w)=1]\geq 2/3]\] \[\qquad\qquad\vee[\forall w\in\{0,1\}^{\ell(n)}:\Pr[V^{\mathcal{O }^{b_{n}=0}}(1^{n},w)=1]\leq 1/3]\] We first show that for any QPT algorithm \(V^{\mathcal{O}}\), there is a distinguishing algorithm \(\mathcal{B}^{\mathcal{O}^{b_{n}}[H_{n},r_{n}]}\) in Lemma 6.1 that has the same accept probability as \(V^{\mathcal{O}}\) for all given \((H_{1},H_{2},\dots)\). \(\mathcal{B}^{\mathcal{O}^{b_{n}}[H_{n},r_{n}]}\) takes the witness \(z_{\mathcal{O}}\), and it hardcodes all other \(H_{i},b_{i}(i\neq n)\) in its program, and it simulates the behaviour of \(V^{\mathcal{O}}\) by randomly choosing \(r_{i}\) for \(i\neq n\), and calculates \(O^{b_{i}}_{i}[H_{i},r_{i}]\) for \(i\neq n\) oracle queries, and when querying \(n\), it queries its own oracle. In the end, \(\mathcal{B}\) sets its output to be the same as \(V\). It can be seen that \[\Pr_{H_{n},r_{n}}\left[\mathcal{B}^{\mathcal{O}^{b_{n}}[H_{n},r_{n}]}(w_{ \mathcal{O}},r_{n})=1\right]=\Pr_{\{r_{i}\}_{i},H_{n}}\left[V^{\mathcal{O}}( 1^{n},w_{\mathcal{O}})=1\mid\{H_{i},b_{i}\}_{i\neq n}\right].\] Notice that on the left-hand side we can view \(w_{\mathcal{O}}\) as a witness \(z_{H_{n}}\) as in Lemma 6.1. Now we prove that there exists universal constant \(c<1\) such that for any QPT verification algorithm \(V\), there exist infinitely many \(n\) such that \[\Pr_{\{(b_{n},H_{n},r_{n})\}_{n}}[S_{V}(n)]<c.\] By our observation above, if we can prove that for all unbounded quantum algorithms \(\mathcal{B}\), the following inequality holds: \[\Pr_{H_{n},r_{n},b_{n}}[[E_{1}\wedge b_{n}=1]\vee[E_{2}\wedge b_{n}=0]]<c,\] where \[E_{1} =\exists w\in\{0,1\}^{\ell(n)}:\Pr[\mathcal{B}^{\mathcal{O}^{1}_ {n}[H_{n},r_{n}]}(w,r_{n})=1]\geq 2/3,\] \[E_{2} =\forall w\in\{0,1\}^{\ell(n)}:\Pr[\mathcal{B}^{\mathcal{O}^{0}_ {n}[H_{n},r_{n}]}(w,r_{n})=1]\leq 1/3,\] then \(\Pr[S_{V}(n)]\) is also bounded by an averaging argument. We can see that \[\Pr[[E_{1}\wedge b_{n}=1]\vee[E_{2}\wedge b_{n}=0]]=\frac{1}{2}(\Pr[E_{1}]+ \Pr[E_{2}]).\] It can be seen that \[\Pr[E_{1}]=\Pr[E_{1}\wedge\neg E_{2}]+\Pr[E_{1}\wedge E_{2}].\] We prove \(\Pr[E_{1}\wedge E_{2}]\leq\frac{4}{5}(1+2\mu(n))\) as follows. 
By Item 2 of Lemma 6.1, for all polynomial sized \(w_{H_{n}}\), \[\Pr_{H_{n},r_{n}}[\mathcal{B}^{\mathcal{O}^{1}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{ n})=1]-\Pr_{H_{n},r_{n}}[\mathcal{B}^{\mathcal{O}^{0}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{ n})=1]\leq 2\mu(n).\] By a standard averaging argument,12 we can conclude that for at least \(1-\frac{4}{5}(1+2\mu(n))\) fraction of \((H_{n},r_{n})\), Footnote 12: Concretely, add \(1\) to both sides, apply Markov’s inequality, and then subtract \(1\) from both sides. Then we see that for at most \(\frac{4}{5}(1+2\mu(n))\) fraction of \((H_{n},r_{n})\), we have \(\Pr[\mathcal{B}^{\mathcal{O}^{1}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{n})=1]-\Pr[ \mathcal{B}^{\mathcal{O}^{0}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{n})=1]\geq 1/4\). \[\Pr[\mathcal{B}^{\mathcal{O}^{1}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{n})=1]-\Pr[ \mathcal{B}^{\mathcal{O}^{0}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{n})=1]<1/4,\] implying that if \(E_{1}\) occurs, \(E_{2}\) does not occur, i.e., there exists \(w_{H_{n}}\in\{0,1\}^{\ell(n)}\) such that \[\Pr[\mathcal{B}^{\mathcal{O}^{0}_{n}[H_{n},r_{n}]}(w_{H_{n}},r_{n})=1]>\frac{ 2}{3}-\frac{1}{4}>\frac{1}{3}.\] Thus we have that \(\Pr[E_{1}\wedge E_{2}]\leq\frac{4}{5}(1+2\mu(n))\), giving us \[\frac{1}{2}(\Pr[E_{1}]+ \Pr[E_{2}])\leq\frac{1}{2}(\Pr[E_{1}\wedge\neg E_{2}]+\frac{4}{5 }(1+2\mu(n))+\Pr[E_{2}])\] \[\leq\frac{1}{2}(\Pr[\neg E_{2}]+\frac{5}{6}+\Pr[E_{2}])=\frac{11 }{12}.\] Thus, we can set \(c:=11/12\). We now show that \[\Pr_{\{H_{n},r_{n},b_{n}\}_{n}}[\exists V\;S_{V}(1)\wedge S_{V}(2)\wedge\ldots ]=0.\] Consider a sequence of input lengths \(n_{1},n_{2},\ldots\), such that the indistinguishability condition of Lemma 6.1 holds for \(n_{i}\) and \(n_{i}\geq T(n_{i-1})+1\), where \(T(n)\) is the running time of \(V\) on input of length \(n\). This means that when \(V\)'s input length is \(n_{i-1}\), it cannot touch the oracle on input length \(\geq n_{i}\). This guarantees that \(\Pr[S_{V}(n_{i})\mid S_{V}(n_{j})]=\Pr[S_{V}(n_{i})]\) for all \(i>j\). We can now show that the probability that \(V\) succeeds on all inputs is equal to \(0\) over the choices of \(H_{n},r_{n},b_{n}\). \[\Pr[S_{V}(1)\wedge S_{V}(2)\wedge\ldots]\] \[\leq\Pr\left[\bigwedge_{i}S_{V}(n_{i})\right]\] \[=\Pr[S_{V}(n_{1})]\cdot\Pr[S_{V}(n_{2})\mid S_{V}(n_{1})]\cdot\ldots\] \[\leq c\cdot c\cdot\ldots\] \[=0\] Since there are countably many \(\mathsf{QCMA}\) machines, we have that \[\Pr_{\{H_{n},r_{n},b_{n}\}_{n}}[\exists V\;S_{V}(1)\wedge S_{V}(2)\wedge \ldots]=0.\] Thus \(\mathcal{L}\not\in\mathsf{QCMA}^{\mathcal{O}}\) with probability \(1\) over the choice of \(\{H_{n},r_{n},b_{n}\}_{n}\). We conclude that by fixing \(H_{n},b_{n}\) for each \(n\), we can obtain a language \(\mathcal{L}\) in \(\mathsf{QMA}^{\mathcal{O}}\) but not in \(\mathsf{QCMA}^{\mathcal{O}}\), where \(\mathcal{O}\) is now a distribution over \(\{r_{n}\}_{n}\). One-Way Communication Complexity We observe that the results of [13, 14] (Theorem 3.2) can be seen as a separation of classical and quantum one-way communication complexity. Consider the following protocol between Alice and Bob where we use the notations defined in Section 3. **Alice's input:**: A truth table of a function \(H:[n]\times\Sigma\to\{0,1\}\). **Bob's input:**: A string \(r=(r_{1},...,r_{n})\in\{0,1\}^{n}\). **One-way communication:**: Alice sends a classical or quantum message \(m\) to Bob. **Bob's output:**: Bob outputs \(\mathbf{v}=(\mathbf{v}_{1},...,\mathbf{v}_{n})\in C\). We say that Bob wins if \(H_{i}(\mathbf{v}_{i})=r_{i}\) for all \(i\in[n]\). 
Quantum (resp. classical) one-way communication complexity is defined to be the minimum length of the quantum (resp. classical) message \(m\) that enables Bob to win with high probability (say, \(2/3\)). Theorem 3.2 directly means that quantum one-way communication complexity is \(\mathsf{poly}(n)\) but classical one-way communication complexity is super-polynomial in \(n\) (one can see that it is actually subexponential in \(n\) from its proof).13 This gives a new super-polynomial separation between classical and quantum one-way communication complexity. Such a separation is already known back in 2004 by Bar-Yossef, Jayram, and Kerenidis [1] where they showed it based on a problem called the hidden matching problem. However, our protocol has the following two interesting features compared to theirs. Footnote 13: Strictly speaking, Item 1 of Theorem 3.2 only ensures the existence of a quantum communication protocol with \(\mathsf{poly}(n)\)-qubit communication that works on average over random \(H\). In the standard definition of one-way communication complexity, we require a protocol to work for all inputs. For ensuring that, we have to rely on the perfectly correct version of [13] that can be found at the end of [13, Section 4]. First, Bob's input length is exponentially smaller than Alice's input length. Why is that interesting? To give the context, we review the following theorem shown by Aaronson [1]. **Theorem 7.1**.: _For any (possibly partial) boolean function \(f:\{0,1\}^{N}\times\{0,1\}^{M}\to\{0,1\}\), \(\mathcal{R}^{1}(f)=O(M\mathcal{Q}^{1}(f))\)._ Here, \(\mathcal{Q}^{1}\) and \(\mathcal{R}^{1}\) mean the quantum and classical randomized bounded-error one-way communication complexity, respectively. This theorem means that we cannot have a large quantum-classical separation for boolean functions if Bob's input length is small. We circumvent this barrier by considering _relations_ rather than functions.14 This is reminiscent of [13] that overcomes the barrier of Aaronson-Ambainis conjecture [1] by considering search problems rather than decision problems. Footnote 14: Interestingly, a similar observation was made by a concurrent work [1] where they show a variant of the hidden matching problem with short Bob’s input length. Second, by Theorem 3.3, one can see that the hardness with classical communication remains to hold even if we allow Bob to classically query Alice's input. On the other hand, the hidden matching problem becomes completely easy if we allow such classical queries. This property is the key to show \(\mathsf{BQP/poly}\neq\mathsf{BQP/qpoly}\) and \(\mathsf{QMA}\neq\mathsf{QCMA}\) relative to classically-accessible classical oracles.
2303.05186
A Framework for History-Aware Hyperparameter Optimisation in Reinforcement Learning
A Reinforcement Learning (RL) system depends on a set of initial conditions (hyperparameters) that affect the system's performance. However, defining a good choice of hyperparameters is a challenging problem. Hyperparameter tuning often requires manual or automated searches to find optimal values. Nonetheless, a noticeable limitation is the high cost of algorithm evaluation for complex models, making the tuning process computationally expensive and time-consuming. In this paper, we propose a framework based on integrating complex event processing and temporal models, to alleviate these trade-offs. Through this combination, it is possible to gain insights about a running RL system efficiently and unobtrusively based on data stream monitoring and to create abstract representations that allow reasoning about the historical behaviour of the RL system. The obtained knowledge is exploited to provide feedback to the RL system for optimising its hyperparameters while making effective use of parallel resources. We introduce a novel history-aware epsilon-greedy logic for hyperparameter optimisation that instead of using static hyperparameters that are kept fixed for the whole training, adjusts the hyperparameters at runtime based on the analysis of the agent's performance over time windows in a single agent's lifetime. We tested the proposed approach in a 5G mobile communications case study that uses DQN, a variant of RL, for its decision-making. Our experiments demonstrated the effects of hyperparameter tuning using history on training stability and reward values. The encouraging results show that the proposed history-aware framework significantly improved performance compared to traditional hyperparameter tuning approaches.
Juan Marcelo Parra-Ullauri, Chen Zhen, Antonio García-Domínguez, Nelly Bencomo, Changgang Zheng, Juan Boubeta-Puig, Guadalupe Ortiz, Shufan Yang
2023-03-09T11:30:40Z
http://arxiv.org/abs/2303.05186v1
# A Framework for History-Aware Hyperparameter Optimisation in Reinforcement Learning ###### Abstract. A Reinforcement Learning (RL) system depends on a set of initial conditions (_hyperparameters_) that affect the system's performance. However, defining a good choice of hyperparameters is a challenging problem. Hyperparameter tuning often requires manual or automated searches to find optimal values. Nonetheless, a noticeable limitation is the high cost of algorithm evaluation for complex models, making the tuning process computationally expensive and time-consuming. In this paper, we propose a framework based on integrating complex event processing and temporal models, to alleviate these trade-offs. Through this combination, it is possible to gain insights about a running RL system efficiently and unobtrusively based on data stream monitoring and to create abstract representations that allow reasoning about the historical behaviour of the RL system. The obtained knowledge is exploited to provide feedback to the RL system for optimising its hyperparameters while making effective use of parallel resources. We introduce a novel _history-aware epsilon-greedy logic_ for hyperparameter optimisation that instead of using static hyperparameters that are kept fixed for the whole training, adjusts the hyperparameters at runtime based on the analysis of the agent's performance over time windows in a _single agent's lifetime_. We tested the proposed approach in a 5G mobile communications case study that uses DQN, a variant of RL, for its decision-making. Our experiments demonstrated the effects of hyperparameter tuning using history on training stability and reward values. The encouraging results show that the proposed history-aware framework significantly improved performance compared to traditional hyperparameter tuning approaches. ## 1. Introduction Reinforcement Learning (RL) is a sub-field of Machine Learning with a great success in applications such as self-driving cars, industry automation, among many others [(26)]. In RL, autonomous agents learn through trial-and-error how to find optimal solutions to a problem [(26)]. RL algorithms have multiple _hyperparameters_ that require careful tuning as it is a core aspect of obtaining the state-of-the-art performance [(8)]. The search for the best hyperparameter configuration is a sequential decision process in which initial values are set, and later adjusted, through a mixture of intuition and trial-and-error, to optimise an observed performance to maximise the accuracy or minimise the loss [(8)]. Hyperparameter Optimisation (HPO) often requires expensive manual or automated hyperparameter searches in order to perform properly on an application domain [(29)].
However, a noticeable limitation is the high cost related to algorithm evaluation, which makes the tuning process highly inefficient, computational expensive, and commonly adds extra algorithm developing overheads to the RL agent decision-making processes [(5; 8; 29; 30)]. The full behaviour of complex RL systems often only emerges during operation. They thus need to be monitored at runtime to check that they adhere to their requirements [(23)]. Event-driven Monitoring (EDM) is a common lightweight approach for monitoring a running system [(10)]. Particularly, _Complex Event Processing_ (CEP) is an EDM technique, for capturing, analysing, and correlating large amounts of data in real time in a domain-agnostic way [(12)]. The present paper proposes the use of CEP to quickly detect causal dependencies between events on the fly by continuously querying data streams produced by the RL system in order to gain insights from events as they occur during the execution of the RL agent which is crucial for HPO [(5)]. CEP provides the short-term memory needed to analyse the system behaviour on pre-defined time-points or limited time-windows. However, it is debated that long-term memory is also required when analysing the effects of HPO on the RL agent to find optimal performance evolved on past behaviours. _History-awareness_ requires node-level memory and traceability management facilities to allow the exploration of system's history. _Temporal Models_ (TMs) are seen to tackle these challenges (Borda et al., 2017). TMs offer storage facilities that allows time representation using a temporal database (TDB) (Todorov et al., 2016; Todorov et al., 2016). In this paper, a TDB supports the storage of massive amounts of historical data, while providing fast querying capabilities to support reasoning about runtime properties in the monitored RL agent. In this paper, we propose a framework based on CEP ans TMs that can be reused for different RL algorithms. The proposed combination allows the detection of situations of interest at runtime and permits tracing the RL agent history to enable the short and long term memory required to analyse the impact of HPO. The framework uses a formal defined structure to trace data streams produced by the RL agents, process them and provide feedback for HPO. In addition, we present a novel _history-aware epsilon-greedy logic_ for HPO that is implemented using the components of the proposed framework. This logic tunes the hyperparameter concurrently while acting greedily under certain circumstances, but also exploring the hyperparameter value-space with an \(\epsilon\) probability in order to escape local maximums. The HPO occurs while the agent is learning, which turns to be more efficient than using static hyperparameters during the training process and having to update them on multiple agent's lifetimes (Todorov et al., 2016; Todorov et al., 2017). In order to test the feasibility of the proposed framework, Deep Q-Network (DQN) (Todorov et al., 2016), a popular RL algorithm, was applied to a case study on the next generation of mobile communications from (Todorov et al., 2016). The experiments analysed the effects of the proposed history-aware approach for HPO during the RL agent training, and compared the results with traditional hyperparameter tuning approaches. Our experiments focused on updating the discounting factor hyperparameter at runtime for a _single agent's lifetime_, using these different techniques. The rest of the paper is organised as follows. 
Section 2 provides a description of the core concepts required to understand this paper. Section 3 introduces our approach. Experiments and results are presented in Section 4. The discussion is presented in Section 5. Section 6 compares the presented work with the current state of HPO in RL. Finally, Section 7 presents conclusions and future directions. ## 2. Background ### Hyperparameter Optimisation in Reinforcement Learning In RL, an agent tries to maximise the optimal action-value function described by the Bellman optimality equation (Todorov et al., 2016): \[Q^{*}(s,a)=\mathbb{E}\{r_{t+1}+\gamma\max_{a^{\prime}}Q^{*}(s_{t+1},a^{\prime})\mid s_{t}=s,a_{t}=a\} \tag{1}\] where \(\mathbb{E}\) represents the expected sum of future rewards characterised by the hyperparameter \(\gamma\), which is the _discounting factor_ (Todorov et al., 2016). A reward \(r_{t}\) that occurs \(N\) steps in the future from the current state is multiplied by \(\gamma^{N}\) to describe its importance to the current state. As shown, defining the right \(\gamma\) and additional hyperparameters through HPO is key to delivering optimal solutions in RL. The most basic way of HPO is manual search, which is based on the intuition of the developer (Borda et al., 2017). Once the system execution has finished, convergence is verified. More sophisticated HPO approaches include i) Model-free Blackbox Optimisation (MBO) and ii) Bayesian Optimisation (BO) (Borda et al., 2017). Grid and random search are part of i). In grid search, the user defines a set of hyperparameter values to be analysed and the search evaluates the Cartesian product of these sets. Random search samples configurations at random until a certain budget for the search is exhausted (Borda et al., 2017). Regarding ii), BO iteratively evaluates a promising hyperparameter configuration based on the current model and then updates it, trying to locate the optimum in multiple agent's lifetimes. However, performing these techniques is time-consuming, computationally expensive and requires expert knowledge (Borda et al., 2017). For the reasons mentioned, the introduction of an automated hyperparameter search process is key for the continuing success of RL and is acknowledged as the most basic task in automated machine learning (AutoML) (Borda et al., 2017). In this work, we focus on MBO for a _single agent's lifetime_, which is claimed to be more efficient than having static hyperparameters during the training process and updating them in multiple agent's lifetimes (Todorov et al., 2016; Todorov et al., 2016). ### Temporal Models TMs go beyond representing and processing the current state of systems (Borda et al., 2017). They seek to add short and long-term memory to models through the use of temporal databases (Borda et al., 2017). Examples of temporal databases used for TMs are Time Series Databases (TSDB) and Temporal Graph Databases (TGDB) (Borda et al., 2017; Todorov et al., 2016). Each attribute to be monitored in a running system can be considered as a time series: a sequence of values along an axis (Borda et al., 2017). TGDBs extend this ability to track the appearance and disappearance of entities and connections (Borda et al., 2017). TGDBs record how nodes and edges appear, disappear and change their key/value pairs over time. Greycat (Borda et al., 2017) is an open-source TGDB. Nodes and edges in Greycat have a lifespan: they are created at a certain time-point, they may change in state over the various time-points, and they may be "ended" at another time-point.
Greycat considers edges to be part of the state of their source and target nodes. It also uses a copy-on-write mechanism to store only the parts of a graph that changed at a certain time-point, thus saving disk space. In this work, TMs build on top of Greycat TGDB, allow accessing and retrieving causally connected historical information about runtime behaviour of RL agents. ### Event-driven Monitoring EDM approaches are commonly designed to monitor system events, processes and handle them in the background without interfering with the main system's execution (Moser et al., 2016). Moser et al. identified in (Moser et al., 2016) four key requirements for EDM: i) it should be platform agnostic and unobtrusive, ii) it should be capable of integrating monitoring data from other subsystems, iii) it should enable monitoring across multiple services and instances, and iv) it should be capable of unveiling potential anomalies in the monitored system. CEP is a cutting-edge EDM technology that has been widely used to address these requirements (Todorov et al., 2016). CEP provides real-time analysis and correlation of large volumes of streaming data in an effective and efficient manner with the aim of automatically detecting situations of interest in a particular domain (event patterns). The patterns to be detected have to be defined and deployed into a CEP engine, i.e. the software responsible for analysing and correlating the data streams. Each CEP engine provides its own Event Processing Language (EPL) for implementing the patterns to be deployed. Among the existing CEP engines, we opted for Esper1, a mature, scalable and high-performance CEP engine. The Esper EPL is a language similar to SQL but extended with temporal, causal and pattern operators, as well as data windows. The present document proposes to leverage the power of CEP to detect temporal and causal dependencies between events and to pre-process data streams, in order to gain insights from events as they occur during the training of an RL agent. Footnote 1: [https://www.espertech.com/esper/](https://www.espertech.com/esper/) ## 3. History-Aware Hyperparameter Optimisation for RL This section presents our proposed framework integrating CEP and TMs for HPO. Additionally, the section also describes a novel history-aware epsilon-greedy logic that will be implemented using the referenced framework. ### A Software Framework combining CEP and TMs for HPO RL involves challenging optimisation problems due to the stochasticity of evaluation, high computational cost and possible non-stationarity of the hyperparameters (Srivastava et al., 2017). Therefore, the efficient continuous monitoring and dynamic verification of internal operations and parameters of the RL agent and its interactions with the environment over time are required. We propose the use of CEP for short-term analysis and TMs for navigation through the system history to provide feedback to the RL algorithm. Fig. 1 shows the proposed framework: * The _RL algorithm_1 runs mostly independently from the rest of the system, while publishing data streams with logging information into an "RL Traces" topic created in a Message Queuing Telemetry Transport (MQTT) broker. The algorithm is subscribed to a "Feedback" topic, which will contain suggestions for hyperparameter change. Footnote 1: [https://www.espertech.com/esper/](https://www.espertech.com/esper/) * The _MQTT Broker_2 is the communication hub for the architecture, acting as an event bus. 
It is responsible for loosely integrating the other components through the use of _topics_: components can publish events into a topic, or subscribe to updates about that topic. * The _CEP Engine_3 is responsible for filtering and correlating data streams in the form of simple events coming from the RL algorithm into semantically richer _complex events_. It subscribes to the "RL Traces" topic to obtain those simple events, and it pushes complex events into the "History-Awareness" topic. Footnote 2: [https://www.espertech.com/esper/](https://www.espertech.com/esper/) * The _Temporal Model_4 uses the complex events from the "History-Awareness" topic to construct the next version of the high-level model of the RL agent's state, which is used to update the TM. A novel _graph listener_ component is notified about the changes, which applies the HPO logic (see Section 3.2) to push any feedback on the current hyperparameter values into the "Feedback" MQTT topic.

Figure 1. CEP and TMs for Hyperparameter Optimisation

TMs are conceptually structured according to a metamodel designed to record a _Log_ of _Decisions_ made by _Agents_, based on _Observations_ about the environment, and including a set of _Measurements_ of interest, according to various _Measures_. The metamodel is divided into two parts: the above concepts are defined into a core package from [_omitted_], and concepts that are specific to RL are split into its own package (see Fig. 2), which imports elements from the core package. The RL package provides a specialised _RLAgent_ which keeps track of the _RLState_ that can be observed in the environment, an _RLDecision_ which tracks the _QValues_ of each available action, and an _RLObservation_ which tracks the current state before the action was taken, and the current _Reward_ values.

Figure 2. Class diagram for the RL extensions to the core metamodel used to record system history. Imported core elements are marked with an arrow.

### History-aware epsilon-greedy logic for HPO In RL, the \(N\)-dimensional hyperparameter configuration space is defined as \(\Lambda=\Lambda_{1}\times...\times\Lambda_{N}\) and a vector of hyperparameters is denoted by \(\lambda\in\Lambda\). Let's denote the RL algorithm as \(\Phi\) and \(\Phi_{\lambda}\) the algorithm instantiated to a vector of hyperparameters \(\lambda\). Let us define the objective function to maximise the value of a reward function \(\mathcal{R}\). Then, we define the HPO problem of a \(\Phi\) given the environment \(E\) at time \(T\) as finding the optimal hyperparameter vector \(\lambda^{*}\): \[\lambda^{*}=\operatorname*{arg\,max}_{\lambda\in\Lambda}\mathcal{R}(\Phi_{\lambda},E,T) \tag{2}\] where \(\mathcal{R}(\Phi_{\lambda},E,T)\) measures a reward value generated by the algorithm \(\Phi\) under a configuration of the \(\lambda\) hyperparameter while interacting with the environment \(E\) at time \(T\). Now we can introduce our history-aware epsilon-greedy approach. RL is episodic, with multiple iterations \(i\in I\) performed within each episode \(e\in E\) [26]. In this context, let us define the value of \(\mathcal{R}\) at the instant \(t_{i}\) after \(\Phi_{\lambda}\) has interacted with the environment \(E\) as the reward \(r_{t}\) that the agent obtained by performing an action \(a_{t}\) and arriving to the state \(s_{t}\). Thus, the value of our reward function by iteration is denoted by \(\mathcal{R}_{i}(t_{i},\lambda)=r_{t_{i}}\).
Consequently, the reward function by episode is defined by: \[\mathcal{R}_{e}(t_{e},\lambda)=\frac{\sum_{i=1}^{I}\mathcal{R}_{i}(t_{i}, \lambda)}{I} \tag{3}\] After stated our reward function by episode, we define the criterion for analysing the history. In other words, how long back are we going to look when deciding to change a hyperparameter. With this purpose, we introduce the concept of time-windows to the logic. A time-window \(w\in W\) consists of \(x\in\mathbb{R}\) episodes \(e\) where \(x\) is the length of the time-window. Then, the value of our reward function by time-window is denoted by: \[\mathcal{R}_{win}(t_{w},\lambda)=\frac{\sum_{w=e}^{e+x-1}\mathcal{R}_{e}(t_{ w},\lambda)}{x} \tag{4}\] Eq. 4 defines the time frame when the monitoring process is taking place. The next step is to define the criterion that would lead to a hyperparameter update. The criteria selected is the _stability_ of the reward value. We analyse the distance of the reward function by episode \(\mathcal{R}_{e}\) to the mean of the time-window \(\mathcal{R}_{win}\). If the value is below a defined threshold \(th_{stable}\in\mathbb{R}\) for all the values within the time-window, we induce that the reward value has stabilised within a range and a _possible_ hyperparameter update will be performed. Formalising this as a Boolean conjunction we have: \[\bigwedge_{j\in[e,e\neq x)}\big{(}|\mathcal{R}_{e}(t_{j},\lambda)-\mathcal{R} _{win}(t_{w},\lambda)|<th_{stable}\big{)} \tag{5}\] where \(\mathcal{R}_{win}\) is the reward function value for the time window at time \(w=e+x-1\). We have emphasised the word _possible_ for a change in \(\lambda\), as stability won't necessarily mean that the agent has reached its maximum performance under the current conditions. Let's consider the example when our optimiser system has observed the following set of \(\mathcal{R}_{e}\) under the same conditions \(\Phi_{\lambda}\), \(\mathcal{R}:\{1,2,3,4,5,6\}\). We define a time-window length of 3 episodes (\(x=3\)) and a stability threshold of 2 (\(th_{stable}=2\)). Thus, \(w_{1}=\{1,2,3\}\), \(w_{2}=\{4,5,6\}\), \(\mathcal{R}_{win_{1}}=2\) and \(\mathcal{R}_{win_{2}}=5\). As a result, the Boolean conjunction for \(w_{1}\) will be true. This would mean that the system requires a hyperparameter change. However, under the same conditions \(\Phi_{\lambda}\), the system would have kept improving its performance as it is shown in \(w_{2}\) and \(\mathcal{R}_{win_{2}}\). Therefore, an additional condition is necessary to define when a hyperparameter tuning is required. We introduce \(max\mathcal{R}_{t}\) as the maximum known value of \(\mathcal{R}_{win}\) up to the time-point \(t\) and it is initialised as 0. Similarly, we introduce \(max\lambda_{t}\) as the value of \(\lambda\) that has produced \(max\mathcal{R}_{t}\) up to the time-point \(t\). We then define the main condition for hyperparameter tuning \(HPO(\lambda)\) and it is described as follows: \[HPO(\lambda)=\left\{\begin{array}{l}\lambda_{t},\\ \textbf{with}\\ max\lambda_{t}\leftarrow\lambda_{t}\\ max\mathcal{R}_{t}\leftarrow\mathcal{R}_{win_{t}}\end{array}\right\}\quad \text{if }\mathcal{R}_{win_{t}}>max\mathcal{R}_{t} \tag{6}\] where \(HPO(\lambda)\) is equal to \(\lambda_{t}\) iff the current value of \(\mathcal{R}_{win_{t}}\) is greater than the maximum known value of \(max\mathcal{R}_{t}\) at time \(t\). 
This would imply that the current value of our \(\mathcal{R}\) function has increased beyond the previous known maximum, and therefore the current configuration \(\Phi_{\lambda}\) should be kept, as the system is still 'learning'. Correspondingly, the current value of \(\mathcal{R}_{win_{t}}\) would become the new maximum known value \(max\mathcal{R}_{t}\) (\(max\mathcal{R}_{t}\leftarrow\mathcal{R}_{win_{t}}\)). In the case that the previous condition for \(HPO(\lambda)\) is not met (\(\mathcal{R}_{win_{t}}\leq max\mathcal{R}_{t}\)), a hyperparameter tuning is required and will be analysed by our epsilon-greedy function \(\xi(\lambda)\). Our optimiser examines \(\xi(\lambda)\) if and only if the following conditions are met: i) \(\mathcal{R}\) is stable for a time-window \(w\) (Eq. 5), and ii) the current value of \(\mathcal{R}_{win_{t}}\) is less than the previous maximum known value of \(\mathcal{R}\), \(max\mathcal{R}_{t}\). These conditions mean that the system is stable (with regard to the rewards observed) and on a sub-optimal configuration of \(\Phi_{\lambda_{t}}\) (with reference to \(max\mathcal{R}_{t}\)). Therefore, a different vector of hyperparameters \(\lambda\) should be explored. The criterion for selecting the next \(\lambda\) is a variation of the well-known epsilon (\(\epsilon\))-greedy policy for balancing exploration and exploitation in RL [26]. The optimiser will explore the hyperparameter value-space with a probability of \(\epsilon\), otherwise it will exploit the known best configuration (\(max\lambda_{t}\)); thus, it acts greedily. Eq. 7 shows the proposed epsilon-greedy function \(\xi(\lambda)\). \[\xi(\lambda)=\left\{\begin{array}{ll}max\lambda_{t},&\text{with probability }1-\epsilon\\ \text{random }\lambda\in\Lambda,\ \textbf{or}\ \lambda+c,\ \textbf{or}\ \lambda-c,&\text{with probability }\epsilon\end{array}\right. \tag{7}\] In addition to exploring the value-space randomly with probability \(\epsilon\), we have introduced supplementary conditions that help the optimiser decide whether the current value of the hyperparameter being tuned should be increased to \(\lambda+c\) or decreased to \(\lambda-c\), where \(c\) is a user-defined constant. These conditions give hints to the optimiser about the direction in the value-space where better performance is achieved. Putting everything together, given an RL algorithm \(\Phi\) that interacts with an environment \(E\) at time \(t\in T\) with an initial hyperparameter configuration \(\lambda\in\Lambda\), denoted \(\Phi_{\lambda}\), its configuration will be analysed and possibly updated based on \(HPO(\lambda)\) when the stability condition on a time-window from (5) is met. The update based on our \(\epsilon\)-greedy function \(\xi(\lambda)\) will only take place if the observed value of our \(\mathcal{R}\) for such a time-window is less than the best known value of \(\mathcal{R}\), \(max\mathcal{R}\). The advantage of the proposed approach is that exploration actions are only selected in situations where the system has stopped learning under the defined conditions, which is indicated by analysing the history of \(\mathcal{R}\).
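To make this logic concrete, the following is a minimal Python sketch of the history-aware epsilon-greedy decision described by Eqs. 4-7. It is an illustrative simplification rather than the actual implementation, and all function and variable names are our own assumptions.

```python
import random

def reward_by_window(episode_rewards, x):
    """Mean episode reward over the last time-window of length x (Eq. 4)."""
    window = episode_rewards[-x:]
    return sum(window) / len(window)

def is_stable(episode_rewards, x, th_stable):
    """Boolean conjunction of Eq. 5: every episode reward in the window
    lies within th_stable of the window mean."""
    if len(episode_rewards) < x:
        return False
    r_win = reward_by_window(episode_rewards, x)
    return all(abs(r - r_win) < th_stable for r in episode_rewards[-x:])

def hpo_step(lam, episode_rewards, state, x, th_stable, eps, c, space):
    """Return the (possibly updated) hyperparameter value lam.
    state keeps max_R and max_lam (Eq. 6); space is the allowed value range."""
    if not is_stable(episode_rewards, x, th_stable):
        return lam                                   # still learning, no decision yet
    r_win = reward_by_window(episode_rewards, x)
    if r_win > state["max_R"]:                       # Eq. 6: new best configuration
        state["max_R"], state["max_lam"] = r_win, lam
        return lam
    # Eq. 7: epsilon-greedy tuning of a stable but sub-optimal configuration
    if random.random() < eps:
        return random.choice([random.uniform(*space), lam + c, lam - c])
    return state["max_lam"]                          # exploit the best known value
```

In the actual framework, the stability check is performed by the CEP engine and the decision by the graph listener over the temporal model, rather than inside the training loop as in this sketch.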
Finally, the optimal hyperparameter vector \(\lambda^{*}\) found over the lifetime of the RL agent corresponds to the final value of \(max\lambda\): \[\lambda^{*}\gets max\lambda \tag{8}\] This history-aware epsilon-greedy logic for HPO in RL has been implemented using the architecture in Section 3.1, exploiting the benefits of CEP and TMs.

## 4. Experiments and Results

### System under study: Airborne base stations

In order to demonstrate the feasibility of the proposed architecture, this section presents its implementation for a case study from the domain of mobile communications. In this case study, Airborne Base Stations (ABS) use DQN to decide where to move autonomously in order to provide connectivity to as many users as possible. The 5G Communications System Model performs the necessary calculations to estimate the Signal-to-Interference-plus-Noise Ratio (SINR) and the Reference Signal Received Power (RSRP) to determine whether a user is connected or not. 3. CEP Engine: it detects, in real time, the situations of interest for the application domain. A set of event patterns were implemented in the selected CEP engine (Esper). Precisely, we have implemented Equations 3, 4 and 5 using Esper EPL in a hierarchy of event patterns. Listing 1 shows the implementation of Equation 5, which attempts to detect stable conditions on time-windows of 3 episodes (\(x=3\)) with \(th_{stable}=30\). Every AvgByEpisode pattern (which refers to Eq. 3), followed (\(\rightarrow\)) by two subsequent AvgByEpisode patterns and an EpiWinAVG pattern (which refers to Eq. 4), is analysed (where statement) for compliance with the boolean conjunction of Eq. 5. When the condition is met (i.e. the boolean conjunction is \(True\)), the engine automatically generates complex events that collect the required information and then sends the events to the MQTT broker component for further processing. 4. Temporal Model: it receives complex events and records their information as a new version of the model in the TGDB. Specifically, the Hawk3 model indexer was extended with the capability to subscribe to an MQTT queue and reshape the information into a model conforming to the metamodel in Fig. 2. The graph listener is notified when a stable condition is detected. Next, it performs the required calculations of Eq. 6 and 7 to provide feedback to the RL algorithm on either keeping the hyperparameter configuration or tuning it towards finding a good solution to Eq. 2. The feedback provided is recorded as part of the temporal model, enabling the long-term memory needed for further processing, accountability and post-mortem analysis. Footnote 3: [https://www.eclipse.org/hawk/](https://www.eclipse.org/hawk/) As previously mentioned, our implementation decouples the running RL system from the HPO process. In that sense, the experiments were performed using two machines dedicated to different purposes: one performing the training of the different RL algorithms, and the other running the proposed framework. The RL algorithms ran on a dedicated ML server with 10 NVIDIA RTX A6000 48GB GPUs using the ABS simulator, Python 3, Anaconda 4.8.5, matplotlib 3.3.4, numpy 1.19.1, paho-mqtt 1.5.0, pandas 1.1.3, and pytorch 1.7.1. The machine running the proposed framework was a Lenovo Thinkpad T480 with an Intel i7-8550U CPU at 1.80GHz, running Ubuntu 18.04.2 LTS and Oracle Java 1.8.0_201, using Paho MQTT 1.2.2, Eclipse Hawk 2.0.0, and Esper 8.0.0. The full implementation can be found in (omitted for double-blind review).
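As an illustration of how the running RL system stays decoupled from the framework, the following is a minimal sketch of a training loop publishing trace events and listening for feedback over MQTT. It uses the paho-mqtt package listed above; the topic names follow Section 3.1, while the broker address, payload structure and function names are our own assumptions, not the authors' implementation.

```python
import json
import paho.mqtt.client as mqtt

FEEDBACK = {}  # latest hyperparameter feedback pushed by the framework

def on_message(client, userdata, msg):
    # The graph listener publishes suggested hyperparameter values here.
    FEEDBACK.update(json.loads(msg.payload))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)        # broker host/port are assumptions
client.subscribe("Feedback")
client.loop_start()

def publish_trace(episode, iteration, reward, gamma):
    """Publish one simple event per RL iteration into the 'RL Traces' topic."""
    event = {"episode": episode, "iteration": iteration,
             "reward": reward, "gamma": gamma}
    client.publish("RL Traces", json.dumps(event))
```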
### Evaluation of the results

In this section, we present the evaluation of the results of using the proposed framework implementing the history-aware epsilon-greedy logic for HPO. We trained the DQN system under the same conditions for the different experiments. A total of 20 runs were conducted.

#### 4.4.1. History-Aware HPO vs traditional HPO

The first experiment corresponded to a qualitative study of the performance of the ABS system using the proposed approach, compared against traditional HPO techniques. Fig. 3 shows the results. As can be observed, our history-aware HPO approach (black line) outperformed the other approaches in terms of time to converge and accuracy, obtaining its maximum values from episode 32 onward. The random search (blue line) fluctuates and its performance is close to that of the static configuration (red line). It is interesting to note that grid search (green line) achieved similar performance; however, the sharp dip between episodes 70 and 80 shows a potential instability. Similarly, BO achieves maximum performance at episode 42; however, it could not recover after trials of sub-optimal hyperparameter values from episode 71 onwards. The proposed approach allows us to get more insights about the HPO process by analysing the history stored in the TGDB, in conformance with the metamodel of Fig. 2. Fig. 4 (a) depicts the results. The extracted information shows that the maximum value of our reward value function \(\mathcal{R}\) was 727.055 at episode 74 with \(\gamma=0.204\). Therefore, under the configuration \(\Phi_{\lambda(\gamma,\kappa)}\) the optimal value found for the HPO problem of Eq. 2 is: \(\lambda^{*}\leftarrow\lambda(\gamma=0.204,\kappa)\).

Figure 3. Comparison of hyperparameter tuning methods in DQN

#### 4.4.2. History-Aware HPO vs static hyperparameters

The second experiment included an exhaustive analysis of the performance of the RL algorithm using different initial values for the discount factor. The comparison includes the analysis of the reward value function \(\mathcal{R}\) with and without the proposed approach for each system configuration \(\Phi_{\lambda(\gamma,\kappa)}\). The boxplots of Fig. 5 display the results. By using the proposed history-aware HPO (in red) the system was able to reach greater maximum values (the upper end of the whiskers) for each configuration. Furthermore, the interquartile ranges (boxes) in each case had a greater upper quartile. Regarding the medians, which represent the middle of the set, they were also greater for each case except for \(\gamma=0.2\) and \(\gamma=0.3\). This can suggest two things: i) the optimal value of \(\gamma\) is within the range \(0.2<\gamma^{*}<0.3\), which reinforces the result obtained in experiment 1, and ii) the variance in the data corresponds to the optimiser exploring the hyperparameter value-space with probability \(\epsilon\). The results also showed that, with the proposed approach, no outliers lying at an abnormal distance from the other values in the data set were found. The best performance of the RL system using the history-aware HPO approach occurred when the initial value of the discount factor was the centre of the hyperparameter value-space, \(\gamma_{0}=0.5\), with an average of 636.104 connected users and a median of 702.886. In the same manner, the poorest performance occurred with \(\gamma_{0}=0.9\), with an average of 309.818 connected users and a median of 323.774. As shown in Fig. 4 (b), after exploring the hyperparameter value-space, the optimiser was moving towards the optimal value of \(\gamma\), which corresponds with the increase of the reward. Thus, the system would have needed longer to find the optimal value.
Figure 4. Reward and discount factor evolution, starting at \(\gamma=0.5\) and \(\gamma=0.9\), using history-aware HPO.

Figure 5. Comparison of history-aware hyperparameter optimisation vs static values

## 5. Discussion

The results from the conducted experiments showed the feasibility of the history-aware approach for HPO. Combining CEP and TMs enabled us to offer both the short- and long-term memory required for hyperparameter tuning with reflective capabilities. The history-aware epsilon-greedy logic allowed us to explore the hyperparameter value-space with explicit long-term memory to remember good/optimal system configurations \(\Phi_{\lambda(\gamma,\kappa)}\). Our experiments provide valuable insights into the effects of the tuning of the discount factor and its influence on the stability of training and overall system performance (maximised cumulative rewards). The discount factor determines how many future time steps the agent considers when choosing an action. This value strongly depends on the environment that the software agents experience. In the ABS case study, a discount factor close to 1 allows the agent to take very future-oriented actions. A lower discount factor suggests that the ABS are more concerned with providing coverage to multiple users in the short term, but would introduce uncertainty in the long term. It is challenging to find the balance between the highest possible number of connected users in the short term and the long-term impact, as user behaviour may vary. The approach has some limitations. The primary one is the optimisation of multiple hyperparameters in a single run. We have focused our study on the impact of the discount factor as a key element of the Bellman Equation; however, there are other hyperparameters that may affect the final system performance. Further work will involve the gradual lifting of these restrictions by allowing the tuning of multiple hyperparameters using different threads or timelines. Another limitation of the approach is the definition of stability of the system, which is strongly related to the threshold of stability and the time-window length. This could be problematic in situations where \(\mathcal{R}\) is noisy, which could mean that the system never reaches a stable condition. This could be tackled by analysing different criteria for stability such as the Z-score (Belle et al., 2017) and the absolute deviation around the median (Kraus et al., 2018). Another approach could be introducing a patience time, e.g. if the system has not entered a stable condition for X episodes, force it to explore another \(\lambda\).

## 6. Related Work

HPO in RL has traditionally used the Delta-Bar-Delta method as an incremental algorithm to tune parameters (Dela et al., 2017). However, this method and its variations were limited to linear supervised learning. The recent movement is to combine the incremental Delta-Bar-Delta method with Temporal-Difference learning (Zhu et al., 2018). Those methods cannot tune hyperparameters online while, at the same time, allowing the algorithm to adjust more robustly to non-stationarity in a problem. A variety of techniques have recently been proposed to combat this, most notably the use of large experience replay buffers or of multiple parallel actors. These techniques come at the cost of moving away from the online RL problem as it is traditionally formulated. More sophisticated approaches include Self-Tuning
Actor Critic (STAC) (Kumar et al., 2019) and Sequential Model-based Bayesian optimisation (SMBO) (Beng et al., 2017). However, both methods ignore a crucial issue for RL: the possible non-stationarity of the RL problem induces a possible non-stationarity of the hyperparameters. Therefore, at various stages of the learning process, various hyperparameter settings might be required to behave optimally (Kumar et al., 2019). Furthermore, these approaches base their functionality on multiple trials, and thus multiple agent lifetimes, unlike the present work, which focuses on HPO within a single lifetime. Moreover, CEP can bring some advantages to ML approaches. They have been used together in fields such as the financial sector (Kumar et al., 2019), cybersecurity (Kumar et al., 2019) and the Internet of Things (Kumar et al., 2019). More particularly, CEP has been used to preprocess the stream of data that will be provided to ML classifiers for training and predictive calculations (Kumar et al., 2019). More evolved architectures include the use of ML to find and set event patterns for the detection of complex events, thus automating the setup stage of a CEP system (Kumar et al., 2019). Moreover, some architectures have been developed to automatically update their event patterns using ML (Kumar et al., 2019). ML and CEP are also combined to provide dynamic fault-tolerance support (Kumar et al., 2019). Recently, CEP has also been integrated with TMs to support both service monitoring and explainable reinforcement learning. Specifically, in (Kumar et al., 2019), an architecture based on CEP and TM is proposed for runtime monitoring of comprehensive data streams. This architecture promptly reacts to events and analyses the historic behaviour of a system. In (Kumar et al., 2019), a configurable architecture combining CEP and RL allows keeping track of a system's reasoning over time, extracting on-demand history-aware explanations, automatically detecting situations of interest, and filtering, in real time, the relevant points in time to be stored in a TGDB.

## 7. Concluding remarks and future work

Hyperparameter tuning is an omnipresent problem in RL as it is a key element for obtaining state-of-the-art performance. This paper proposes to tackle this issue by integrating CEP and TMs. We investigated new ways to monitor software agents as they explore their environment, and to prune algorithms by automatically updating hyperparameters using feedback based on the RL agent's historical behaviour. In order to test the feasibility of the approach, we conducted several experiments comparing the performance of a DQN case study using the proposed approach and different traditional HPO techniques. The encouraging results show that the proposed framework, combining CEP and TMs and implementing the history-aware epsilon-greedy logic, significantly improved performance compared to traditional HPO approaches, in terms of reward values and learning speed. Furthermore, the outcomes from the monitoring process produce interpretable results that are easy for a human to understand and act upon. We have shown how some SE paradigms can be exploited for the benefit of RL and can be further used for creating accountability of RL systems.
We believe that the SE and ML communities should work together to solve the critical challenges of assuring the quality of ML/RL and software systems in general. Future work will include the study of the points mentioned in Section 5 regarding the limitations of the approach. Further experiments will also be conducted to analyse the performance of the approach with other RL methods. Similarly, we will benchmark the proposed approach against more sophisticated HPO approaches such as the ones mentioned in Section 6. The feedback obtained from the proposed framework can be further exploited, for example, for safe early stopping (Kumar et al., 2019). Moreover, by choosing a set of high-level operations, from hyperparameter tuning to algorithm selection, to guide an agent in performing various tasks, such as remembering history, comparing and contrasting current and past inputs, and using learning methods to change its own learning methods, the proposed approach can be considered a first step towards Meta-Learning (Kumar et al., 2019). Finally, conveying the obtained knowledge to different stakeholders, beyond providing feedback, can be another research direction.
2306.00620
OTW: Optimal Transport Warping for Time Series
Dynamic Time Warping (DTW) has become the pragmatic choice for measuring distance between time series. However, it suffers from unavoidable quadratic time complexity when the optimal alignment matrix needs to be computed exactly. This hinders its use in deep learning architectures, where layers involving DTW computations cause severe bottlenecks. To alleviate these issues, we introduce a new metric for time series data based on the Optimal Transport (OT) framework, called Optimal Transport Warping (OTW). OTW enjoys linear time/space complexity, is differentiable and can be parallelized. OTW enjoys a moderate sensitivity to time and shape distortions, making it ideal for time series. We show the efficacy and efficiency of OTW on 1-Nearest Neighbor Classification and Hierarchical Clustering, as well as in the case of using OTW instead of DTW in Deep Learning architectures.
Fabian Latorre, Chenghao Liu, Doyen Sahoo, Steven C. H. Hoi
2023-06-01T12:45:00Z
http://arxiv.org/abs/2306.00620v1
# OTW: Optimal Transport Warping for Time Series ###### Abstract Dynamic Time Warping (DTW) has become the pragmatic choice for measuring distance between time series. However, it suffers from unavoidable quadratic time complexity when the optimal alignment matrix needs to be computed exactly. This hinders its use in deep learning architectures, where layers involving DTW computations cause severe bottlenecks. To alleviate these issues, we introduce a new metric for time series data based on the Optimal Transport (OT) framework, called Optimal Transport Warping (OTW). OTW enjoys linear time/space complexity, is differentiable and can be parallelized. OTW enjoys a moderate sensitivity to time and shape distortions, making it ideal for time series. We show the efficacy and efficiency of OTW on 1-Nearest Neighbor Classification and Hierarchical Clustering, as well as in the case of using OTW instead of DTW in Deep Learning architectures. Fabian Latorre\({}^{\star}\), Chenghao Liu\({}^{\dagger}\), Doyen Sahoo\({}^{\dagger}\), Steven C.H. Hoi\({}^{\dagger}\)\({}^{\star}\)Ecole polytechnique federale de Lausanne (EPFL) \({}^{\dagger}\)Salesforce Research Asia Time-series, Optimal Transport, Deep Learning, Optimization. ## 1 Introduction Time-series data is ubiquitous in contemporary Machine Learning applications including Heart disease predictions based on ECG data [1], end-to-end text-to-speech synthesis [2] and sign-language recognition [3]. Common tasks like supervised classification and hierarchical clustering require a notion of _distance_ or _similarity_ between time series, and their performance strongly depends on such a choice. A key desired characteristic of a time series distance is the ability to identify commonly occurring patterns such as shape distortions and time delays. For example, two time series that differ by a slight distortion in shape or a slight delay in time should be considered _similar_ or _close_[4]. The euclidean distance is unable to reason about shape and time distortions, and thus, is considered a poor choice for time series applications. Dynamic Time Warping (DTW) [5]can assign high similarity values to time series that have a closely matching shape, but which do not necessarily align perfectly in the time domain. For this reason, DTW has established itself as the most prominent similarity measure for time series [6, 7, 8, 9, 10]. However, DTW and all closely-related variants suffer from an unavoidable quadratic complexity with respect to the length of the time series [11]. This makes it highly expensive or out-right unusable for time series of considerable length, especially for large datasets. This issue has been noted in literature, and researchers have developed faster variants of DTW, such as FastDTW[12]. Unfortunately, FastDTW has been dismissed as being slower than DTW in practice, despite its theoretical linear time complexity [13]. Recently, DTW has been incorporated inside Deep Learning pipelines for time-series data. It has been used either as a loss function [14, 15], or as a replacement for linear layers as in DTWNet [10]. In the latter case it replaces the inner product operation in the linear layers of the network. Typically, these layers have a linear time-complexity with regard to input size (e.g. convolutional layers, batch-norm, etc.), and replacing them with a quadratic complexity DTW layer introduces a significant computational bottleneck. 
Moreover, DTW uses Dynamic Programming, an inherently sequential framework that does not lend itself to trivial parallelization on GPU. Even though there are available GPU implementations [16, 17, 18], they do not avoid the sequential nature. This limits the adoption of DTW in practice, as the cost of training and hyperparameter optimization increases considerably. Lastly, DTW is not a metric, as it violates the triangle inequality. Thus, it cannot exploit faster similarity search methods like the Approximating and Eliminating Search Algorithm (AESA) [19]. Hence, we need a distance notion that not only can be computed in linear time, but is also differentiable and can be parallelized on GPU, thus speeding up all training pipelines by a huge margin. Even though the euclidean distance enjoys such characteristics, it does not provide a good inductive bias for time-series data, as it ignores the time component. Precisely, the goal is to keep the theoretical properties and performance of DTW at a computational cost similar to that of the euclidean distance. **Our contributions.** It is apparent that there is a need for new time series distance notions that overcome some, if not all, of the aforementioned drawbacks. We propose a new distance for time series data that we call **Optimal Transport Warping (OTW)**. Our distance is rooted in the theory of Optimal Transport [20] (OT), which is a well-known metric used to compare the shape of two probability distributions. We adapt Optimal Transport to the case of time-series data through an Unbalanced Optimal Transport formulation, given that two time series may not have the same mass (may not sum up to the same value). We also address the issue of negativity, i.e., while probability distributions are non-negative, time series may not be. Our final OTW formulation (1) can be computed in linear time and space, (2) is differentiable, (3) can be easily computed on massively parallel architectures (GPU/TPUs), (4) has moderate sensitivity to shape and time distortions, and (5) is a proper metric. A comparison of our method against several state-of-the-art time series distances is summarized in Table 1. In experiments we observe that OTW runs up to 10-30x faster than DTW, depending on the dataset. In the classification task it improves over DTW in 6 out of 7 types of datasets in the UCR time-series benchmark (table 2). In the clustering task, it improves over DTW in all 7 types of datasets considered (table 3). A total of 92 datasets were considered. Through synthetic and real experiments we show that by replacing DTW with OTW in DTW-Net we solve its computational bottleneck, and we can achieve a lower error in less than 50% of the time.

## 2 Related Work

We summarize the most prominent time-series distances in table 1, and conclude that there are few low-complexity alternatives to DTW, which suffers from quadratic complexity. Only GDTW [21], FastDTW [12] and OTW (our proposed method) are fast alternatives that avoid computational bottlenecks. However, FastDTW has been dismissed by the community [13], while GDTW only provides a CPU implementation of a stochastic rather than a deterministic gradient. Because the activations appear at every layer, this prevents the computation of an unbiased estimator of the gradient of the loss function. Moreover, GDTW requires a Contiguous Sparse Matrix Data Structure that might not be efficient on GPU (the authors of the paper do not provide a GPU implementation).
Hence, a priori we expect that training networks using GDTW activations on CPU is an unfeasible task given limited time. In recent years, DTW has found popularity in several applications using Deep Learning architectures. DTW-like distances have been used either as a loss function for time series forecasting/regression, or as part of a feature-extracting module that can be used as a layer inside a neural network. When such distances are used to replace inner products, as in linear layers or transformers, the complexity is inevitably increased. Because layers are applied in a sequential fashion (as they cannot be parallelized), this forms a bottleneck that slows down the learning pipeline. Several recent works develop Deep Learning pipelines that make use of a DTW-like module and thus potentially suffer from speed issues: DILATE [14] is a loss function module computing Soft-DTW as a subroutine; DTWNet [10] proposes a feature-extraction module based on DTW; D3TW [15] is a discriminative loss for Action Alignment and Segmentation based on Soft-DTW; SLR [26] is an Encoder-Decoder architecture with a Soft-DTW alignment constraint for Continuous Sign Language Recognition; STRIPE [27] is similar to [14]; DTW-NN [28] is similar to [10]; SpringNet is a Transformer architecture with DTW replacing inner products; TTS [2] is an End-to-End text-to-speech architecture with a Soft-DTW-based prediction loss; [29] uses DTW to synchronize a resultant and a target sequence inside a text-to-video sign-language synthesizer network. Due to the Soft-DTW computational demands, an alternative attention mechanism is studied. **OT-based time series distances.** The potential of Optimal Transport for time-series has been explored in other works. For example, [30] proposes to compare positive time-series using sinkhorn divergences, a variant of OT. However, even in the one-dimensional case there exists no subquadratic algorithm for the problem. Time-adaptive Optimal Transport [31] is a similar approach. Nevertheless, their formulation is essentially different from ours: for a time series \(a\), a uniform distribution is constructed over pairs \((i,a_{i})\), which are then compared using the traditional OT formulation. Because this is a two-dimensional distribution, the complexity of their proposed algorithm is at least quadratic and hence, as slow as DTW. In contrast, our algorithms work in linear time. We recall that OT requires cubic time, but it can be approximated in quadratic time using Sinkhorn iterations [32]. Only in the one-dimensional case does it have a linear-time implementation [33]. Finally, [34] interprets the values of a time series as a _set_ of samples rather than a sequence. This has the effect of removing the time information of the sequence and hence OT, as used in this case, is oblivious to the temporal nature of the data. The resulting optimization problems are solved with the so-called Sinkhorn iterations, which again lead to a quadratic complexity. In summary, ours is the first linear-time distance for time-series based on OT.

\begin{table} \begin{tabular}{l c c c} **Method** & **Complexity** & **Gradient** & **GPU** \\ \hline TWED [22] & \(O(n^{2})\) & ✗ & ✗ \\ ERP [23] & \(O(n^{2})\) & ✗ & ✗ \\ EDR [24] & \(O(n^{2})\) & ✗ & ✗ \\ DTW [5] & \(O(n^{2})\) & ✗ & ✓ \\ SoftDTW [9] & \(O(n^{2})\) & ✓ & ✓ \\ DILATE [14] & \(O(n^{2})\) & ✓ & ✓ \\ SoftDTW div. [25] & \(O(n^{2})\) & ✓ & ✓ \\ FastDTW [12] & \(O(n)\) & ✗ & ✗ \\ GDTW [21] & \(O(n)\) & Stochastic & ✗ \\ OTW (This work) & \(O(n)\) & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Comparison of time series similarity measures. **Gradient:** differentiability with respect to its inputs. **GPU:** existence of a parallel GPU implementation.
## 3 Optimal Transport Warping

### Problem Setting and Optimal Transport

Probability distributions are analogous to density functions in physics, which measure the concentration of mass along an infinitesimal unit of volume. We can think of certain time-series as measuring a concentration of mass along an infinitesimal length of _time_: if we let \(a(t)\) be the amount of car traffic at time \(t\) of the day, after normalization, we can think of the integral of \(a(t)\) over a time interval as the probability that a car is in traffic during that interval. Hence, a method that compares probability distributions can potentially compare time-series. Indeed, Optimal Transport has been used to compare time-series or general sequences before [32, 34, 31]. Optimal transport provides a many-to-many alignment, instead of a one-to-many alignment in the case of DTW and its variants. A priori, it is reasonable to believe that some time-series data might be more amenable to alignment using one type of alignment over the other. Indeed, OT is also sometimes called the _Earth Mover's Distance_, because it captures the cost of _reshaping_ one distribution into another. In this way it is sensitive to shape distortions, a fact that has been exploited and explained in depth in [30]. A discrete time series \(a\) is a sequence of numbers \((a_{1},\ldots,a_{n})\) that can be understood as a function from the set \([n]=\{1,\ldots,n\}\) to the real numbers. Since Optimal Transport is a distance notion between probability measures, we first focus on time series that are positive and sum to one, i.e., \(\sum_{i=1}^{n}a_{i}=1;a_{i}\geq 0\) and \(\sum_{j=1}^{n}b_{j}=1;b_{j}\geq 0\). Later, we will expand the notion of distance to arbitrary time series. To define optimal transport for probability distributions over the set \([n]\), we require a nonnegative function \(d:[n]\times[n]\to\mathbb{R}_{+}\) defined over pairs of elements of \([n]\). We denote by \(D\) the matrix with entries \(D_{i,j}\coloneqq d(i,j)\) and we refer to it as the _distance matrix_. Denoting the column vector of all-ones as \(\mathbb{1}\in\mathbb{R}^{n}\), the Optimal Transport distance (with respect to the distance matrix \(D\)) between \(a\) and \(b\) is defined as: \[W_{D}(a,b)\coloneqq\min_{T}\left\{\left<T,D\right>:T\geq 0,T\mathbb{1}=a, \mathbb{1}^{T}T=b\right\} \tag{1}\] For an introduction to Optimal Transport please refer to [20]. Note that we can extend definition (1) to sequences that sum up to the same value, not necessarily equal to \(1\): because \(a\) is defined as the row-sums of \(T\), and \(b\) is defined as the column-sums of \(T\), \(a\) and \(b\) need only sum up to the same value, equal to the sum of all the entries of \(T\). However, we cannot remove the constraint that \(a,b\geq 0\), as the entries of \(T\) are positive, c.f. eq. (1). In our case we will choose the matrix \(D\) as the absolute value distance, i.e., \(d(i,j)\coloneqq|i-j|\). This choice means that transporting a unit of mass from position \(i\) to position \(j\) will incur a cost equal to the absolute difference between the two positions.
In the following we will always assume that \(D\) is chosen in this way and we let \(W(a,b)\coloneqq W_{D}(a,b)\) to simplify notation. This choice of \(D\) ensures that there exists a closed form solution for the Optimal Transport problem eq. (1) that can be computed in linear time [33]: \[\begin{split} W(a,b)=\sum_{i=1}^{n}\left|A(i)-B(i)\right|\\ A(i)\coloneqq\sum_{j=1}^{i}a_{j},\quad B(i)\coloneqq\sum_{j=1}^{ i}b_{j}\end{split} \tag{2}\] that is, \(A\) and \(B\) are the _cumulative distribution_ functions of \(a\) and \(b\), respectively. Because a time series is not necessarily a probability measure, we cannot directly use eq. (1). The two main issues are: (i) _unbalancedness,_ as the time series may not sum up to the same value and (ii) _negativity,_ as a time series may contain negative values. In order to extend the Optimal Transport framework to arbitrary time series we need to workaround the limitations of the definition in eq. (1), while trying to retain the linear space/time properties of the closed-form solution (2). ### Unbalanced Optimal Transport for time-series We will first assume that the sequences \(a,b\) are nonnegative but that they do not necessarily sum up to the same value. In order to resolve the unbalancedness issue, we proceed by adding a _sink_ as described in [35, Chapter 3]: we append one additional element to both sequences as follows: \[\begin{split}\hat{a}_{i}&\coloneqq\begin{cases}a_{i} &\text{if }i\in[n]\\ \sum_{j=1}^{n}b_{j}&\text{if }i=n+1\end{cases}\\ \hat{b}_{i}&\coloneqq\begin{cases}b_{i}&\text{if }i\in[n]\\ \sum_{j=1}^{n}a_{i}&\text{if }i=n+1\end{cases}\end{split} \tag{3}\] In this way, we ensure that \(\sum_{i}\hat{a}_{i}=\sum_{i}\hat{b}_{i}\). This step can be understood as balancing the total mass of the two sequences. Now, we also have to extend the distance matrix in some way, to account for the _sink_ in the extended sequences \(\hat{a}\) and \(\hat{b}\). For some value \(m\in\mathbb{R}_{+}\) called the _waste cost_ we set: \[D(m)_{i,j}=\begin{cases}|i-j|&\text{if }i,j\in[n]\\ m&\text{if }i=n+1,j\in[n]\\ m&\text{if }j=n+1,i\in[n]\\ 0&\text{if }i=n+1,j=n+1\end{cases} \tag{4}\] the idea being that any excess can be transported to the \(n+1\)-th point (the sink) incurring a cost of \(m\) per unit of mass. We define the _(nonnegative) unbalanced Optimal Transport Distance_ problem as: \[\begin{split}\widehat{W}_{m}(a,b)&\coloneqq W_{D}(m)(\hat{a}, \hat{b})\\ &=\min_{T}\left\{\left<T,D(m)\right>:T\geq 0,T\mathbb{1}=\hat{a}, \mathbb{1}^{T}T=\hat{b}\right\}\end{split} \tag{5}\] note that if the sequences \(a,b\) sum up to the same value, the problem reduces to the original _balanced_ optimal transport problem eq. (1). With this modification, it is not clear if eq. (5) can also be solved in closed form in linear time and space as in eq. (2). However, we can compute an upper bound: **Theorem 1**.: _Let \(D_{i,j}=|i-j|\) and let \(D(m)\) be defined as in eq. (4). Define_ \[\text{OTW}_{m}(a,b)\coloneqq m|A(n)-B(n)|+\sum_{i=1}^{n-1}|A(i)-B(i)| \tag{6}\] _Then, \(\widehat{W}_{m}(a,b)\leq\text{OTW}_{m}(a,b)\). Clearly, \(\text{OTW}_{m}(a,b)\) can be computed in linear time/space._ We defer the proof of theorem 1 to appendix C. Precisely, **we propose to use eq. (6) as the notion of distance** between time series of positive sign. In practice, the choice of \(m<n\) works better. 
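For concreteness, here is a minimal NumPy sketch of the upper bound in Theorem 1, i.e. the OTW distance of Eq. (6), computed in linear time. The function and variable names are our own, and the snippet is an illustration rather than the reference implementation.

```python
import numpy as np

def otw(a, b, m):
    """Optimal Transport Warping distance of Eq. (6) in O(n) time.

    a, b : 1-D nonnegative arrays of equal length n.
    m    : waste cost penalising the difference in total mass.
    """
    A = np.cumsum(a)          # A(i) = a_1 + ... + a_i
    B = np.cumsum(b)
    # m * |A(n) - B(n)|  +  sum_{i=1}^{n-1} |A(i) - B(i)|
    return m * abs(A[-1] - B[-1]) + np.sum(np.abs(A[:-1] - B[:-1]))

# Example: equal total mass, one unit of mass shifted by one position.
a = np.array([0.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 0.0])
print(otw(a, b, m=2.0))       # 1.0: one unit transported over one step
```

The waste cost \(m\) only matters when the two sequences have different total mass, which is also the focus of the discussion that follows.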
For large values of \(n\), the choice \(m=n\) would put too much weight on the first component \(m|A(n)-B(n)|\), which strongly penalizes the total mass difference of the two sequences \(a,b\). As we show in lemma 1, the unbalanced OT distance (5) increases linearly when a time-shift is introduced. This makes it ideal for time-series applications like demand forecasting, where a shift in time can represent a change in the seasonality of a product, for example. In contrast, in speech recognition, where time-shifts should be ignored, perhaps a distance like DTW might be more suitable. **Lemma 1**.: _Let \(a\in\mathbb{R}^{n}_{+}\) and \(b\) be two time series, and let \(b^{\prime}\) be a shifted version of \(b\) by \(t<n\) units, that is \(b^{\prime}_{i}=b_{i-t}\) for \(i=t+1,\ldots,n\). For simplicity assume that \(b_{n-t+1}=\ldots=b_{n}=0\) i.e., the sequence \(b\) is zero-padded. In this way \(\sum_{i}b_{i}=\sum_{i}b^{\prime}_{i}\). It holds that \(|\widehat{W}_{m}(a,b^{\prime})-\widehat{W}_{m}(a,b)|\leq t\left(\sum_{i=1}^{n}a_{i}\right)\)_ The proof of lemma 1 is deferred to appendix C. This property means that the distance increases linearly with the magnitude of the time-shift. Note that Soft-DTW [9] has a quadratic lower bound on its sensitivity to time-shifts, which was presented as a main contribution in [30, Theorem 1]. In contrast, we show an upper bound and our sensitivity is linear. **Constraining the Transport map to be local.** For time series distances like DTW, it has been observed that in practice, considering all possible alignments might be detrimental to the performance in downstream tasks. Instead, the best choice is to constrain the alignment to perform only _local_ matching, i.e., map samples only inside a constrained window. This is precisely the so-called DTW distance with Sakoe-Chiba band constraint, c.f. [36] for details. Our proposed Optimal-Transport-based distance is no different, as the transport map \(T\) in eq. (5) does not have any constraint, and constitutes a many-to-many map between arbitrary positions in \([n]=\{1,\ldots,n\}\). To mitigate this issue we propose to use a windowed cumulative sum rather than the full cumulative sum \(A(i)\coloneqq\sum_{j=1}^{i}a_{j}\) as originally proposed in eq. (2). More precisely we let: \[A_{s}(i)\coloneqq\sum_{j=1}^{i}a_{j}-\sum_{j=1}^{i-s}a_{j},\quad B_{s}(i)\coloneqq\sum_{j=1}^{i}b_{j}-\sum_{j=1}^{i-s}b_{j} \tag{7}\] and we define the local Optimal Transport Warping distance for \(s\in\{1,\ldots,n\}\): \[\text{OTW}_{m,s}(a,b)\coloneqq m|A_{s}(n)-B_{s}(n)|+\sum_{i=1}^{n-1}|A_{s}(i)-B_{s}(i)| \tag{8}\] **Localness.** The parameter \(s\) is akin to the window parameter of the Sakoe-Chiba constraint in DTW: it interpolates between the \(\ell_{1}\)-norm distance and the Unbalanced Optimal Transport. On the one hand, the \(\ell_{1}\)-norm compares the two sequences \(a\), \(b\) entry-wise: it does not allow for alignment between two different time positions. On the other hand, the Unbalanced Optimal Transport distance allows transport of mass between any two time positions in the time-series. The parameter \(s\) interpolates between the two extremes. In practice not all datasets require the same level of localness, and it is important to choose \(s\) by cross-validation, which usually results in better performance. We summarize this fact in the following lemma, with proof deferred to appendix C: **Lemma 2**.: _For simplicity assume \(m=1\). When \(s=1\) then \(\text{OTW}_{1,1}(a,b)=\|a-b\|_{1}\).
When \(s=n\) we recover the global OTW distance (6), i.e., \(\text{OTW}_{1,n}(a,b)=\text{OTW}_{1}(a,b)\)._ Finally, note that \(\text{OTW}_{m,s}\) is not differentiable when \(A_{s}=B_{s}\), due to the presence of the absolute value function. In order to have a fully differentiable version, it suffices to use a smooth approximation of the absolute value, like the well-known _smooth \(\ell_{1}\)-loss_: \[L_{\beta}(x)=\begin{cases}x^{2}/(2\beta)&|x|<\beta\\ |x|-\beta/2&|x|\geq\beta\end{cases} \tag{9}\] We define: \[\text{OTW}_{m,s}^{\beta}(a,b)=mL_{\beta}(A_{s}(n)-B_{s}(n))+\sum_{i=1}^{n-1}L_{\beta}(A_{s}(i)-B_{s}(i)) \tag{10}\] and it holds that \(\text{OTW}_{m,s}^{\beta}\rightarrow\text{OTW}_{m,s}\) as \(\beta\to 0\).

### Dealing with negative values

Up to this point, we have presented a way to compare two time series that have only positive entries, using Unbalanced Optimal Transport. However, time series can contain negative values. In order to deal with sequences of arbitrary sign we propose to choose one of the following strategies, using cross-validation:
1. Apply eq. (10) to arbitrary sequences. Note that this formula can be applied even in the case where some entries of the sequences \(a\) or \(b\) are negative.
2. Split arbitrary sequences \(a,b\) into their positive and negative parts, i.e., \(a_{+}=\max(a,0)\) and \(a_{-}=\max(-a,0)\), and sum the unbalanced optimal transport distance between the parts of equal sign: \[\overline{\text{OTW}}_{m,s}^{\beta}(a,b)=\text{OTW}_{m,s}^{\beta}(a_{+},b_{+})+\text{OTW}_{m,s}^{\beta}(a_{-},b_{-}) \tag{11}\]
**Remark.** Because we only need to compute the windowed cumulative sums (7) and then apply the smooth \(\ell_{1}\)-loss to the differences \(A_{s}(i)-B_{s}(i)\) for \(i=1,\ldots,n\), we only need a linear number of operations as a function of \(n\). Moreover, since we only use basic operations available in Deep Learning frameworks like PyTorch, the computation can be readily performed on GPU using optimized code. Finally, because the smooth \(\ell_{1}\)-loss is convex, this implies the triangle inequality and OTW becomes a true metric. Next, we perform experiments to validate the performance of OTW.

## 4 Experiments

### Comparing OT to DTW on 1-nearest-neighbors classification

In this experiment we train 1-nearest-neighbors classifiers on the UCR time series classification archive, which consists of a large number of univariate time series datasets. Please note that **the purpose is not to show state-of-the-art performance**. Rather, our goal is to perform an apples-to-apples comparison of the Dynamic Time Warping (DTW) distance and our proposed OTW distance. To choose hyperparameters for the OTW distance, for each hyperparameter combination we follow an 80/20 random split of the training/validation sets, and choose the combination that maximizes accuracy on the validation set. For the best hyperparameters found, we evaluate the method on the testing set. The accuracy for the 1-nearest-neighbors classification for the DTW distance with learned warping window is directly obtained from the UCR time series classification benchmark website1. Footnote 1: [https://www.cs.ucr.edu/~eamonn/time_series_data_2018/](https://www.cs.ucr.edu/~eamonn/time_series_data_2018/) For our method, we do 10 independent runs and obtain a 95% confidence interval for the test error. Because of the lower complexity of our method vs DTW (linear vs quadratic complexity), it runs considerably faster and hence we consider it a better method if it can attain the same performance as DTW.
Because there is only a single testing observation for the DTW methods available in the reported benchmarks, we consider that our method is better if the confidence interval for the test error of OTW contains or is below the reported test error for DTW. In table 2 we summarize the results according to the type of dataset. We group the ECG, EOG, HRM and Hemodynamics types of datasets under the name _Medical_, as they contain a relatively small number of datasets. We observe a noticeable improvement in all the considered categories except the _sensor_ category. Note that we remove the datasets corresponding to the motion and trajectory type, as the OTW distance is not suited for such data. OT is suited to time series that can be interpreted as probability distributions or, more generally, measures. Time-series that track the position of an object through time do not fall in this category. The results for each single dataset are collected in appendix A. Overall, we observe in table 2 that **OTW improves over DTW on 6 out of 7 types of datasets**. On some datasets like Medical and Traffic, the advantage is apparent. In total, we evaluated on 92 datasets from the UCR time series benchmark; only the ones with missing values or variable sequence length were discarded.

### Hierarchical Clustering

We compare the DTW and OTW distances for time series clustering. We use the Agglomerative Clustering algorithm [37], which only requires access to the distance matrix. We run this clustering algorithm on the UCR time series benchmark collection. We skip datasets having more than 500 samples, as the quadratic memory requirements of the algorithm impose this constraint. We evaluate the quality of the clustering using the Rand Index (RI), given that the datasets in the UCR archive are labelled. We summarize the results in table 3. We observe that our proposed distance outperforms DTW on most datasets considered. This is striking, as the time required for our method to run is considerably less than that of the DTW-based clustering. Overall, we observe in table 3 that **OTW improves over DTW on 7 out of 7 types of datasets**. On some datasets like Image, Traffic and Sensor, the advantage is apparent.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Type** & **DTW error** & **OTW error** & **Improv.** & **Total** & **\%** \\ \hline Image & 0.25 & \(\mathbf{0.25\pm 0.02}\) & 19 & 31 & \(\mathbf{61.29}\) \\ Spectro & 0.26 & \(\mathbf{0.24\pm 0.03}\) & 6 & 8 & \(\mathbf{75.00}\) \\ Sensor & 0.23 & \(0.44\pm 0.03\) & 9 & 28 & \(32.14\) \\ Device & 0.34 & \(0.44\pm 0.03\) & 5 & 9 & \(\mathbf{55.56}\) \\ Medical & 0.38 & \(\mathbf{0.36\pm 0.02}\) & 9 & 13 & \(\mathbf{69.23}\) \\ Traffic & 0.12 & \(\mathbf{0.06\pm 0.02}\) & 2 & 2 & \(\mathbf{100.00}\) \\ Power & 0.08 & \(\mathbf{0.04\pm 0.01}\) & 1 & 1 & \(\mathbf{100.00}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Average test error of 1-NN classification using Learned DTW or Learned OTW (OTW error) and number of datasets in the collection where OTW outperforms DTW

### Performance of DTW-Net vs OTW-Net in synthetic and real data

DTW distances have been employed to design neural network layers with inductive biases that are better suited for time series data. This is the case, for example, of DTW-Net [10]. In this network, the first hidden layer consists of DTW distances between the input and the rows of a matrix, which is the trainable parameter of the layer.
If there are \(k\) such rows, then the computational complexity of the layer is \(O(kn^{2})\), where \(n\) is the length of the input. On top of such features an arbitrary network architecture is added, which outputs the class probabilities. In contrast, in a vanilla multi-layer fully-connected neural network the first hidden-layer (a linear layer) consists of inner products between the input and the rows of a matrix. If there are \(k\) such rows, the complexity of this linear layer is \(O(kn)\). Hence, the complexity of DTW-Net [10] is higher than a regular fully-connected neural network: it suffers from a computational bottleneck. In this experiment, we replace the DTW distance in DTW-Net by our OTW distance. Because OTW can be computed in linear time, this restores the complexity to \(O(kn)\), in line with the other layers of the network, getting rid of the computational bottleneck. We describe such layers in algorithms 1 and 2. ``` Input: input \(a\in\mathbb{R}^{n}\) Parameters: matrix \(B\in\mathbb{R}^{k\times n}\) for\(i=1\) to \(k\)do \(b\gets B_{[:,i]}\)\(\triangleright\)\(i\)-th row of \(B\) \(z_{i}\leftarrow\text{OTW}_{m,s}^{\beta}(a,b)\) return\(z\in\mathbb{R}^{k}\) ``` **Algorithm 1** OTW Feature Extraction Layer ``` Input:\(a\in\mathbb{R}^{d}\) Parameters: matrix \(B\in\mathbb{R}^{k\times n}\) for\(i=1\) to \(k\)do \(b\gets B_{[:,i]}\)\(\triangleright\)\(i\)-th row of \(B\) \(z_{i}\leftarrow\text{DTW}(a,b)\) return\(z\in\mathbb{R}^{k}\) ``` **Algorithm 2** DTW Feature Extraction Layer **Synthetic data.** In order to illustrate the performance of both types of networks, we generate three synthetic datasets c.f. Figure 1. The synthetic datasets contain four classes which are determined by shape (triangle or square), location (left or right) and some added noise. For the synthetic data experiment, the hidden layer sizes for both DTW-Net and OTW-Net are set as \([1,128,128]\). We train both networks for 500 epochs and we plot the test-error vs training time in Figure 2. To improve the readability, we plot the minimum achieved test error, up to time \(t\). Due to the computational bottleneck in DTW-net, its training time is orders of magnitude larger than OTW-Net. However, DTW-Net converges in fewer epochs, which somewhat offsets its slower time-per-epoch. In any case, OTW-Net achieves zero-error in 50 to 60 percent of the time of DTW-Net. Note that even if the training time to convergence is only reduced by 50%-60%, **the inference is reduced by a larger margin** and is many times faster (see fig. 3 middle and right panes), which shows the true advantage of linear complexity + GPU usage. **Real data.** We compare DTW-Net with OTW-Net on real datasets from the UCR time series archive. For the real data experiment, the hidden layer sizes for OTW-Net are set as \([500,500,500]\) and for DTW-Net as \([100,500,500]\). The smaller size of the first hidden layer of DTW-Net allows training in a reasonable amount of time. In fact, we estimate the time and memory required to train DTW-Net on the UCR time series archive, consisting of more than 100 datasets, and we conclude that DTW-Net can only be trained on a handful of them in less than 24 hours. We choose the MoteStrain dataset in this collection to demonstrate the speed and accuracy of both methods. We train both networks for 5000 epochs and we plot the test-error vs training time in Figure 3. To improve the readability, we plot the minimum achieved test error, up to time \(t\). 
For the MoteStrain dataset, we only show the time up to 400 seconds, when OTW-Net converges to around 14% test error. In contrast, DTW-Net only achieves 25% error and its training takes around 12 hours, compared to OTW-Net which only takes 30 minutes. We conclude that the quadratic complexity of the first layer in DTW-Net makes this approach unfeasible on realistic datasets, and that one way to solve this problem is to use our proposed architecture OTW-Net. **Speed comparison on CPU/GPU.** We turn to understanding how the higher complexity of DTW affects the performance of Deep Neural Networks with DTW-based layers and our proposed OTW-based layer replacement. To this end, we compute the time that it takes to perform one forward/backward pass through the networks, for an input of increasing dimension. On CPU, we compare our OTW-Net against DTW-Net. DTW does not have a GPU implementation available, so instead we use the state-of-the-art GPU implementation of SoftDTW [17]. The results are plotted in Figure 3, in the middle and right panes. As expected, we observe a stark difference in time, and our OTW-Net runs considerably faster than the DTW-based counterparts, both on CPU and GPU. This illustrates why DTW-based layers are not a feasible alternative already for moderate input dimensions, and why faster alternatives, like our proposed method, have to be developed if practitioners are to adopt such architectures.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Type** & **DTW RI - (time)** & **OTW RI - (time)** & **Improv.** & **Total** & **\%** \\ \hline Device & \(0.40\) - \((2,279)\) & \(\mathbf{0.49}\) - \((81)\) & 5 & 6 & \(\mathbf{83}\) \\ Image & \(0.62\) - \((277)\) & \(\mathbf{0.67}\) - \((28)\) & 25 & 29 & \(\mathbf{86}\) \\ Medical & \(0.79\) - \((884)\) & \(\mathbf{0.82}\) - \((40)\) & 7 & 9 & \(\mathbf{78}\) \\ Power & \(0.50\) - \((90)\) & \(\mathbf{0.52}\) - \((20)\) & 1 & 1 & \(\mathbf{100}\) \\ Sensor & \(0.57\) - \((384)\) & \(\mathbf{0.67}\) - \((31)\) & 15 & 17 & \(\mathbf{88}\) \\ Spectro & \(0.56\) - \((88)\) & \(\mathbf{0.61}\) - \((11)\) & 4 & 7 & \(\mathbf{57}\) \\ Traffic & \(0.62\) - \((61)\) & \(\mathbf{0.72}\) - \((9)\) & 2 & 2 & \(\mathbf{100}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Average Rand Index (RI) and average cpu time (seconds) of OT-based and DTW-based Hierarchical Clustering methods.

Figure 1: The three synthetic labeled datasets considered, consisting of 4 different classes determined by a combination of shape (square/triangle) and time shift. Each color corresponds to a different class. 4 samples of each class are shown.

Figure 2: Test error vs Wall clock time (in seconds) of DTW-Net and OTW-Net, trained on the synthetic datasets proposed.

Figure 3: Left: Test error vs Wall clock time of DTW-Net and OTW-Net, trained on the MoteStrain dataset from the UCR time series archive. Center: Wall clock time of a forward/backward pass over the network (in CPU) as a function of the size of the input. Right: Wall clock time of a forward/backward pass over the network (in GPU) as a function of the size of the input.

## 5 Conclusion

We have introduced OTW, a distance for time-series that overcomes the main limitations of DTW while retaining or improving its performance in downstream tasks. In this way, it opens the path to the use of time-series distances inside Deep Learning pipelines, thanks to its easy GPU implementation and the absence of computational bottlenecks.
One of the limitations of our work is the constraint that the time-series be one-dimensional. We leave possible extensions to the multivariate case as a promising direction of future research. Finally, we recall that despite the improvements, the purpose is not to completely replace DTW: for many datasets it is still the best performing distance. Rather, OTW is a complementary distance that should be preferred when there are resource constraints. ## Acknowledgements Fabian Latorre would like to thank Jorge Barreras, Chris Russel, Gregory Durrett, Laura Montoya, Lucia Cipolina-Kun and two anonymous donors for their financial support2. Without their help, it would have been impossible to present this work at ICASSP 2023. Footnote 2: [https://www.gofundme.com/f/help-me-attend-icassp-2023](https://www.gofundme.com/f/help-me-attend-icassp-2023)
2307.08155
The Rayleigh shearing instability limit of the magnetorotational instability
We use the geometric optics approximation to derive the stability criteria for the Rayleigh shearing instability and the magnetorotational instability. We examine the cases where each criterion is relevant by looking into the magnitude of the magnetic field using a small dimensionless parameter. Examining all the orders of this parameter in the characteristic equation we show that configurations with sufficiently small magnetic field are characterised by the Rayleigh shearing instability criterion rather than that of the magnetorotational instability.
Konstantinos Palapanidis, Despoina Pazouli
2023-07-16T21:42:05Z
http://arxiv.org/abs/2307.08155v2
# The Rayleigh shearing instability limit of the magnetorotational instability ###### Abstract We use the geometric optics approximation to derive the stability criteria for the Rayleigh shearing instability and the magnetorotational instability. We examine the cases where each criterion is relevant by looking into the magnitude of the magnetic field using a small dimensionless parameter. Examining all the orders of this parameter in the characteristic equation we show that configurations with sufficiently small magnetic field are characterised by the Rayleigh shearing instability criterion rather than that of the magnetorotational instability. ## I Introduction The dynamics of systems featuring differential rotation, like accretion discs, is a fundamental challenge in astrophysics. Its significance extends to various phenomena, including the formation of celestial bodies such as planets and stars. The magnetorotational instability (MRI) is widely accepted as a vital mechanism for elucidating the dynamics of these discs. On the other hand, purely hydrodynamic instabilities, such as the Rayleigh shearing instability, have been explored as alternative explanations, although they are not as effective as the magnetorotational instability in capturing the underlying dynamics [15]. The Rayleigh shearing instability arises from the shear in the rotation of a fluid, and has been studied extensively. It was first introduced by Lord Rayleigh in 1880, and has since been shown to play an important role in the dynamics of a wide range of fluid systems [14]. The simplest manifestation of this instability arises in an axisymmetric configuration with circular fluid motion around the axis. In this case the shear of the fluid simplifies to the radial rate of change of the angular frequency. The instability is characterised by the Rayleigh criterion. More specifically the presence of shear is a necessary but not sufficient condition for this instability to arise. The magnetorotational instability was first probed by Chandrasekhar [8] and later discovered and described in its present form by Balbus [3; 9]. The MRI implies that a differentially rotating fluid, for example accretion discs around neutron stars and protoplanetary discs around young stars, is stable only if the angular velocity profile of the fluid is radially increasing, even in the case where the magnetic field is almost zero. Realistic shearing flows of astrophysical relevance have in general radially decreasing angular velocity profiles. Since most of them possess at least some very small magnetic field, they should be therefore unstable. There is a peculiarity in this result since the stability of a purely hydrodynamical system i.e. without a magnetic field, is characterised by the Rayleigh shearing instability criterion, which implies that the above mentioned velocity profiles should be stable [14]. In particular, although for most of angular velocity profiles the MRI and the Rayleigh criteria agree on the characterisation of stability, there is a set of angular velocity profiles that are characterised stable with respect to the Rayleigh criterion, but unstable with respect to the MRI criterion. These correspond to the cases where an arbitrarily small magnetic field is present in the system. From a physical point of view there should be no difference in the results of the two different descriptions of the same physical configuration. 
Rather, one would anticipate that the vanishing magnetic field limit of the MRI would provide the same results as the purely hydrodynamical Rayleigh shearing consideration. There is much discussion on this physical paradox, including a mechanical analog discussed in [9] and an allegorical analogy, written as a side note in [15, p. 171] and [6]. This analogy states that if, after taking a bite of a maggot-infested apple, you find part of the maggot, then the more maggot you find in the piece the better it is (since you ate less of the maggot). Eventually the worst case scenario is to find an infinitesimal part of the maggot (which should correspond to the case of a maggot-free apple). Intuitively, this is the opposite of what one would expect, i.e. that the best case scenario is to not find any maggot in the apple at all. In analogy, strong magnetic fields provide more stability than weaker magnetic fields, which destabilise the system more the weaker they are, up to the limit where there is no magnetic field at all. In the present work, we aim to discuss this paradox that appears to exist between the stability results obtained in the low magnetic field limit of magnetorotational systems and purely hydrodynamical systems. To achieve this, we will derive the Rayleigh and MRI stability criteria using the geometric optics approach. Using this approach, we will obtain the linearly perturbed system of equations describing both the purely hydrodynamic system and the magnetohydrodynamic system. In particular, we apply the geometric optics method by determining the rate of change of the background quantities with respect to the coordinates and time, and we reach the same characteristic equation as in [9]. Contrary to the original paper, we do not consider the Boussinesq approximation for the continuity equation but rather use the full form of it. In section II we describe the system of non-linear and linearised equations using the geometric optics approximation assuming plane wave perturbations. In the next section we describe axisymmetric configurations of purely hydrodynamic and magnetohydrodynamic systems and derive the Rayleigh shearing instability [2] and the magnetorotational instability in agreement with the literature. In section IV we examine the case where the magnetic field takes very small, close-to-zero values and discuss the applicability of the Rayleigh and MRI stability criteria. ## II Linear perturbation of the system In this section we apply the geometric optics approximation to the purely hydrodynamical and to the ideal magnetohydrodynamical (MHD) systems of equations [22]. Specifically, in the present work we use the two-timing method. It is called as such because we use two different parameters that control the magnitude of the quantities involved [21]. We present both of the aforementioned systems of equations and we introduce the ansatz to linearise them. Finally, by keeping only the background and the first order terms, we provide the perturbed equations. ### The system of equations In this section, we describe the system of equations that we use to derive the MRI and the Rayleigh shearing instability. The results we derive are either in the context of hydrodynamics or ideal magnetohydrodynamics. 
The description of a single fluid in the Newtonian framework employs the continuity equation given by \[\frac{\partial\rho}{\partial t}+\rho\mathbf{\nabla}\cdot\mathbf{v}+(\mathbf{v}\cdot\mathbf{\nabla})\rho=0,\] (II.1) where \(\mathbf{v}\) is the fluid velocity and \(\rho\) is the density. Please note that, contrary to the original paper [3], where the Boussinesq approximation [19], i.e. \(\mathbf{\nabla}\cdot\mathbf{v}=0\), was considered, we use the full form of the continuity equation. Consequently, we do not implicitly impose additional conditions on the background and perturbed density. We also have the Euler (momentum conservation) equation \[\frac{\partial\mathbf{v}}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})\mathbf{v}+\frac{1}{\rho}\mathbf{\nabla}P+\mathbf{\nabla}\Phi\underbrace{+\frac{1}{4\pi\rho}\mathbf{B}\times(\mathbf{\nabla}\times\mathbf{B})}_{\text{ideal MHD Lorentz force}}=0,\] (II.2) where \(\Phi\) is the gravitational potential, \(\mathbf{B}\) is the magnetic field and \(P\) is the pressure of the fluid. For a purely hydrodynamical system the ideal MHD Lorentz force (i.e. the under-brace term) vanishes. To describe ideal MHD systems we need to include the magnetic field induction equation, \[\frac{\partial\mathbf{B}}{\partial t}-\mathbf{\nabla}\times(\mathbf{v}\times\mathbf{B})=0,\] (II.3) in our system of equations as well. As discussed in the literature [5; 10], this equation is obtained by using the Maxwell equations, assuming that the fluid is perfectly conducting. Please note that for pure hydrodynamic systems this equation is not required, since the magnetic field is zero [15]. Finally, the adiabatic condition, \[\frac{\partial\Sigma}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})\Sigma=0,\] (II.4) is required in both the hydrodynamic and the ideal MHD cases. In the above equation, \(\Sigma\) is the specific entropy of the fluid. We assume an adiabatic flow, which means that the entropy is conserved along the flow lines [16]. The entropy is considered to be a function of the pressure and the density, \(\Sigma=\Sigma\left(P,\rho\right)\), and serves as an equation of state for the system. Please note that under this consideration the pressure and the density are independent quantities. The speed of sound is defined through \[c_{\text{s}}^{2}=\left.\frac{\partial P}{\partial\rho}\right|_{\Sigma},\] (II.5) and describes the speed of propagation for acoustic perturbations [16]. Using this definition for the speed of sound, equation (II.4) in terms of \(P\) and \(\rho\) becomes \[\frac{\partial P}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})P-c_{\mathrm{s}}^{2}\left[\frac{\partial\rho}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})\rho\right]=0.\] (II.6) ### Linear perturbations In this section we calculate the linear perturbations of the system of equations of section II.1 using the geometric optics approximation. We substitute all quantities of the system using the ansatz \[\rho=\rho_{\mathrm{0}}+\delta\rho,\] (II.7) where \(\rho_{\mathrm{0}}\) is a background quantity and \[\delta\rho=\bar{\delta}\,\mathrm{e}^{\mathrm{i}\frac{S}{\bar{\varepsilon}}}\,\bar{\rho}\left(\bar{\varepsilon}t,\bar{\varepsilon}\mathbf{r}\right),\] (II.8) is the linear perturbation of \(\rho\), which describes a locally plane wave with amplitude \(\bar{\rho}\) and phase \(S\) by definition [13; 20; 1]. 
The quantities \(\bar{\delta}\) and \(\bar{\varepsilon}\) are small (\(0<\bar{\varepsilon}<\bar{\delta}\ll 1\)) dimensionless book-keeping parameters used to keep track of the ordering of the linearised terms. In particular we keep only terms of the order \(\bar{\delta}^{0}\bar{\varepsilon}^{0}\) and \(\bar{\delta}^{1}\bar{\varepsilon}^{0}\) which are the background and the linearised terms respectively. Higher order terms in \(\bar{\delta}\) are disregarded since they are higher order perturbation terms. Similarly, higher than zeroth order in \(\bar{\varepsilon}\) terms are also not considered since they correspond to post-geometric optics approximations [1]. The quantity \(\bar{\rho}\), which is the amplitude of the perturbation, is assumed to be of the order of unity, while \(S\), which is the phase of the plane wave [7], is given by the equation \[S=\bar{\varepsilon}(\mathbf{k}\cdot\mathbf{r}-\omega t),\] (II.9) where \(\mathbf{k}\) is the wavevector, \(\mathbf{r}\) is the position vector, and \(\omega\) is the frequency. Please note that we use the ansatz presented in equations II.7-II.8 and the expressions developed above for the linearisation of all the quantities involved in the equations except for the gravitational potential, where we have assumed that \(\delta\Phi=0\), which is assumed to be a background quantity only. The background terms (i.e. those of order \(\bar{\delta}^{0}\bar{\varepsilon}^{0}\)) satisfy the system of equations (II.1)-(II.3), (II.6) and therefore vanish identically, and the first-order terms are the only to appear in the linearised equations. We find that the continuity equation (II.1) in its linearised form (i.e. containing terms of the order \(\bar{\delta}^{1}\bar{\varepsilon}^{0}\)) is given by \[\begin{split}&-\mathrm{i}\,\omega\bar{\rho}+\bar{\rho}\left(\mathbf{ \nabla}\cdot\mathbf{v}_{\mathrm{0}}\right)+\mathrm{i}\,\rho_{\mathrm{0}}\left( \mathbf{k}\cdot\bar{\mathbf{v}}\right)+\rho_{\mathrm{0}}\left(\mathbf{\nabla}\cdot\bar{ \mathbf{v}}\right)\\ &+\mathrm{i}\left(\mathbf{v}_{\mathrm{0}}\cdot\mathbf{k}\right)\bar{\rho} +\bar{\mathbf{v}}\cdot\mathbf{\nabla}\rho_{\mathrm{0}}=0.\end{split}\] (II.10) Similarly, the Euler equation (II.2) obtains the following form \[\begin{split}&-\mathrm{i}\,\omega\bar{\mathbf{v}}+\left(\bar{\mathbf{v}} \cdot\mathbf{\nabla}\right)\mathbf{v}_{\mathrm{0}}+\mathrm{i}\left(\mathbf{v}_{\mathrm{0}} \cdot\mathbf{k}\right)\bar{\mathbf{v}}+\left(\mathbf{v}_{\mathrm{0}}\cdot\mathbf{\nabla} \right)\bar{\mathbf{v}}\\ &-\frac{\bar{\rho}}{\rho_{\mathrm{0}}^{2}}\mathbf{\nabla}P_{\mathrm{0 }}+\mathrm{i}\,\frac{\mathbf{k}}{\rho_{\mathrm{0}}}\bar{P}-\frac{\bar{\rho}}{4\pi \rho_{\mathrm{0}}^{2}}\mathbf{B}_{\mathrm{0}}\times\left(\mathbf{\nabla}\times\mathbf{B }_{\mathrm{0}}\right)\\ &+\frac{1}{4\pi\rho_{\mathrm{0}}}\left[\bar{\mathbf{B}}\times\left( \mathbf{\nabla}\times\mathbf{B}_{\mathrm{0}}\right)+\mathbf{B}_{\mathrm{0}}\times\left(\bm {\nabla}\times\bar{\mathbf{B}}\right)\right]=0,\end{split}\] (II.11) where the terms containing the magnetic field is the linearised ideal MHD Lorentz force. 
The induction equation (II.3) becomes \[\begin{split}&-\mathrm{i}\,\omega\bar{\mathbf{B}}+\bar{\mathbf{B}}\left( \mathbf{\nabla}\cdot\mathbf{v}_{\mathrm{0}}\right)+\mathrm{i}\,\mathbf{B}_{\mathrm{0}}\left( \mathbf{k}\cdot\bar{\mathbf{v}}\right)\\ &-\left(\bar{\mathbf{B}}\cdot\mathbf{\nabla}\right)\mathbf{v}_{\mathrm{0}}- \left(\mathbf{B}_{\mathrm{0}}\cdot\mathbf{\nabla}\right)\bar{\mathbf{v}}+\left(\mathbf{v}_{ \mathrm{0}}\cdot\mathbf{\nabla}\right)\bar{\mathbf{B}}=0.\end{split}\] (II.12) Finally, the adiabatic condition (II.4) yields \[\begin{split}&\mathrm{i}\left(\mathbf{v}_{\circ}\cdot\mathbf{k}-\omega \right)\left(\bar{P}-c_{\mathrm{s}}^{2}\bar{\rho}\right)+\bar{\mathbf{v}}\cdot\mathbf{ \nabla}P_{\circ}-c_{\mathrm{s}}^{2}\,\bar{\mathbf{v}}\cdot\mathbf{\nabla}\rho_{\circ} \\ &-\bar{\rho}\left(\left.\frac{\partial c_{\mathrm{s}}^{2}}{ \partial P}\right|_{\rho_{0}}\mathbf{v}_{\circ}\cdot\mathbf{\nabla}P_{\circ}+\left. \frac{\partial c_{\mathrm{s}}^{2}}{\partial\rho}\right|_{P_{0}}\mathbf{v}_{\circ} \cdot\mathbf{\nabla}\rho_{\circ}\right)\\ &+\left.\frac{\bar{P}-c_{\mathrm{s}}^{2}\bar{\rho}}{\partial \Sigma_{\circ}/\partial P|_{\rho_{0}}}\left(\left.\frac{\partial^{2}\Sigma_{ \circ}}{\partial P^{2}}\right|_{\rho_{0}}\right)\mathbf{v}_{\circ}\cdot\mathbf{\nabla} P_{\circ}=0,\end{split}\] (II.13) where we used the assumption that the specific entropy is a function of pressure and density along with the definition (II.5) of the speed of sound. For a detailed derivation of the above equation see [12]. ## III Axisymmetric configurations In this section we discuss two axisymmetric configurations, the Rayleigh shearing instability and the magnetorotational instability. The Rayleigh shearing instability [14] characterizes the stability of a purely hydrodynamic system, i.e without a magnetic field. It is a well known result that the stability criterion depends on the angular velocity profile. In contrast, the MRI [3; 4; 9] is characterized through the angular velocity profile of systems in the context of ideal MHD. Since the physical systems of interest mainly are disc-shaped configurations around astrophysical objects, such as accretion and protoplanetary discs around stars, we will carry out our calculations in cylindrical polar coordinates \((R,z,\phi)\) and the respective orthonormal frame \(\hat{\mathbf{R}},\hat{\mathbf{z}},\hat{\mathbf{\phi}}\). In all cases we assume that all quantities are axisymmetric, i.e. they do not depend on the \(\phi\) coordinate (though, we may still have vector components along \(\hat{\mathbf{\phi}}\)). Additionally, since we wish to demonstrate that the vanishing magnetic field limit of the MRI is the Rayleigh shearing instability, we make the same assumptions for the quantities involved in the derivation of both stability criteria. ### The Rayleigh shearing instability In order to introduce the Rayleigh shearing instability, we will first describe the equilibrium of the system. We assume that the system consists of a fluid that is differentially rotating around the \(z\)-axis, having a velocity of the form \(\mathbf{v}_{0}:=\Omega(R)R\hat{\mathbf{\phi}}\), where \(\Omega\) is the angular velocity of the fluid. The density, pressure and gravitational potential have the functional forms, \(\rho:=\rho_{0}\left(\bar{\varepsilon}t,\bar{\varepsilon}R,\bar{\varepsilon}z\right)\), \(P_{\circ}:=P_{\circ}\left(\bar{\varepsilon}t,R,z\right)\), and \(\Phi_{\circ}:=\Phi\left(\bar{\varepsilon}t,R,z\right)\) respectively. 
Our assumptions describe a system where the background density varies slowly in any direction and in time, and the background pressure and gravitational potential varies slowly in time only, but have fast space dependence along both \((R,z)\) coordinates. Also, the angular velocity of the fluid is fast along the radial coordinate R. Please note that by considering dependence of the form \(\bar{\varepsilon}z\) along the \(z\) coordinate, as defined above, we are introducing a very small variation of the respective quantity along the \(z\) direction. This is obvious if we consider the chain rule of differentiation, where the derivative with respect to \(z\) is multiplied by \(\bar{\varepsilon}\). The same holds for the rest of the coordinates and time. Using these assumptions, and keeping terms of order \(\bar{\delta}^{0}\bar{\epsilon}^{0}\) only, we derive the hydrostatic equilibrium condition given by the \(R\) and \(z\) components of the Euler equation (II.2) \[\Omega^{2}\,R=\frac{\partial\Phi_{\circ}}{\partial R}+\frac{1}{\rho_{\circ}} \frac{\partial P_{\circ}}{\partial R},\] (III.1) and \[\frac{\partial\Phi_{\circ}}{\partial z}+\frac{1}{\rho_{\circ}}\frac{\partial P _{\circ}}{\partial z}=0,\] (III.2) respectively. The equations above describe a fluid in stationary equilibrium where the gravitational force along the radial direction is balanced by the gradient of the pressure and the centripetal force, while in the \(z\) direction we only have the pressure gradient and the gravitational force. The rest of background equations (II.1) and (II.6) are trivially satisfied for the assumptions we have made. We will consider wavevectors of the form \(\mathbf{k}=k_{R}\hat{\mathbf{R}}+k_{z}\hat{\mathbf{z}}\). This wavevector describes a plane wave that has no \(\phi\) dependence. Our choice is justified by the fact that the system we assume is axisymmetric. the linearised continuity equation (II.10) is given by \[\rho_{0}\left(\frac{1}{R_{0}}+\mathrm{i}\,k_{R}\right)\bar{v}_{R}+\mathrm{i}\, \rho_{o}k_{z}\bar{v}_{z}-\mathrm{i}\,\omega\bar{\rho}=0,\] (III.3) while the \(R\), \(z\) and \(\phi\) components of the linearised Euler equation (II.11) are \[-\mathrm{i}\,\omega\bar{v}_{R}-2\Omega\bar{v}_{\phi}-\frac{1}{\rho_{0}^{2}} \frac{\partial P_{0}}{\partial R}\bar{\rho}+\mathrm{i}\,\frac{k_{R}}{\rho_{0}} \bar{P}=0,\] (III.4) \[-\mathrm{i}\,\omega\bar{v}_{z}-\frac{1}{\rho_{0}^{2}}\frac{\partial P_{0}}{ \partial z}\bar{\rho}+\mathrm{i}\,\frac{k_{z}}{\rho_{0}}\bar{P}=0,\] (III.5) and \[\left(2\Omega+R\frac{\mathrm{d}\Omega}{\mathrm{d}R}\right)\bar{v}_{R}- \mathrm{i}\,\omega\bar{v}_{\phi}=0,\] (III.6) respectively. As we have already mentioned previously, in this purely hydrodynamic system the magnetic field terms and the linearised induction equation (II.12) are omitted. The linearised adiabatic condition (II.13) takes the form \[\frac{\partial P_{0}}{\partial R}\bar{v}_{R}+\frac{\partial P_{0}}{\partial z }\bar{v}_{z}-\mathrm{i}\,\omega\bar{P}+\mathrm{i}\,\omega c_{\mathrm{s}}^{2} \bar{\rho}=0.\] (III.7) Please note that the directional derivatives \(\mathbf{v}_{0}\cdot\mathbf{\nabla}\) of scalars are vanishing, as in equation (II.13). This happens because the fluid velocity has a single component along \(\hat{\mathbf{\phi}}\) and axisymmetric quantities do not have a \(\phi\) dependence. Equations (III.3-III.7) comprise a system of five equations in five variables. 
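The vanishing-determinant condition invoked in the next step can also be checked symbolically. The sketch below is an illustrative verification of mine (not part of the paper's derivation), assuming the sympy library; `invR` and `RdOdR` are shorthand symbols introduced here for \(1/R\) and \(R\,\mathrm{d}\Omega/\mathrm{d}R\). It assembles the coefficient matrix of (III.3)-(III.7) and confirms that, once the \(1/R\) terms are dropped and only the leading order in \(c_{\mathrm{s}}^{2}\) is kept, the determinant condition collapses to \(\omega^{2}k^{2}=\kappa^{2}k_{z}^{2}\).

```python
import sympy as sp

w, kR, kz, rho0, Omega, RdOdR, cs, invR, dPdR, dPdz = sp.symbols(
    'omega k_R k_z rho_0 Omega RdOdR c_s invR dPdR dPdz', real=True)

# rows: (III.3)-(III.7); columns: coefficients of (rho_bar, P_bar, v_R, v_z, v_phi)
M = sp.Matrix([
    [-sp.I*w,        0,             rho0*(invR + sp.I*kR), sp.I*rho0*kz, 0        ],
    [-dPdR/rho0**2,  sp.I*kR/rho0,  -sp.I*w,               0,            -2*Omega ],
    [-dPdz/rho0**2,  sp.I*kz/rho0,  0,                     -sp.I*w,      0        ],
    [0,              0,             2*Omega + RdOdR,       0,            -sp.I*w  ],
    [sp.I*w*cs**2,   -sp.I*w,       dPdR,                  dPdz,         0        ],
])

det = sp.expand(M.det())

# leading order for a large sound speed, with the 1/R contribution dropped
leading = sp.simplify(det.coeff(cs, 2).subs(invR, 0))

# discard the omega = 0 root and solve for omega^2
omega2 = sp.solve(sp.cancel(leading/(sp.I*w)), w**2)[0]

kappa2 = 4*Omega**2 + 2*Omega*RdOdR   # epicyclic frequency squared, (III.9)
print(sp.simplify(omega2 - kappa2*kz**2/(kR**2 + kz**2)))   # 0, i.e. Eq. (III.10)
```

The background pressure gradients drop out of this leading-order coefficient automatically, mirroring the simplification argument given in the text.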
The variables are the perturbation amplitudes of the physical quantities, namely \(\bar{\rho}\), \(\bar{P}\), \(\bar{v}_{R}\), \(\bar{v}_{z}\), \(\bar{v}_{\phi}\). This system has a non-trivial solution if and only if the determinant of the matrix of the coefficients of the variables is zero [17]. The vanishing of the determinant yields the characteristic equation (along with the \(\omega=0\) solution) of the system \[\begin{split}&\left(\omega^{2}k^{2}-k_{z}^{2}\kappa^{2}\right)c_{\mathrm{s}}^{2}+\left[-\omega^{4}+\omega^{2}\kappa^{2}+\frac{1}{\rho_{0}^{2}}\left(\frac{\partial P_{0}}{\partial z}k_{R}-\frac{\partial P_{0}}{\partial R}k_{z}\right)^{2}\right]\\ &+\frac{1}{R}\left[\omega^{2}\left(\frac{1}{\rho_{0}}\frac{\partial P_{0}}{\partial R}-\mathrm{i}\,c_{\mathrm{s}}^{2}k_{R}\right)+\frac{\mathrm{i}}{\rho_{0}^{2}}\frac{\partial P_{0}}{\partial z}\left(\frac{\partial P_{0}}{\partial R}k_{z}-\frac{\partial P_{0}}{\partial z}k_{R}\right)\right]=0,\end{split}\] (III.8) where \(\kappa\) is the epicyclic frequency [15] given by \[\kappa^{2}=4\Omega^{2}+2R\Omega\frac{\mathrm{d}\Omega}{\mathrm{d}R}.\] (III.9) The expression given in equation (III.8) is further simplified using two assumptions. Firstly, we consider the cases that are not close to the axis of symmetry. A similar treatment is used in [9] in the sense that \(1/R\) is of the order of \(\bar{\varepsilon}^{\alpha},\ \alpha\geq 1\). Secondly, we also eliminate the sound waves by considering the case where the speed of sound is large, that is \(c_{\mathrm{s}}^{2}\) is of the order of \(\bar{\varepsilon}^{\beta},\ \beta\leq-1\), and dividing the equation by \(c_{\mathrm{s}}^{2}\). Since in our analysis we only keep zeroth-order terms in \(\bar{\varepsilon}\), the surviving terms yield \[\omega^{2}=\kappa^{2}\frac{k_{z}^{2}}{k^{2}}.\] (III.10) The system is stable if, for real values of \(k\), \(\omega\) is real. This in turn means that for stability the condition \[\kappa^{2}\geq 0\] (III.11) holds. If this criterion is not satisfied, it gives rise to the well-known Rayleigh shearing instability [9; 15]. Combining the definition for the epicyclic frequency (III.9) with the above criterion we obtain the inequality \[\Omega\,\frac{\mathrm{d}}{\mathrm{d}R}\left(\Omega R^{2}\right)\geq 0,\] (III.12) which is satisfied if \(\Omega>0\) and \(\Omega R^{2}\) is an increasing function of \(R\), or if \(\Omega<0\) and \(\Omega R^{2}\) is a decreasing function of \(R\). Note that these two cases are equivalent since the sign of \(\Omega\) is a matter of convention. Also note that in both cases the absolute value of the differentiated quantity increases with \(R\) when the criterion is satisfied, as expected. This inequality means that even if \(\Omega\) is a decreasing function of \(R\), the system is still stable as long as \(\Omega R^{2}\) is increasing along \(R\). ### The magnetorotational instability In this section we derive the magnetorotational instability [3; 9]. This is done by introducing the magnetic field in the configuration of the previous section. The background quantities for the fluid density, the pressure, and the velocity, as well as the wavevector are the same as in the Rayleigh shearing configuration. For the magnetic field we consider components only along the \(z\) and the \(\phi\) direction, \(\mathbf{B}_{\circ}=B_{\circ,z}\left(\bar{\varepsilon}t,\bar{\varepsilon}R,\bar{\varepsilon}z\right)\mathbf{\hat{z}}+B_{\circ,\phi}\left(\bar{\varepsilon}t,\bar{\varepsilon}R,\bar{\varepsilon}z\right)\mathbf{\hat{\phi}}\). 
As mentioned previously the main difference between this analysis and Balbus' original paper [3] is that we use the full continuity equation instead of the Boussinesq approximation. Also we avoid the assumption that isobaric and isochoric surfaces coincide which may be somewhat restrictive. The background equations are, as in the Rayleigh shearing instability case, the \(R\) and \(z\) components of the Euler equation (II.2). The \(z\) component is given by equation (III.2) while for the \(R\) component we have \[\Omega^{2}\,R=\frac{\partial\Phi_{\circ}}{\partial R}+\frac{1}{\rho_{\circ}} \frac{\partial P_{\circ}}{\partial R}+\frac{B_{\circ,\phi}^{2}}{R},\] (III.13) where the extra term, compared to equation (III.1), appears due to the ideal MHD Lorentz force. The rest of the background equations vanish identically. The \(R\) component of the linearised Euler equation (II.11) is \[\begin{split}&-\mathrm{i}\,\omega\bar{v}_{R}-2\Omega\bar{v}_{ \phi}-\left(\frac{1}{\rho_{\circ}^{2}}\frac{\partial P_{\circ}}{\partial R}+ \frac{1}{R}\,\frac{B_{\circ,\phi}^{2}k_{z}}{4\pi\rho_{\circ}^{2}}\right)\bar{ \rho}+\mathrm{i}\,\frac{k_{R}}{\rho_{\circ}}\bar{P}\\ &-\mathrm{i}\,\frac{B_{\circ,z}k_{z}}{4\pi\rho_{\circ}}\bar{B}_{ R}+\mathrm{i}\,\frac{B_{\circ,z}k_{R}}{4\pi\rho_{\circ}}\bar{B}_{z}+\frac{B_{ \circ,\phi}}{4\pi\rho_{\circ}}\left(\frac{2}{R}+\mathrm{i}\,k_{R}\right)\bar{ B}_{\phi}=0,\end{split}\] (III.14) the \(z\) component is \[-\mathrm{i}\,\omega\bar{v}_{z}+\mathrm{i}\,\frac{k_{z}}{\rho_{\circ}}\bar{P}- \frac{1}{\rho_{\circ}^{2}}\frac{\partial P_{\circ}}{\partial z}\bar{\rho}+ \mathrm{i}\,\frac{B_{\circ,\phi}k_{z}}{4\pi\rho_{\circ}}\bar{B}_{\phi}=0,\] (III.15) and the \(\phi\) component is \[\left(2\Omega+R\frac{\mathrm{d}\Omega}{\mathrm{d}R}\right)\bar{v}_{R}- \mathrm{i}\,\omega\bar{v}_{\phi}-\frac{1}{R}\frac{B_{\circ,\phi}}{4\pi\rho_{ \circ}}\bar{B}_{R}-\mathrm{i}\,\frac{B_{\circ,z}k_{z}}{4\pi\rho_{\circ}}\bar{ B}_{\phi}=0.\] (III.16) The components of the linearised induction equation (II.12) are \[-\mathrm{i}\,B_{\circ,z}k_{z}\bar{v}_{R}-\mathrm{i}\,\omega\bar{B}_{R}=0,\] (III.17) \[B_{\circ,z}\left(\frac{1}{R}+\mathrm{i}\,k_{R}\right)\bar{v}_{R}-\mathrm{i}\, \omega\bar{B}_{z}=0,\] (III.18) and \[\mathrm{i}\,B_{\circ,\phi}k_{R}\bar{v}_{R}+\mathrm{i}\,B_{\circ,\phi}k_{z} \bar{v}_{z}-\mathrm{i}\,B_{\circ,z}k_{z}\bar{v}_{\phi}-R\frac{\mathrm{d} \Omega}{\mathrm{d}R}\bar{B}_{R}-\mathrm{i}\,\omega\bar{B}_{\phi}=0,\] (III.19) for the \(R\),\(z\) and \(\phi\) components respectively. Please note that the linearised continuity equation and linearised adiabatic condition are given by (III.3) and (III.7) respectively, since these equations are independent of the magnetic field. This is a system with eight equations and eight unknowns, the three extra unknowns (compared to the hydrodynamic system) being the perturbations of the three components of the magnetic field. The full characteristic equation, which is a sixth degree polynomial in \(\omega\), can be found in VI. It contains the acoustic modes and the terms related to small radial distances, which make it quite a cumbersome expression and it can not be treated analytically. Following the method used in the purely hydrodynamic case of the previous section, we eliminate the sound waves and terms of order \(1/R\) or smaller. 
Having applied these simplifications, the characteristic equation of the system is given by \[\frac{k^{2}}{k_{z}^{2}}\omega^{4}-\left(\kappa^{2}+2k^{2}v_{\rm Az}^{2}\right)\omega^{2}+k_{z}^{2}v_{\rm Az}^{2}\left(\kappa^{2}-4\Omega^{2}+k^{2}v_{\rm Az}^{2}\right)=0,\] (III.20) where \(v_{\rm Az}\) is the Alfven speed along the \(z\) direction and \(v_{\rm Az}^{2}=\frac{B_{0,z}^{2}}{4\pi\rho_{0}}\). Please note that the Alfven speed is directly proportional to the magnetic field. In the following section we use \(v_{\rm Az}^{2}\) to examine the low magnetic field behaviour of the system and we assume that the density is of the order of unity. This characteristic equation is identical to the one derived in [9] if we consider a wavevector with only a \(z\) component, i.e. \(k_{R}=0\). The left-hand side of equation (III.20) is a convex quadratic polynomial in \(\omega^{2}\), since the coefficient of \(\omega^{4}\) is positive. The discriminant is \(k_{z}^{4}\left(\kappa^{4}+16k^{2}\Omega^{2}v_{\rm Az}^{2}\right)\), which is always positive, and therefore the two roots of the polynomial are real. Additionally, the two \(\omega^{2}\) roots are positive, and thus the system is stable, if the coefficient of \(\omega^{2}\) is negative and the constant term is positive. The first condition implies that the minimum of the polynomial occurs at positive \(\omega^{2}\) and the second condition implies that the polynomial intersects the \(\omega^{2}=0\) axis at a positive value. These two conditions read \[\kappa^{2}+2k^{2}v_{\rm Az}^{2}\geq 0,\] (III.21) and \[\kappa^{2}-4\Omega^{2}+k^{2}v_{\rm Az}^{2}\geq 0.\] (III.22) Of these two inequalities (provided that \(k^{2}v_{\rm Az}^{2}>0\) always) we only need the second one because if it is satisfied, the first is satisfied as well. Assuming then that \(k^{2}v_{\rm Az}^{2}\) goes to zero (since we can either have a very small magnetic field or very small wavenumbers), the stability criterion reads \[\kappa^{2}\geq 4\Omega^{2}.\] (III.23) We will call this the Balbus criterion. Using the definition of the epicyclic frequency from equation (III.9) the stability condition obtains the following form \[\frac{{\rm d}\Omega^{2}}{{\rm d}\ln R}\geq 0,\] (III.24) which is the one derived in [9]. A simpler form for the above is \[\Omega\frac{{\rm d}\Omega}{{\rm d}R}\geq 0,\] (III.25) which is satisfied if \(\Omega>0\) and increasing along \(R\), or if \(\Omega<0\) and decreasing along \(R\). Similarly to the Rayleigh criterion (III.12), in both cases the condition requires that the absolute value of \(\Omega\) is increasing along \(R\). In contrast to the Rayleigh configuration, a disc is stable if the magnitude of \(\Omega(R)\) is radially increasing outwards. However, for most astrophysical configurations \(\Omega(R)\) decreases in magnitude with respect to the radius and so the majority of realistic models should be unstable [2; 9]. A peculiar and interesting aspect of this result is that the vanishing magnetic field condition (III.23) does not coincide with the Rayleigh shearing instability criterion of the previous section, as we would anticipate. Physically this means that an arbitrarily small magnetic field would produce an instability in a configuration which would be stable if the magnetic field had not been introduced at all, i.e. because we may have \(\kappa^{2}>0\) (Rayleigh criterion) but \(\kappa^{2}-4\Omega^{2}<0\) (MRI criterion with vanishing magnetic field). 
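As a concrete numerical illustration of this mismatch (my own sketch, not taken from the paper), one can solve the quadratic (III.20) for \(\omega^{2}\) with a Keplerian profile \(\Omega\propto R^{-3/2}\), for which \(\kappa^{2}=\Omega^{2}>0\) satisfies the Rayleigh criterion while \(\kappa^{2}-4\Omega^{2}=-3\Omega^{2}<0\) violates (III.23). The sketch assumes numpy and scans the combination \(k v_{\rm Az}\).

```python
import numpy as np

Omega  = 1.0
kappa2 = Omega**2                         # Keplerian: kappa^2 = Omega^2 (Rayleigh-stable)
kva    = np.linspace(0.01, 2.5, 500)      # k * v_Az in units of Omega, with k_R = 0 (k = k_z)

growth = np.zeros_like(kva)
for i, x in enumerate(kva):
    # (III.20) with k = k_z:  w^4 - (kappa^2 + 2 x^2) w^2 + x^2 (kappa^2 - 4 Omega^2 + x^2) = 0
    roots = np.roots([1.0, -(kappa2 + 2.0*x**2), x**2*(kappa2 - 4.0*Omega**2 + x**2)])
    w2min = roots.real.min()              # both roots of the quadratic in omega^2 are real
    growth[i] = np.sqrt(-w2min) if w2min < 0 else 0.0

print("maximum growth rate ~ %.3f Omega" % growth.max())                 # ~ 0.75 Omega
print("instability up to k v_Az ~ %.2f Omega" % kva[growth > 0].max())   # ~ sqrt(3) Omega
```

The unstable root disappears only for \(k v_{\rm Az}\gtrsim\sqrt{3}\,\Omega\); in other words the instability survives for arbitrarily weak fields as long as correspondingly large wavenumbers are admissible, which is precisely the point examined in the next section.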
In the following section, we will discuss the vanishing magnetic field limit of the MRI. ## IV Vanishing magnetic field limit of the MRI The condition (III.23) arises by taking the limit \(k^{2}v_{\rm Az}^{2}\to 0\). Obviously, this limit is achieved if any of \(k\), \(v_{\rm Az}\), or both, approach zero. In the analysis below we consider the case where the magnitude of the Alfven speed is controlled only by the magnitude of the magnetic field (i.e. the density is of order unity). This is reasonable since we are interested in the vanishing magnetic field limit of the MRI. We will look into condition (III.22) in more detail, by examining the full range of possible values for the quantities \(k\), \(v_{\rm Az}\). We introduce a function \[\zeta=k^{2}v_{\rm Az}^{2},\] (IV.1) that will be used to keep track of the magnitude of this term. This quantity, being the product of \(k^{2}\) and \(v_{\rm Az}^{2}\), serves as the Alfven angular frequency squared. The characteristic equation (III.20) then reads \[\cos^{2}q\,\zeta^{2}+\left(\kappa^{2}\cos^{2}q-4\Omega^{2}\cos^{2}q-2\omega^{2}\right)\zeta+\omega^{2}\left(\frac{\omega^{2}}{\cos^{2}q}-\kappa^{2}\right)=0,\] (IV.2) where \(\cos q\) is the direction cosine, defined by \(k_{z}=k\cos q\). Note that we rearranged equation (III.20) in powers of \(\zeta\). Also, note that the direction cosine does not vanish since it would imply \(k_{z}=0\) and subsequently only \(\omega=0\) solutions. There are three cases to be considered with respect to the value of \(\zeta\) and the stability condition derived from equation (IV.2). The first case corresponds to \(\zeta\) large enough compared to the \(\kappa^{2}-4\Omega^{2}\) term so that no term of equation (IV.2) can be neglected, and therefore the stability condition is given by inequality (III.22). The \(k^{2}v_{\rm Az}^{2}\) term is included in the final criterion because it is of the same magnitude as the other term, as mentioned in the previous section as well. The second case occurs when the product of the Alfven speed and the wavenumber is such that the \(\zeta^{2}\) term is sufficiently small compared to the other background terms of the characteristic equation to be omitted, but \(\zeta\) is not. In this case equation (IV.2) reduces to \[\frac{1}{\cos^{2}q}\omega^{4}-\left(\kappa^{2}+2k^{2}v_{\rm Az}^{2}\right)\omega^{2}+(\cos^{2}q)k^{2}v_{\rm Az}^{2}\left(\kappa^{2}-4\Omega^{2}\right)=0.\] (IV.3) The stability criterion for this characteristic equation is given by inequality (III.23), which is the condition obtained in [9]. The third case happens when \(\zeta\) is such that both the \(\zeta^{2}\) and the \(\zeta\) terms are negligible. In this case, the characteristic equation yields \[\omega^{2}\left(\omega^{2}\frac{1}{\cos^{2}q}-\kappa^{2}\right)=0,\] (IV.4) which is the characteristic equation of the Rayleigh shearing configuration (III.10), where \(k_{z}\) is eliminated using the direction cosine and the stability criterion is (III.11). In order to shed more light on the stability scenarios, we will quantify the above-mentioned three cases. Suppose there is a value \(\zeta_{\star}<1\) which is the largest possible value for which both \(\zeta\) and \(\zeta^{2}\) are small enough to be neglected (i.e. the third case of the stability analysis of equation (IV.2) mentioned above). Please note that we introduce this upper limit value of \(\zeta\) in order to compare linear and quadratic powers of \(\zeta\). 
Since we are interested in values of \(\zeta\) that are close to zero, we can introduce this assumption without any loss of generality. For all values of \(\zeta\leq\zeta_{\star}\) the characteristic equation reduces to the Rayleigh shearing equation. In other words, by its definition \(\zeta_{\star}\) is the largest value for which \(\zeta\) and \(\zeta^{2}\) are effectively zero. For values \(\zeta_{\star}<\zeta\leq\sqrt{\zeta_{\star}}\) (note that the square root is larger than the number itself since \(\zeta_{\star}<1\), and the right bound is the value such that \(\zeta^{2}=\zeta_{\star}\)) the linear terms in \(\zeta\) do not vanish whereas the \(\zeta^{2}\) terms can be neglected. For this interval the stability criterion is given by condition (III.23), derived in [3]. Further increase of \(\zeta\), i.e. \(\zeta>\sqrt{\zeta_{\star}}\), implies that both the \(\zeta\) and the \(\zeta^{2}\) terms are comparable to the rest of the background terms and therefore they cannot be neglected. In this case the stability condition is that given by the inequality (III.22). Up to this point we discussed the magnitude of \(\zeta\) without examining the magnitudes of the individual factors, \(k^{2}\) and \(v_{\rm Az}^{2}\). There is a fundamental difference between these two quantities. The former characterises the perturbation given in equation (II.8) and is allowed to obtain all the values within the limits that are physically meaningful, as will be discussed below. The latter is a background quantity, which corresponds to a specific axisymmetric magnetic field function for each configuration. Consequently, we have the following implication. For each \(v_{\mathrm{Az}}^{2}\) value, which describes a physical system, we need to consider all possible \(k\) values in order to make a statement regarding the stability of the system. Within the scope of the present analysis, a system is stable if all conditions are met for all possible wavenumbers. If some of the wavenumbers do not satisfy the stability condition then the system is unstable. This has the following consequence. Suppose there exists a system as the one described in section III.2 with \(0\leq\kappa^{2}<4\Omega^{2}\), so that the Rayleigh criterion (III.12) is satisfied but the MRI criterion (III.25) is not. For a given background value of \(v_{\rm Az}\), if a wavenumber value exists such that \(\zeta_{\star}<\zeta\leq\sqrt{\zeta_{\star}}\), then the system is unstable. As discussed in [2; 9], the peculiar result in this analysis stems from the fact that the zero magnetic field limit of the system is still unstable, whereas considering the same system in the context of pure hydrodynamics, as in section III.1, the system is stable. The resolution of this apparent physical paradox lies in the feasible range of values that the wavenumber \(k\) can obtain. This is justified by the continuum hypothesis, i.e. that wavenumbers (and frequencies) of mechanical waves have some finite upper bound defined by the microscopic properties of the continuous medium under consideration. Roughly, the wavelength (i.e. \(2\pi\) times the inverse of the wavenumber) cannot be less than the mean free path of the particles constituting the medium [11; 18]. Therefore, the wavenumber has a finite upper limit \(k_{\mathrm{max}}\) in order for the perturbation to be physically realisable. Beyond this limit the physical system cannot be described by equations (II.1), (II.2), and (II.4); hence a different approach would be required. 
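The case analysis above can be phrased as a small decision procedure. The sketch below is an illustrative paraphrase of mine with placeholder numbers (not values from the paper); it assumes numpy and checks, wavenumber by wavenumber up to a finite \(k_{\mathrm{max}}\), which of the three criteria applies and whether a configuration with \(0\leq\kappa^{2}<4\Omega^{2}\) comes out stable or unstable.

```python
import numpy as np

def is_stable(kappa2, Omega2, v_Az2, k_max, zeta_star, nk=100000):
    """Stable only if every admissible wavenumber 0 < k <= k_max is stable."""
    k = np.linspace(k_max/nk, k_max, nk)
    zeta = k**2 * v_Az2
    ok = np.empty(k.size, dtype=bool)
    small  = zeta <= zeta_star                                   # Rayleigh regime, (III.11)
    window = (zeta > zeta_star) & (zeta <= np.sqrt(zeta_star))   # Balbus regime, (III.23)
    large  = zeta > np.sqrt(zeta_star)                           # full condition, (III.22)
    ok[small]  = kappa2 >= 0.0
    ok[window] = kappa2 >= 4.0*Omega2
    ok[large]  = (kappa2 - 4.0*Omega2 + zeta[large]) >= 0.0
    return bool(ok.all())

# placeholder numbers: kappa^2 = Omega^2, i.e. Rayleigh-stable but MRI-critical profile
zeta_star, k_max = 1e-6, 1e3
for v_Az2 in (1e-16, 1e-13, 1e-10):
    print("v_Az^2 = %.0e  ->  stable: %s"
          % (v_Az2, is_stable(1.0, 1.0, v_Az2, k_max, zeta_star)))
# the first two cases never leave the Rayleigh regime and are stable;
# the last one reaches the Balbus window and is therefore unstable
```

With \(\kappa^{2}\geq 4\Omega^{2}\) all three branches return stable, and with \(\kappa^{2}<0\) all return unstable, reproducing the columns of Table 1 below.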
Given our explanation above, for certain values of \(v_{\mathrm{Az}}\) we have shown that no values of \(k\) exist such that \(\zeta_{\star}<\zeta\leq\sqrt{\zeta_{\star}}\), and therefore the system is stable. Indeed, for very small values of \(v_{\mathrm{Az}}^{2}\), approaching zero (i.e. for \(v_{\mathrm{Az}}^{2}\leq V_{A}^{2}\) as shown in Figure 1), there do not exist physically possible values of \(k\) such that \(\zeta_{\star}<\zeta\leq\sqrt{\zeta_{\star}}\). Instead we have \(\zeta\leq\zeta_{\star}\) for the viable values of \(k\). Hence, the appropriate criterion for this vanishing magnetic field limit is (III.11), as it would be if we did not introduce the magnetic field at all. Thus, the system is stable, as expected from the purely hydrodynamic analysis in section III.1. This is shown collectively in Table 1. \begin{table} \begin{tabular}{c|c|c|c} \hline & \(\kappa^{2}<0\) & \(0\leq\kappa^{2}<4\Omega^{2}\) & \(\kappa^{2}\geq 4\Omega^{2}\) \\ \hline \(v_{\mathrm{Az}}^{2}\leq V_{A}^{2}\) & unstable & stable & stable \\ \hline \(v_{\mathrm{Az}}^{2}>V_{A}^{2}\) & unstable & unstable & stable \\ \hline \end{tabular} \end{table} Table 1: Stability characterisation for different cases of \(\kappa^{2}\) and magnetic field. As mentioned previously we use \(v_{\mathrm{Az}}^{2}\) to examine the magnetic field strength. In the opposite limit, as the magnetic field obtains larger values (\(v_{\mathrm{Az}}^{2}\rightarrow+\infty\)), there are fewer and fewer wavenumbers that satisfy \(\zeta_{\star}<\zeta\leq\sqrt{\zeta_{\star}}\), i.e. those that fall into the yellow shaded region in Figure 1. This implies that in this limit the system is more stable, which is in agreement with [3; 8]. ## V Discussion We used similar assumptions to [9] to derive the characteristic equation of the Rayleigh shearing instability and of the MRI in sections III.1 and III.2, respectively. In particular, we used the geometric optics approach to derive the characteristic equation in both of the cases mentioned above. We assumed a slow or fast variation of each of the background quantities with respect to the coordinates and time, in the sense of the two-timing method. Based on the assumed functional forms of our quantities, we obtained the linearised system of equations and we derived the characteristic polynomial. Additionally, our derivation did not employ the Boussinesq approximation to derive the characteristic polynomial, but the complete form of the continuity equation was used. This approach allowed us to derive the full characteristic equation, which is a sixth-degree polynomial in \(\omega\) and is presented in Appendix VI. This expression includes the terms related to the acoustic waves and to the small radial distances. By removing these terms from the full characteristic polynomial, we reached the same expression as in [9]. Regarding the stability characterisation mismatch of configurations that have decreasing angular velocity profiles but increasing \(\Omega R^{2}\), we have shown that the MRI criterion is applicable if the magnetic field is above some small but finite value. Below this value such configurations are characterised by the Rayleigh shearing instability criterion, because there do not exist physically possible wavenumbers that are infinitely large. To wrap up, we have found that weak magnetic fields give rise to the MRI; however, extremely weak magnetic fields can be disregarded entirely when one is interested in the stability of a differentially rotating fluid, as described in section IV. 
As it is obvious from section III.2, from a strictly mathematical point of view, by taking the \(k^{2}v_{\rm Az}^{2}\to 0\) in the MRI characteristic equation (III.20) we obtain the Rayleigh characteristic equation (III.10). However, by looking into this in more detail (see Figure 1), we have managed to obtain the limiting case between the MRI criterion and the Rayleigh criterion, i.e. we have found the exact regions of quantities \(v_{Az}\) and \(k\) where each of the stability criteria (Rayleigh or MRI) holds. ## Acknowledgements Both authors acknowledge support from the International Hellenic University Research Scholarship. ## VI Appendix The full characteristic equation (along with a double \(\omega=0\) root) is given by \[\begin{array}{l}\left[\omega^{4}k^{2}-\omega^{2}k_{z}^{2}\left(\kappa^{2}+2k^{2 }v_{\rm Az}^{2}\right)+k_{z}^{4}v_{\rm Az}^{2}\left(\kappa^{2}-4\Omega^{2}+k^{2 }v_{\rm Az}^{2}\right)\right]c_{\rm s}^{2}\\ -\left\{\omega^{6}-\omega^{4}\left[\kappa^{2}+k_{R}^{2}\left(v_{\rm Az}^{2}+v_ {\rm A\phi}^{2}\right)-k_{z}\left(2v_{\rm Az}^{2}+v_{\rm A\phi}^{2}\right) \right]\right.\\ +\frac{\omega^{2}}{\rho_{\rm 0}^{2}}\left[2\frac{\partial P_{\rm 0}}{ \partial R}\frac{\partial P_{\rm 0}}{\partial z}k_{R}k_{z}+k_{R}^{2}\left(\rho_{ \rm 0}^{2}k_{z}^{2}v_{\rm Az}^{2}\left(v_{\rm Az}^{2}+v_{\rm A\phi}^{2}\right) -\left(\frac{\partial P_{\rm 0}}{\partial z}\right)^{2}\right)\right.\\ \left.+k_{z}^{2}\left(\rho_{\rm 0}^{2}\left(v_{\rm Az}^{2}\left(v_{\rm A \phi}^{2}k_{z}^{2}+\kappa^{2}-4\Omega^{2}\right)+v_{\rm Az}^{4}k_{z}^{2}+v_{ \rm A\phi}^{2}\kappa^{2}\right)-\left(\frac{\partial P_{\rm 0}}{\partial R} \right)^{2}\right)\right]\\ +\frac{\omega}{\rho_{\rm 0}}\left[4\Omega k_{z}^{2}v_{\rm Az}v_{\rm A \phi}\left(k_{z}\frac{\partial P_{\rm 0}}{\partial R}-k_{R}\frac{\partial P_{\rm 0}}{ \partial z}\right)\right]\\ +\frac{1}{\rho_{\rm 0}}\left(k_{z}\frac{\partial P_{\rm 0}}{ \partial R}-k_{R}\frac{\partial P_{\rm 0}}{\partial z}\right)^{2}k_{z}^{2}v_{\rm Az }^{2}\right\}\\ -\frac{1}{R\rho_{\rm 0}^{2}}\left\{{\rm i}\,\left(\omega^{2}-k_{z}^{2}v_{ \rm Az}^{2}\right)\left[k_{R}\left(\rho_{\rm 0}^{2}\left(c_{\rm s}^{2}\left( \omega^{2}-v_{\rm Az}^{2}k_{z}^{2}\right)+\omega^{2}v_{\rm Az}^{2}\right) \right.\right.\\ \left.\left.+\left(\frac{\partial P_{\rm 0}}{\partial z}\right)^{2}\right)- \frac{\partial P_{\rm 0}}{\partial R}\frac{\partial P_{\rm 0}}{ \partial z}k_{z}+{\rm i}\,\frac{\partial P_{\rm 0}}{\partial R}\rho_{\rm 0} \omega^{2}\right]\\ +2\rho_{\rm 0}\omega\Omega v_{\rm Az}v_{\rm A\phi}k_{z}\left(4\rho_{ \rm 0}c_{\rm s}^{2}k_{z}^{2}+{\rm i}\,\frac{\partial P_{\rm 0}}{ \partial z}k_{z}-3\rho_{\rm 0}\omega^{2}\right)\\ +\rho_{\rm 0}v_{\rm A\phi}^{2}\left[4\rho_{\rm 0}^{2}\omega\Omega v_{ \rm Az}v_{\rm A\phi}^{3}k_{\rm 0}^{3}+2\frac{\partial P_{\rm 0}}{ \partial R}k_{z}^{2}\left(v_{\rm Az}^{2}k_{z}^{2}+\omega^{2}\right)\right.\\ \left.\left.+{\rm i}\,k_{R}\left(v_{\rm Az}^{2}k_{z}^{2}\left(-\rho_{\rm 0} \omega^{2}+2{\rm i}\,\frac{\partial P_{\rm 0}}{\partial z}k_{z}\right)+2{\rm i}\, \frac{\partial P_{\rm 0}}{\partial z}\omega^{2}k_{z}+\rho_{\rm 0}\omega^{4}\right)\right]\right\}\\ +\frac{v_{\rm A\phi}^{2}}{R^{2}\rho_{\rm 0}^{2}}\left\{\left(v_{\rm Az }^{2}k_{z}^{2}+\omega^{2}\right)\left[\rho_{\rm 0}\omega^{2}\right.\\ \left.-k_{z}\left(\rho_{\rm 0}k_{z}\left(v_{\rm A\phi}^{2}+2c_{\rm s}^{2} \right)+{\rm i}\,\frac{\partial P_{\rm 0}}{\partial z}\right)\right]\right\}=0,\end{array}\] (VI.1) where \(v_{\rm A\phi}^{2}=\frac{B_{\rm 0,\phi}^{2}}{4\pi\rho_{\rm 0}}\).
2304.11154
Transition to the Haldane phase driven by electron-electron correlations
One of the most famous quantum systems with topological properties, the spin $\mathcal{S}=1$ antiferromagnetic Heisenberg chain, is well-known to display exotic $\mathcal{S}=1/2$ edge states. However, this spin model has not been analyzed from the more general perspective of strongly correlated systems varying the electron-electron interaction strength. Here, we report the investigation of the emergence of the Haldane edge in a system of interacting electrons -- the two-orbital Hubbard model -- with increasing repulsion strength $U$ and Hund interaction $J_\mathrm{H}$. We show that interactions not only form the magnetic moments but also form a topologically nontrivial fermionic many-body ground-state with zero-energy edge states. Specifically, upon increasing the strength of the Hubbard repulsion and Hund exchange, we identify a sharp transition point separating topologically trivial and nontrivial ground-states. Surprisingly, such a behaviour appears already at rather small values of the interaction, in a regime where the magnetic moments are barely developed.
A. Jażdżewska, M. Mierzejewski, M. Środa, A. Nocera, G. Alvarez, E. Dagotto, J. Herbrych
2023-04-21T17:53:22Z
http://arxiv.org/abs/2304.11154v2
# Transition to the Haldane phase driven by electron-electron correlations ###### Abstract **One of the most famous quantum systems with topological properties, the spin \(\mathcal{S}=1\) antiferromagnetic Heisenberg chain, is well-known to display exotic \(\mathcal{S}=1/2\) edge states. However, this spin model has not been analyzed from the more general perspective of strongly correlated systems varying the electron-electron interaction strength. Here we report the numerical investigation of the emergence of the Haldane state and its edge modes in a system of interacting electrons - the two-orbital Hubbard model - with increasing repulsion strength \(U\). We show that these interactions not only form the magnetic moments but also form a topologically nontrivial fermionic many-body ground-state with zero-energy edge states that only at very large \(U\) converge to the Haldane chain model. Specifically, upon increasing the strength of the Hubbard repulsion and Hund exchange, we identify a novel sharp transition point separating topologically trivial and nontrivial ground-states. Surprisingly, the latter appears already at rather small values of the interaction \(U\), in a regime where the magnetic moments are _barely developed_, thus generalizing the ideas of Haldane for \(\mathcal{S}=1\) spin Heisenberg models into previously unexplored territory involving delocalized electrons. Furthermore, our results indicate that the topological regime can be described by a liquid of valence bonds down to an interaction strength of the order of the bare kinetic energy.** + Footnote †: preprint: APS/123-QED The precise role of the electron-electron interaction in many condensed matter systems is still under much debate. From the high critical temperature superconductivity of copper- and iron-based compounds to the magnetic properties of idealized spin models, strong correlations appear crucial for our understanding of materials physics. 
In parallel, topology in various compounds has been typically realized and investigated at the level of non-interacting band structures in the presence of spin-orbit coupling. However, the detailed study of the Coulomb correlation effects intertwined with topological physics has barely started and represents one of the grand challenges of present-day theoretical and experimental physics. In particular, in one of the most famous topologically nontrivial systems, i.e., the \(\mathcal{S}=1\) antiferromagnetic (AFM) Heisenberg model \(H_{\text{S}}=J\sum_{\ell}\mathbf{S}_{\ell}\cdot\mathbf{S}_{\ell+1}\) on a one-dimensional (1D) lattice geometry, the spin-spin interactions are necessary to form the zero-energy edge states, which is the hallmark of topological states. In his seminal work [1; 2], Haldane showed that integer \(\mathcal{S}=1,2,\ldots\) and half-integer \(\mathcal{S}=1/2,3/2,\ldots\) spin systems behave fundamentally different: the latter are gapped while the former are gapless. Affleck, Kennedy, Lieb, and Tasaki (AKLT) proved [3] that the ground-state of \(\mathcal{S}=1\) chains when generalized including biquadratic interactions, can be expressed as a valence bond state (VBS) composed of interacting \(\mathcal{S}=1/2\)-like singlets. In this picture, the AKLT state, when defined on an open chain, has two unpaired \(\mathcal{S}=1/2\) spins at the edges of the system forming zero-energy modes. The existence of topologically protected edge states in \(\mathcal{S}=1\) chains have been shown by extensive theoretical [4; 5; 6; 7; 8; 9] and experimental [10; 11; 12; 13; 14; 15] studies. Also, the road to the Haldane states from well-formed \(\mathcal{S}=1/2\) spins has been studied. The AKLT VBS state initiated various investigations of ladder-like \(\mathcal{S}=1/2\) systems where special constraints, such as ferromagnetic rung exchange or unpaired sites at the edges of overall AFM systems, lead to the topological \(\mathcal{S}=1\) Haldane phase. Such systems are not only a playground for theoretical investigations but were also realized using cold atoms in optical lattice setups [13]. In this context, the extended Bose Hubbard model (containing nearest-neighbour interactions) can also host the Haldane phase [16; 17]. However, in real low-dimensional materials [18], the \(\mathcal{S}=1\) moments arise due to the electron-electron correlations in a multi-orbital Hubbard model setup, which is technically challenging. Because the \(\mathcal{S}=1/2\) moments themselves are already an effective description of some fermionic systems, such analysis is usually unjustified for many compounds. But in more refined descriptions, the Coulomb repulsion and Hund's coupling not only cooperate but also can compete [19; 20]. Depending on their specific values, the Mott localization of electrons and the formation of well-developed spins can occur in portions of the phase diagram. As an example, in the largest family of \(\mathcal{S}=1\) chains, the nickel-based compounds [18], the two \(e_{g}\) electrons of Ni\({}^{+2}\) ions are necessary to form the \(\mathcal{S}=1\) spins due to the Hund's rule that maximizes the on-site magnetic moment. For AgVP\({}_{2}\)S\({}_{6}\) or Tl\({}_{2}\)Ru\({}_{2}\)O\({}_{7}\) the latter occurs on the \(t_{2g}\) orbitals of V\({}^{+3}\) or Ru\({}^{+4}\), respectively. 
In all the previously mentioned compounds, the emergence of the topological states is unknown when described from the more fundamental perspective of quantum mechanically fluctuating individual mobile electrons, including electron-electron interaction. To fully understand how the topological state in \(\mathcal{S}=1\) chains emerges from a fermionic description, one has to focus on the effects of electron interaction within the multi-orbital systems in which Hubbard and Hund's couplings are crucial ingredients. Here we demonstrate that the latter are sufficient for the onset of the topologically nontrivial phase. Specifically, upon increasing the strength of the Coulomb repulsion, we identify a clear transition between topologically trivial and non-trivial ground-states. Our analysis unveils the threshold value of the interaction \(U_{\rm c}\) where the Haldane gap opens. Although at \(U_{\rm c}\) we also identify the emergence of zero-energy edge states and finite string order correlations (the signature properties of \(\mathcal{S}=1\) Haldane phase), surprisingly, the magnetic moments are far from being fully developed, and spin excitations still resemble those in the regime of weak \(U\to 0\). Consequently, we here report that the Haldane phase is not limited by having \(\mathcal{S}=1\) moments. Specifically, its generalized existence can be shown to extend to unexpectedly small values of the interaction \(U\sim W\), with \(W\) being the kinetic energy half-bandwidth. **From two-orbital to Heisenberg model.** We employ the zero-temperature density-matrix renormalization group method [4; 21; 22] (DMRG) to solve the 1D two-orbital Hubbard model (2oH) at half electronic filling (\(n=2\), i.e., two particles per site; one particle per orbital) and zero total magnetization \(S^{z}_{\rm tot}=0\), relevant for Ni\({}^{+2}\)-based compounds. The 2oH is given by \[H_{\rm H} = t\sum_{\gamma\gamma^{\prime}\ell\sigma}\left(c^{\dagger}_{\gamma \ell\sigma}c_{\gamma^{\prime}\ell+1\sigma}+{\rm H.c.}\right)+U\sum_{\gamma \ell}n_{\gamma\ell\uparrow}n_{\gamma\ell\downarrow} \tag{1}\] \[+ U^{\prime}\sum_{\ell}n_{0\ell}n_{1\ell}-2J_{\rm H}\sum_{\ell}{ \bf S}_{0\ell}\cdot{\bf S}_{1\ell}\] \[+ J_{\rm H}\sum_{\ell}\left(P^{\dagger}_{0\ell}P_{1\ell}+{\rm H.c. }\right)\,.\] Although challenging, the above model contains the most generic many-body interactions found in multiorbital systems: \(U\) and \(U^{\prime}=U-5J_{\rm H}/2\) represent the intra- and inter-orbital electron-electron Coulomb repulsion, respectively, while \(J_{\rm H}\) accounts for the Hund rule, i.e., ferromagnetic exchange between spins at different orbitals; finally, \(P^{\dagger}_{0\ell}P_{1\ell}\) with \(P^{\dagger}_{\gamma\ell}=c^{\dagger}_{\gamma\uparrow}c^{\dagger}_{\gamma \downarrow\ell}\) represents the doublon-holon exchange. We will focus on degenerate bands with \(t=0.5\,[{\rm eV}]\), and in the following, we will use the half-bandwidth of kinetic energy as a unit, i.e., \(W=2t=1[{\rm eV}]\). While we will mostly consider the \(J_{\rm H}/U=0.25\) case, other values of Hund exchange will also be investigated [23]. Note that the \({\bf S}_{\gamma\ell}\) operators represent the spin-1/2 of electrons and that the above model preserves the SU(2) symmetry provided that \(U^{\prime}=U-5J_{\rm H}/2\) and the doublon-holon exchange term is included. 
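To make the interplay of these couplings concrete, the sketch below (an illustrative exact-diagonalisation toy of mine, not the DMRG code used in the paper) builds the purely local part of Eq. (1) for a single site with two orbitals and checks Hund's rule: for \(J_{\rm H}>0\) the lowest \(n=2\) multiplet is the \(\mathcal{S}=1\) triplet at energy \(U-3J_{\rm H}\), below the inter-orbital singlet at \(U-J_{\rm H}\) and the doublon pair at \(U\pm J_{\rm H}\). Only numpy is assumed; the operators are built with a standard Jordan-Wigner encoding of the four fermionic modes.

```python
import numpy as np

# four fermionic modes per site: (orbital 0, up), (0, down), (1, up), (1, down)
sz, sm, id2 = np.diag([1.0, -1.0]), np.array([[0.0, 0.0], [1.0, 0.0]]), np.eye(2)

def annihilate(mode, nmodes=4):
    """Jordan-Wigner annihilation operator for one of the four modes."""
    ops = [sz]*mode + [sm] + [id2]*(nmodes - mode - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

c = [annihilate(m) for m in range(4)]
n = [ci.T @ ci for ci in c]                       # number operators (all matrices are real)

def spin_ops(up, dn):
    return 0.5*(n[up] - n[dn]), c[up].T @ c[dn]   # (S^z, S^+) for one orbital

U, JH = 4.0, 1.0                                  # example values with J_H/U = 0.25
Up = U - 2.5*JH                                   # U' = U - 5 J_H / 2
Sz0, Sp0 = spin_ops(0, 1)
Sz1, Sp1 = spin_ops(2, 3)
SdotS = Sz0 @ Sz1 + 0.5*(Sp0 @ Sp1.T + Sp0.T @ Sp1)
P0, P1 = c[1] @ c[0], c[3] @ c[2]                 # pair (doublon) operators per orbital

H = (U*(n[0] @ n[1] + n[2] @ n[3])                # intra-orbital repulsion
     + Up*(n[0] + n[1]) @ (n[2] + n[3])           # inter-orbital repulsion
     - 2.0*JH*SdotS                               # Hund exchange
     + JH*(P0.T @ P1 + P1.T @ P0))                # doublon-holon (pair) exchange

two = np.isclose(np.diag(sum(n)), 2.0)            # restrict to the n = 2 sector
print(np.round(np.sort(np.linalg.eigvalsh(H[np.ix_(two, two)])), 6))
# -> [1. 1. 1. 3. 3. 5.], i.e. U-3J_H triplet, U-J_H singlet and doublon, U+J_H doublon
```

The kinetic term of Eq. (1) then couples such local multiplets on neighbouring sites; for \(U\gg W\) this is what projects the model onto the \(\mathcal{S}=1\) Heisenberg chain with \(J=2t^{2}/(U+J_{\rm H})\).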
Figure 1: **Spin excitations.****A** Evolution of the spin excitations, as measured by the dynamical spin structure factor \(S(q,\omega)\), with increasing strength of electron-electron interaction \(U\) for a system of \(L=80\) sites and \(J_{\rm H}/U=0.25\). The frequency scale was renormalized by the effective spin exchange \(J=2t^{2}/(U+J_{\rm H})\). White lines in the left top panel represent the two-spinon continuum of \(U=0\) Hubbard model, while the line in the bottom right panel depicts the magnon dispersion of the \(\mathcal{S}=1\) Heisenberg model. In the open boundary systems considered here, the zero energy Haldane edge states are expected at \(\omega=0\). However, the latter’s large intensity can blur the spectra’s details. To avoid this issue, we have evaluated the spin excitations only in the bulk of the system. **B** Total magnetic moment per site \({\bf T}^{2}=\mathcal{S}(\mathcal{S}+1)\) and charge fluctuations \(\delta n\) vs. interaction strength \(U\). Note \({\bf T}^{2}\) starts at 0.75 for noninteracting \(U=0\) electrons. The standard probe of spin excitations is the momentum \(q\) and energy \(\omega\) resolved dynamical spin structure factor \(S(q,\omega)\), which is the Fourier transform of the non-local Green's functions \(\langle\langle\mathbf{T}_{\ell}\mathbf{T}_{\ell^{\prime}}\rangle\rangle_{\omega}\)[23], with \(\mathbf{T}_{\ell}\) as the total on-site spin \(\mathbf{T}_{\ell}=\sum_{\gamma}\mathbf{S}_{\gamma\ell}\). The calculated \(S(q,\omega)\) is routinely compared to inelastic neutron scattering or resonant inelastic X-ray scattering data, also in the case of \(\mathcal{S}=1\) compounds. With increasing strength of interaction \(U\), the 2oH spectrum (Fig. 1A) develops from a continuum of \(\mathcal{S}=1/2\)-like excitations at \(U=0\)[24; 25] to the well-established magnon-like excitations [26; 27] of the \(\mathcal{S}=1\) Heisenberg model at large \(U\gg W\). Renormalizing the frequency by the effective spin exchange, \(J=2t^{2}/(U+J_{\mathrm{H}})\)[19], yields qualitative agreement between the models at \(U/W\simeq 4\). As expected, for such value of interaction \(U\), the average total magnetic moment is almost maximized \(\mathbf{T}^{2}=\mathcal{S}(\mathcal{S}+1)\simeq 2\) and the charge fluctuations \(\delta n=\langle n^{2}\rangle-\langle n\rangle^{2}\) are vanishing (Fig. 1B). The artificial broadening needed in the dynamical-DMRG method [28; 29] prevents us from extracting accurate values of the magnon gap directly from the spectrum of \(S(q,\omega)\). Instead, the gaps can be obtained from the difference in ground-state energies of two magnetization sectors with different \(S_{\mathrm{tot}}^{z}\) (with \(\Delta S\) being the magnetization difference) at fixed electron density \(n\). It is important to note that working on a finite-size lattice, the \(\Delta S=1\) excitations of 2oH are always gapless when extrapolated to the thermodynamic limit \(L\to\infty\) (Fig. 2A). For \(U\to 0\), the gapless spin excitations manifest the physics of noninteracting fermions, with a inverse-linear dependence on the system size \(\mathcal{O}(1/L)\) of the gap according to Lieb-Schultz-Mattis theorem [30]. In the opposite limit of the \(\mathcal{S}=1\) Heisenberg model at \(U\gg W\), the gapless \(\Delta S=1\) excitation originates in a four-fold degenerate ground-state (two-fold in the \(S_{\mathrm{tot}}^{z}=0\) sector) with two \(\mathcal{S}=1/2\) edge states [27; 31]. 
For a finite \(L\), the latter are split due to their overlap [32], which decays exponentially with increasing system size. See large-\(U\) data in Fig. 2A. Thus, within the open boundary condition system with edge states, the true magnon gap \(\Delta_{\mathrm{S}}\) can be extracted from \(\Delta S=2\) excitations [33; 4; 4]. Still, for \(U\to 0\), the magnons are gapless with \(\mathcal{O}(1/L)\) size dependence of the gap. On the other hand, increasing the strength of \(U\) changes the nature of the scaling. At large \(U\), we observe a saturation to a finite value in the \(L\to\infty\) limit. This saturation is to the well-known Haldane gap \(\Delta_{\mathrm{S}}/J\simeq 0.41\) for \(U\gtrsim 4\), confirming the accuracy of our procedure. Crucially, the finite-size scaling varying \(U\) reveals a novel critical (Hund \(J_{\mathrm{H}}\) dependent [23]) value of the interaction \(U_{\mathrm{c}}=U_{\mathrm{c}}(J_{\mathrm{H}})\) where the gap opens (Fig. 2B). For example, for \(J_{\mathrm{H}}/U=0.25\) the magnons become gapped at \(U_{\mathrm{c}}/W\simeq 0.9\). It is worth noting that the magnon gap \(\Delta_{\mathrm{S}}\) opens at a value of the interaction \(U=U_{\mathrm{c}}\) for which the overall spin excitations are _far_ from the \(\mathcal{S}=1\) Heisenberg model magnon-like spectrum. In fact, for \(U/W\sim 1\), the spin excitations still visually resemble the noninteracting continuum of \(\mathcal{S}=1/2\)-like moments, though with redistributed spectral weights (Fig. 1A). **Zero-energy edge modes.** As mentioned, the exponential in the system size dependence of the \(\Delta S=1\) gaps (Fig. 2A) indicates the presence of edge states. To quantify them, we analyze (Fig. 3) the zero-frequency \(\omega=0\) dynamical spin-spin correlation functions between the edge and the bulk of the system, i.e., the non-local Green's functions \((-1)^{\ell}\langle\langle T_{1}^{z}T_{\ell}^{z}\rangle\rangle_{\omega=0}\), capable of capturing zero-energy modes. Here, the \((-1)^{\ell}\) prefactor removes the AFM staggered pattern. At small \(U\), the spin correlations decay exponentially with distance \(\ell\) (Fig. 3A), as expected for a paramagnetic region. Increasing \(U\) leads to a slower, although still exponential, decay. At \(U\simeq U_{\mathrm{c}}\), the \(\omega=0\) correlations are approximately site-independent. Note that the latter does not originate in any long-range order because the value of spin correlations decays with the system size (see Fig. 3B and the discussion below). Interestingly, a characteristic V-shape of correlations develops above \(U_{\mathrm{c}}\). The latter is the manifestation of the edge states present at the (open) boundaries of the Figure 2: **Spin gaps.****A** Finite-size scaling of \(\Delta S=1\) (left panel) and \(\Delta S=2\) (right panel) spin excitations for \(J_{\mathrm{H}}/U=0.25\) and \(L\in\{10,20,\ldots,100\}\). Line color-code represents the value of the interaction \(U\). **B**\(U\) dependence of the extrapolated magnon gaps in units of \(W\). Top to bottom: \(J_{\mathrm{H}}/U=0.05,0.10,\ldots,0.40\). Inset depicts the same data but renormalized by the effective spin exchange \(J\). The saturation to the Haldane gap \(\Delta_{\mathrm{S}}/J\simeq 0.41\) is clearly visible (red dashed line). system [5]. In the \(\mathcal{S}=1\) Heisenberg model, the zero-energy modes are not localized at a single edge site but decay exponentially with the correlation length \(\xi_{\mathrm{S}}\simeq 6.1\). 
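This exponential decay of the edge modes is also what underlies the fit used in Eq. (2) below. As a hedged illustration of that fitting step (with synthetic numbers standing in for the DMRG correlations, not the paper's data), the sketch below assumes numpy and scipy and recovers a correlation length close to \(\xi_{\mathrm{S}}\simeq 6.1\) from noisy, exponentially decaying input.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(ell, amp, xi):
    return amp*np.exp(-ell/xi)            # |(-1)^l <<T^z_1 T^z_l>>| ~ A exp(-l/xi)

L = 80
ell = np.arange(2, L//2)                  # fit only the first half of the chain
rng = np.random.default_rng(0)
corr = model(ell, 1.0, 6.1)*(1.0 + 0.02*rng.standard_normal(ell.size))   # synthetic input

popt, _ = curve_fit(model, ell, corr, p0=(1.0, 5.0))
print("fitted edge correlation length xi ~ %.2f" % popt[1])              # ~ 6.1
```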
The latter leads to finite (exponentially suppressed) AFM spin correlations up to half \(\ell\sim L/2\) of the system. The increase of \(\langle\langle T_{1}^{z}T_{\ell}^{z}\rangle\rangle_{\omega=0}\) for \(\ell>L/2\) is exactly a consequence of correlated edge states: the edge-edge correlations are finite, while the edge-bulk correlations are vanishing. To assess the development of spin-spin correlations in the 2oH system, especially the correlated edge states, we can monitor the behaviour of the edge-edge and edge-bulk (Fig. 3B) values vs. the interaction \(U\). The former acquires a nonzero value at \(U_{\mathrm{c}}\)[23] and displays small finite-size effects. On the other hand, the finite value of the edge-bulk correlations decreases with system size \(L\) and vanishes in the \(L\to\infty\) limit. Furthermore, we can extract the interaction dependence of the edge correlation length (Fig. 3C) by fitting \(\ell<L/2\) data of the 2oH to \[(-1)^{\ell}\langle\langle T_{1}^{z}T_{\ell}^{z}\rangle\rangle_{\omega=0} \propto\exp(-\ell/\xi_{\mathrm{e}})\,. \tag{2}\] For \(U/W>4\) we reproduce \(\xi_{\mathrm{e}}\simeq\xi_{\mathrm{S}}\simeq 6.1\), consistent with dynamical spin structure factor \(S(q,\omega)\) investigations of the \(\mathcal{S}=1\) Heisenberg model physics. Interestingly, the extracted \(\xi\) diverges at \(U_{\mathrm{c}}\). The latter reflects the site-independent correlations in this region [23]. **Topological phase transition.** The opening at \(U_{\mathrm{c}}\) of a spin gap \(\Delta_{\mathrm{S}}\), the emergence of edge-edge correlations \(\langle\langle T_{1}^{z}T_{L}^{z}\rangle\rangle_{\omega=0}\), and the diverging edge correlation length \(\xi_{\mathrm{e}}\) all consistently indicate the existence of an interaction-induced topological phase transition. The latter is between topologically trivial and nontrivial regions, with the emergence of the Haldane edge states at \(U_{\mathrm{c}}\). The Figure 3: **Edge spin correlations.****A** Distance \(\ell\) dependence of the zero frequency \(\omega=0\) dynamical spin-spin correlations \((-1)^{\ell}\langle\langle T_{1}^{z}T_{\ell}^{z}\rangle\rangle_{\omega=0}\) for various values of interaction \(U\) (denoted by color-code). The results are normalized by the \(\ell=1\) value of the correlation function [23]. **B** Edge-edge \(|\langle\langle T_{1}^{z}T_{L}^{z}\rangle\rangle_{\omega=0}|\) (left panel) and edge-bulk \(|\langle\langle T_{1}^{z}T_{L/2}^{z}\rangle\rangle_{\omega=0}|\) (right panel) dynamical spin correlations vs. interaction strength \(U\). At \(U_{\mathrm{c}}\), we observe the appearance of finite edge-edge correlations, saturating at \(U\gg W\) to the value given by the \(\mathcal{S}=1\) Heisenberg model (red dashed line). **C** Extracted, Eq. (2), edge correlation length \(\xi_{\mathrm{e}}\) vs. interaction strength \(U\). Insets depict examples of spin-spin correlations for two system sizes (\(L=60\) and \(L=80\), together with fitted exponentials \(\propto\exp(-\ell/\xi_{\mathrm{e}})\). All data are calculated at \(J_{\mathrm{H}}/U=0.25\). **D** Interaction \(U/W\) – Hund exchange \(J_{\mathrm{H}}/U\) phase diagram on the basis of inverse edge correlation length \(1/\xi_{\mathrm{e}}\) for \(L=60\). White points depict \(U_{\mathrm{c}}\) obtained from the spin gap \(\Delta_{\mathrm{S}}\) opening, while the white line represents \(J_{\mathrm{H}}=t^{2}/U\). 
topological phases can be identified by investigating the entanglement spectrum of the system [35, 36], i.e., the Schmidt coefficients \(\lambda_{\alpha}\) of left/right (\(|\mathrm{L}\rangle/|\mathrm{R}\rangle\)) decomposed ground-state \(|\mathrm{gs}\rangle=\sum_{\alpha}\lambda_{\alpha}|\mathrm{L}\rangle_{\alpha}| \mathrm{R}\rangle_{\alpha}\), with \(\lambda_{\alpha}^{2}\) being the eigenvalues of the reduced density matrix of the partition. In the topologically nontrivial region, all \(\lambda_{\alpha}\)'s are evenly degenerate. Consequently, the entanglement entropy \(S_{\mathrm{vN}}=-\sum_{\alpha}\lambda_{\alpha}^{2}\ln\lambda_{\alpha}^{2}\) cannot drop below the \(\ln 2\) value for any cut of the system, consistent with the presence of entangled \(\mathcal{S}=1/2\) edge states. The analysis of the 2oH model indicates that this condition is fulfilled for \(U\gtrsim U_{\mathrm{c}}\) (Fig. 4A). Detailed investigation of the largest gap [23] in the entanglement spectrum (Fig. 4B) shows that the trivial region \(U<U_{\mathrm{c}}\) does not have any apparent structure in the \(\lambda_{\alpha}\) eigenvalues. On the other hand, the largest gap decays exponentially with system size for any \(U>U_{\mathrm{c}}\) (though, with slower decay in the proximity of \(U_{\mathrm{c}}\)) and vanishes in the thermodynamic limit \(L\to\infty\). In the context of the \(\mathcal{S}=1\) Heisenberg model, the topological Haldane phase can also be detected by studying the non-local string order parameter [37, 38, 31] \[\mathcal{O}_{s}(\ell)=-\left\langle A_{m}\exp\left(i\theta\sum_{n=m+1}^{m+ \ell-1}A_{n}\right)A_{m+\ell}\right\rangle\,, \tag{3}\] which for \(\theta=\pi\) and \(A_{\ell}=S_{\ell}^{z}\) measures the breaking of the discrete \(Z_{2}\times Z_{2}\) hidden symmetry (i.e., the dihedral group of \(\pi\) rotations). It is important to note that the phase \(\theta=\pi\) was obtained via the valence bond state structure of the AKLT state. For a generic spin-\(\mathcal{S}\) Heisenberg model, the string order phase becomes spin-dependent \(\theta=\theta(\mathcal{S})\), i.e., it has to reflect the properties of a given VBS ground-state [39, 40, 41, 42]. In the case of the 2oH model, for \(U>U_{\mathrm{c}}\), the \(\pi\)-string order \(\mathcal{O}_{s}\) does not decay (Fig. 5), as expected in the \(\mathcal{S}=1\) Haldane phase. However, it is important to note that the total spin operator of 2oH, \(A_{\ell}=T_{\ell}^{z}\), involves not only \(\mathcal{S}=1\) but also \(\mathcal{S}=1/2\) degrees of freedom and that for \(U\simeq U_{\mathrm{c}}\) the magnetic moment deviates strongly from \(\mathcal{S}=1\) (Fig. 1B). Nevertheless, we observe a _finite_ string order all the way down to \(U=U_{\mathrm{c}}\sim W\) showing that this type of order can exist in a fermionic system as well, even without well defined moments. Interestingly, consistent with the topological phase transition at \(U_{\mathrm{c}}\), for \(U<U_{\mathrm{c}}\) the string order vanishes, and the system size dependence changes from weakly increasing with \(L\) (for \(U>U_{\mathrm{c}}\)) to weakly decreasing with \(L\) (for \(U<U_{\mathrm{c}}\)). The latter is consistent with the slow scaling of \(\mathcal{O}_{s}\) for \(\mathcal{S}=1/2\) moments [43]. **Discussion and conclusion.** Investigating systems on finite lattices, especially with many-body interactions incorporated, is always a challenge: are we observing a true phase transition or a very rapid crossover? 
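One of the diagnostics brought to bear on this question is the string order of Eq. (3). For concreteness, a schematic estimator with \(\theta=\pi\) and integer local \(T^{z}\) eigenvalues is sketched below; the snapshot array is a stand-in for however the ground state is represented or sampled, whereas in the actual calculation the correlator is presumably evaluated directly from the DMRG ground state.

```python
import numpy as np

def string_order(snapshots, m, ell):
    """Estimator of O_s(ell), Eq. (3), with theta = pi and A_l = T^z_l.
    `snapshots` has shape (n_samples, L) and holds integer local T^z values;
    it is a stand-in for however the ground state is represented or sampled."""
    s = np.asarray(snapshots, dtype=float)
    # exp(i*pi*sum) = cos(pi*sum) for integer T^z eigenvalues
    string = np.cos(np.pi * s[:, m + 1:m + ell].sum(axis=1))
    return -np.mean(s[:, m] * string * s[:, m + ell])

# Toy check of the hidden-order counting: nonzero T^z strictly alternating
# (+1, -1, ...) and diluted by zeros contributes +1 whenever both endpoints
# are nonzero, regardless of how the zeros are distributed.
toy = np.array([[1, 0, -1, 0, 0, 1, -1, 0, 1, 0, -1, 1, -1, 0, 1, 0]])
print(string_order(toy, m=0, ell=10))      # -> 1.0
```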
Furthermore, interaction-induced transitions in one dimension are rare due to the Mermin-Wagner theorem. The non-local character of the topological phases allows for such phenomenon even in 1D. Our numerical results indicate that the correlated one-dimensional two-orbital Hubbard model has a sharp transition at \(U_{\mathrm{c}}\sim W\) between a topologically trivial region and a generalized fermionic Haldane phase with edge states. Surprisingly, the magnetic moments are not yet fully developed in a vast region of the topological phase (Fig. 1B), and thus the \(\mathcal{S}=1\) Heisenberg model-like description cannot be applied directly. Actually, our analysis shows that the gapped ground-state with finite string order survives down to \(U\sim W\sim\mathcal{O}(t)\). Consequently, the latter indicates that a VBS-like state, similar to the AKLT state, could be formulated [44] even with mobile fermions. It seems true despite the fact that the length scale of spin-spin correlations indicate the spatially extended character of the ground-state, although with moments small in value. Our detailed interaction \(U\) and Hund exchange \(J_{\mathrm{H}}\) investigation (Fig. 3D) indicates that the SU(2) symmetric system undergoes the transition at \(J_{\mathrm{H}}\simeq t^{2}/U\), and consequently a finite \(U\sim W\) is necessary for the onset of the non-topological-topological phase transition in real materials. Figure 4: **Topological phase transition.****A** Interaction \(U\) dependence of the entanglement spectrum \(-2\ln\lambda_{\alpha}\), obtained at \(J_{\mathrm{H}}/U=0.25\) using a \(L=140\) site system partitioned in half. Color code depicts the number of occurrences of a given eigenvalue (number of degeneracies). The values for the \(\mathcal{S}=1\) Heisenberg model are also displayed (red dashed lines). **B** Analysis of the largest gap in the entanglement spectrum for various system sizes \(L=60,80,100,120,140\)[23]. Also, one could expect that for \(J_{\rm H}\gg U\) (i.e., when the system always has well developed on-site triplets formed by electrons), even small interaction will induce the Haldane phase. However, such region of parameter space is unrealistic because for \(J_{\rm H}/U>0.4\) the inter-orbital interaction \(U^{\prime}=U-5J_{\rm H}/2\) becomes attractive \(U^{\prime}<0\). It is therefore evident that setups with coupled \(\mathcal{S}=1/2\) triplets represent, from the electron system perspective, broken spin rotation with \(U^{\prime}\neq U-5J_{\rm H}/2\). Previous analysis of the Haldane phase in such setups indicate its fragility with respect to charge fluctuations [6, 7]. Our results indicate that within a two-orbital setup, the Haldane phase is robust down to rather small values of the interaction \(U\), in a regime where the magnetic moments are barely developed, thus generalizing the ideas of Haldane for \(S=1\) spin Heisenberg models into previously unexplored territory involving delocalized electrons. ###### Acknowledgements. **Funding:** M.M. acknowledges support from the National Science Centre (NCN), Poland, via project 2020/37/B/ST3/00020. M.S. and J.H. acknowledge grant support from the National Science Centre (NCN), Poland, via project 2019/35/B/ST3/01207. A.N. 
acknowledges support from the Max Planck-UBC-UTokyo Center for Quantum Materials and Canada First Research Excellence Fund (CFREF) Quantum Materials and Future Technologies Program of the Stewart Blusson Quantum Matter Institute (SBQMI), and the Natural Sciences and Engineering Research Council of Canada (NSERC). G.A. was partly supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. DOE, Office of Science, Advanced Scientific Computing Research and BES, Division of Materials Sciences and Engineering. E.D. was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. Part of the calculations has been carried out using resources provided by the Wroclaw Centre for Networking and Supercomputing. **Author contributions:** J.H. conceived the study. A.J., M.M., E.D., and J.H. planned the project. A.J., M.S., and J.H. performed the numerical experiments and analyzed the data. A.N. and G.A. developed and tested the simulation codes. M.M., E.D., and J.H. wrote the manuscript. All authors provided comments on the publication. **Competing interests:** The authors declare no competing interests. **Data and materials availability:** The data and the code that support the plots within this paper and other findings of this study are available at [45] and [46]. Figure 5: **String order.** Interaction \(U\) dependence of the string order parameter \(\mathcal{O}_{s}(\ell)\) with phase \(\theta=\pi\), evaluated at distance \(\ell=L/2\) in the bulk (\(m=L/4\)). Upper insets depict \(\mathcal{O}_{s}(\ell)\) vs. distance \(\ell\) for \(U=0.5,1.0,3.0,8.0\) (left to right). The lower inset depicts a zoom to the proximity of the phase transition \(U_{\mathrm{c}}\), with the shaded region depicting the trivial phase. All data are evaluated at \(J_{\rm H}/U=0.25\) using \(L=40,60,80,100\) site systems.
2301.05927
Aspects of Supergroup Gauge Theory
We provide a survey of recent studies of supergroup gauge theory. We first discuss supermatrix model as a zero-dimensional toy model of supergroup gauge theory and its geometric and algebraic characterization. We then focus on four-dimensional Yang--Mills theory with supergroup gauge symmetry and explore its non-perturbative properties, including instanton calculus, Seiberg-Witten geometry, Bethe/gauge correspondence, and its realization with intersecting defects.
Taro Kimura
2023-01-14T14:20:33Z
http://arxiv.org/abs/2301.05927v2
# Aspects of Supergroup Gauge Theory ###### Abstract We provide a survey of recent studies of supergroup gauge theory. We first discuss supermatrix model as a zero-dimensional toy model of supergroup gauge theory and its geometric and algebraic characterization. We then focus on four-dimensional Yang-Mills theory with supergroup gauge symmetry and explore its non-perturbative properties, including instanton calculus, Seiberg-Witten geometry, Bethe/gauge correspondence, and its realization with intersecting defects. ###### Contents * 1 Introduction * 2 Supermathematics * 2.1 Grassmann algebras * 2.2 Supervector space * 2.3 Supermatrix * 2.4 Supergroup * 3 Supermatrix model * 3.1 Hermitian supermatrix model: superunitary ensemble * 3.2 Real-quaternion supermatrix model: super orthosymplectic ensemble * 3.3 Coulomb gas analysis * 3.4 Free field realization * 4 Supergroup gauge theory * 4.1 Supergroup Yang-Mills theory * 4.2 Quiver gauge theory realization * 4.3 String/M-theory perspective Supergroup instanton counting * 5.1 Instanton moduli space * 5.2 ADHM construction of instanton * 5.3 ADHM construction of super instanton * 5.4 Equivariant localization * 6 Non-perturbative aspects of supergroup gauge theory * 6.1 Topological string theory approach * 6.2 Non-perturbative Schwinger-Dyson equation * 6.3 Free field realization * 6.4 Bethe/gauge correspondence * 6.5 Higgsing and intersecting defects ## 1 Introduction One of the important features of quantum mechanics is the quantum statistics: Quantum particles are indistinguishable from one another, and there exist two possible types of particles, _boson_ and _fermion_,1 obeying the Bose-Einstein statistics [14] and the Fermi-Dirac statistics [15, 16]. _Supersymmetry_ is a symmetry between bosons and fermions. In the context of supersymmetric quantum field theory, it is formulated as a space-time symmetry based on the superfield formalism,2 which provides various theoretical frameworks to understand non-perturbative aspects of quantum field theory. Although it is typically considered as a global symmetry, one may consider a local version of supersymmetry, which inevitably involves gravitational degrees of freedom with supersymmetry, a.k.a., supergravity. In addition to the framework to describe the fundamental interactions in the nature, supersymmetry has so far provided various applications based on its mathematical structure: In the context of condensed-matter physics, the method of supersymmetry is applied to discuss disorder systems as an alternative approach to the replica trick [1, 20]; The lattice model involving both hopping and spin interactions, called the \(t\)-\(J\) model, realizes supersymmetry by tuning the hopping parameter \(t\) and the spin coupling constant \(J\)[21, 22]; It has been argued that the critical point of the statistical model with the random field disorder involves emergent supersymmetry (Parisi-Sourlas supersymmetry) together with dimensional reduction [23, 24]; The critical point of two-dimensional tricritical Ising model is described as a minimal model of super-Virasoro (and also the ordinary Virasoro) algebra of central charge \(c=7/10\)[25]; The use of supergroup is also proposed in the context of functional renormalization group to consider a gauge invariant regulator [1]. Footnote 1: There exists another type of particles, called _anyon_, uniquely in \((2+1)\)-dimensional systems. 
Footnote 2: The first appearance of supersymmetry was as an internal symmetry introduced to describe both mesons and baryons in a unified way based on the unitary supergroup proposed by Miyazawa [26, 27]. The notion of symmetry is well described in terms of group theory, and _supergroup_ is an extension of the ordinary groups involving both bosonic and fermionic degrees of freedom. The main object that we discuss in this article is _supergroup gauge theory_, which is a gauge theory with local supergroup gauge symmetry. Regarding the particle statistics, Fierz and Pauli formulated a connection with the particle spins, which is nowadays known as the _spin-statistics theorem_. \(\boldsymbol{\sim}\) **Spin-statistics theorem**[**14, 15**]**** Integer-spin particles are bosons, while odd-half-integer-spin particles are fermions. This theorem is obtained under the following conditions: * Lorentz covariance and relativistic causality * Positive energies and positive norms in the Hilbert space Therefore, for example, this theorem is not applied to the ghost particle appearing in the gauge fixing process, which is a spin-0 fermionic particle. The fundamental degrees of freedom of supergroup gauge theory are spin-1 boson and fermion, and thus it is not compatible with the spin-statistics theorem. In fact, the spectrum of supergroup gauge theory is not bounded and there appear negative energy states. Even in such a situation, one may still apply the method of Lefschetz thimble to evaluate the path integral [16, 17] through the analytic continuation. Furthermore, it has been also known that \(\mathrm{U}(N|M)\) gauge theory is not distinguishable with \(\mathrm{U}(N-M|0)=\mathrm{U}(N-M)\) theory to all orders in perturbation theory [1, 18, 19, 20, 21]. However, instability of vacua implies that the perturbative analysis is not reliable and we need a proper non-perturbative treatment of supergroup gauge theory. In spite of unphysical natures, as discussed in this article, we can discuss various non-perturbative aspects of supergroup gauge theory as a natural extension of the ordinary gauge theory, and it indicates a chance of non-perturbative completion. ### Organization of the article A purpose of this article is to provide a self-contained overview of supergroup gauge theory. For this purpose, we start in Sec. 2 with basic notions of supermathematics, including the introduction of Grassmann algebras, supervector space, superalgebra, supermatrix, and Lie supergroup, which could be skipped by experienced readers on this subject. In Sec. 3, we discuss the supermatrix model, which can be viewed as a zero-dimensional supergroup gauge theory. In fact, the supermatrix model plays a role of a toy model in the study of supergroup gauge theory, which exhibits various similar properties to higher-dimensional theory discussed in the latter part of this article. After introducing the eigenvalue integral form of the partition function, in particular we study the asymptotic behavior based on the Coulomb gas method. We then study the operator formalism that we call the free field realization of the (super)matrix model, and explore the underlying infinite dimensional algebraic structure. In Sec. 4, we introduce the supergroup gauge theory and discuss its basic properties. 
We discuss realizations of supergroup theory from non-supergroup theory through analytic contin uation to unphysical regime, and we show the construction from string/M-theory perspective providing the Seiberg-Witten geometry for supergroup gauge theory. In Sec. 5, we explore instantons in supergroup gauge theory, which plays an important role in the study of non-perturbative aspects. After considering the ADHM construction of instantons, a systematic approach to construct the instanton solutions, for supergroup theory, we study the instanton moduli space and apply the equivariant localization formalism to derive the instanton partition function. We obtain the instanton partition function in the three-fold way, the equivariant index formula, the contour integral formula, and the combinatorial formula, for supergroup gauge theory. In Sec. 6, we explore non-perturbative aspects of supergroup gauge theory based on the instanton partition function obtained in advance. We demonstrate that the instanton partition function is also obtained in the framework of topological string involving both positive and negative branes. For this purpose, we introduce the negative brane analog of the topological vertex that we call the anti-veretx to compute the supergroup partition function. We then study the non-perturbative Schwinger-Dyson equation associated with the instanton partition function, which gives rise to doubly quantum Seiberg-Witten geometry, and discuss the free field realization associated with the underlying algebraic structure. In this case, we show that the instanton partition function obeys a \(q\)-analog of the Virasoro constraint. We also explore a connection with the quantum integrable system, called the Bethe/gauge correspondence, for supergroup gauge theory, and discuss its implications on the quantum integrable system side. We then discuss realizations of supergroup gauge theory in physical setups. We in particular study the codimension-two surface defect operators and show that the supergroup structure emerges from intersecting defects. ### Notations For \(N\in\mathbb{N}\), we define the set \[[N]=\{1,\ldots,N\}\,. \tag{1.1}\] Throughout the article, the vector bundle \(\mathbf{X}\) has the Chern character, \[\operatorname{ch}\mathbf{X}=\sum_{i\in[\operatorname{rk}\mathbf{X}]} \operatorname{e}^{x_{i}}, \tag{1.2}\] where \(\operatorname{rk}\mathbf{X}\) is the rank of the bundle \(\mathbf{X}\). We denote the one-dimensional bundle by \(\mathbf{x}\) with \(\operatorname{ch}\mathbf{x}=\operatorname{e}^{x}\). We denote the alternating sum of anti-symmetrizations of the bundle \(\mathbf{X}\) by \[\wedge_{y}\mathbf{X}=\sum_{i=0}^{\operatorname{rk}\mathbf{X}}(-y)^{i}\wedge^{ i}\mathbf{X}\,. \tag{1.3}\] In particular, we apply the notation, \(\wedge\mathbf{X}=\wedge_{1}\mathbf{X}\). Special functionsWe define the \(q\)-shifted factorial (\(q\)-Pochhammer symbol) \[(z;q)_{n}=\prod_{m\in[n]}(1-zq^{m-1})\,, \tag{1.4}\] and the theta function with the elliptic norm \(q\), \[\theta(z;q)=(z;q)_{\infty}(qz^{-1};q)_{\infty}\,. \tag{1.5}\] ## 2 Supermathematics In this section, we summarize the basic properties of supermathematics used in this article. See, e.g., [1, 2, 3, 4, 5, 6, 7] for general introductions to supermathematics. ### Grassmann algebras The starting point of supermathematics is the Grassmann algebra, which involves anti-commuting variables (Grassmann variables), \[\theta_{i}\theta_{j}=-\theta_{j}\theta_{i}\,. 
\tag{2.1}\] This anti-commutativity immediately leads to nilpotency of the Grassmann variables, \[\theta_{i}^{2}=0\,. \tag{2.2}\] In general, a product of even number of Grassmann variables is commutative (Grassmann even), while a product of odd number of Grassmann variables is anti-commutative (Grassmann odd). We remark that the Grassmann even variable is commutative, but still nilpotent: For example, we have \((\theta_{1}\theta_{2})\theta_{3}=\theta_{3}(\theta_{1}\theta_{2})\), while \((\theta_{1}\theta_{2})^{2}=0\). In the definition of the complex conjugation for the Grassmann variables, we need a modification compared with the ordinary variables: Applying the conjugation operator twice, we have an extra sign factor, \[\bar{\theta}=-\theta\,. \tag{2.3}\] From this definition, the norm of the Grassmann variable becomes self-conjugate,3 Footnote 3: Another convention is also applied in the literature: No sign factor for the double conjugation \(\bar{\theta}=\theta\), while the conjugation of the product is given by \(\overline{\theta_{1}\theta_{2}\cdots\theta_{n}}=\bar{\theta}_{n}\cdots\bar{ \theta}_{2}\theta_{1}\). In this convention, we still have the relation \(\overline{\theta}\theta=\bar{\theta}\theta\). \[\overline{\theta_{1}\theta_{2}}=-\theta_{1}\bar{\theta}_{2}= \bar{\theta}_{2}\theta_{1}\,. \tag{2.5}\] #### Derivative and integral The derivative and the integral for the Grassmann variable are defined as follows, \[\frac{\mathrm{d}\theta_{i}}{\mathrm{d}\theta_{j}}=\delta_{ij}\,, \qquad\int\mathrm{d}\theta_{i}\,\theta_{j}=\delta_{ij}\,,\qquad\int\mathrm{d} \theta_{i}\,1=0\,. \tag{2.6}\] Hence, these operations are essentially equivalent. Since the Grassmann variables are anti-commutative, we have to be careful of the ordering of the derivative and the integral operation. Under the linear transformation \(\theta_{i}\to\theta^{\prime}_{i}=(A\theta)_{i}=\sum_{j}A_{ij}\theta_{j}\), the integral measure behaves as follows \[\int\prod_{i}\mathrm{d}\theta^{\prime}_{i}=\int\prod_{i}\mathrm{ d}\theta_{i}\,(\det A)^{-1}\,, \tag{2.7}\] which is opposite to the measure behavior of the commutative variables involving the determinant factor in the numerator. ### Supervector space A (complex) supervector is an element of the (complex) supervector space, which is a \(\mathbb{Z}_{2}\)-graded vector space of dimension \((N|M)\), \[\Psi=(z_{i},\theta_{j})_{i\in[N],j\in[M]}\in\mathbb{C}^{N|M}= \mathbb{C}^{N}_{0}\oplus\mathbb{C}^{M}_{1}\,, \tag{2.8}\] where the index of \(\mathbb{C}_{\sigma}\), \(\sigma=0,1\) denotes the Grassmann parity. In general, a supervector space consists of even and odd subspaces, \[V=V_{0}\oplus V_{1}\,. \tag{2.9}\] We denote the parity of the element \(x\) by \(|x|=0\) if \(x\in V_{0}\) and \(|x|=1\) if \(x\in V_{1}\). We define the parity flip operator \(\Pi\), such that \((\Pi V)_{0}=V_{1}\) and \((\Pi V)_{1}=V_{0}\). The superdimension of \(V\) is defined as \[\mathrm{sdim}\,V=\sum_{\sigma=0,1}(-1)^{\sigma}\dim V_{\sigma}= \dim V_{0}-\dim V_{1}\,, \tag{2.10}\] which may take a negative integer value. #### Superalgebra Superalgebra is a \(\mathbb{Z}_{2}\)-graded algebra that consists of the even part and the odd part, \[\mathfrak{A}=\mathfrak{A}_{0}\oplus\mathfrak{A}_{1}\,. \tag{2.11}\] We define the supercommutator for \(x,y\in\mathfrak{A}\) as follows, \[[x,y]=xy-(-1)^{|x||y|}yx\,, \tag{2.12}\] which is skew-symmetric, \[[x,y]=-(-1)^{|x||y|}[y,x]\,. 
\tag{2.13}\] Then, we define the Lie superalgebra that obeys the super analog of the Jacobi identity, \[[x,[y,z]]=[[x,y],z]+(-1)^{|x||y|}[y,[x,z]]\,. \tag{2.14}\] ### Supermatrix For supervector spaces \(V\) and \(W\), we define a linear map, which is described using a supermatrix, \[M=\begin{pmatrix}M_{00}&M_{01}\\ M_{10}&M_{11}\end{pmatrix}\in\operatorname{Hom}(V,W)=\bigoplus_{\sigma, \sigma^{\prime}=0,1}\operatorname{Hom}(V_{\sigma^{\prime}},W_{\sigma})\,, \tag{2.15}\] where each block of the matrix is given by \[M_{\sigma\sigma^{\prime}}\in\operatorname{Hom}(V_{\sigma^{\prime}},W_{\sigma })\,. \tag{2.16}\] Since \(M_{01}\) and \(M_{10}\) change the Grassmann parity, they consists of the Grassmann odd variables. In particular, if \(V=W\), we have \(M\in\operatorname{End}(V)=\bigoplus_{\sigma,\sigma^{\prime}=0,1}\operatorname {Hom}(V_{\sigma},V_{\sigma^{\prime}})\), which is invertible if both \(M_{00}\) and \(M_{11}\) are invertible. The set of invertible supermatrices defines a general linear supergroup, \(\operatorname{GL}(V)=\operatorname{GL}(V_{0}|V_{1})\). We remark that the definition of supermatrix is not unique to (2.15). For example, denoting even and odd variables by \(x_{ij}\) and \(\xi_{ij}\), we have the following possibilities for \(V=W=\mathbb{C}^{2|1}\), \[M=\begin{pmatrix}x_{11}&x_{12}&\xi_{13}\\ x_{21}&x_{22}&\xi_{23}\\ \xi_{31}&\xi_{32}&x_{33}\end{pmatrix}\,,\qquad\begin{pmatrix}x_{11}&\xi_{12}& x_{13}\\ \xi_{21}&x_{22}&\xi_{23}\\ x_{31}&\xi_{32}&x_{33}\end{pmatrix}\,,\qquad\begin{pmatrix}x_{11}&\xi_{12}& \xi_{13}\\ \xi_{21}&x_{22}&x_{23}\\ \xi_{31}&x_{32}&x_{33}\end{pmatrix}\,, \tag{2.17}\] which correspond to the convention of supervector, \(\Psi=(z_{1},z_{2},\theta)\), \((z_{1},\theta,z_{2})\), \((\theta,z_{1},z_{2})\). This is related to the ambiguity of the root system in Lie superalgebra. See a related discussion in Sec. 4.3.1. #### Supertrace We define the supertrace operation for the supermatrix,4 Footnote 4: Not to be confused with the symmetrized trace. We will introduce one-parameter deformation of the supertrace in (3.26). \[\operatorname{str}M=\sum_{\sigma=0,1}(-1)^{\sigma}\operatorname{tr}_{\sigma}M \tag{2.18}\] where \(\operatorname{tr}_{\sigma}\) means the trace with respect to the subspace \(V_{\sigma}\): \(\operatorname{tr}_{\sigma}M=\operatorname{tr}M_{\sigma\sigma}\). Compared with the definition of the superdimension (2.10), the supertrace of the identity supermatrix provides the superdimension of the corresponding supervector space, \(\operatorname{str}\mathbbm{1}_{V}=\operatorname{sdim}V\). An important property of the supertrace is the cyclicity, \[\operatorname{str}M_{1}M_{2}=\operatorname{str}M_{2}M_{1}\,, \tag{2.19}\] which is analogous to the cyclic property of the ordinary matrix. #### Supertransposition We define the supertransposition operation for the supermatrix, \[\begin{pmatrix}A&B\\ C&D\end{pmatrix}^{\text{st}}=\begin{pmatrix}A^{\text{t}}&C^{\text{t}}\\ -B^{\text{t}}&D^{\text{t}}\end{pmatrix}\,, \tag{2.20}\] where we denote the ordinary transposition by \(A^{\text{t}}\), etc. This supertransposition shows an analogous property to the ordinary case, \((M_{1}M_{2})^{\text{st}}=M_{2}^{\text{st}}M_{1}^{\text{st}}\), while it is not an involution, \((M^{\text{st}})^{\text{st}}\neq M\) in general. The Hermitian conjugation is defined as follows, \[M^{\dagger}=\overline{M}^{\text{st}}\,, \tag{2.21}\] which is then an involution, \((M^{\dagger})^{\dagger}=M\). 
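It is worth emphasizing that identities such as the cyclicity (2.19) rely on the anticommutativity of the odd blocks and therefore cannot be tested with ordinary floating-point matrices. The following self-contained sketch (a toy implementation written for illustration, not an established package) builds a small Grassmann algebra and checks \(\operatorname{str}(M_{1}M_{2})=\operatorname{str}(M_{2}M_{1})\) for \((1|1)\) supermatrices whose off-diagonal entries are genuinely Grassmann odd:

```python
# Toy Grassmann algebra (sorted generator tuple -> coefficient), written only
# to illustrate the supertrace cyclicity (2.19); this is a sketch, not an
# established package.
class G:
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def scalar(c):
        return G({(): c})

    @staticmethod
    def gen(i):                      # the i-th Grassmann generator theta_i
        return G({(i,): 1})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return G(out)

    def __mul__(self, other):
        out = {}
        for k1, v1 in self.terms.items():
            for k2, v2 in other.terms.items():
                seq, sign = list(k1) + list(k2), 1
                if len(set(seq)) != len(seq):
                    continue                     # theta_i^2 = 0
                for _ in range(len(seq)):        # bubble sort; each swap flips the sign
                    for j in range(len(seq) - 1):
                        if seq[j] > seq[j + 1]:
                            seq[j], seq[j + 1] = seq[j + 1], seq[j]
                            sign = -sign
                out[tuple(seq)] = out.get(tuple(seq), 0) + sign * v1 * v2
        return G(out)


def matmul(A, B):
    n = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(n)), G()) for j in range(n)]
            for i in range(n)]

def supertrace(M, n0):               # first n0 rows/columns form the even block
    out = G()
    for i in range(len(M)):
        out = out + (M[i][i] if i < n0 else G.scalar(-1) * M[i][i])
    return out

# Two (1|1) supermatrices: scalar (even) diagonal entries, Grassmann-odd
# off-diagonal entries built from four independent generators.
th = [G.gen(i) for i in range(4)]
M1 = [[G.scalar(2.0), th[0]], [th[1], G.scalar(3.0)]]
M2 = [[G.scalar(5.0), th[2]], [th[3], G.scalar(7.0)]]

diff = supertrace(matmul(M1, M2), n0=1) + G.scalar(-1) * supertrace(matmul(M2, M1), n0=1)
print(diff.terms)                    # -> {} : str(M1 M2) - str(M2 M1) = 0
```

Replacing the odd generators by ordinary commuting numbers makes the difference generically nonzero, which makes explicit that the cyclicity of the supertrace is tied to the \(\mathbb{Z}_{2}\)-grading.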
#### Superdeterminant We define the superdeterminant, which is also called the Berezinian, \[\operatorname{sdet}\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\frac{\det\bigl{(}A-BD^{-1}C\bigr{)}}{\det D}=\frac{\det A}{\det(D-CA^{-1}B)}\,. \tag{2.22}\] This is analogous to the determinant formula for the block matrix, \[\det\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\det\bigl{(}A-BD^{-1}C\bigr{)}\det D=\det A\det\bigl{(}D-CA^{-1}B\bigr{)}\,. \tag{2.23}\] We have the multiplicative property for the superdeterminant, \(\operatorname{sdet}(M_{1}M_{2})=\operatorname{sdet}M_{1}\operatorname{sdet}M_{2}\). We also remark the identity \[\operatorname{str}\log M=\log\operatorname{sdet}M\,. \tag{2.24}\] Recalling the behavior of the Grassmann variable measure (2.7), the integral measure denoted by \(\mathrm{d}\Psi=\mathrm{d}z_{1}\cdots\mathrm{d}z_{N}\,\mathrm{d}\theta_{1}\cdots\mathrm{d}\theta_{M}\) behaves under the linear transform for the supervector (2.8), \(\Psi^{\prime}=M\Psi\), as follows, \[\mathrm{d}\Psi^{\prime}=\mathrm{d}\Psi\left(\operatorname{sdet}M\right). \tag{2.25}\] ### Supergroup #### Unitary supergroup For the complex supervector space element (2.8), \(\Psi\in\mathbb{C}^{N|M}\), we consider the squared norm as follows, \[|\Psi|^{2}=\sum_{i\in[N]}|z_{i}|^{2}+\sum_{j\in[M]}\bar{\theta}_{j}\theta_{j}=\sum_{i\in[N]}|z_{i}|^{2}-\sum_{j\in[M]}\theta_{j}\bar{\theta}_{j}=\text{str}(\Psi\Psi^{\dagger})\,. \tag{2.26}\] Then, we define the unitary supergroup \(\mathrm{U}(N|M)\) as the isometry group of the supervector space \(\mathbb{C}^{N|M}\), such that \[|\Psi|^{2}=|U\Psi|^{2}\,,\qquad U^{\dagger}=U^{-1}\,. \tag{2.27}\] We remark that \(|U\Psi|^{2}=\text{str}(U\Psi\Psi^{\dagger}U^{\dagger})=\text{str}(\Psi\Psi^{\dagger})\). Hence, we have \[\mathrm{U}(N|M)=\{U\in\text{GL}(\mathbb{C}^{N|M})\mid U^{\dagger}=U^{-1}\}\,. \tag{2.28}\] The even part of the unitary supergroup \(\mathrm{U}(N|M)\) is given by \(\mathrm{U}(N)\times\mathrm{U}(M)\), and the odd part is given by the \(N\times\overline{M}\) and \(\overline{N}\times M\) representations of the even part, of total dimension \(2NM\). #### Orthosymplectic supergroup We consider the real supervector space \(\mathbb{R}^{N|2M}\) and the squared norm of \(\Psi=(x_{1},\ldots,x_{N},\theta_{1},\ldots,\theta_{2M})\in\mathbb{R}^{N|2M}\) as follows, \[|\Psi|^{2}=\sum_{i\in[N]}x_{i}^{2}+2\sum_{j\in[M]}\theta_{2j-1}\theta_{2j}=\Psi^{\dagger}\Omega\Psi=\text{str}\left(\Omega\Psi\Psi^{\dagger}\right)\,, \tag{2.29}\] where we define the skew-symmetric form for the Grassmann odd sector, \[\Omega=\begin{pmatrix}\mathbbm{1}_{N}&0\\ 0&\mathbbm{1}_{M}\otimes\mathbbm{j}\end{pmatrix}\,,\qquad\mathbbm{j}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\,. \tag{2.30}\] We remark that the standard bilinear form does not work for the Grassmann variables due to their nilpotency, \(\sum_{j\in[M]}\theta_{j}\theta_{j}=0\). The isometry group of the real supervector space \(\mathbb{R}^{N|2M}\) is given by \[|\Psi|^{2}=|U\Psi|^{2}\,,\qquad U^{\text{st}}\Omega U=\Omega\,. \tag{2.31}\] We see that \(|U\Psi|^{2}=\text{str}(\Omega U\Psi\Psi^{\dagger}U^{\text{st}})=\text{str}(\Omega\Psi\Psi^{\dagger})\). The orthosymplectic supergroup for a general field \(\mathbb{K}\) is given as follows, \[\text{OSp}(N|2M,\mathbb{K})=\{U\in\text{GL}(\mathbb{K}^{N|2M})\mid U^{\text{st}}\Omega U=\Omega\}\,.
\tag{2.32}\] Precisely speaking, the orthosymplectic group realized as the isometry group of the real supervector space is a compact supergroup, which is in fact a subgroup of the unitary supergroup, denoted by \(\text{UOSp}(N|M)=\text{OSp}(N|2M,\mathbb{C})\cap\mathrm{U}(N|2M)\). This is analogous to the notation of the compact symplectic group \(\mathrm{USp}(2n)=\mathrm{Sp}(2n,\mathbb{C})\cap\mathrm{U}(2n)\), which is also understood as the quaternionic unitary group \(\mathrm{U}(n,\mathbb{H})\). In this article, we use the notation, \(\mathrm{OSp}(N|M)=\mathrm{UOSp}(N|2M)\) and \(\mathrm{Sp}(n)=\mathrm{USp}(2n)\) unless it causes confusion.5 The even part of \(\mathrm{OSp}(N|M)\) is thus given by \(\mathrm{O}(N)\times\mathrm{Sp}(M)\), and the odd part is given by \(N\times 2M\) representation of the even part with the dimension \(2NM\). Footnote 5: In this notation, we have the isomorphisms at the level of Lie algebra, \(\mathfrak{sp}_{1}=\mathfrak{su}_{2}\), \(\mathfrak{sp}_{2}=\mathfrak{so}_{5}\). Another situation that leads to the orthosymplectic supergroup is a subsector of the supervector space \(\mathbb{C}^{2N|2M}\), \[\Psi=\begin{pmatrix}z_{1}&z_{2}\\ -\bar{z}_{2}&\bar{z}_{1}\\ z_{3}&z_{4}\\ \vdots&\vdots\\ -\bar{z}_{2N}&\bar{z}_{2N-1}\\ \theta_{1}&\bar{\theta}_{1}\\ \vdots&\vdots\\ \theta_{M}&\bar{\theta}_{M}\end{pmatrix}\,,\qquad\Psi^{\dagger}=\begin{pmatrix} \bar{z}_{1}&-z_{2}&\bar{z}_{3}&\cdots&-z_{2N}&\bar{\theta}_{1}&\cdots&\bar{ \theta}_{M}\\ \bar{z}_{2}&z_{1}&\bar{z}_{4}&\cdots&z_{2N-1}&-\theta_{1}&\cdots&-\theta_{M} \end{pmatrix} \tag{2.33}\] where the two-by-two block in the bosonic part is identified with a quaternion \[x_{i}=\begin{pmatrix}z_{2i-1}&z_{2i}\\ -\bar{z}_{2i}&\bar{z}_{2i-1}\end{pmatrix}\in\mathbb{H}\,. \tag{2.34}\] Hence, this is an element in \(\mathbb{H}^{N}\oplus\mathbb{R}^{2M}\subset\mathbb{C}^{2N|2M}\). We denote the norm of \(x\in\mathbb{H}\) by \(|x|\), such that \(\bar{x}x=|x|^{2}\mathbb{1}\) where \(\mathbb{1}\) is the quaternion identity. The norm of this supervector is given by \[|\Psi|^{2}=\mathrm{tr}_{\mathbb{H}}\,\Psi^{\dagger}\Psi=2\sum_{i\in[N]}|x_{i} |^{2}+2\sum_{j\in[M]}\bar{\theta}_{j}\theta_{j}=\mathrm{str}(\Psi\Psi^{ \dagger})\,, \tag{2.35}\] where we denote the trace operation over the quaternion by \(\mathrm{tr}_{\mathbb{H}}\), i.e., for the quaternion units, \(\mathbb{1}\), \(\mathbb{i}\), \(\mathbb{j}\), \(\mathbb{k}\), we have \(\mathrm{tr}_{\mathbb{H}}\,\mathbb{1}=2\), \(\mathrm{tr}_{\mathbb{H}}\,\mathbb{i}=\mathrm{tr}_{\mathbb{H}}\,\mathbb{j}= \mathrm{tr}_{\mathbb{H}}\,\mathbb{k}=0\). The isometry group of this supervector is thus given by the orthosymplectic supergroup \(\mathrm{U}(2N|2M)\supset\mathrm{OSp}(M|N)\supset\mathrm{Sp}(N)\times\mathrm{ O}(M)\). #### Analytic continuation We discuss how to obtain the supergroups considered above through the analytic continuation. First of all, the dimension of each classical group is given by6 Footnote 6: The orthogonal group is further classified by the parity of the rank, \(\dim\mathrm{O}(2N)=N(2N-1)\), \(\dim\mathrm{O}(2N+1)=N(2N+1)\). We remark that \(\dim\mathrm{O}(2N+1)=\dim\mathrm{Sp}(N)\). This dimension formula is also understood from the construction of the adjoint representations for these classical groups (5.63). \[\dim\mathrm{U}(N)=N^{2}\,,\qquad\dim\mathrm{O}(N)=\frac{N(N-1)}{2}\,,\qquad \dim\mathrm{Sp}(N)=N(2N+1)\,, \tag{2.36}\] from which we observe the following relations, \[\dim\mathrm{U}(-N)=\dim\mathrm{U}(N)\,,\qquad\dim\mathrm{O}(-2N)=\dim\mathrm{ Sp}(N)\,. 
\tag{2.37}\] Such a relation is discussed also at the level of their irreducible representations [13]. From this point of view, these supergroups are obtained from the ordinary classical groups through the analytic continuation, \[\mathrm{U}(N+M)\ \stackrel{{ M\epsilon\to M}}{{ \longleftrightarrow}}\ \mathrm{U}(N|M)\,, \tag{2.38a}\] \[\mathrm{O}(N+2M)\ \stackrel{{ M\epsilon\mapsto M}}{{ \longleftrightarrow}}\ \mathrm{OSp}(N|M)\,,\qquad\mathrm{Sp}(N+M)\ \stackrel{{ M\epsilon\to M}}{{ \longleftrightarrow}}\ \mathrm{OSp}(2M|N)\,. \tag{2.38b}\] In particular, we have the relations between the superdimension and the dimension of the classical groups, \[\mathrm{sdim}\,\mathrm{U}(N|M) =(N^{2}+M^{2})-2NM=(N-M)^{2}=\dim\mathrm{U}(N+M)\Big{|}_{M\to-M}\,, \tag{2.39a}\] \[\mathrm{sdim}\,\mathrm{OSp}(N|M) =\left(\frac{N(N-1)}{2}+M(2M+1)\right)-2NM=2\left(\frac{N}{2}-M \right)^{2}-\left(\frac{N}{2}-M\right)\] \[=\dim\mathrm{O}(N+2M)\Big{|}_{M\to-M}=\dim\mathrm{Sp}\left(\frac {N}{2}+M\right)\Bigg{|}_{N\to-N}\,. \tag{2.39b}\] This interpretation based on the analytic continuation seems also reasonable from the relation between the ordinary trace and the supertrace. ## 3 Supermatrix model The matrix model, just given by a matrix integral, is thought of as a zero-dimensional reduction of quantum field theory. It sounds rather a simple toy model, but it has been playing a role to understand various non-perturbative aspects of quantum field theory. In this section, we explore the supermatrix model as a toy model that exhibits supergroup symmetry. See also [11, 12] for details on this subject. ### Hermitian supermatrix model: superunitary ensemble Let \(H\) be an \((N|M)\)-dimensional Hermitian supermatrix, \(H^{\dagger}=H\). We define the partition function of Hermitian supermatrix model of rank \((N|M)\) as follows [1, 13], \[Z_{N|M}=\int\mathrm{d}H\,\mathrm{e}^{-\frac{1}{g}\operatorname{str}V(H)}\,, \tag{3.1}\] with a polynomial potential function of degree \((d+1)\), \[V(x)=\sum_{n=1}^{d+1}\frac{t_{n}}{n}\,x^{n}\,. \tag{3.2}\] This integral is invariant under superunitary transformation, \(H\to UHU^{\dagger}\), \(U\in\mathrm{U}(N|M)\): The potential part is invariant due to the cyclic property of the supertrace (2.19), \(\operatorname{str}V(UHU^{\dagger})=\operatorname{str}V(H)\), and the measure part is due to the unitarity of the superdeterminant \(|\operatorname{det}U|=1\). From this point of view, this model is also called the _superunitary ensemble_ as a supermatrix generalization of the unitary ensemble of the ordinary random matrices. See, e.g., [14, 15, 16]. In fact, this symmetry is interpreted as a remnant of the supergroup gauge symmetry. Similarly to the ordinary Hermitian matrix, we can diagonalize the Hermitian supermatrix via the superunitary transform,7 Footnote 7: Eigenvalues of Hermitian supermatrices are ordinary commutative numbers (Recall that the diagonal blocks of supermatrices are ordinary matrices). The matrix model involving Grassmann odd eigenvalues is known as the super-eigenvalue model [1]. \[H=UZU^{\dagger}\,,\qquad U\in\operatorname{U}(N|M)\,, \tag{3.3}\] where we denote the diagonal supermatrix by \[Z=\begin{pmatrix}X&0\\ 0&Y\end{pmatrix}\,,\qquad X=\operatorname{diag}(x_{1},\ldots,x_{N})\,,\ Y= \operatorname{diag}(y_{1},\ldots,y_{M})\,. 
\tag{3.4}\] We remark that the choice of \(U\) is not unique in the process of diagonalization: (i) The eigenvalues can be permuted \((x_{i},y_{j})_{i\in[N],j\in[M]}\to(x_{\sigma(i)},y_{\sigma^{\prime}(j)})_{i\in[ N],j\in[M]}\) where \(\sigma\in\mathfrak{S}_{N}\), \(\sigma^{\prime}\in\mathfrak{S}_{M}\). (ii) Since the diagonal superunitary matrix commutes with the diagonal supermatrix, \(U_{\operatorname{diag}}ZU^{\dagger}_{\operatorname{diag}}=Z\), where \(U_{\operatorname{diag}}\in\operatorname{U}(1)^{N+M}\), the decomposition is invariant under redefinition \(U\to UU_{\operatorname{diag}}\). Taking the derivative of the relation above, we have the following expression, \[U^{\dagger}\operatorname{d}H\,U=\operatorname{d}Z+[U^{\dagger}\operatorname{d }U\,,Z]\,. \tag{3.5}\] We remark that \(U^{\dagger}\operatorname{d}U\) is the Maurer-Cartan one-form with respect to the supergroup \(\operatorname{U}(N|M)\), and the Hermitian supermatrix takes a value in the Lie superalgebra \(\operatorname{Lie}\operatorname{U}(N|M)\) (up to the imaginary unit). Then, the right hand side of this equation can be also written using the covariant derivative in the adjoint representation, \(D=\operatorname{d}+U^{\dagger}\operatorname{d}U\). Denoting \(I_{0}=\{1,\ldots,N\}\) and \(I_{1}=\{N+1,\ldots,N+M\}\), and \(\operatorname{i}(i)=i\) for \(i\in I_{0}\) and \(\operatorname{i}(i)=i-N\) for \(i\in I_{1}\), each component of (3.5) is given by \[(U^{\dagger}\operatorname{d}H\,U)_{ii}=\begin{cases} \operatorname{d}x_{i(i)}&(i\in I_{0})\\ \operatorname{d}y_{i(i)}&(i\in I_{1})\end{cases} \tag{3.6a}\] \[(U^{\dagger}\operatorname{d}H\,U)_{ij}=(U^{\dagger}\operatorname{d }U)_{ij}\times\begin{cases}x_{i(j)}-x_{i(i)}&(i,j\in I_{0})\\ y_{i(j)}-y_{i(i)}&(i,j\in I_{1})\\ y_{i(j)}-x_{i(i)}&(i\in I_{0},j\in I_{1})\\ x_{i(j)}-y_{i(i)}&(i\in I_{1},j\in I_{0})\end{cases} \tag{3.6b}\] Hence, the supermatrix measure is given in terms of the eigenvalues and the eigenvector components as follows, \[\operatorname{d}H=|\Delta_{N|M}(X|Y)|^{2}\operatorname{d}X \operatorname{d}Y\operatorname{d}U \tag{3.7}\] where we define \[\operatorname{d}X=\prod_{i\in[N]}\operatorname{d}\!x_{i}\,\quad \operatorname{d}Y=\prod_{i\in[M]}\operatorname{d}\!y_{i}\,\quad \operatorname{d}U=\prod_{1\leq i<j\leq N+M}\operatorname{Re}(U^{\dagger} \operatorname{d}U)_{ij}\operatorname{Im}(U^{\dagger}\operatorname{d}U)_{ij}\,. \tag{3.8}\] In particular, \(\mathrm{d}U\) is the Haar measure on the unitary supergroup \(\mathrm{U}(N|M)\). The Jacobian part is given by the Cauchy determinant \[\Delta_{N|M}(X|Y)=\prod_{i<j}^{N}(x_{j}-x_{i})\prod_{i<j}^{M}(y_{j}-y_{i})\prod_ {i\in[N],j\in[M]}(x_{i}-y_{j})^{-1}\,, \tag{3.9}\] which has a determinantal formula for \(N\geq M\) as \[\Delta_{N|M}(X|Y)=\det_{\begin{subarray}{c}i\in[N],\,j\in[M]\\ k\in[N-M]\end{subarray}}\biggl{(}x_{i}^{k-1}\quad\frac{1}{x_{i}-y_{j}}\biggr{)}\;. \tag{3.10}\] In the limit \(M\to 0\), this is reduced to the Vandermonde determinant, \[\Delta_{N|0}(X)=\Delta_{N}(X)=\prod_{i<j}^{N}(x_{j}-x_{i})\,. \tag{3.11}\] Based on these diagonalization process, we obtain the eigenvalue integral form of the partition function, \[Z_{N|M}=\frac{\mathrm{vol}(\mathrm{U}(N|M))}{N!M!}\int\mathrm{d}\mu(X)\, \mathrm{d}\mu(Y)\,|\Delta_{N|M}(X|Y)|^{2}\,, \tag{3.12}\] where the integral measure is given by \[\mathrm{d}\mu(X)=\prod_{i\in[N]}\frac{\mathrm{d}x_{i}}{2\pi}\mathrm{e}^{- \frac{1}{g}V(x_{i})}\,,\qquad\mathrm{d}\mu(Y)=\prod_{i\in[M]}\frac{\mathrm{d}y _{i}}{2\pi}\mathrm{e}^{+\frac{1}{g}V(y_{i})}\,. 
\tag{3.13}\] Constant factors are understood as follows: The factorial terms are the volumes of \(\mathfrak{S}_{N}\) and \(\mathfrak{S}_{M}\), which are the Weyl group of \(\mathrm{U}(N)\) and \(\mathrm{U}(M)\), and \((2\pi)^{N+M}=\mathrm{vol}(\mathrm{U}(1)^{N+M})\), the volume of the maximal Cartan torus of \(\mathrm{U}(N|M)\). We have several remarks on this formula: 1. The signatures of the potential term for the \(x\)-variables and the \(y\)-variables are opposite. Hence, we should consider a complex contour to obtain a converging integral. For example, for the Gaussian case \(V(x)=\frac{1}{2}x^{2}\), the \(x\)-integral is taken along the real axis, \(-\infty\to+\infty\), while the \(y\)-integral should be taken along the imaginary axis, \(-\mathrm{i}\infty\to+\mathrm{i}\infty\), or vice versa. 2. The denominator contribution in the Cauchy determinant is singular in the limit \(x_{i}\to y_{j}\). If there is an intersection of the \(x\)-contour and the \(y\)-contour, such a singularity should be regularized using the principal value prescription. 3. The volume of the unitary supergroup becomes zero if \(NM\neq 0\) due to the Grassmann variable integral (Berezin's theorem. See e.g., [13]). Hence, we consider the partition function formally normalized by this zero-volume, \(\mathcal{Z}_{N|M}:=Z_{N|M}/\,\mathrm{vol}(\mathrm{U}(N|M))\).8 Footnote 8: It would be possible that the zero-volume factor cancels the diverging behavior of the eigenvalue integral to give a finite value in the end. We do not discuss details of this issue any further in this article. Taking into account these points, we obtain the eigenvalue integral form of the supermatrix partition function. #### \(\boldsymbol{\sim}\) Hermitian supermatrix model The eigenvalue integral form of the regularized partition function of the Hermitian supermatrix model is given as follows, \[\mathcal{Z}_{N|M}=\frac{1}{N!M!}\mathchoice{{\vbox{\hbox{$-$}} \kern-13.0pt}}{{\vbox{\hbox{$-$}}\kern-13.0pt}}{{\vbox{\hbox{$ -$}}\kern-13.0pt}}{{\vbox{\hbox{$-$}}\kern-13.0pt}}\! \int_{\gamma_{x}^{N}\times\gamma_{y}^{M}}\mathrm{d}\mu(X)\,\mathrm{d}\mu(Y)\,| \Delta_{N|M}(X|Y)|^{2}\,, \tag{3.14}\] where we denote the principal value integral by \(\mathchoice{{\vbox{\hbox{$-$}}\kern-13.0pt}}{{\vbox{\hbox{$ -$}}\kern-13.0pt}}{{\vbox{\hbox{$-$}}\kern-13.0pt}}{{\vbox{ \hbox{$-$}}\kern-13.0pt}}\!\int\mathrm{d}x\,f(x)\), and \(\gamma_{x}\) and \(\gamma_{y}\) are the integration contours on the complex plane that provide a converging integral. ### Real-quaternion supermatrix model: super orthosymplectic ensemble We consider an \((N|M)\)-dimensional real-quaternion self-conjugate supermatrix, \[H=\begin{pmatrix}A&B\\ C&D\end{pmatrix} \tag{3.15}\] where \(A\) is an \(N\)-dimensional real symmetric matrix, and \(D\) is an \(M\)-dimensional quaternion self-dual matrix (realized as a \(2M\)-dimensional Hermitian matrix). \(B\) is a real Grassmann matrix of size \(N\times 2M\) and, in order that \(H\) is self-conjugate \(H^{\dagger}=H\), we have \(C=-B^{\ddagger}\). In this case, similarly to the Hermitian case (3.3), we can diagonalize the supermatrix via the orthosymplectic transformation, \[H=UZU^{\dagger}\,,\qquad U\in\mathrm{OSp}(N|M)\,. \tag{3.16}\] The diagonal supermatrix \(Z\) is given by \[Z=\begin{pmatrix}X&0\\ 0&Y\otimes\mathbbm{1}\end{pmatrix}\,, \tag{3.17}\] where we denote the identity element in quaternion by \(\mathbbm{1}\). Namely, if we use the two-by-two matrix realization of quaternion, it is given by the identity matrix of rank two. 
Hence, the supertrace is in this case given by \[\operatorname{str}Z=\sum_{i\in[N]}x_{i}-2\sum_{j\in[M]}y_{j}\,. \tag{3.18}\] We should be careful of the multiplicity in the quaternionic sector, which corresponds to the Kramers doublet. We will define a deformed supertrace operation respecting this multiplicity (see (3.26)). Applying the same argument to the unitary case, we have the relation (3.5) where the corresponding components are given as in (3.6). In this case, we should be careful of that the \(y\)-variables are doubly degenerated. Hence, we should count twice for the mixing terms \(y_{j}-x_{i}\) and four (\(=2\times 2\)) times for the quaternion part \(y_{j}-y_{i}\). Therefore, the real-quaternion supermatrix measure is given as follows, \[\mathrm{d}H=|\Delta^{(1|4)}_{N|M}(X|Y)|\,\mathrm{d}X\,\mathrm{d}Y\, \mathrm{d}U \tag{3.19}\] where we define \[\mathrm{d}X=\prod_{i\in[N]}\mathrm{d}x_{i}\,\qquad\mathrm{d}Y=\prod_{i \in[M]}\mathrm{d}y_{i}\, \tag{3.20}\] and the corresponding Haar measure \(\mathrm{d}U\) of supergroup \(\mathrm{OSp}(N|M)\). We use the following notation for the Jacobian part, \[\Delta^{(\beta|\beta^{\prime})}_{N|M}(X|Y)=\frac{\Delta_{N}(X)^{ \beta}\Delta_{M}(Y)^{\beta^{\prime}}}{\prod_{i\in[N],j\in[M]}(x_{i}-y_{j})^{2}} \tag{3.21}\] In this notation, we have \(\Delta_{N|M}(X|Y)^{2}=\Delta^{(2|2)}_{N|M}(X|Y)\). Collecting all the contributions, we obtain the real-quaternion supermatrix. \(\boldsymbol{\frown}\) **Real-quaternion supermatrix model** -- The eigenvalue integral form of the regularized partition function of real-quaternion supermatrix model is given as follows, \[\mathcal{Z}_{N|M}:=\frac{Z_{N|M}}{\mathrm{vol}(\mathrm{OSp}(N|M)) }=\frac{1}{N!M!}\!\!\int_{\gamma_{x}^{N}\times\gamma_{y}^{M}}\mathrm{d}\mu(X) \,\mathrm{d}\mu(Y)\,|\Delta^{(1|4)}_{N|M}(X|Y)|\,. \tag{3.22}\] We study several aspects of the supermatrix models in the following part. ### Coulomb gas analysis In the context of matrix model, we are in particular interested in the asymptotic behavior in the large size limit of the matrix model. We study such an asymptotic limit of the supermatrix model based on the Coulomb gas analysis. See also [1] for details in this part. We start with the partition function of the \(\beta\)-deformed supermatrix model, \[\mathcal{Z}_{N|M}=\frac{1}{N!M!}\int\mathrm{d}\mu(X)\,\mathrm{d} \mu(Y)\,|\Delta^{(\beta|\beta^{\prime})}_{N|M}(X|Y)|\,,\qquad\beta\beta^{ \prime}=4\,, \tag{3.23}\] where the integral measure is given by \[\mathrm{d}\mu(X)=\prod_{i\in[N]}\frac{\mathrm{d}x_{i}}{2\pi}\mathrm{e}^{-\frac {b}{g}V(x_{i})}\,,\qquad\mathrm{d}\mu(Y)=\prod_{i\in[M]}\frac{\mathrm{d}y_{i} }{2\pi}\mathrm{e}^{+\frac{b^{-1}}{g}V(y_{i})}\,,\qquad b=\sqrt{\frac{\beta}{2}}\,. \tag{3.24}\] In this notation, the measure factor is given by \[\Delta^{(\beta|\beta^{\prime})}_{N|M}(X|Y)=\frac{\Delta_{N}(X)^{ 2b^{2}}\Delta_{M}(Y)^{2b^{-2}}}{\prod_{i\in[N],j\in[M]}(x_{i}-y_{j})^{2}}\,. \tag{3.25}\] For the \(\beta\)-deformed case, it is convenient to define the \(b\)-deformed supertrace, \[\operatorname{str}_{b}\begin{pmatrix}A&B\\ C&D\end{pmatrix}=b\operatorname{tr}_{0}A-b^{-1}\operatorname{tr}_{1}D\,. \tag{3.26}\] With this operation, the potential factor is concisely written as \[\operatorname{str}_{b}V(Z)=b\sum_{i\in[N]}V(x_{i})-b^{-1}\sum_{j\in[M]}V(y_{j})\,, \tag{3.27}\] where the diagonal supermatrix \(Z\) is given as (3.4). Such a deformation is discussed in the context of symmetric polynomial associated with the Lie superalgebra root system. 
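For concreteness, the measure factor (3.25) and the \(b\)-deformed potential term (3.27) can be transcribed directly as functions of a given eigenvalue configuration. The sketch below is only such a bookkeeping transcription; the sample configuration is arbitrary, and the contour and convergence issues discussed above are not addressed here.

```python
import numpy as np

# Direct transcription of the measure factor (3.25) and the b-deformed
# potential term (3.27) for a given eigenvalue configuration (sketch only).
def vandermonde(z):
    z = np.asarray(z, dtype=complex)
    return np.prod([z[j] - z[i] for i in range(len(z)) for j in range(i + 1, len(z))])

def measure_factor(x, y, b):
    """Delta^{(beta|beta')}_{N|M}(X|Y) of Eq. (3.25), with beta = 2 b^2."""
    cross = np.prod([(xi - yj) ** 2 for xi in x for yj in y])
    return vandermonde(x) ** (2 * b ** 2) * vandermonde(y) ** (2 / b ** 2) / cross

def str_b_potential(x, y, b, V):
    """str_b V(Z) = b sum_i V(x_i) - b^{-1} sum_j V(y_j), Eq. (3.27)."""
    return b * sum(V(xi) for xi in x) - sum(V(yj) for yj in y) / b

# Example: Gaussian potential, x's on the real axis and y's on the imaginary axis.
V = lambda z: 0.5 * z ** 2
x, y = [0.3, -1.1, 0.8], [0.4j, -0.9j]
print(measure_factor(x, y, b=1.0), str_b_potential(x, y, b=1.0, V=V))
```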
See, e.g., [11, 12, 13, 14], and also discussions in Secs. 6.4 and 6.5. #### 3.3.1 Saddle point equation To study the asymptotic behavior of the supermatrix model, we rewrite the partition function in the following form, \[\mathcal{Z}_{N|M}\approx\int\mathrm{d}X\,\mathrm{d}Y\,\mathrm{e}^{-\frac{1}{ g^{2}}S(X|Y)} \tag{3.28}\] where the integral measure is given by \[\mathrm{d}X=\prod_{i\in[N]}\mathrm{d}x_{i}\,\qquad\mathrm{d}Y=\prod_{j\in[M] }\mathrm{d}y_{j}\, \tag{3.29}\] and we define the effective action \[S(X|Y) =bg\sum_{i\in[N]}V(x_{i})-b^{-1}g\sum_{j\in[M]}V(y_{j})\] \[\quad-2b^{2}g^{2}\sum_{i<j}^{N}\log(x_{i}-x_{j})-2b^{-2}g^{2}\sum _{i<j}^{M}\log(y_{i}-y_{j})+2g^{2}\sum_{i\in[N],j\in[M]}\log(x_{i}-y_{j})\,. \tag{3.30}\] Then, introducing two parameters, \[\mathfrak{t}_{0}=bgN\,,\qquad\mathfrak{t}_{1}=b^{-1}gM\,, \tag{3.31}\] we consider the following asymptotic limit ('t Hooft limit) of the supermatrix model, \[g\ \longrightarrow\ 0\,,\qquad N,M\ \longrightarrow\ \infty\,,\qquad \mathfrak{t}_{0},\mathfrak{t}_{1}=O(1)\,. \tag{3.32}\] In the 't Hooft limit, the eigenvalue integral localizes on the configuration that obeys the following saddle point equations, \[0 =\frac{\partial S}{\partial x_{i}}=+bgV^{\prime}(x_{i})-2b^{2}g^{2} \sum_{j\in[N]\setminus\{i\}}\frac{1}{x_{i}-x_{j}}+2g^{2}\sum_{j\in[M]}\frac{1}{ x_{i}-y_{j}}\,, \tag{3.33a}\] \[0 =\frac{\partial S}{\partial y_{i}}=-b^{-1}gV^{\prime}(y_{i})-2b^{ -2}g^{2}\sum_{j\in[M]\setminus\{i\}}\frac{1}{y_{i}-y_{j}}+2g^{2}\sum_{j\in[N]} \frac{1}{y_{i}-x_{j}}\,. \tag{3.33b}\] We introduce the auxiliary functions, \[W_{0}(x) =bg\sum_{i\in[N]}\frac{1}{x-x_{i}}=bg\,\mathrm{tr}_{0}\left( \frac{1}{x-X}\right)\,, \tag{3.34a}\] \[W_{1}(x) =b^{-1}g\sum_{i\in[M]}\frac{1}{x-y_{i}}=b^{-1}g\,\mathrm{tr}_{1} \left(\frac{1}{x-Y}\right)\,,\] (3.34b) \[P_{0}(x) =bg\sum_{i\in[N]}\frac{V^{\prime}(x)-V^{\prime}(x_{i})}{x-x_{i}} =bg\,\mathrm{tr}_{0}\left(\frac{V^{\prime}(x)-V^{\prime}(X)}{x-X}\right)\,,\] (3.34c) \[P_{1}(x) =b^{-1}g\sum_{i\in[M]}\frac{V^{\prime}(x)-V^{\prime}(y_{i})}{x-y _{i}}=b^{-1}g\,\mathrm{tr}_{1}\left(\frac{V^{\prime}(x)-V^{\prime}(Y)}{x-Y} \right)\,. \tag{3.34d}\] The auxiliary functions \(W_{\sigma}(x)\) are in particular called the resolvents that involve a pole singularity at \(x\in\{x_{i}\}_{i\in[N]}\) and \(x\in\{y_{j}\}_{j\in[M]}\), respectively. Although the other functions \(P_{\sigma}(x)\) look a similar form, they are polynomial functions having no pole singularity. Recalling that the potential function is given as (3.2), the asymptotic behaviors of these auxiliary functions are given by \[W_{\sigma}(x)\ \xrightarrow{x\to\infty}\ \ \tfrac{\mathsf{t}_{\sigma}}{x}\,, \qquad P_{\sigma}(x)\ \xrightarrow{x\to\infty}\ \ \mathsf{t}_{\sigma}t_{d+1}x^{d-1}\,. \tag{3.35}\] Using these auxiliary functions, we may rewrite the saddle point equation (3.33) as follows, \[0 =-P_{0}(x)+V^{\prime}(x)W_{0}(x)-W_{0}(x)^{2}-bgW_{0}^{\prime}(x) +2g^{2}\sum_{i\in[N],j\in[M]}\frac{1}{(x-x_{i})(x_{i}-y_{j})}\,, \tag{3.36a}\] \[0 =+P_{1}(x)-V^{\prime}(x)W_{1}(x)-W_{1}(x)^{2}-b^{-1}gW_{1}^{\prime }(x)+2g^{2}\sum_{i\in[N],j\in[M]}\frac{1}{(x-y_{j})(y_{j}-x_{i})}\,. 
\tag{3.36b}\] Moreover, we define the supertrace analog of the auxiliary functions, \[\mathsf{W}(x) =W_{0}(x)-W_{1}(x)=g\,\mathrm{str}_{0}\left(\frac{1}{x-Z}\right)\,, \tag{3.37a}\] \[\mathsf{P}(x) =P_{0}(x)-P_{1}(x)=g\,\mathrm{str}_{b}\left(\frac{V^{\prime}(x)-V ^{\prime}(Z)}{x-Z}\right)\,, \tag{3.37b}\] which show the following asymptotic behavior, \[\mathsf{W}(x)\ \xrightarrow{x\to\infty}\ \ \tfrac{\mathsf{t}_{0}-\mathsf{t}_{1}}{x}\,, \qquad\mathsf{P}(x)\ \xrightarrow{x\to\infty}\ \ (\mathsf{t}_{0}-\mathsf{t}_{1})t_{d+1}x^{d-1}\,. \tag{3.38}\] The total resolvent \(\mathsf{W}(x)\), that we call the superresolvent, has poles with the residue \(+bg\) for \(x\in\{x_{i}\}\) and \(-b^{-1}g\) for \(x\in\{y_{i}\}\), while \(\mathsf{P}(x)\) is again a polynomial function. Then, combining the two equations, we obtain \[0=\mathsf{P}(x)-V^{\prime}(x)\mathsf{W}(x)+\mathsf{W}(x)^{2}+g(bW^{\prime}_{0}(x) +b^{-1}W^{\prime}_{1}(x))\,. \tag{3.39}\] We study this equation in detail in Sec. 3.3.3. We remark that the two saddle point equations (3.33) are written as a single equation using the superresolvent, \[0=V^{\prime}(x)-2\mathsf{W}_{\rm reg}(x)\,,\qquad x\in\{x_{i},y_{j}\}\,, \tag{3.40}\] where we define the regularized one by \[\mathsf{W}_{\rm reg}(x)=\begin{cases}bg\sum_{j\in[N]\setminus\{i\}}\frac{1}{ x-x_{j}}-W_{1}(x)&(x=x_{i})\\ W_{0}(x)-b^{-1}g\sum_{j\in[M]\setminus\{i\}}\frac{1}{x-y_{j}}-W_{1}(x)&(x=y_{i} )\end{cases} \tag{3.41}\] We remark that the expression of the saddle point equation (3.40) is identical to the standard matrix model by replacing the superresolvent with the original one. However, its analytic property should be different since the superresolvent may contain both positive and negative residues, while the original resolvent only involves positive one. #### 3.3.2 Functional method Let us discuss an alternative approach based on the functional method. We define the density functions \(\rho_{\sigma}(x)\) for \(\sigma=0,1\). Then, we rewrite the effective action (3.30) using these density functions, \[\begin{split} S[\rho_{0,1}]&=\sum_{\sigma=0,1}\left[ (-1)^{\sigma}\mathsf{t}_{\sigma}\int\mathrm{d}x\,\rho_{\sigma}(x)V(x)-2\mathsf{ t}_{\sigma}^{2}\int_{x<y}\mathrm{d}x\,\mathrm{d}y\,\rho_{\sigma}(x)\rho_{ \sigma}(y)\log|x-y|\right]\\ &\qquad+2\mathsf{t}_{0}\mathsf{t}_{1}\!\!\!\int\mathrm{d}x\, \mathrm{d}y\,\rho_{0}(x)\rho_{1}(y)\log|x-y|+\sum_{\sigma=0,1}\sum_{i=1}^{m_{ \sigma}}\mathsf{t}_{\sigma}\ell_{\sigma,i}\left(\epsilon_{\sigma,i}-\int_{ \mathcal{C}_{\sigma,i}}\mathrm{d}x\,\rho_{\sigma}(x)\right)\,,\end{split} \tag{3.42}\] where we consider the \(m_{\sigma}\)-cut solution for each sector, \(\sigma=0,1\): We added the Lagrange multiplier that imposes the condition \[\int_{\mathcal{C}_{\sigma,i}}\mathrm{d}x\,\rho_{\sigma}(x)=\epsilon_{\sigma, i}\,, \tag{3.43}\] where we denote the cut by \[\mathcal{C}_{\sigma}=\bigsqcup_{i\in[m_{\sigma}]}\mathcal{C}_{\sigma,i}\,, \qquad\sigma=0,1\,, \tag{3.44}\] and the corresponding filling fraction \((\epsilon_{\sigma,i})_{\sigma=0,1,i\in[m_{\sigma}]}\) with \(\sum_{i\in[m_{\sigma}]}\epsilon_{\sigma,i}=1\). 
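As a quick sanity check of the definitions (3.37) and (3.38), the superresolvent and its \(1/x\) tail with coefficient \(\mathfrak{t}_{0}-\mathfrak{t}_{1}\) can be evaluated on any trial eigenvalue configuration; the configuration used below is arbitrary and not an actual saddle point.

```python
import numpy as np

# Sanity check of the superresolvent (3.37a) and its 1/x tail (3.38) on a toy
# eigenvalue configuration; the configuration is arbitrary, not a saddle point.
def super_resolvent(x, xs, ys, b, g):
    W0 = b * g * np.sum(1.0 / (x - xs))          # Eq. (3.34a)
    W1 = (1.0 / b) * g * np.sum(1.0 / (x - ys))  # Eq. (3.34b)
    return W0 - W1

g, b = 0.01, 1.2
xs = np.linspace(-1.0, 1.0, 40)                  # stand-in for the x-eigenvalues
ys = 1j * np.linspace(-0.5, 0.5, 25)             # stand-in for the y-eigenvalues
t0, t1 = b * g * len(xs), g * len(ys) / b        # Eq. (3.31)

x_far = 50.0
print(super_resolvent(x_far, xs, ys, b, g))      # approaches (t0 - t1)/x_far
print((t0 - t1) / x_far)
```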
Taking the functional derivative of the effective action, we obtain \[x\in\mathcal{C}_{\sigma,i}\ :\quad\mathfrak{t}_{\sigma}^{-1}\frac{ \delta S[\rho_{0,1}]}{\delta\rho_{\sigma}(x)} =(-1)^{\sigma}\left(V(x)-2\mathchoice{{\vbox{\hbox{$-$}} \kern-13.499949pt}}{{\vbox{\hbox{$-$}}\kern-12.149815pt}}{{ \vbox{\hbox{$-$}}\kern-9.899849pt}}{{\vbox{\hbox{$-$}} \kern-8.999863pt}}\!\int\mathrm{d}y\,\bar{\rho}(y)\log|x-y|\right)-\ell_{ \sigma,i}\] \[=:(-1)^{\sigma}V_{\mathrm{eff}}(x)-\ell_{\sigma,i}\,, \tag{3.45}\] where we define the effective potential \(V_{\mathrm{eff}}(x)\) and the effective density functions, \[\bar{\rho}(x)=\mathfrak{t}_{0}\rho_{0}(x)-\mathfrak{t}_{1}\rho_{1}(x)\,. \tag{3.46}\] The functional version of the saddle point equation (3.40) is obtained by the derivative of the effective potential, \[\frac{\mathrm{d}V_{\mathrm{eff}}(x)}{\mathrm{d}x}=V^{\prime}(x)-2\mathchoice{{ \vbox{\hbox{$-$}}\kern-13.499949pt}}{{\vbox{\hbox{$-$}} \kern-12.149815pt}}{{\vbox{\hbox{$-$}}\kern-9.899849pt}}{{ \vbox{\hbox{$-$}}\kern-8.999863pt}}\!\int\mathrm{d}y\,\frac{ \bar{\rho}(y)}{x-y}\,. \tag{3.47}\] Writing the integral form of the superresolvent, \[\mathsf{W}(x)=\int\mathrm{d}y\,\frac{\bar{\rho}(y)}{x-y}\,, \tag{3.48}\] the regularized one is given by the principal value integral \[\mathsf{W}_{\mathrm{reg}}(x) =\mathchoice{{\vbox{\hbox{$-$}}\kern-13.499949pt}}{{ \vbox{\hbox{$-$}}\kern-12.149815pt}}{{\vbox{\hbox{$-$}} \kern-9.8999849pt}}{{\vbox{\hbox{$-$}} \kern-8.999863pt}}\!\int\mathrm{d}y\,\frac{\bar{\rho}(y)}{x-y}\] \[=\operatorname{Re}\mathsf{W}(x\pm 10):=\lim_{\epsilon\to 0^{+}} \frac{\mathsf{W}(x+\mathrm{i}\epsilon)+\mathsf{W}(x-\mathrm{i}\epsilon)}{2}\,. \tag{3.49}\] Hence, we obtain the functional version of (3.40) as follows, \[\frac{\mathrm{d}V_{\mathrm{eff}}(x)}{\mathrm{d}x}=0\ \ \Longrightarrow\ \ V^{\prime}(x)-2 \operatorname{Re}\mathsf{W}(x\pm 10)=0\,,\qquad x\in\mathcal{C}_{\sigma}\,. \tag{3.50}\] #### 3.3.3 Spectral curve and quantization We have seen that the saddle point equation of the supermatrix model gives rise to the relation among the resolvents as shown in (3.39). Further taking the limit \(g\to 0\), the relation (3.39) is written in a closed form of the superresolvent, \[0=\mathsf{W}(x)^{2}-V^{\prime}(x)\mathsf{W}(x)+\mathsf{P}(x)\,, \tag{3.51}\] which defines the spectral curve of the supermatrix model: \(\boldsymbol{\frown}\) **Spectral curve of supermatrix model** -- Given the potential function \(V^{\prime}(x)\) and the polynomial function \(\mathsf{P}(x)\), the spectral curve of the supermatrix is given as follows, \[\Sigma=\{(x,y)\in\mathbb{C}\times\mathbb{C}\mid\mathcal{H}(x,y)=0\}\,,\qquad \mathcal{H}(x,y)=y^{2}-V^{\prime}(x)y+\mathsf{P}(x)\,. \tag{3.52}\] This is formally identical to the spectral curve of the standard matrix model (see, e.g. [5]), but as we mentioned before, we should be careful of its analytic property. While the spectral curve is based on the closed equation for the superresolvent, the saddle point equation (3.39) itself is not written as a closed form. In order to discuss an alternative form, we rewrite the resolvents as follows, \[W_{0}(x)=bg\frac{\mathrm{d}}{\mathrm{d}x}\log\psi_{0}(x)\,,\qquad W _{1}(x)=b^{-1}g\frac{\mathrm{d}}{\mathrm{d}x}\log\psi_{1}(x) \tag{3.53}\] where we define the wave functions (characteristic polynomials), \[\psi_{0}(x)=\prod_{i\in[N]}(x-x_{i})\,,\qquad\psi_{1}(x)=\prod_{i \in[M]}(x-y_{i})\,. 
\tag{3.54}\] Then, the superresolvent is given by the logarithmic derivative \[\mathsf{W}(x)=g\frac{\mathrm{d}}{\mathrm{d}x}\log\frac{\psi_{0}(x)^{b}}{\psi_{1}(x)^{b^{-1}}}\,. \tag{3.55}\] Together with these wave functions, we can recast the saddle point equation (3.39) in the following form, \[\left[D_{x}^{(b)2}-V^{\prime}(x)D_{x}^{(b)}+\mathsf{P}(x)\right]\psi_{0}\cdot\psi_{1}=0\,, \tag{3.56}\] where we define the \(b\)-deformed Hirota derivative, \[D_{x}^{(b)}\psi\cdot\phi=g\left(b\frac{\partial}{\partial x}-b^{-1}\frac{\partial}{\partial x^{\prime}}\right)\psi(x)\phi(x^{\prime})\Big{|}_{x=x^{\prime}}=gb\psi^{\prime}(x)\phi(x)-gb^{-1}\psi(x)\phi^{\prime}(x)\,. \tag{3.57}\] The standard Hirota derivative corresponds to the case \(b=1\) (\(\beta=2\)): \(D_{x}=D_{x}^{(1)}\). In fact, this bilinear equation is interpreted as a quantization of the spectral curve (3.52):

**Quantum curve for \(\beta\)-supermatrix model** -- Based on the two-variable function \(\mathcal{H}(x,y)\) that defines the spectral curve (3.52), we have the quantum curve for the \(\beta\)-supermatrix model involving the Hirota derivative, \[\mathcal{H}(\hat{x},\hat{y})\psi_{0}\cdot\psi_{1}=0\,,\qquad\hat{x}=x\,,\;\hat{y}=gD_{x}^{(b)}\,. \tag{3.58}\]

Recalling the definition of the Hirota derivative, the canonical commutation relation is given by \[[\hat{y},\hat{x}]=\begin{cases}+gb&\text{(for $\psi_{0}$)}\\ -gb^{-1}&\text{(for $\psi_{1}$)}\end{cases} \tag{3.59}\] From this point of view, we call the bilinear equation (3.56) the quantum curve for the supermatrix model. We now have both positive and negative quantum parameters (Planck constants) for the quantization of the supermatrix spectral curve, corresponding to the fact that the superresolvent has both positive and negative residues. Moreover, we now relate the supermatrix parameters to the so-called \(\Omega\)-background parameters (see Sec. 5), \[(\epsilon_{1},\epsilon_{2})=\left(g,-\frac{g}{b^{2}}\right)\,. \tag{3.60}\] In this notation, we have \(b^{2}=-\epsilon_{1}/\epsilon_{2}\), and thus the condition \(b=1\) is equivalent to \(\epsilon_{1}+\epsilon_{2}=0\). Then, the \(b\)-Hirota derivative is rewritten using \(\epsilon_{1,2}\) as follows, \[D_{x}^{(b)}\psi\cdot\phi=b\epsilon_{1}\psi^{\prime}(x)\phi(x)+b\epsilon_{2}\psi(x)\phi^{\prime}(x)=:bD_{x}^{(\epsilon_{1},\epsilon_{2})}\psi\cdot\phi\,, \tag{3.61}\] where \(D_{x}^{(\epsilon_{1},\epsilon_{2})}\) is the \((\epsilon_{1},\epsilon_{2})\)-deformed Hirota derivative defined in [25]. This operator is reduced to the ordinary derivative in the limit \(\epsilon_{1}\to 0\) or \(\epsilon_{2}\to 0\), which is called the Nekrasov-Shatashvili (NS) limit, \[D_{x}^{(\epsilon_{1},\epsilon_{2})}\psi\cdot\phi\ \longrightarrow\ \begin{cases}\epsilon_{1}\psi^{\prime}(x)\phi(x)&(\epsilon_{2}\to 0)\\ \epsilon_{2}\psi(x)\phi^{\prime}(x)&(\epsilon_{1}\to 0)\end{cases} \tag{3.62}\] See also Sec. 6.4 for a related discussion.

#### 3.3.4 Gaussian model

We consider the simplest example with the quadratic potential \(V(x)=\frac{1}{2}x^{2}\), which is called the Gaussian matrix model. In this case, the quantum curve is given by \[\left[D_{x}^{(b)2}-xD_{x}^{(b)}+(\mathsf{t}_{0}-\mathsf{t}_{1})\right]\psi_{0}\cdot\psi_{1}=0\,. \tag{3.63}\] Hence, in particular for the unitary case \(b=1\) (\(\beta=2\)), we have \[\left[D_{\xi}^{2}-\xi D_{\xi}+(N-M)\right]\psi_{0}\cdot\psi_{1}=0\,,\qquad x=\sqrt{g}\xi\,.
\tag{3.64}\] This bilinear equation is known to be (a part of) the bilinear equations for the \(\tau\)-functions in the symmetric form of the Painleve IV equation [25]. In this case, the polynomial solution is given by the generalized Hermite polynomial, which is given through specialization of Schur functions. In the NS limit, as mentioned above, the Hirota derivative is reduced to the ordinary derivative, where this bilinear equation is accordingly reduced to the differential equation for the Hermite polynomial. ### Free field realization We turn to discuss algebraic aspects of supermatrix model. In particular, we show that the supermatrix partition function has a realization in terms of the chiral boson fields (free field realization). #### 3.4.1 Operator formalism In order to discuss the free field formalism, we consider the ordinary \(\beta\)-deformed matrix model of rank \(N\), \[\mathcal{Z}_{N}=\int\mathrm{d}X\,\mathrm{e}^{-\frac{b}{g}\,\mathrm{ tr}\,V(X)}\Delta_{N}(X)^{2b^{2}}\,. \tag{3.65}\] In this case, the matrix moment (the power-sum average of the eigenvalues) is given as follows, \[\langle\mathrm{tr}\,X^{n}\rangle=\frac{1}{\mathcal{Z}_{N}}\int \mathrm{d}X\,(\mathrm{tr}\,X^{n})\,\mathrm{e}^{-\frac{b}{g}\,\mathrm{tr}\,V(X) }\Delta_{N}(X)^{2b^{2}}=-\frac{b}{g}n\frac{\partial}{\partial t_{n}}\log \mathcal{Z}_{N}\,. \tag{3.66}\] Hence, the derivative with the coupling constant \(\{t_{n}\}\) plays a similar role to the multiplication of the matrix power \(\{\mathrm{tr}\,X^{n}\}\). This is because the potential factor \(\mathrm{e}^{-\frac{b}{g}\,\mathrm{tr}\,V(X)}\) plays a role of the plane wave factor \(\mathrm{e}^{\mathrm{i}px}\) in the Fourier transform (FT). In order to have this correspondence for all \(n\in\mathbb{N}\), we consider the potential with infinitely many coupling constants, \(V(x)=\sum_{n=1}^{\infty}\frac{t_{n}}{n}x^{n}\). For the supermatrix model defined by \[\mathcal{Z}_{N|M}=\frac{1}{N!M!}\int\mathrm{d}X\,\mathrm{d}Y\, \mathrm{e}^{-\frac{1}{g}\,\mathrm{str}_{b}\,V(Z)}\frac{\Delta_{N}(X)^{2b^{2}} \Delta_{M}(Y)^{2b^{-2}}}{\prod_{i\in[N],j\in[M]}(x_{i}-y_{j})^{2}}\,, \tag{3.67}\] the same argument is applied for \(\langle\mathrm{str}_{b}\,X^{n}\rangle\leftrightarrow-\frac{1}{g}n\frac{ \partial}{\partial t_{n}}\). Then, we define oscillator operators, \[a_{n}=\sqrt{2}gn\frac{\partial}{\partial t_{n}}\left(\stackrel{{ \mathrm{FT}}}{{\longleftrightarrow}}\ -\frac{1}{\sqrt{2}b}\,\mathrm{tr}\,X^{n}\right)\,,\qquad a_{-n}=\frac{1}{ \sqrt{2}g}t_{n}\,,\qquad(n>0) \tag{3.68}\] which obeys the commutation relation of the Heisenberg algebra, \[[a_{n},a_{m}]=n\delta_{n+m,0}\,. \tag{3.69}\] In addition, we also add the zero modes \((a_{0},\bar{a}_{0})\) with the commutation relation, \[[a_{n},\bar{a}_{0}]=\delta_{n,0}\,, \tag{3.70}\] where we interpret \(a_{0}\stackrel{{\mathrm{FT}}}{{\longleftrightarrow}}-(\mathrm{ tr}\,X^{0})/\sqrt{2}b=-N/\sqrt{2}b\). In this formalism, there exist infinitely many operators \(\{a_{n}\}_{n\in\mathbb{Z}}\). They are independent operators if the matrix size is taken to be infinite \(N\to\infty\). For example, if \(N=1\), we have relations among these operators, \(\mathrm{tr}\,X^{n}=(\mathrm{tr}\,X)^{n}\). We define the vacuum state, which is annihilated by the positive modes, \[a_{n}|0\rangle=0\,,\qquad n\geq 0\,. \tag{3.71}\] In this sense, the positive modes are the annihilation operators, and the negative modes are the creation operators. 
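As a quick consistency check of the oscillator realization (3.68), the following minimal sketch (Python with SymPy; the truncation order `nmax` and the generic test function `f` are illustrative choices, not part of the formalism above) verifies the Heisenberg commutation relation (3.69) for the nonzero modes by acting on a function of finitely many couplings.

```python
import sympy as sp

g = sp.Symbol('g', positive=True)
nmax = 3                                   # truncation of the coupling set (illustrative)
t = sp.symbols(f't1:{nmax + 1}')           # couplings t_1, ..., t_nmax
f = sp.Function('f')(*t)                   # generic test function of the couplings

def a(n):
    """Oscillator a_n realized on functions of the couplings, cf. (3.68)."""
    if n > 0:    # annihilation: a_n = sqrt(2) g n d/dt_n
        return lambda expr: sp.sqrt(2) * g * n * sp.diff(expr, t[n - 1])
    if n < 0:    # creation: a_{-n} = t_n / (sqrt(2) g), acting by multiplication
        return lambda expr: t[-n - 1] / (sp.sqrt(2) * g) * expr
    raise ValueError("zero mode a_0 is treated separately")

# [a_n, a_{-m}] f should equal n * delta_{n,m} * f, cf. (3.69)
for n in range(1, nmax + 1):
    for m in range(1, nmax + 1):
        comm = sp.simplify(a(n)(a(-m)(f)) - a(-m)(a(n)(f)))
        expected = n * f if n == m else 0
        assert sp.simplify(comm - expected) == 0
        print(f"[a_{n}, a_{-m}] = {comm}")
```

The zero modes \((a_{0},\bar{a}_{0})\) of (3.70) are not covered by this check and would require a separate treatment.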
We also define the charged vacuum using the zero mode, \[|\alpha\rangle=\mathrm{e}^{\alpha\bar{a}_{0}}|0\rangle\,,\qquad a_{0}|\alpha\rangle=\alpha|\alpha\rangle\,. \tag{3.72}\] Based on the operators defined above, we define the operators, called the _chiral boson_ and the _U(1) current_, as follows, \[\phi(x)=-\sum_{n\in\mathbb{Z}_{\neq 0}}\frac{a_{n}}{n}x^{-n}+a_{0}\log x+\bar{a}_{0}\quad\implies\quad-\frac{1}{\sqrt{2}b}\left[\operatorname{tr}\log(x-X)-\frac{b}{g}V(x)\right]\,, \tag{3.73a}\] \[J(x)=\partial\phi(x)=\sum_{n\in\mathbb{Z}}\frac{a_{n}}{x^{n+1}}\quad\implies\quad-\frac{1}{\sqrt{2}b}\left[\operatorname{tr}\left(\frac{1}{x-X}\right)-\frac{b}{g}V^{\prime}(x)\right]\,. \tag{3.73b}\] Furthermore, we also define the energy-momentum tensor \[T(x)=\frac{1}{2}:\!JJ\!:(x)+\rho\,\partial J(x)=:\sum_{n\in\mathbb{Z}}\frac{L_{n}}{x^{n+2}}\,,\qquad\rho=\sqrt{2}(b-b^{-1})\,, \tag{3.74}\] where we denote the normal ordering symbol by \(:\,\cdot\,:\), in which the annihilation operators are placed to the right and the creation operators to the left. Recalling the operator product expansion (OPE) of the current operators, \[J(x)J(x^{\prime})=\frac{1}{(x-x^{\prime})^{2}}+\text{regular}\,, \tag{3.75}\] the normal ordering is given as follows, \[:\!JJ\!:(x)=\lim_{x^{\prime}\to x}\left[J(x)J(x^{\prime})-\frac{1}{(x-x^{\prime})^{2}}\right]\,. \tag{3.76}\] It turns out that the generators \(\{L_{n}\}_{n\in\mathbb{Z}}\) written in terms of the oscillators, \[L_{n}=\frac{1}{2}\sum_{m\in\mathbb{Z}}:\!a_{m}a_{n-m}\!:-\rho(n+1)a_{n}\,, \tag{3.77}\] obey the algebraic relation of the Virasoro algebra, \[[L_{n},L_{m}]=(n-m)L_{n+m}+\frac{c}{12}n(n^{2}-1)\delta_{n+m,0}\,, \tag{3.78}\] with the central charge \[c=1-6(b-b^{-1})^{2}=\begin{cases}1&(\beta=2)\\ -2&(\beta=1,4)\end{cases}\,. \tag{3.79}\] Hence, the energy-momentum tensor plays the role of the generating current of the Virasoro algebra, and the construction of the energy-momentum tensor as a bilinear form of the current operator is called the Sugawara construction.

#### 3.4.2 Vertex operators

We define the vertex operator from the chiral boson, \[\mathsf{V}_{\alpha}(x)=\,:\!\mathrm{e}^{\alpha\phi(x)}\!:\,. \tag{3.80}\] Then, the OPE with the energy-momentum tensor is given by \[T(x)\mathsf{V}_{\alpha}(x^{\prime})=\frac{\Delta_{\alpha}}{(x-x^{\prime})^{2}}\mathsf{V}_{\alpha}(x^{\prime})+\frac{1}{x-x^{\prime}}\frac{\partial}{\partial x^{\prime}}\mathsf{V}_{\alpha}(x^{\prime})+\text{regular}\,, \tag{3.81}\] where the coefficient called the conformal weight \(\Delta_{\alpha}\) is given by \[\Delta_{\alpha}=\frac{1}{2}\alpha(\alpha+\rho)\,. \tag{3.82}\] From the Virasoro algebra point of view, the conformal weight is given by the eigenvalue of the operator \(L_{0}\). Moreover, the operator annihilated by \(L_{n>0}\) is called the primary operator, and the vertex operator \(\mathsf{V}_{\alpha}(x)\) is actually primary. We remark that there are two possibilities to provide \(\Delta_{\alpha}=1\), \[\alpha_{0}=-\sqrt{2}b\,,\qquad\alpha_{1}=\sqrt{2}b^{-1}\quad\implies\quad\Delta_{\alpha_{0,1}}=1\,.
\tag{3.83}\] Hence, defining the screening current having the conformal weight one, \[S_{\sigma}(x)=:{\rm e}^{\alpha_{\sigma}\phi(x)}:\,,\qquad\sigma =0,1\,, \tag{3.84}\] the singular part of the OPE with the energy-momentum tensor is written as a total derivative, \[T(x)S_{\sigma}(x^{\prime}) =\frac{1}{(x-x^{\prime})^{2}}S_{\sigma}(x^{\prime})+\frac{1}{x-x^ {\prime}}\frac{\partial}{\partial x^{\prime}}S_{\sigma}(x^{\prime})+\text{ regular}\] \[=\frac{\partial}{\partial x^{\prime}}\left[\frac{1}{x-x^{\prime}} S_{\sigma}(x^{\prime})\right]+\text{regular}\,. \tag{3.85}\] This implies that the screening charge defined by \[Q_{\sigma}=\oint{\rm d}x\,S_{\sigma}(x)\,, \tag{3.86}\] does not provide a singular contribution in the OPE, and thus it commutes with the energy-momentum tensor \[[T(x),Q_{\sigma}]=0\,. \tag{3.87}\] This is a crucial property that characterizes the Virasoro algebra: In fact, the Virasoro algebra is defined as a commuting sub-algebra of the Heisenberg algebra, and such a characterization can be applied to more generalized situations (W-algebras). See, e.g., [2]. #### 3.4.3 Construction of matrix model Let us discuss how to construct the matrix model based on the operator formalism discussed above. Recalling that the vertex operator product is given by \[\frac{{\sf V}_{\alpha}(x){\sf V}_{\alpha^{\prime}}(x^{\prime})}{ :{\sf V}_{\alpha}(x){\sf V}_{\alpha^{\prime}}(x^{\prime}):}=(x-x^{\prime})^{ \alpha\alpha^{\prime}}\,, \tag{3.88}\] the screening charge product is given by \[Q_{0}^{N} =\oint_{|x_{1}|<\cdots<|x_{N}|}\mathrm{d}X\,\Delta_{N}(X)^{2b^{2}} \colon\prod_{i\in[N]}S_{0}(x_{i}):\] \[=\frac{1}{N!}\oint\mathrm{d}X\,\Delta_{N}(X)^{2b^{2}}\colon\prod_{ i\in[N]}S_{0}(x_{i}):\,, \tag{3.89}\] where the integration contour is initially taken in the radial ordering, and then analytically continued to obtain the second expression. Hence, defining the \(\mathcal{Z}\)-state \[|\mathcal{Z}_{N}\rangle=Q_{0}^{N}|0\rangle\,, \tag{3.90}\] and the modified dual charged vacuum,9 Footnote 9: This modified vacuum is realized as a coherent state with respect to the Heisenberg algebra. \[\langle\alpha;d+1|a_{-n}=\begin{cases}\langle\alpha;d+1|\alpha&(n=0)\\ \langle\alpha;d+1|\frac{\mathfrak{t}_{n}}{\sqrt{2}g}&(n\in[d+1])\\ 0&(\text{otherwise})\end{cases} \tag{3.91}\] we obtain the matrix model partition function as a correlation function of the vertex operators, \[\langle N\alpha_{0};d+1|\mathcal{Z}_{N}\rangle=\langle N\alpha_{0};d+1|Q_{0}^ {N}|0\rangle=\frac{1}{N!}\oint\mathrm{d}X\,\mathrm{e}^{-\frac{b}{g}\operatorname {tr}V(X)}\Delta_{N}(X)^{2b^{2}}\,, \tag{3.92}\] where the potential function is now given by a finite polynomial function, \(V(x)=\sum_{n=1}^{d+1}\frac{\mathfrak{t}_{n}}{n}x^{n}\). Considering both screening charges \(Q_{0,1}\), we instead obtain \[|\mathcal{Z}_{N|M}\rangle :=Q_{0}^{N}Q_{1}^{M}|0\rangle\] \[=\frac{1}{N!M!}\oint\mathrm{d}X\,\mathrm{d}Y\,\frac{\Delta_{N}(X) ^{2b^{2}}\Delta_{M}(Y)^{2b^{-2}}}{\prod_{i\in[N],j\in[M]}(x_{i}-y_{j})^{2}} \colon\prod_{i\in[N]}S_{0}(x_{i})\prod_{j\in[M]}S_{1}(y_{j}):\,, \tag{3.93}\] which gives rise to the supermatrix model partition function. 
**Free field realization of supermatrix model** -- Projecting the \(\mathcal{Z}\)-state onto the dual charged vacuum as in (3.92), the supermatrix model partition function is realized as a correlation function of the screening charges, \[\mathcal{Z}_{N|M}=\langle N\alpha_{0}+M\alpha_{1};d+1|\mathcal{Z}_{N|M}\rangle=\langle N\alpha_{0}+M\alpha_{1};d+1|Q_{0}^{N}Q_{1}^{M}|0\rangle\,. \tag{3.94}\]
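As a small sanity check of the screening-charge construction, the following sketch (Python with SymPy; the \((N|M)=(2|1)\) example and the variable names are illustrative) verifies that the pairwise OPE factors (3.88), evaluated with the screening charges \(\alpha_{0}=-\sqrt{2}b\) and \(\alpha_{1}=\sqrt{2}b^{-1}\) of (3.83), reproduce the supermatrix measure appearing in (3.93).

```python
import sympy as sp

b = sp.Symbol('b', positive=True)
alpha0, alpha1 = -sp.sqrt(2) * b, sp.sqrt(2) / b      # screening charges, cf. (3.83)

# an (N|M) = (2|1) example: two x-type and one y-type insertions (illustrative)
x1, x2, y1 = sp.symbols('x1 x2 y1')
insertions = [(x1, alpha0), (x2, alpha0), (y1, alpha1)]

# product of the pairwise OPE factors (z_a - z_b)^{alpha_a alpha_b}, cf. (3.88)
ope = sp.Integer(1)
for i in range(len(insertions)):
    for j in range(i + 1, len(insertions)):
        (za, aa), (zb, ab) = insertions[i], insertions[j]
        ope *= (za - zb) ** sp.expand(aa * ab)

# measure of the supermatrix model (3.93) for (N|M) = (2|1)
measure = (x1 - x2) ** (2 * b**2) * (x1 - y1) ** (-2) * (x2 - y1) ** (-2)

print(sp.simplify(sp.powsimp(ope / measure, force=True)))   # expect 1
```

The exponents \(\alpha_{0}^{2}=2b^{2}\), \(\alpha_{1}^{2}=2b^{-2}\), and \(\alpha_{0}\alpha_{1}=-2\) are exactly the powers of the Vandermonde and Cauchy factors in the supermatrix measure.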
#### 3.4.4 Virasoro constraint

Let us consider the action of the energy-momentum tensor on the vacuum state, \[T(x)|0\rangle=\sum_{n\in\mathbb{Z}}\frac{L_{n}}{x^{n+2}}|0\rangle\,. \tag{3.95}\] Requiring the regularity at \(x=0\), we should have \[L_{n}|0\rangle=0\,,\qquad n\geq-1\,. \tag{3.96}\] This condition is rephrased as follows: The vacuum is interpreted as the primary field with conformal weight zero, hence \(L_{n\geq 0}\) annihilates the vacuum. Furthermore, \(L_{-1}\) is a generator of the translation realized by the derivative \(\frac{\partial}{\partial x}\), hence \(L_{-1}\) annihilates the vacuum since it is translation invariant. Recall that the \(\mathcal{Z}\)-state is constructed from the vacuum with the screening charges, which commute with the energy-momentum tensor (and thus with the Virasoro generators). This implies that the \(\mathcal{Z}\)-state shows the same behavior as the vacuum (3.96).

**Virasoro constraint** -- The \(\mathcal{Z}\)-state constructed by the screening charges obeys the following relation, \[L_{n}|\mathcal{Z}_{N|M}\rangle=0\,,\qquad n\geq-1\,. \tag{3.97}\]

This is called the Virasoro constraint for the (super)matrix model. Let us see how this Virasoro constraint is obtained in the context of the matrix model.
We start with the following identity, \[0 =\int\mathrm{d}X\sum_{\ell\in[N]}\frac{\partial}{\partial x_{\ell }}\left[x_{\ell}^{k}\mathrm{e}^{-\frac{b}{g}\operatorname{tr}V(X)}\Delta_{N}( X)^{2b^{2}}\right]\] \[=\int\mathrm{d}X\sum_{\ell\in[N]}\left[kx_{\ell}^{k-1}-\frac{b}{ g}V^{\prime}(x_{\ell})x_{\ell}^{k}+2b^{2}\sum_{j(\neq\ell)}\frac{x_{\ell}^{k}}{x_{ \ell}-x_{j}}\right]\mathrm{e}^{-\frac{b}{g}\operatorname{tr}V(X)}\Delta_{N}( X)^{2b^{2}}\,. \tag{3.98}\] Recalling the identity \[\sum_{i\in[N]}\sum_{j(\neq i)}\frac{2x_{i}^{k}}{x_{i}-x_{j}} =\sum_{i\neq j}\frac{x_{i}^{k}-x_{j}^{k}}{x_{i}-x_{j}}\] \[=\sum_{m=0}^{k-1}\operatorname{tr}X^{m}\operatorname{tr}X^{k-m-1 }-k\operatorname{tr}X^{k-1}\,, \tag{3.99}\] we obtain the following relation among the expectation values of the matrix moments, called the _loop equation_, \[\sum_{m=0}^{k-1}\langle\operatorname{tr}X^{m}\operatorname{tr}X^{k-m-1} \rangle+(b^{-2}-1)k\langle\operatorname{tr}X^{k-1}\rangle-\frac{b^{-1}}{g} \langle\operatorname{tr}X^{k}V^{\prime}(X)\rangle=0\,. \tag{3.100}\] In the operator formalism discussed in Sec. 3.4.1, we may write this relation as \[L_{k-1}\mathcal{Z}_{N}=0\,,\qquad k\geq 0\,, \tag{3.101}\] where we apply the free field realization of the Virasoro generator (3.77). See [10] for the loop equation of the supermatrix model in the case \(b=1\). ## 4 Supergroup gauge theory In this section, we introduce supergroup gauge theory, gauge theory having supergroup gauge symmetry, and discuss fundamental perspectives. #### Differential forms Let \(G\) be a Lie supergroup, and the corresponding Lie superalgebra \(\mathfrak{g}=\operatorname{Lie}G\). Let \(M\) be a \(d\)-dimensional space-time manifold. The fundamental degrees of freedom of supergroup gauge theory is the one-form connection that takes a value in \(\mathfrak{g}\), \(A\in\Omega^{1}(M,\mathfrak{g})\). Then, we define the covariant derivative \(D=d+A\) and the curvature two-form given by \(D^{2}=dA+A^{2}\in\Omega^{2}(M,\mathfrak{g})\). The \(G\)-gauge transformation is given by \[G\ :\ D\longmapsto gDg^{-1}\,,\qquad F\longmapsto gFg^{-1}\,,\qquad A \longmapsto gAg^{-1}+gdg^{-1}\,,\qquad g\in G\,. \tag{4.1}\] Formally, these expressions are parallel with the ordinary (non-supergroup) gauge theory. In the \(\mathbb{Z}_{2}\)-graded situation, the connection is then called the _superconnection_[12, 13]. Let us discuss the supermatrix realization of these differential forms. Let \(G=\operatorname{U}(n_{0}|n_{1})\) for the moment. In this case, the connection \(A\) is given by an anti-Hermitian supermatrix, \[A=\begin{pmatrix}A^{(0)}&\psi\\ \psi^{\dagger}&A^{(1)}\end{pmatrix}\quad\text{with}\quad A^{(\sigma)\dagger}=- A^{(\sigma)}\quad\Longrightarrow\quad A^{\dagger}=-A\,. \tag{4.2}\] We recall that \(\bar{\bar{\theta}}=-\theta\) for the Grassmann variable (see (2.3)). We remark that \(A^{(\sigma)}\) is in the adjoint representation of \(\operatorname{U}(n_{\sigma})\), and \(\psi\) is in the bifundamental representation, \(\operatorname{U}(n_{0})\times\overline{\operatorname{U}(n_{1})}\). Moreover, each component is given by \[A^{(\sigma)}=A^{(\sigma)}_{\mu}dx^{\mu}\,,\qquad\psi=\psi_{\mu}dx^{\mu}\,. \tag{4.3}\] Hence, \(\psi_{\mu}\) is a spin-1 fermionic degree of freedom, that is not compatible with the spin-statistics theorem. 
The curvature two-form is given by \[F=dA+A\wedge A=\begin{pmatrix}dA^{(0)}+A^{(0)}\wedge A^{(0)}+\psi\wedge\psi^{\dagger}&d\psi+A^{(0)}\wedge\psi+\psi\wedge A^{(1)}\\ d\psi^{\dagger}+\psi^{\dagger}\wedge A^{(0)}+A^{(1)}\wedge\psi^{\dagger}&dA^{(1)}+A^{(1)}\wedge A^{(1)}+\psi^{\dagger}\wedge\psi\end{pmatrix}\,. \tag{4.4}\] We may write the wedge product using the anti-symmetrization symbol, \[a\wedge b=\frac{1}{2}\left(a_{\mu}b_{\nu}-a_{\nu}b_{\mu}\right)dx^{\mu}dx^{\nu}=\frac{1}{2}a_{[\mu}b_{\nu]}dx^{\mu}dx^{\nu}\,. \tag{4.5}\] Therefore, the component of the curvature two-form is given by \[F=\frac{1}{2}F_{\mu\nu}dx^{\mu}dx^{\nu}\,, \tag{4.6}\] where we have \[F_{\mu\nu}=\begin{pmatrix}F_{\mu\nu}^{(0)}+\psi_{[\mu}\psi_{\nu]}^{\dagger}&\partial_{[\mu}\psi_{\nu]}+A_{[\mu}^{(0)}\psi_{\nu]}+\psi_{[\mu}A_{\nu]}^{(1)}\\ \partial_{[\mu}\psi_{\nu]}^{\dagger}+\psi_{[\mu}^{\dagger}A_{\nu]}^{(0)}+A_{[\mu}^{(1)}\psi_{\nu]}^{\dagger}&F_{\mu\nu}^{(1)}+\psi_{[\mu}^{\dagger}\psi_{\nu]}\end{pmatrix}\,. \tag{4.7}\] We denote the curvature associated with the bosonic subgroup of \(G\) by \(F^{(\sigma)}=dA^{(\sigma)}+A^{(\sigma)}\wedge A^{(\sigma)}=\frac{1}{2}F_{\mu\nu}^{(\sigma)}dx^{\mu}dx^{\nu}\), e.g., for \(G=\mathrm{U}(n_{0}|n_{1})\), \(F^{(\sigma)}\in\Omega^{2}(M,\mathfrak{u}_{n_{\sigma}})\). In general, we define the commutator for the differential forms as follows: Let \(X=X^{a}\otimes t^{a}\in\Omega^{p}(M,\mathfrak{g})\) and \(Y=Y^{b}\otimes t^{b}\in\Omega^{q}(M,\mathfrak{g})\) with the generators of the Lie superalgebra denoted by \((t^{a})_{a\in[\dim\mathfrak{g}]}\). Recalling \(X^{a}\wedge Y^{b}=(-1)^{pq}(-1)^{|a||b|}Y^{b}\wedge X^{a}\), where we denote the parity by \(|a|=|X^{a}|\), etc., the commutator for the differential forms is defined by \[[X,Y]=t^{a}t^{b}\otimes X^{a}\wedge Y^{b}-(-1)^{pq}t^{b}t^{a}\otimes Y^{b}\wedge X^{a}=(t^{a}t^{b}-(-1)^{|a||b|}t^{b}t^{a})\otimes X^{a}\wedge Y^{b}=[t^{a},t^{b}]\otimes X^{a}\wedge Y^{b}\,. \tag{4.8}\] The commutator in the last line is the supercommutator with respect to the superalgebra \(\mathfrak{g}\) defined in (2.12). For the one-form connection, \(A\in\Omega^{1}(M,\mathfrak{g})\), we have \[A\wedge A=\frac{1}{2}[A,A]=\frac{1}{2}[t^{a},t^{b}]\otimes A^{a}\wedge A^{b}\,. \tag{4.9}\] Therefore, the component of the curvature two-form is given in the same way as in the ordinary case, \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+[A_{\mu},A_{\nu}]\,, \tag{4.10}\] which takes a value in the Lie superalgebra \(\mathfrak{g}\).

### Supergroup Yang-Mills theory

We consider the Yang-Mills (YM) action for the supergroup theory, \[S_{\mathrm{YM}}=-\frac{1}{g^{2}}\int_{M}\mathrm{str}(F\wedge\star F)\,, \tag{4.11}\] where we define the Hodge star operator, \(\star:\Omega^{p}(M)\to\Omega^{d-p}(M)\). Hence, we have the volume form, \(F\wedge\star F=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\,d\mathrm{vol}(M)\). In order to impose the invariance under the \(G\)-gauge transform (4.1), we replace the ordinary trace with the supertrace. From the expression of the two-form curvature (4.7), we can explicitly write down the YM action in terms of the gauge fields, \((A^{(\sigma)},\psi)\), which would be complicated. Hence, for the moment, we just write the leading contributions, \[S_{\mathrm{YM}}=-\frac{1}{g^{2}}\int_{M}\mathrm{tr}_{0}(F^{(0)}\wedge\star F^{(0)})+\frac{1}{g^{2}}\int_{M}\mathrm{tr}_{1}(F^{(1)}\wedge\star F^{(1)})+\cdots\,, \tag{4.12}\] which are the standard YM actions of the bosonic subgroups \(\mathrm{U}(n_{\sigma})\) in \(G=\mathrm{U}(n_{0}|n_{1})\).
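The opposite relative sign of the two terms in (4.12) is a direct consequence of the supertrace. The following minimal numerical sketch (Python with NumPy; the ranks and the random field components are illustrative, and only the block-diagonal bosonic part is kept) makes this explicit for a single field-strength component.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 3, 2   # ranks of the bosonic subgroups U(n0) x U(n1) (illustrative)

def supertrace(m, n0):
    """str M = tr M_00 - tr M_11 for a block supermatrix M."""
    return np.trace(m[:n0, :n0]) - np.trace(m[n0:, n0:])

# a single (Hermitian) component of the bosonic field strength, F = diag(F0, F1)
F0 = rng.standard_normal((n0, n0)); F0 = F0 + F0.T
F1 = rng.standard_normal((n1, n1)); F1 = F1 + F1.T
F = np.block([[F0, np.zeros((n0, n1))], [np.zeros((n1, n0)), F1]])

lhs = supertrace(F @ F, n0)
rhs = np.trace(F0 @ F0) - np.trace(F1 @ F1)
print(np.isclose(lhs, rhs))                              # True
print(np.trace(F0 @ F0) >= 0, np.trace(F1 @ F1) >= 0)    # both quadratic terms are non-negative
```

Both traces on the right-hand side are non-negative, so they necessarily enter the supertrace combination with opposite signs, which is the observation made below.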
Now we observe that the kinetic term signatures are opposite due to the supertrace property, which means that the spectrum of supergroup YM theory is not bounded. Such a property was also found in the supermatrix model discussed in Sec. 3, and thus it seems to be a universal behavior in supergroup theory. Due to this unboundedness, the notion of vacua is not well-defined in this case, and thus we would need a non-perturbative completion for supergroup theory.10 In fact, it has been known that supergroup theory is perturbatively equivalent to the ordinary gauge theory through the analytic continuation, but there would be an essential difference in the non-perturbative regime. Footnote 10: Such a situation resembles unstable vacua. In this case, we should find true (stable) vacua non-perturbatively.

Even though there are no bounded vacua, one can still consider the equation of motion with respect to the YM action, which is given by \[D(\star F)=0\,. \tag{4.13}\] This is a second-order non-linear PDE on \(M\), which is in general difficult to solve. In the four-dimensional case, we have a special class of solutions, called the (anti-)instanton, given by the solution of the (anti-)self-dual ((A)SD) YM equation, \[\star F=\mp F\,. \tag{4.14}\] One can check that the instanton provides a solution of the equation of motion using the Bianchi identity, \[D(\star F)\stackrel{{\text{(A)SD}}}{{=}}\mp DF\stackrel{{\text{Bianchi}}}{{=}}0\,. \tag{4.15}\] We remark that the Bianchi identity still holds for the supergroup case due to the Jacobi identity (2.14). In fact, the instanton plays an essential role in non-perturbative aspects of supergroup gauge theory. We will discuss the details of instantons in Sec. 5.

### Quiver gauge theory realization

As observed in (4.12), the YM action of supergroup gauge theory consists of two ordinary YM actions. In fact, supergroup gauge theory has a realization as a quiver gauge theory through analytic continuation. Quiver gauge theory is a class of gauge theories involving multiple gauge degrees of freedom: The gauge group is given by a product form, \(G=\prod_{i}G_{i}\). In addition to the gauge field, there is another degree of freedom, called the _bifundamental matter_, that connects different gauge fields, transforming in the bifundamental representation of the connecting gauge groups, \(G_{i}\times G_{j}\), \(G_{i}\times\overline{G}_{j}\), etc. In the case of \(\mathrm{U}(n_{0}|n_{1})\) supergroup theory, in addition to the gauge fields of the \(\mathrm{U}(n_{\sigma})\) subgroups, there are also fermionic degrees of freedom that transform in \(\mathrm{U}(n_{0})\times\overline{\mathrm{U}(n_{1})}\) and the conjugate. Hence, these fermionic fields are interpreted as bifundamental matters from this point of view. Since we have two such fields \((\psi,\psi^{\dagger})\), this theory is identified with \(\widehat{A}_{1}\) quiver gauge theory.11 Footnote 11: This classification is based on the identification of the quiver diagram and the (affine) Dynkin diagram. \(\widehat{A}_{1}\) quiver consists of two nodes and two connecting edges.

An important feature of supergroup gauge theory is the signature of the kinetic term. In order to realize this situation based on \(\widehat{A}_{1}\) quiver theory, we have to assign the coupling constants as follows, \[\left(\frac{1}{g_{0}^{2}}\,,\,\frac{1}{g_{1}^{2}}\right)\ \longrightarrow\ \left(+\frac{1}{g^{2}}\,,\,-\frac{1}{g^{2}}\right)\,. \tag{4.16}\] In other words, we impose the condition \(\frac{1}{g_{0}^{2}}+\frac{1}{g_{1}^{2}}=0\).
This assignment is in fact unphysical since we have a negative coupling. Therefore, the supergroup gauge theory is realized in an unphysical parameter regime, which could be interpreted as the analytic continuation of physical \(\widehat{A}_{1}\) quiver gauge theory [14].

#### Orthosymplectic theory

From this point of view, the orthosymplectic supergroup theory is realized similarly by the \(\widehat{A}_{1}\) quiver with O and Sp gauge nodes. In fact, the combinations O \(\times\) O and Sp \(\times\) Sp are not compatible with their flavor symmetry, and O \(\times\) Sp is the unique choice among O and Sp theories.

#### Chern-Simons theory

A similar construction is also available for Chern-Simons theory. In this case, the Chern-Simons level plays the role of the coupling constant. In fact, the supersymmetric Chern-Simons-matter theory, a.k.a. Aharony-Bergman-Jafferis-Maldacena (ABJM) theory [1, 2], completely fits these conditions: It is a quiver gauge theory involving two nodes and two bifundamental matters, and two couplings (levels) with opposite signs. Actually, the partition function of ABJM theory obtained through the localization formalism supports its connection with supergroup theory: The partition function of \(\mathrm{U}(N)_{k}\times\mathrm{U}(M)_{-k}\) theory takes the form [13, 14], \[\mathcal{Z}_{N|M}=\frac{1}{N!M!}\int\mathrm{d}X\,\mathrm{d}Y\,\mathrm{e}^{-\frac{1}{2g}\,\mathrm{tr}\,X^{2}+\frac{1}{2g}\,\mathrm{tr}\,Y^{2}}\frac{\prod_{i<j}^{N}\sinh(x_{i}-x_{j})^{2}\prod_{i<j}^{M}\sinh(y_{i}-y_{j})^{2}}{\prod_{i\in[N],j\in[M]}\cosh(x_{i}-y_{j})^{2}}\,, \tag{4.17}\] where we have the pure imaginary coupling \(g=2\pi\mathrm{i}/k\). This is interpreted as a trigonometric analog of the supermatrix model discussed in Sec. 3. We remark that since the Chern-Simons theory is a topological theory, a negative coupling (negative level) does not imply unphysical behavior. See also [14, 15, 16][17, 18][1] for related works on supergroup Chern-Simons theory.

#### Supergroup quiver gauge theory

We can generalize this argument to quiver gauge theory of supergroups. In this case, since each gauge node is a supergroup, we need copies of \(\widehat{A}_{1}\) quiver theory. For \(\Gamma\)-quiver supergroup gauge theory, the total structure is given by the \((\widehat{A}_{1},\Gamma)\) quiver. Such a theory characterized by a pair of quivers is called the double quiver theory, which has a natural geometric origin in eight dimensions. See [16].

### String/M-theory perspective

String/M-theory provides various insights on non-perturbative aspects of gauge theory. In particular, considering a stack of D-branes, non-Abelian gauge theory is realized as a low-energy effective theory. Open strings ending on the branes provide matrix degrees of freedom as homomorphisms of the Chan-Paton vector spaces associated with the boundary conditions of the open string: a stack of \(n\) branes gives rise to an \(n\)-dimensional vector space. From this point of view, we require two different types of branes to realize a \(\mathbb{Z}_{2}\)-graded vector space to construct supergroup gauge theory. A natural candidate is an anti-brane. Even though a brane-anti-brane system exhibits similar properties, it has been known that this configuration does not yield supergroup gauge theory: Open strings connecting a brane and an anti-brane give rise to the _tachyon_, which is a bosonic degree of freedom transforming in the bifundamental representation of the two gauge groups associated with branes and anti-branes. See, e.g., [14, 15, 16] for details.
Therefore, we need a different type of brane to realize a fermionic degree of freedom. Such an object is known as the _negative brane_ (also called the _ghost brane_) [13, 14, 15], and it has been shown that a stack of branes and negative branes actually yields a supergroup gauge theory. More recently, it has been pointed out that such a negative brane plays an essential role in the resurgence [17, 18]. Comparing the anti-brane and the negative brane, while the anti-brane has a positive tension (positive energy density; source of gravity), the negative brane has a negative tension (negative energy density; source of anti-gravity), although both have negative RR charges. From this point of view, the negative brane is associated with an unphysical open string boundary condition. On the other hand, an advantage of negative branes is that a bound state with branes does not break further supersymmetry, i.e., it is still BPS, while that for anti-branes is not BPS. This property also plays an important role in discussing the instanton solution in supergroup gauge theory. See Sec. 5.

#### 4.3.1 Hanany-Witten construction

A stack of branes and negative branes yields supergroup gauge theory with sixteen supercharges, similarly to the configuration of ordinary branes. In order to reduce the supersymmetries, one can consider the Hanany-Witten setup [19] in type IIA string theory, which involves NS5 branes and D4 branes suspended between them. Adding the negative D4 branes, we can realize four-dimensional \(\mathcal{N}=2\) supergroup gauge theory that preserves eight supercharges. Each brane is extended in the following directions:12 Footnote 12: We denote the ordinary \(p\)-brane (positive brane) by D\(p\) or D\(p^{+}\), the negative brane by D\(p^{-}\), the anti-brane by \(\overline{\text{D}p}\) or \(\overline{\text{D}p}^{+}\), and the anti-negative brane by \(\overline{\text{D}p}^{-}\). \[\begin{array}{c|cccccccccc}\hline\hline&0&1&2&3&4&5&6&7&8&9\\ \hline\text{NS5}&\text{--}&\text{--}&\text{--}&\text{--}&\text{--}&\text{--}&&&&\\ \text{D4}^{\pm}&\text{--}&\text{--}&\text{--}&\text{--}&&&\text{--}&&&\\ \hline\hline\end{array} \tag{4.18}\] In this configuration, the positions of the D\(4^{\pm}\) branes in the 45-directions are identified with the Coulomb moduli of \(\mathcal{N}=2\) gauge theory (the two coordinates are combined into a single complex coordinate), and the distance \(L\) between the NS5 branes in the 6-direction is interpreted as the gauge coupling constant: \[\text{[brane diagram: D4 branes suspended between two NS5 branes]} \tag{4.19}\] Let us remark on an ambiguity of the diagram presented above. Since we have two different types of branes, we have possibly different configurations, for example, for U(2\(|\)1) theory, as follows, \[\text{[brane diagrams for U(2|1) with the two orderings of positive and negative D4 branes, and the corresponding Dynkin diagrams]} \tag{4.20}\] This ambiguity corresponds to that for the choice of simple roots of the Lie superalgebra. We also present the corresponding Dynkin diagrams alongside the brane diagrams. See also Sec. 2.3 for a related discussion.

#### 4.3.2 Gauging trick

Starting with the Hanany-Witten configuration for supergroup gauge theory, we in addition include infinitely extended D\(4^{+}\) branes. Since they have infinite length, their couplings are zero (non-dynamical). Then, we consider gauging this configuration: The middle part of the D\(4^{+}\) branes can be moved in the 45-directions, while the external parts are still frozen and play the role of flavor branes.
Further tuning the positions of the D\(4^{+}\) branes, one can remove the D\(4^{-}\) branes via pair annihilation between the D\(4^{+}\) and D\(4^{-}\) branes, and the resulting configuration does not involve D\(4^{-}\) branes any longer: \[\text{[brane diagram: the configuration after pair annihilation, involving only positive D4 branes]} \tag{4.21}\] This configuration is identical to \(\mathrm{U}(n_{0})\) gauge theory with \(n_{F}=2n_{1}\) flavor degrees of freedom [11]. This process implies that the (Coulomb branch of the) moduli space of vacua of supergroup gauge theory has an intersection with that of the ordinary \(\mathcal{N}=2\) gauge theory with flavors (SQCD).

#### 4.3.3 \(\widehat{A}_{1}\) quiver realization

As discussed in Sec. 4.2, supergroup gauge theory has a realization as \(\widehat{A}_{1}\) quiver gauge theory. In fact, \(\widehat{A}_{1}\) quiver theory is similarly realized in this framework. In this case, we compactify the 6-direction and put D4 branes along this direction. Then, we have two domains separated by the NS5 branes, which realize two distinct gauge nodes. This configuration can be converted to the previous configurations through analytic continuation as follows. We recall that supergroup gauge theory can be obtained by tuning the couplings as in (4.16) in \(\widehat{A}_{1}\) quiver theory, and in this setup, each coupling \(1/g_{\sigma}^{2}\) is interpreted as the length of the D4 branes between the NS5 branes, \(L_{\sigma}\propto 1/g_{\sigma}^{2}\). From this point of view, a negative brane would be interpreted as a brane with negative length, which can be thought of as an analytic continuation: \[\text{[brane diagram: the }\widehat{A}_{1}\text{ quiver configuration with a D4 brane of negative length]} \tag{4.22}\] Let us comment on a possible connection to the gauging trick discussed above. Recalling the supergroup condition (4.16), namely \(L_{0}+L_{1}=0\), it corresponds to the shrinking limit of the 6-direction. In fact, this is the strong coupling limit, which also implies the necessity of a non-perturbative treatment in supergroup gauge theory. Then, applying T-duality along the 6-direction, this direction is infinitely extended, and the gauge coupling of either the first or the second gauge node has to be zero. This means that the resulting configuration is a single-node gauge theory with a flavor node, which is consistent with the gauging trick. This argument is extended to supergroup quiver gauge theory in general. See [14, 15] for details.

#### 4.3.4 Seiberg-Witten theory

Seiberg-Witten theory provides a geometric description of the moduli space of supersymmetric vacua of four-dimensional \(\mathcal{N}=2\) gauge theory. In fact, one can concisely extract the Seiberg-Witten geometry from the Hanany-Witten brane configuration [16]. In this setup, we specify the positions of the D4 branes and the NS5 branes using two complex variables \((x,y)\in\mathbb{C}\times\mathbb{C}^{\times}\) as follows, \[y+\frac{\mathfrak{q}}{y}=\det(x-\Phi)\,, \tag{4.23}\] where we denote the exponentiated complexified gauge coupling by \(\mathfrak{q}=\exp(2\pi\mathrm{i}\tau)\), \(\tau=\frac{\theta}{2\pi}+\frac{4\pi\mathrm{i}}{g^{2}}\), with the \(\theta\)-angle, and \(\Phi\) is the adjoint complex scalar field in the vector multiplet that parametrizes the Coulomb branch of the moduli space. In order to obtain the Seiberg-Witten geometry of supergroup gauge theory, we replace the characteristic polynomial with the supercharacteristic function, \[y+\frac{\mathfrak{q}}{y}=\mathrm{sdet}(x-\Phi)\,. \tag{4.24}\] Hence, we have the Seiberg-Witten curve for \(\mathrm{U}(n|m)\) pure SYM theory as follows.
**Seiberg-Witten curve for \(\mathrm{U}(n|m)\) theory** -- The Seiberg-Witten curve for \(\mathrm{U}(n|m)\) pure SYM theory is given by the supercharacteristic function of the complex adjoint scalar \(\Phi\in\mathrm{Lie}\,\mathrm{U}(n|m)\) in the vector multiplet, \[\Sigma=\{(x,y)\in\mathbb{C}\times\mathbb{C}^{\times}\mid y+\frac{\mathfrak{q}}{y}=\mathrm{sdet}(x-\Phi)\}\,. \tag{4.25}\]

In terms of the eigenvalues of \(\Phi\), \(\{a_{\alpha}^{\sigma}\}_{\sigma=0,1,\alpha\in[n_{\sigma}]}\) for \(G=\mathrm{U}(n_{0}|n_{1})\), we may write it as follows, \[y+\frac{\mathfrak{q}}{y}=\frac{T_{0}(x)}{T_{1}(x)}\,,\qquad T_{\sigma}(x)=\prod_{\alpha\in[n_{\sigma}]}(x-a_{\alpha}^{\sigma})\,, \tag{4.26}\] which can be further rewritten as \[T_{1}(x)y+\mathfrak{q}\frac{T_{1}(x)}{y}=T_{0}(x)\,. \tag{4.27}\] This algebraic equation characterizes the Seiberg-Witten curve of \(\mathrm{U}(n_{0})\) gauge theory with \(2n_{1}\) flavors, which is consistent with the gauging trick. This algebraic equation describes a Riemann surface of genus \(g=n_{0}-1\) with \(2n_{1}\) punctures. Imposing the special unitary condition, \(\sum_{\alpha\in[n_{1}]}a_{\alpha}^{1}=0\), the last two cycles are not independent, hence we have \((n_{0}-1)\) \(A\)- and \(B\)-cycles and \(2(n_{1}-1)\) cycles associated with the punctures. The Euler characteristic of this Riemann surface is given by \(\chi=2-2(n_{0}-1)+2(n_{1}-1)=2-2(n_{0}-n_{1})\), where we have the superdimension \(\mathrm{sdim}\,\mathbb{C}^{n_{0}|n_{1}}=n_{0}-n_{1}\). This relation is also obtained from the \(\widehat{A}_{1}\) quiver point of view [11]. We can also incorporate the superflavor factor by including the supercharacteristic function with respect to \(G_{F}=\mathrm{U}(n_{0}^{F}|n_{1}^{F})\). We will derive the Seiberg-Witten geometry for supergroup gauge theory from the microscopic instanton counting in Sec. 5.

## 5 Supergroup instanton counting

As in the case of the ordinary non-supergroup gauge theory, the instanton plays an important role in the study of non-perturbative aspects of supergroup gauge theory. In this section, we explore several aspects of instantons and their applications to the study of the dynamics of supergroup gauge theory.

### Instanton moduli space

The instanton is a solution of the ASDYM equation (4.14). Writing the SD/ASD part of the curvature two-form as \[F_{\pm}=\frac{F\pm\star F}{2}\,, \tag{5.1}\] such that \(F=F_{+}+F_{-}\), the ASDYM equation is rewritten as \[F_{+}=0\,, \tag{5.2}\] which can be thought of as a "half" flat connection, and hence the corresponding moduli space is expected to have a rich mathematical structure. Let \(M=\mathbb{C}^{2}\) and \(G=\mathrm{U}(n)\). In order that the YM action becomes finite, we require that the curvature behaves as \(F\to 0\) as \(x\to\infty\). This implies that the connection approaches the pure gauge at the boundary, \(A\xrightarrow{x\to\infty}gdg^{-1}\), where we obtain a map, \(g\colon\partial\mathbb{C}^{2}=\mathbb{S}^{3}\to G\). Knowing that \(\pi_{3}(G)=\mathbb{Z}\) for any compact simple Lie group, we can apply a classification of the gauge field with respect to the topological charge \[k=\frac{1}{8\pi^{2}}\int_{M}\mathrm{tr}\,F\wedge F=c_{2}[M]\in\mathbb{Z}\,, \tag{5.3}\] which is called the instanton number, given by the integral of the second Chern class over \(M\).13 Here, we focus on the ASD instanton solution having a positive charge \(k\geq 0\). The SD anti-instanton solution is similarly obtained by changing the orientation of the manifold.
Hence, the moduli space of instantons has a decomposition Footnote 13: Precisely speaking, the four-manifold \(M\) should be compact to have \(c_{2}[M]\in\mathbb{Z}\). In this case, \(\mathbb{C}^{2}\simeq\mathbb{R}^{4}\) can be mapped to a four-sphere \(\mathbb{S}^{4}\) through the stereographic projection. This map does not violate the ASD property since it is a conformal map. Under this map, the boundary of \(\mathbb{C}^{2}\) is mapped to the north pole (marked point) on \(\mathbb{S}^{4}\), where we fix the gauge as \(A=gdg^{-1}\) (called the framing). \[\mathfrak{M}_{G}=\bigsqcup_{k=0}^{\infty}\mathfrak{M}_{G,k}\,, \tag{5.4}\] where each topological sector of the moduli space is given by \[\mathfrak{M}_{G,k}=\{A\in\Omega^{1}(M,\mathfrak{g})\ |\ F_{+}=0,c_{2}[M]=k\}/ \mathcal{G}\,. \tag{5.5}\] We remark that the \(\mathcal{G}\)-quotient means the quotient with respect to the equivalent class under \(G\)-gauge transformation. ### ADHM construction of instanton Although the definition of the moduli space (5.5) is conceptually reasonable, it is still difficult to analyze in practice. In order to discuss such an instanton moduli space, we can apply the ADHM construction, which is a systematic approach to obtain instanton solutions [1]. See, e.g., [16] for an extended review on this subject. For the \(k\)-instanton solution in \(\mathrm{U}(n)\)-YM theory, we define two vector spaces, \[N=\mathbb{C}^{n}\,,\qquad K=\mathbb{C}^{k}\,. \tag{5.6}\] The fundamental degrees of freedom in this construction are the linear maps associated with these vector spaces, \[X =\mathrm{Hom}(K,K)\oplus\mathrm{Hom}(K,K)\oplus\mathrm{Hom}(N,K) \oplus\mathrm{Hom}(K,N)\] \[\ni(B_{1},B_{2},I,J)\,, \tag{5.7}\] where we impose the so-called ADHM equations, \[0 =\mu_{\mathbb{R}}:=[B_{1},B_{1}^{\dagger}]+[B_{2},B_{2}^{ \dagger}]+II^{\dagger}-J^{\dagger}J\,, \tag{5.8a}\] \[0 =\mu_{\mathbb{C}}:=[B_{1},B_{2}]+IJ\,. \tag{5.8b}\] We remark that these equations are invariant under the \(\mathrm{U}(k)\) action, \[g\cdot(B_{1},B_{2},I,J)=(gB_{1}g^{-1},gB_{2}g^{-1},gI,Jg^{-1})\,, \qquad g\in\mathrm{U}(k)\,. \tag{5.9}\] We call \(\mu_{\mathbb{R},\mathbb{C}}\) the _moment maps_ that take a value in the dual of Lie algebra, \(\mu_{\mathbb{R},\mathbb{C}}:X\to\mathfrak{u}_{k}^{\vee}\otimes\mathbb{R}^{3}\). Based on these degrees of freedom, we have an alternative description of the instanton moduli space, \[\mathfrak{M}_{G,k}=\{(B_{1},B_{2},I,J)\in X\mid\mu_{\mathbb{R}}=0,\mu_{ \mathbb{C}}=0\}/\!\!/\mathrm{U}(k)\,. \tag{5.10}\] The triple slash means the hyper-Kahler quotient with respect to three moment maps, \((\mu_{\mathbb{R}},\mathrm{Re}\,\mu_{\mathbb{C}},\mathrm{Im}\,\mu_{\mathbb{C}})\). We show that the ASD connection can be constructed from these ADHM variables. We define the zero-dimensional Dirac operator, \[D^{\dagger}=\begin{pmatrix}B_{1}-z_{1}&B_{2}-z_{2}&I\\ -B_{2}^{\dagger}+\bar{z}_{2}&B_{1}^{\dagger}-\bar{z}_{1}&-J^{\dagger}\end{pmatrix} \ :\ K\otimes\mathbb{C}^{2}\oplus N\to K\otimes\mathbb{C}^{2}\,. \tag{5.11}\] We remark that the space-time dependence in the Dirac operator is associated with the quaternion structure \[\begin{pmatrix}z_{1}&z_{2}\\ -\bar{z}_{2}&\bar{z}_{1}\end{pmatrix}=\begin{pmatrix}x_{4}+\mathrm{i}x_{3}&x _{2}+\mathrm{i}x_{1}\\ -x_{2}+\mathrm{i}x_{1}&x_{4}-\mathrm{i}x_{3}\end{pmatrix}=x\cdot\sigma\,, \tag{5.12}\] where we define the quarternion basis, \[\sigma=(\mathrm{i}\vec{\sigma},\mathbbm{1}_{\mathbb{C}^{2}})\,, \qquad\bar{\sigma}=(-\mathrm{i}\vec{\sigma},\mathbbm{1}_{\mathbb{C}^{2}})\,. 
\tag{5.13}\] Their product is given by \[\sigma_{\mu}\bar{\sigma}_{\nu}=\delta_{\mu\nu}+\mathrm{i}\eta^{( \pm)}_{\mu\nu}\,,\qquad\bar{\sigma}_{\mu}\sigma_{\nu}=\delta_{\mu\nu}+ \mathrm{i}\eta^{(-)}_{\mu\nu}\,, \tag{5.14}\] where \(\eta^{(\pm)}_{\mu\nu}\) is an (A)SD tensor, called the 't Hooft symbol, \(\star\eta^{(\pm)}_{\mu\nu}=\pm\eta^{(\pm)}_{\mu\nu}\). The ADHM equations (5.8) are equivalent to the condition such that the Dirac operator squared is diagonal with respect to \(\mathbb{C}^{2}\), \[D^{\dagger}D=\Delta\otimes\mathbbm{1}_{\mathbb{C}^{2}}\,, \tag{5.15}\] where \(\Delta\) : \(K\to K\) is explicitly given by \[\Delta =(B_{1}-z_{1})(B_{1}^{\dagger}-\bar{z}_{1})+(B_{2}-z_{2})(B_{2}^{ \dagger}-\bar{z}_{2})+II^{\dagger}\] \[=(B_{1}^{\dagger}-\bar{z}_{1})(B_{1}-z_{1})+(B_{2}^{\dagger}- \bar{z}_{2})(B_{2}-z_{2})+J^{\dagger}J\,. \tag{5.16}\] Therefore, the asymptotic behavior is given by \(\Delta\xrightarrow{z\to\infty}|z|^{2}\mathbbm{1}_{K}\). Since the Dirac operator is a rectangular matrix of a rank \(2k\), we consider the complementary space, which is given by the set of normalized zero modes (the Dirac operator kernel), \[\operatorname{Ker}D^{\dagger}=\{\Psi:N\to K\otimes\mathbb{C}^{2} \oplus N\mid D^{\dagger}\Psi=0,\Psi^{\dagger}\Psi=\mathbbm{1}_{N}\}\,. \tag{5.17}\] We define the projector from \(K\otimes\mathbb{C}^{2}\oplus N\) to \(N\) using the zero modes, \[P:=\Psi\Psi^{\dagger}=\mathbbm{1}_{K\otimes\mathbb{C}^{2}\oplus N }-D(\Delta^{-1}\otimes\mathbbm{1}_{\mathbb{C}^{2}})D^{\dagger}\,, \tag{5.18}\] where we remark \(PD=\Psi\Psi^{\dagger}D=0\). Then, we obtain a connection from the zero modes, \[A=\Psi^{\dagger}d\Psi\,. \tag{5.19}\] Due to the normalization condition, it turns out to be anti-Hermitian, \(A^{\dagger}=-A\). The curvature two-form constructed from this connection is given as follows, \[F=dA+A\wedge A=d\Psi^{\dagger}(\mathbbm{1}_{K\otimes\mathbb{C}^{ 2}\oplus N}-\Psi\Psi^{\dagger})d\Psi=\Psi^{\dagger}(dD)(\Delta^{-1}\otimes \mathbbm{1}_{\mathbb{C}^{2}})(dD^{\dagger})\Psi\,. \tag{5.20}\] Recalling \[dD=\begin{pmatrix}-\mathbbm{1}_{K}\otimes\bar{\sigma}\\ 0\end{pmatrix}\,,\qquad dD^{\dagger}=\begin{pmatrix}-\mathbbm{1}_{K}\otimes \sigma&0\end{pmatrix}\,, \tag{5.21}\] it turns out that the curvature is ASD, \[F=\Psi^{\dagger}\begin{pmatrix}\Delta^{-1}\otimes 2\mathrm{i}\eta^{(-)}&0 \\ 0&0\end{pmatrix}\Psi\,, \tag{5.22}\] where we define the (A)SD two-form \(\eta^{(\pm)}=\frac{1}{2}\eta^{(\pm)}_{\mu\nu}dx^{\mu}dx^{\nu}\), such that \(\star\eta^{(\pm)}=\pm\eta^{(\pm)}\). In addition, applying Osborn's formula [14], we see that the instanton number is given as follows, \[\frac{1}{8\pi^{2}}\int_{M}\operatorname{tr}F\wedge F=\frac{1}{16 \pi^{2}}\int_{M}\mathrm{d}^{4}x\,\partial^{2}\partial^{2}\operatorname{tr}_{ K}\log\Delta^{-1}=\operatorname{tr}_{K}\mathbbm{1}_{K}=k\,. \tag{5.23}\] In this calculation, we use the asymptotic behavior of \(\Delta\xrightarrow{x\to\infty}|x|^{2}\mathbbm{1}_{K}\) and evaluate the boundary contribution. #### 5.2.1 String theory perspective As discussed in Sec. 4.3, one can obtain gauge theory from a stack of D-branes. In particular, \(k\)-instanton configuration in \(\mathrm{U}(n)\)-YM theory is realized as \(n\)\(\mathrm{D}p\) & \(k\)\(\mathrm{D}(p-4)\) brane system. Let us put \(p=4\). 
In this context, the two vector spaces \((N,K)\) are identified with the Chan-Paton spaces of the \(n\) \(\mathrm{D}4\) and \(k\) \(\mathrm{D}0\) branes, and thus the ADHM variables \((B_{1},B_{2},I,J)\) are interpreted as degrees of freedom of open strings connecting \(\mathrm{D}0\)-\(\mathrm{D}0\) (for \(B_{1},B_{2}\)), \(\mathrm{D}4\)-\(\mathrm{D}0\) (for \(I\)), and \(\mathrm{D}0\)-\(\mathrm{D}4\) (for \(J\)) branes. The ADHM equations are obtained as the BPS equations of this configuration. We remark that the anti-instanton is realized by the \(\overline{\mathrm{D}0}\) brane, which violates the BPS condition.

#### 5.2.2 Regularization of moduli space

We would like to study the moduli space defined here; however, it is known to be non-compact and singular, and it should be regularized for later purposes. For the sake of compactification of the instanton moduli space, we add the point-like instantons, \(\bigsqcup_{\ell=0}^{k}\mathfrak{M}_{n,k-\ell}\times\mathrm{Sym}^{\ell}\,\mathbb{C}^{2}\), which is called the _Uhlenbeck compactification_ (see, e.g., [10]). This is analogous to the compactification of \(\mathbb{R}^{4}\) to \(\mathbb{S}^{4}\) by adding the point at infinity. Another regularization that we need is the resolution of singularities. This can be done by modifying the ADHM equations, \((\mu_{\mathbb{R}},\mu_{\mathbb{C}})=(\zeta_{\neq 0}\mathbbm{1}_{K},0)\), and the corresponding moduli space is given by [14] \[\mathfrak{M}^{\zeta}_{n,k}=\{(B_{1},B_{2},I,J)\in X\mid\mu_{\mathbb{R}}=\zeta_{\neq 0}\mathbbm{1}_{K},\mu_{\mathbb{C}}=0\}/\!\!/\mathrm{U}(k)\,. \tag{5.24}\] This modification has also been discussed in the context of the instanton moduli space on noncommutative \(\mathbb{C}^{2}\) [11]. Another moduli space is that of the rank-\(n\) framed torsion-free sheaves on \(\mathbb{CP}^{2}\), given by the geometric invariant theory (GIT) quotient (see, e.g., [14]), \[\widetilde{\mathfrak{M}}_{n,k}=\{(B_{1},B_{2},I,J)\in X\mid\mu_{\mathbb{C}}=0,(\text{co-stability})\}/\!\!/\mathrm{GL}(K)\,, \tag{5.25}\] which is known to be isomorphic to the resolved moduli space \(\mathfrak{M}^{\zeta}_{n,k}\). The (co-)stability condition is given as follows, \[K=\begin{cases}\mathbb{C}[B_{1},B_{2}]I(N)&(\zeta>0;\text{ stability})\\ \mathbb{C}[B_{1},B_{2}]J(N)^{\vee}&(\zeta<0;\text{ co-stability})\end{cases} \tag{5.26}\] One can obtain the (co-)stability condition from the real part of the ADHM equation, \(\mu_{\mathbb{R}}=\zeta\mathbbm{1}_{K}\), depending on the sign of the deformation parameter \(\zeta\). See, e.g., [19] for details.
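Before moving to the supergroup case, the following minimal numerical sketch (Python with NumPy) illustrates the ADHM construction of Sec. 5.2 for \((n,k)=(2,1)\); the particular choice of \((B_{1},B_{2},I,J)\) and the size parameter `rho` are illustrative. It checks the ADHM equations (5.8) and the factorization \(D^{\dagger}D=\Delta\otimes\mathbb{1}_{\mathbb{C}^{2}}\) of (5.15)-(5.16) at a randomly chosen point.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 1.5                                  # instanton size parameter (illustrative)

# ADHM data for (n, k) = (2, 1): B1, B2 are 1x1, I is 1x2, J is 2x1
B1 = np.array([[0.3 + 0.1j]])
B2 = np.array([[-0.2 + 0.4j]])
I  = np.array([[rho, 0.0]], dtype=complex)
J  = np.array([[0.0], [rho]], dtype=complex)

dag  = lambda m: m.conj().T
comm = lambda a, b: a @ b - b @ a

# ADHM equations (5.8)
mu_R = comm(B1, dag(B1)) + comm(B2, dag(B2)) + I @ dag(I) - dag(J) @ J
mu_C = comm(B1, B2) + I @ J
print(np.allclose(mu_R, 0), np.allclose(mu_C, 0))      # True True

# zero-dimensional Dirac operator (5.11) at a random point (z1, z2)
z1, z2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
one = np.eye(1)
Ddag = np.block([[B1 - z1 * one, B2 - z2 * one, I],
                 [-dag(B2) + np.conj(z2) * one, dag(B1) - np.conj(z1) * one, -dag(J)]])

# D^dagger D should equal Delta x 1_{C^2}, cf. (5.15)-(5.16)
DdagD = Ddag @ dag(Ddag)
Delta = ((B1 - z1) @ dag(B1 - z1) + (B2 - z2) @ dag(B2 - z2) + I @ dag(I))[0, 0]
print(np.allclose(DdagD, Delta * np.eye(2)))           # True
```

Here the choice \(I=(\rho,0)\), \(J=(0,\rho)^{\mathrm{T}}\) solves both moment map equations exactly, so the off-diagonal blocks of \(D^{\dagger}D\) vanish as required.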
In this case, Osborn's formula (5.23) yields \[\frac{1}{8\pi^{2}}\int_{M}\operatorname{str}F\wedge F= \operatorname{str}_{K}\mathbbm{1}_{K}=k_{0}-k_{1}\,, \tag{5.28}\] which implies that \(k_{0}\) and \(k_{1}\) count the positive and negative charge instantons. This is reasonable from string theory perspective discussed in Sec. 5.2.1: The current \((k_{0}|k_{1})\)-instanton configuration in \(\mathrm{U}(n_{0}|n_{1})\)-YM theory would be realized as \(k_{0}\) D0\({}^{+}\), \(k_{1}\) D0\({}^{-}\), \(n_{0}\) D4\({}^{+}\), and \(n_{1}\) D4\({}^{-}\) brane system, which relate the supervector spaces (5.27) to the Chan-Paton spaces. In contrast to the brane-anti-brane system, the current situation does not violate the ASD property, which is compatible with the BPS equation. Meanwhile, the resolved moduli space is given for \((n,k)=(n_{0}|n_{1},k_{0}|k_{1})\) by \[\mathfrak{M}^{\zeta}_{n,k}=\{(B_{1},B_{2},I,J)\in X\mid\mu_{ \mathbb{R}}=(+\zeta\mathbbm{1}_{K_{0}})\oplus(-\zeta\mathbbm{1}_{K_{1}}),\mu_ {\mathbb{C}}=0\}/\!\!/\mathrm{U}(k_{0}|k_{1})\,. \tag{5.29}\] We remark that the deformation parameter is assigned with opposite signs for the positive and negative sectors. Physically speaking, this is because the positive and negative instantons (positive and negative D0 branes) are oppositely charged under the flux yielding the non-commutativity of the space-time manifold via the Seiberg-Witten map [11]. Hence, fixing \(\zeta>0\), this moduli space is isomorphic to the following super-GIT quotient, \[\widetilde{\mathfrak{M}}_{n,k}=\{(B_{1},B_{2},I,J)\in X\mid\mu_{ \mathbb{C}}=0,\text{stability for $K_{0}$, co-stability for $K_{1}$}\}/\!\!/\mathrm{GL}(K)\,. \tag{5.30}\] For \(\zeta<0\), the stability and co-stability conditions are exchanged. ### Equivariant localization Based on the description of the instanton moduli space, we apply the equivariant localization formalism to compute the partition function of supergroup gauge theory. See, e.g., [10] for details of the localization calculus. In the Euclidean path integral formalism, the partition function of \(G\)-gauge theory is given as follows, \[Z=\int\mathfrak{D}A\,\mathrm{e}^{-S[A]}\,, \tag{5.31}\] where we consider the case \(M=\mathbb{R}^{4}\), and the action consists of the YM action and the \(\theta\)-term. Writing the one-form connection in the form of \(A=A_{k}+\delta A\), where we denote the \(k\)-instanton configuration by \(A_{k}\) and the deviation by \(\delta A\) (we assume no \(k\)-dependence), we have the following decomposition of the path integral measure, \(\mathfrak{D}A=\sum_{k=0}^{\infty}\mathfrak{D}A_{k}\mathfrak{D}(\delta A)\). In this decomposition, the weight with the action is written by \(\mathrm{e}^{-S[A]}=\mathfrak{q}^{k}\mathrm{e}^{-S_{\mathrm{adv}}[\delta A]}\), where \(\mathfrak{q}=\exp\!\left(\mathrm{i}\theta-\frac{8\pi^{2}}{g^{2}}\right)=:\exp(2 \pi\mathrm{i}\tau)\), and the deviation part \(S_{\mathrm{dev}}[\delta A]\) starts with \(O(\delta A^{2})\) since \(A_{k}\) gives a solution to the equation of motion. 
Noticing that the integral over \(A_{k}\) can be identified with the integral over the moduli space \(\mathfrak{M}_{G,k}\), we have the following form of the partition function,14 Footnote 14: The anti-instanton partition function shall be obtained by the complex conjugate of the instanton partition function: The weight in this case is given by \(\tilde{\mathfrak{z}}^{k}=\exp k\left(-\mathrm{i}\theta-\frac{8\pi^{2}}{g^{2}}\right)\), and the moduli space integral will be \(\overline{Z_{k}}\) since it has an opposite orientation compared with the original one. \[Z=Z_{\mathrm{inst}}Z_{\mathrm{pert}}\,,\qquad Z_{\mathrm{inst}}=\sum_{k=0}^{ \infty}\mathfrak{q}^{k}Z_{k}\,, \tag{5.32}\] where \(Z_{\mathrm{pert}}\) is the deviation (perturbative) part, while \(Z_{k}\) is called the _instanton partition function_, \[Z_{k}=\int_{\mathfrak{M}_{G,k}}1=\mathrm{vol}(\mathfrak{M}_{G,k})\,. \tag{5.33}\] As mentioned in Sec. 5.2, we should replace the moduli space \(\mathfrak{M}_{G,k}\) with the regularized version. Moreover, noticing that there exist several group actions on the moduli space, the integral is performed as the equivariant integral, and thus \(\mathrm{vol}(\mathfrak{M}_{G,k})\) is understood as the equivariant volume, which can be computed based on the equivariant localization formula, \[\int_{\mathfrak{M}_{G,k}}\alpha=\sum_{\lambda\in\mathfrak{M}_{G,k}^{\mathrm{ T}}}\frac{\iota^{*}\alpha}{e(T_{\lambda}\mathfrak{M}_{G,k})}\,, \tag{5.34}\] where \(\alpha\) is an equivariant cohomology class, \(\iota^{*}\) is a pull-back of the inclusion map, \(\iota:\,o\times\mathfrak{M}_{G,k}\hookrightarrow\mathbb{C}^{2}\times\mathfrak{ M}_{G,k}\), \(e\) is the equivariant Euler class, \(\mathfrak{M}_{G,k}^{\mathrm{T}}\) is a set of the equivariant \(\mathsf{T}\)-fixed points, and \(T_{\lambda}\mathfrak{M}_{G,k}\) is the tangent bundle to the moduli space at the fixed point \(\lambda\). The instanton partition function corresponds to the trivial insertion \(\alpha=1\). In the presence of the fundamental matter, we add the equivariant Euler class of the matter bundle \(\mathsf{M}\), whose fiber is the space of virtual zero modes of the Dirac operator in the instanton background. For \(\mathcal{N}=2^{*}\) theory involving the adjoint matter, we insert the shifted tangent bundle, \(\mathsf{M}_{\mathrm{adj}}\otimes T\mathfrak{M}_{G,k}\), where \(\mathsf{M}_{\mathrm{adj}}\) is a line bundle whose Chern root is identified with the adjoint mass parameter.15 Footnote 15: In the five-dimensional (equivariant K-theory) convention discussed below, this case corresponds to Hirzebruch’s \(\chi_{y}\)-genus of the instanton moduli space with identifying \(y=\mathrm{e}^{m}\). #### 5.4.1 Index functor The formalism discussed above is generalized to five and six dimensional theories by replacing the equivariant Euler class with the corresponding multiplicative index functor: For the vector bundle \(\mathbf{X}\) with the character, \(\mathrm{ch}\,\mathbf{X}=\sum_{i=1}^{\mathrm{rk}\,\mathbf{X}}n_{i}\mathrm{e}^{x _{i}}\) with multiplicity \(n_{i}\in\mathbb{Z}\), we define \[\mathbb{I}[\mathbf{X}]=\prod_{i\in[\mathrm{rk}\,\mathbf{X}]}\left[x_{i}\right] ^{n_{i}}, \tag{5.35}\] where we apply the notation,16 Footnote 16: Not to be confused with the notation (1.1). 
\[[x]=\begin{cases}x&(4d)\\ (1-\mathrm{e}^{-x})&(5d)\\ \theta(e^{-x};p)&(6d)\end{cases} \tag{5.36}\]

The instanton partition function is then given by summation over the topological sectors,

\[Z_{k}=\sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}_{G,k}}Z_{\lambda}\,,\qquad Z_{\lambda}=\mathbb{I}[-T_{\lambda}\mathfrak{M}_{G,k}]\,. \tag{5.37}\]

The five and six dimensional conventions correspond to the equivariant K-theory and the equivariant elliptic cohomology, respectively. The five-dimensional index is given by the character of the alternating sum of anti-symmetrizations of the dual bundle,17

Footnote 17: For the vector bundle \(\mathbf{X}\) with the character, \(\operatorname{ch}\mathbf{X}=\sum_{i=1}^{\operatorname{rk}\mathbf{X}}n_{i}\mathrm{e}^{x_{i}}\), the dual bundle is defined to have the character, \(\operatorname{ch}\mathbf{X}^{\vee}=\sum_{i=1}^{\operatorname{rk}\mathbf{X}}n_{i}\mathrm{e}^{-x_{i}}\).

\[\mathbb{I}[\mathbf{X}]\ \stackrel{{ 5d}}{{=}}\ \operatorname{ch}\wedge\mathbf{X}^{\vee}=\sum_{i=0}^{\infty}(-1)^{i}\operatorname{ch}\wedge^{i}\mathbf{X}^{\vee}\,. \tag{5.38}\]

In this case, we have the K-theoretic instanton partition function as an equivariant Euler characteristic over the instanton moduli space [22, 23],

\[Z_{k}=\sum_{i=0}^{\infty}(-1)^{i}\operatorname{ch}H^{i}(\mathfrak{M}_{G,k},\mathcal{O})\,, \tag{5.39}\]

where we denote the structure sheaf of the moduli space by \(\mathcal{O}\). From the point of view of the localization formula (5.34), we may rewrite the partition function as follows,18

Footnote 18: The \(\widehat{A}\) genus is typically used for the five-dimensional convention instead of the Todd genus. See, e.g., [17].

\[Z_{k}=\int_{\mathfrak{M}_{G,k}}\operatorname{td}(T\mathfrak{M}_{G,k})\,, \tag{5.40}\]

where we define the Todd class,

\[\operatorname{td}(\mathbf{X})=\prod_{i=1}^{\operatorname{rk}\mathbf{X}}\frac{x_{i}}{1-\mathrm{e}^{-x_{i}}}=e(\mathbf{X})\mathbb{I}[\mathbf{X}]^{-1}=e(\mathbf{X})\mathbb{I}[-\mathbf{X}]\,. \tag{5.41}\]

Recalling the Hirzebruch-Riemann-Roch formula for the holomorphic Euler characteristic of a holomorphic vector bundle \(E\) on \(\mathfrak{M}\),19

Footnote 19: We may write the K-theoretic partition function (5.39) as an alternating sum of higher direct images and the pushforward of the projection map from the moduli space to the point (derived pushforward), which can be understood as the Grothendieck–Hirzebruch–Riemann–Roch formula.

\[\chi(\mathfrak{M},E)=\int_{\mathfrak{M}}\operatorname{ch}E\,\operatorname{td}(T\mathfrak{M})\,, \tag{5.42}\]

the K-theoretic partition function is understood as an integral of "1" analogously to the cohomological version (5.33). We remark that the equivariant Euler class contribution in the denominator of the localization formula (5.34) is canceled with that in the numerator of the Todd class. See also [10] for details. For the six-dimensional case, the insertion of a top form is necessary to be compatible with the modular property. Hence, for \(\operatorname{U}(n)\) gauge theory, we need to incorporate \(2n\) fundamental matters, or a single adjoint matter, which correspond to the superconformal matter content in the four-dimensional convention. We remark that these index formulations are understood as the Witten index of the supersymmetric ADHM quiver quantum mechanics on the circle \(\mathbb{S}^{1}\) and the elliptic genus of the ADHM quiver gauge theory on the elliptic curve \(\mathcal{E}\) with the nome \(p\in\mathbb{C}^{\times}\).
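To make the index functor concrete, here is a minimal sketch evaluating \(\mathbb{I}[\mathbf{X}]=\prod_{i}[x_{i}]^{n_{i}}\) in the four- and five-dimensional conventions of (5.36); the encoding of a character as (multiplicity, Chern root) pairs and the function names are illustrative choices, not notation from the text.

```python
import numpy as np

def bracket(x, convention="5d"):
    # The building block [x] of (5.36); the 6d case would need the theta function.
    if convention == "4d":
        return x
    if convention == "5d":
        return 1.0 - np.exp(-x)
    raise ValueError("only the 4d and 5d conventions are sketched here")

def index(character, convention="5d"):
    # ch X = sum_i n_i e^{x_i} is encoded as a list of (n_i, x_i) pairs,
    # and the index is the product of [x_i]^{n_i}.
    val = 1.0
    for mult, x in character:
        val *= bracket(x, convention) ** mult
    return val

# Example: ch X = e^{x1} - 2 e^{x2} is encoded as [(1, x1), (-2, x2)];
# negative multiplicities put the corresponding factor in the denominator.
print(index([(1, 0.3), (-2, 1.5)], convention="5d"))
```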
#### 5.4.2 Equivariant fixed points

We turn to more details on the instanton moduli space. We denote the fiber of the cotangent bundle at the marked point \(o\in\mathbb{C}^{2}\) by

\[Q=T_{o}^{\vee}\mathbb{C}^{2}=Q_{1}\oplus Q_{2} \tag{5.43}\]

with the character \(\operatorname{ch}Q_{i}=q_{i}=\operatorname{e}^{\epsilon_{i}}\) (\(i=1,2\)). We use the notation

\[Q_{12}=Q_{1}\otimes Q_{2}=\det Q\,,\qquad q_{12}=q_{1}q_{2}\quad(\epsilon_{12}=\epsilon_{1}+\epsilon_{2})\,, \tag{5.44a}\]
\[P_{i}=\wedge Q_{i}=1-Q_{i}\quad(i=1,2)\,,\qquad P_{12}=P_{1}P_{2}\,. \tag{5.44b}\]

Then, the equivariant group actions on the moduli space are given by

\[g\cdot(B_{1},B_{2},I,J) =(gB_{1}g^{-1},gB_{2}g^{-1},gI,Jg^{-1})\,,\qquad g\in\operatorname{GL}(K)\,, \tag{5.45a}\]
\[h\cdot(B_{1},B_{2},I,J) =(B_{1},B_{2},Ih^{-1},hJ)\,,\qquad h\in\operatorname{GL}(N)\,, \tag{5.45b}\]
\[(q_{1},q_{2})\cdot(B_{1},B_{2},I,J) =(q_{1}^{-1}B_{1},q_{2}^{-1}B_{2},I,q_{12}^{-1}J)\,,\qquad(q_{1},q_{2})\in\mathsf{T}_{Q}\subset\operatorname{GL}(Q)\,. \tag{5.45c}\]

Parametrizing these group elements as \(g=\operatorname{e}^{\phi}\), \(h=\operatorname{e}^{a}\), \(q_{i}=\operatorname{e}^{\epsilon_{i}}\) (\(i=1,2\)), namely \(\phi\in\operatorname{Lie}\operatorname{GL}(K)\), \(a\in\operatorname{Lie}\operatorname{GL}(N)\), \((\epsilon_{1},\epsilon_{2})\in\operatorname{Lie}\mathsf{T}_{Q}\), the fixed point equations are given as follows,

\[[\phi,B_{i}]-\epsilon_{i}B_{i} =0\,,\qquad(i=1,2) \tag{5.46a}\]
\[\phi I-Ia =0\,, \tag{5.46b}\]
\[-J\phi+(a-\epsilon_{12})J =0\,. \tag{5.46c}\]

We apply the basis diagonalizing \(a=\bigoplus_{\alpha\in[n]}a_{\alpha}\), which corresponds to \(N=\bigoplus_{\alpha\in[n]}N_{\alpha}\), and \((a_{\alpha})_{\alpha\in[n]}\in\operatorname{Lie}\mathsf{T}_{N}\). We have the (left/right) eigenvalue equations, \(\phi I_{\alpha}=a_{\alpha}I_{\alpha}\) and \(J_{\alpha}\phi=J_{\alpha}(a_{\alpha}-\epsilon_{12})\). Moreover, using (5.46a), we obtain

\[\phi B_{1}^{i-1}B_{2}^{j-1}I_{\alpha} =(a_{\alpha}+(i-1)\epsilon_{1}+(j-1)\epsilon_{2})B_{1}^{i-1}B_{2}^{j-1}I_{\alpha}\,, \tag{5.47a}\]
\[J_{\alpha}B_{1}^{i-1}B_{2}^{j-1}\phi =J_{\alpha}B_{1}^{i-1}B_{2}^{j-1}(a_{\alpha}-i\epsilon_{1}-j\epsilon_{2})\,. \tag{5.47b}\]

Applying the stability condition, \(K=\mathbb{C}[B_{1},B_{2}]I(N)\), and recalling \(\dim K=k\), there exist only \(k\) eigenvectors in the form of \(B_{1}^{i-1}B_{2}^{j-1}I_{\alpha}\), which are parametrized by an \(n\)-tuple partition, \(\lambda=(\lambda_{\alpha})_{\alpha\in[n]}\in\mathfrak{M}_{n,k}^{\mathsf{T}}\) with \(|\lambda|=\sum_{\alpha\in[n]}|\lambda_{\alpha}|=k\), and hence \(B_{1}^{i-1}B_{2}^{j-1}I_{\alpha}=0\) for \((i,j)\not\in\lambda_{\alpha}\) and also \(J_{\alpha}=0\), \(\alpha\in[n]\). Namely, we have \(K=\operatorname{Span}\{B_{1}^{i-1}B_{2}^{j-1}I_{\alpha}\}_{\alpha\in[n],(i,j)\in\lambda_{\alpha}}\). Starting with the co-stability condition, we have the left eigenvectors in the form of \(J_{\alpha}B_{1}^{i-1}B_{2}^{j-1}\) with \(I_{\alpha}=0\). We remark that together with the ADHM equation, \(\mu_{\mathbb{C}}=[B_{1},B_{2}]+IJ=0\), we have \([B_{1},B_{2}]=0\) at the fixed point.
Therefore, we obtain the characters of the vector spaces \((N,K)\) as follows,

\[\operatorname{ch}K\Big{|}_{\lambda}=\sum_{\alpha\in[n]}\operatorname{ch}K_{\alpha}\Big{|}_{\lambda_{\alpha}}\,,\qquad\operatorname{ch}N=\sum_{\alpha\in[n]}e^{a_{\alpha}}\,, \tag{5.48}\]

where we have

\[\operatorname{ch}K_{\alpha}\Big{|}_{\lambda_{\alpha}}=\sum_{(i,j)\in\lambda_{\alpha}}\begin{cases}\mathrm{e}^{a_{\alpha}}q_{1}^{i-1}q_{2}^{j-1}&\text{(with stability)}\\ \mathrm{e}^{a_{\alpha}}q_{1}^{-i}q_{2}^{-j}&\text{(with co-stability)}\end{cases} \tag{5.49}\]

This structure has a connection with the ideal \(\mathfrak{l}_{\lambda}\) generated by all monomials outside the partition, \(\{z_{1}^{i-1}z_{2}^{j-1}\}_{(i,j)\not\in\lambda}\), as follows,

\[K_{\alpha}\Big{|}_{\lambda_{\alpha}}=N_{\alpha}\otimes\begin{cases}\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}}&\text{(stability)}\\ Q_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}})^{\vee}&\text{(co-stability)}\end{cases}\,. \tag{5.50}\]

We remark that \(\mathfrak{l}_{\emptyset}=\mathbb{C}[z_{1},z_{2}]\).

#### 5.4.3 Tangent bundle

We next describe the tangent bundle to the moduli space. From the ADHM construction, we have the following chain complex,

\[C_{0}\ \xrightarrow{\ d_{1}\ }\ C_{1}\ \xrightarrow{\ d_{2}\ }\ C_{2}\,,\qquad\begin{array}{l}C_{0}=\operatorname{Hom}(K,K)\,,\\ C_{1}=\operatorname{Hom}(K,K)\otimes Q^{\vee}\oplus\operatorname{Hom}(N,K)\oplus\operatorname{Hom}(K,N)\otimes Q_{12}^{\vee}\,,\\ C_{2}=\operatorname{Hom}(K,K)\otimes Q_{12}^{\vee}\,,\end{array} \tag{5.51}\]

where \(C_{0,1,2}\) correspond to \(\operatorname{GL}(K)\) action, ADHM variables, and the moment map \(\mu_{\mathbb{C}}\), and we define

\[d_{1}(\xi)=\begin{pmatrix}[\xi,B_{1}]\\ [\xi,B_{2}]\\ \xi I\\ -J\xi\end{pmatrix}\,,\qquad d_{2}\begin{pmatrix}b_{1}\\ b_{2}\\ i\\ j\end{pmatrix}=[B_{1},b_{2}]+[b_{1},B_{2}]+Ij+iJ\,. \tag{5.52}\]

These maps are the infinitesimal \(\operatorname{GL}(K)\) action parametrized by \(\xi\in\operatorname{Lie}\operatorname{GL}(K)\) and the differential of the moment map \(\mu_{\mathbb{C}}\). We can check that \(d_{2}\circ d_{1}(\xi)=[\xi,\mu_{\mathbb{C}}]=0\) due to the ADHM equation, \(\mu_{\mathbb{C}}=0\). Then, the tangent bundle is identified with the middle cohomology with respect to this sequence,

\[T\mathfrak{M}_{n,k}=\operatorname{Ker}d_{2}/\operatorname{Im}d_{1}=C_{1}-C_{0}-C_{2}=N^{\vee}K+Q_{12}^{\vee}K^{\vee}N-P_{12}^{\vee}K^{\vee}K\,. \tag{5.53}\]

There exists another construction of the tangent bundle based on the observable sheaf \(\mathsf{Y}_{o}\) on \(\mathfrak{M}_{n,k}\times o\), obtained from the universal sheaf \(\mathsf{Y}_{\mathbb{C}^{2}}\) on \(\mathfrak{M}_{n,k}\times\mathbb{C}^{2}\) via the localization,

\[\mathsf{Y}\equiv\mathsf{Y}_{o}=N-P_{12}K\,,\qquad\mathsf{Y}_{\mathbb{C}^{2}}=\frac{\mathsf{Y}}{P_{12}}\,, \tag{5.54}\]

where we use the notation that \((N,K)\) also means a bundle whose fiber is given by the vector space \(N\) and \(K\), respectively. Several equivalent expressions are available for the character of the observable sheaf at the fixed point \(\lambda\in\mathfrak{M}_{n,k}^{\mathsf{T}}\).
The first expression is given by

\[\operatorname{ch}\mathsf{Y}\Big{|}_{\lambda}=\operatorname{ch}(P_{1}\mathsf{X}) \tag{5.55}\]

where we define the partial reduction of the universal sheaf \(\mathsf{X}=\mathsf{Y}/P_{1}=\mathsf{Y}_{\mathbb{C}^{2}}P_{2}\), and the character

\[\operatorname{ch}\mathsf{X}=\sum_{x\in\mathcal{X}_{\lambda}}x\,,\qquad\mathcal{X}_{\lambda}=\{x_{\alpha,i}=\mathrm{e}^{a_{\alpha}}q_{1}^{i-1}q_{2}^{\lambda_{\alpha,i}}\}_{\alpha\in[n],i\in\mathbb{N}}\,. \tag{5.56}\]

In order to obtain this expression, we need the condition \(|q_{1}|<1\) to have \(\sum_{i=1}^{\infty}q_{1}^{i-1}=1/(1-q_{1})\). If \(|q_{2}|<1\), we may apply another expression based on the transposed partition of \(\lambda\) denoted by \(\check{\lambda}\),

\[\operatorname{ch}\mathsf{Y}\Big{|}_{\lambda}=\operatorname{ch}(P_{2}\check{\mathsf{X}})\,,\qquad\operatorname{ch}\check{\mathsf{X}}=\sum_{x\in\check{\mathcal{X}}_{\lambda}}x\,,\qquad\check{\mathcal{X}}_{\lambda}=\{x_{\alpha,i}=\mathrm{e}^{a_{\alpha}}q_{1}^{\check{\lambda}_{\alpha,j}}q_{2}^{j-1}\}_{\alpha\in[n],j\in\mathbb{N}}\,. \tag{5.57}\]

Applying the co-stability condition, we instead obtain

\[\operatorname{ch}\mathsf{Y}\Big{|}_{\lambda}=\begin{cases}\operatorname{ch}(P_{1}^{\vee}\mathsf{X})\,,&\operatorname{ch}\mathsf{X}=\sum_{x\in\mathcal{X}_{\lambda}}x\,,\quad\mathcal{X}_{\lambda}=\{x_{\alpha,i}=\mathrm{e}^{a_{\alpha}}q_{1}^{-i+1}q_{2}^{-\lambda_{\alpha,i}}\}_{\alpha\in[n],i\in\mathbb{N}}&(|q_{1}|>1)\\ \operatorname{ch}(P_{2}^{\vee}\check{\mathsf{X}})\,,&\operatorname{ch}\check{\mathsf{X}}=\sum_{x\in\check{\mathcal{X}}_{\lambda}}x\,,\quad\check{\mathcal{X}}_{\lambda}=\{x_{\alpha,i}=\mathrm{e}^{a_{\alpha}}q_{1}^{-\check{\lambda}_{\alpha,j}}q_{2}^{-j+1}\}_{\alpha\in[n],j\in\mathbb{N}}&(|q_{2}|>1)\\ \end{cases}\,. \tag{5.58}\]

Another expression is given as a finite sum,

\[\operatorname{ch}\mathsf{Y}\Big{|}_{\lambda}=\begin{cases}\sum_{x\in\mathcal{X}_{\partial_{+}\lambda}}x-\sum_{x\in\mathcal{X}_{\partial_{-}\lambda}}xq_{12}&\text{(stability)}\\ \sum_{x\in\mathcal{X}_{\partial_{+}\lambda}}xq_{12}-\sum_{x\in\mathcal{X}_{\partial_{-}\lambda}}x&\text{(co-stability)}\\ \end{cases} \tag{5.59}\]

where we define the addable and removable boundary of the partition,

\[\partial_{+}\lambda=\{(i,\lambda_{\alpha,i}+1)\mid\lambda_{\alpha,i-1}>\lambda_{\alpha,i}\}_{\alpha\in[n],i\in\mathbb{N}}\,,\quad\partial_{-}\lambda=\{(i,\lambda_{\alpha,i})\mid\lambda_{\alpha,i+1}<\lambda_{\alpha,i}\}_{\alpha\in[n],i\in\mathbb{N}} \tag{5.60}\]

where we put \(\lambda_{\alpha,0}=\infty\), and we define

\[\mathcal{X}_{\partial_{+}\lambda} =\begin{cases}\{\mathrm{e}^{a_{\alpha}}q_{1}^{i-1}q_{2}^{j-1}\mid(i,j)\in\partial_{+}\lambda\}&\text{(stability)}\\ \{\mathrm{e}^{a_{\alpha}}q_{1}^{-i}q_{2}^{-j}\mid(i,j)\in\partial_{+}\lambda\}&\text{(co-stability)}\end{cases}\,, \tag{5.61a}\]
\[\mathcal{X}_{\partial_{-}\lambda} =\begin{cases}\{\mathrm{e}^{a_{\alpha}}q_{1}^{i-1}q_{2}^{j-1}\mid(i,j)\in\partial_{-}\lambda\}&\text{(stability)}\\ \{\mathrm{e}^{a_{\alpha}}q_{1}^{-i}q_{2}^{-j}\mid(i,j)\in\partial_{-}\lambda\}&\text{(co-stability)}\end{cases}\,. \tag{5.61b}\]

The tangent bundle is then obtained from the observable sheaf as follows,

\[\mathsf{V}:=\frac{\mathsf{Y}^{\vee}\mathsf{Y}}{P_{12}}=-T\mathfrak{M}_{n,k}+\frac{N^{\vee}N}{P_{12}}\,. \tag{5.62}\]

The last term \(N^{\vee}N/P_{12}\) is understood as the perturbative contribution. The combination of \(\mathsf{Y}^{\vee}\mathsf{Y}\) corresponds to the adjoint representation of GL group.
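The finite-sum expression above is easy to test in an example. The following sketch (with illustrative helper names and hypothetical numerical values) computes the addable/removable boundary of a partition following (5.60) and checks, for a \(\mathrm{U}(1)\) configuration, that the boundary sum (5.59) agrees with \(\operatorname{ch}\mathsf{Y}|_{\lambda}=\operatorname{ch}N-P_{12}\operatorname{ch}K|_{\lambda}\).

```python
import numpy as np

def boundaries(lam):
    """Addable/removable boxes of a partition, following (5.60), with lambda_0 = infinity."""
    rows = list(lam) + [0]
    add, rem = [], []
    for i in range(1, len(rows) + 1):
        prev = float("inf") if i == 1 else rows[i - 2]
        cur = rows[i - 1]
        nxt = rows[i] if i < len(rows) else 0
        if prev > cur:
            add.append((i, cur + 1))
        if cur > nxt:
            rem.append((i, cur))
    return add, rem

def weight(i, j, a, q1, q2):
    # The box weight e^{a} q1^{i-1} q2^{j-1} in the stability convention.
    return np.exp(a) * q1 ** (i - 1) * q2 ** (j - 1)

lam, a, q1, q2 = (3, 1), 0.2, 0.6, 0.8
add, rem = boundaries(lam)

# ch Y|_lambda via Y = N - P12 K, and via the boundary sum (5.59); the two agree.
chK = sum(weight(i, j, a, q1, q2) for i, row in enumerate(lam, 1) for j in range(1, row + 1))
chY_sheaf = np.exp(a) - (1 - q1) * (1 - q2) * chK
chY_boundary = sum(weight(i, j, a, q1, q2) for (i, j) in add) \
    - q1 * q2 * sum(weight(i, j, a, q1, q2) for (i, j) in rem)
print(add, rem)
print(chY_sheaf, chY_boundary)
```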
In order to construct the SO/Sp gauge theory, we consider the anti-symmetric and symmetric tensor product, which yield the adjoint representation of SO/Sp group [13, 14, 15],

\[\mathsf{V}_{\mathrm{SO}}=\frac{1}{2}\frac{\mathsf{Y}^{2}-\mathsf{Y}^{[2]}}{P_{12}}\,,\qquad\mathsf{V}_{\mathrm{Sp}}=\frac{1}{2}\frac{\mathsf{Y}^{2}+\mathsf{Y}^{[2]}}{P_{12}}\,, \tag{5.63}\]

where we denote the degree-\(p\) Adams operation to \(X\) by \(X^{[p]}\). Hence, the tangent bundle at the fixed point is given by

\[\mathsf{V}_{\lambda}=-T_{\lambda}\mathfrak{M}_{n,k}+\frac{N^{\vee}N}{P_{12}}=\frac{\mathsf{Y}^{\vee}\mathsf{Y}}{P_{12}}\Bigg{|}_{\lambda}=\frac{P_{1}^{\vee}}{P_{2}}\mathsf{X}^{\vee}\mathsf{X}\,. \tag{5.64}\]

Denoting the total fixed point configuration space by \(\mathfrak{M}^{\mathsf{T}}=\bigsqcup_{k=0}^{\infty}\mathfrak{M}^{\mathsf{T}}_{n,k}\), the full partition function is given as follows.

\(\boldsymbol{\frown}\) **Equivariant index formula.** The full partition function of pure SYM gauge theory is given by summation over the fixed point contributions,

\[Z=\sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}}\mathfrak{q}^{|\lambda|}\,\mathbb{I}[\mathsf{V}_{\lambda}]\ \stackrel{{\eqref{eq:Z-1}}}{{=}}\ \sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}}\mathfrak{q}^{|\lambda|}\,\prod_{(x,x^{\prime})\in\mathcal{X}_{\lambda}\times\mathcal{X}_{\lambda}}\frac{(x^{\prime}/x;q_{2}^{-1})_{\infty}}{(q_{1}x^{\prime}/x;q_{2}^{-1})_{\infty}}\,,\qquad|q_{2}|>1\,, \tag{5.65}\]

where \((z;q)_{\infty}\) is the \(q\)-factorial (1.4) interpreted as the \(q\)-deformation of the gamma function. For the four- and six-dimensional cases, we obtain a similar infinite product formula based on the gamma and elliptic gamma functions, respectively. See, e.g., [14].

#### 5.4.4 Combinatorial formula

We have seen that the partition function is obtained by applying the index functor to the tangent bundle. The full partition function is written in the closed form, whereas it involves infinite product as in (5.65). In fact, the infinite product contribution originates from the perturbative part, and one can extract the finite rational part, which is identified with the instanton partition function. For this purpose, we evaluate the character of the tangent bundle with the fixed point configuration (see, e.g. [143]),

\[\mathrm{ch}\,T_{\lambda}\mathfrak{M}_{n,k}=\sum_{\alpha,\beta\in[n]}\mathrm{e}^{a_{\alpha\beta}}\Xi(\lambda_{\alpha},\lambda_{\beta};q_{1},q_{2})\,, \tag{5.66}\]

where we denote \(a_{\alpha\beta}=a_{\alpha}-a_{\beta}\). The combinatorial factor is given by

\[\Xi(\lambda_{\alpha},\lambda_{\beta};q_{1,2}) =\mathrm{ch}\left(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}}+Q_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}})^{\vee}-P_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}})^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}})\right)\]
\[=\sum_{s\in\lambda_{\alpha}}q_{1}^{\mathbf{l}_{\beta}(s)}q_{2}^{-\mathbf{a}_{\alpha}(s)-1}+\sum_{s\in\lambda_{\beta}}q_{1}^{-\mathbf{l}_{\alpha}(s)-1}q_{2}^{\mathbf{a}_{\beta}(s)}\,, \tag{5.67}\]

where we define the arm and leg lengths,

\[\mathbf{a}_{\alpha}(i,j)=\lambda_{\alpha,i}-j\,,\qquad\mathbf{l}_{\alpha}(i,j)=\check{\lambda}_{\alpha,j}-i\,. \tag{5.68}\]

We remark that the same combinatorial factor is obtained starting from the co-stability condition.
We observe the following relation,

\[\Xi(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi(\check{\lambda}_{\alpha},\check{\lambda}_{\beta};q_{2,1})=q_{12}^{-1}\Xi(\lambda_{\beta},\lambda_{\alpha};q_{1,2}^{-1})\,. \tag{5.69}\]

Hence, applying the index functor, we obtain the \(k\)-instanton partition function, a.k.a., _Nekrasov partition function_.

\(\boldsymbol{\frown}\) **Nekrasov partition function (combinatorial formula) [144, 15]** The instanton partition function has the following combinatorial formula (Nekrasov partition function),

\[Z_{k}=\sum_{\lambda\in\mathfrak{M}_{n,k}^{\mathsf{T}}}\mathbb{I}[-T_{\lambda}\mathfrak{M}_{n,k}]=\sum_{\lambda\in\mathfrak{M}_{n,k}^{\mathsf{T}}}\prod_{\alpha,\beta\in[n]}\mathsf{N}_{\lambda_{\alpha}\lambda_{\beta}}(a_{\alpha\beta};\epsilon_{1,2})^{-1}\,, \tag{5.70}\]

where we define the combinatorial factor,

\[\mathsf{N}_{\lambda_{\alpha}\lambda_{\beta}}(z;\epsilon_{1,2})=\prod_{s\in\lambda_{\alpha}}[z+\epsilon_{1}\mathbf{l}_{\beta}(s)-\epsilon_{2}(\mathbf{a}_{\alpha}(s)+1)]\]
\[\qquad\times\prod_{s\in\lambda_{\beta}}[z-\epsilon_{1}(\mathbf{l}_{\alpha}(s)+1)+\epsilon_{2}\mathbf{a}_{\beta}(s)]\,. \tag{5.71}\]

#### 5.4.5 Contour integral formula

It is known that the instanton partition function can be described in the form of a contour integral, a.k.a., the LMNS (Losev-Moore-Nekrasov-Shatashvili) formula. For this purpose, we use the vector bundles on the moduli space without substituting the fixed point value, i.e., \(\mathrm{ch}\,K=\sum_{I\in[k]}\mathrm{e}^{\phi_{I}}\). Then, we compute the index functor of the tangent bundle to the \(k\)-instanton moduli space, which gives rise to the contour integral over the maximal Cartan torus \(\mathsf{T}_{K}\subset\mathrm{GL}(K)\).

#### \(\boldsymbol{\sim}\) LMNS formula (contour integral formula) [11, 12, 13]

The instanton partition function has the following contour integral formula (LMNS formula),

\[Z_{k}=\mathbb{I}[-T\mathfrak{M}_{n,k}]=\frac{1}{k!}\frac{[-\epsilon_{12}]^{k}}{[-\epsilon_{1,2}]^{k}}\oint_{\mathsf{T}_{K}}\mathrm{d}\phi\,\frac{1}{P(\underline{\phi})\widetilde{P}(\underline{\phi}+\epsilon_{12})}\prod_{I\neq J}^{k}\mathcal{S}(\phi_{IJ})^{-1}\,, \tag{5.72}\]

where we define the gauge polynomials

\[P(\underline{\phi})=\mathbb{I}[N^{\vee}K]=\prod_{I\in[k],\alpha\in[n]}[\phi_{I}-a_{\alpha}]\,,\quad\widetilde{P}(\underline{\phi})=\mathbb{I}[K^{\vee}N]=\prod_{I\in[k],\alpha\in[n]}[-\phi_{I}+a_{\alpha}] \tag{5.73}\]

and the \(\mathcal{S}\)-function,

\[\mathcal{S}(z)=\frac{[z-\epsilon_{1,2}]}{[z][z-\epsilon_{12}]}\,. \tag{5.74}\]

We use the notation, \([z-\epsilon_{1,2}]=[z-\epsilon_{1}][z-\epsilon_{2}]\). The integral measure is defined as \(\mathrm{d}\phi=\prod_{I\in[k]}\mathrm{d}\phi_{I}/2\pi\mathrm{i}\), which is understood as the index of the zero mode appearing from \(K^{\vee}K\). We have the Weyl group volume, \(k!=|\mathfrak{S}_{k}|\). In fact, the poles of the integrand are consistent with the fixed points, such that we take the residues at \(\phi_{I}=\phi_{I-1}+\epsilon_{1,2}\) or \(\phi_{I}=a_{\alpha}\), which yields the map, \(\{\phi_{I}\}_{I\in[k]}\to\{a_{\alpha}+(i-1)\epsilon_{1}+(j-1)\epsilon_{2}\}_{\alpha\in[n],(i,j)\in\lambda_{\alpha}}\). This multi-variable contour integral is also understood as the Jeffrey-Kirwan residue prescription [10]. See, e.g., [1, 1, 1, 13] for details.
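As a quick consistency check of the combinatorial formula above, the following sketch evaluates the four-dimensional Nekrasov factor and the lowest \(\mathrm{U}(1)\) instanton sums; the helper names and the numerical values of \(\epsilon_{1,2}\) are illustrative, and the expected answers \(Z_{1}=1/\epsilon_{1}\epsilon_{2}\) and \(Z_{2}=1/2\epsilon_{1}^{2}\epsilon_{2}^{2}\) are the standard \(\mathrm{U}(1)\) values.

```python
def arm(lam, i, j):                 # a(i, j) = lambda_i - j
    return lam[i - 1] - j

def leg(lam, i, j):                 # l(i, j) = (transposed lambda)_j - i
    return sum(1 for row in lam if row >= j) - i

def boxes(lam):
    return [(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)]

def nekrasov_factor(lam_a, lam_b, z, e1, e2):
    """4d Nekrasov factor N_{lambda_a lambda_b}(z), in the convention of (5.71)."""
    val = 1.0
    for (i, j) in boxes(lam_a):
        val *= z + e1 * leg(lam_b, i, j) - e2 * (arm(lam_a, i, j) + 1)
    for (i, j) in boxes(lam_b):
        val *= z - e1 * (leg(lam_a, i, j) + 1) + e2 * arm(lam_b, i, j)
    return val

def partitions(k):
    """All partitions of k as weakly decreasing tuples."""
    if k == 0:
        return [()]
    result = []
    def rec(rem, maxpart, acc):
        if rem == 0:
            result.append(tuple(acc))
            return
        for p in range(min(rem, maxpart), 0, -1):
            rec(rem - p, p, acc + [p])
    rec(k, k, [])
    return result

# U(1) checks: Z_1 = 1/(e1 e2), Z_2 = 1/(2 e1^2 e2^2)
e1, e2 = 0.7, -1.3
for k in (1, 2):
    Zk = sum(1.0 / nekrasov_factor(lam, lam, 0.0, e1, e2) for lam in partitions(k))
    print(k, Zk)
```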
#### 5.4.6 Quiver gauge theory

Using the observable sheaf, we can similarly obtain the hypermultiplet contribution to the partition function. For the matter bundles \(\mathsf{M}\) and \(\widetilde{\mathsf{M}}\), we have the (anti)fundamental hypermultiplet contribution as follows,

\[\mathsf{H}=-\frac{\mathsf{M}^{\vee}\mathsf{Y}}{P_{12}}\,,\qquad\widetilde{\mathsf{H}}=-\frac{\mathsf{Y}^{\vee}\widetilde{\mathsf{M}}}{P_{12}}\,, \tag{5.75}\]

where the corresponding Chern roots are identified with the fundamental mass parameters,

\[\mathrm{ch}\,\mathsf{M}=\sum_{f\in[n^{F}]}\mathrm{e}^{m_{f}}\,,\qquad\mathrm{ch}\,\widetilde{\mathsf{M}}=\sum_{f\in[\hat{n}^{F}]}\mathrm{e}^{\hat{m}_{f}}\,. \tag{5.76}\]

For quiver gauge theory of type \(\Gamma=(\Gamma_{0},\Gamma_{1})\) with \(\Gamma_{0}=\{\text{nodes}\}\) and \(\Gamma_{1}=\{\text{edges}\}\), we have the instanton moduli space for each node, and the total moduli space is given by \(\mathfrak{M}_{\underline{G},\underline{k}}=\bigsqcup_{i\in\Gamma_{0}}\mathfrak{M}_{G_{i},k_{i}}\) [14, 15]. In this case, we have the observable sheaf for each node, \((\mathsf{Y}_{i})_{i\in\Gamma_{0}}\), and the vector multiplet and the bifundamental hypermultiplet contributions are given by

\[\mathsf{V}_{i}=\frac{\mathsf{Y}_{i}^{\vee}\mathsf{Y}_{i}}{P_{12}}\,,\qquad\mathsf{H}_{e:i\to j}=-\mathsf{M}_{e}\,\frac{\mathsf{Y}_{i}^{\vee}\mathsf{Y}_{j}}{P_{12}}\,, \tag{5.77}\]

where \(\mathsf{M}_{e}\) is a line bundle assigned to the edge \(e\in\Gamma_{1}\) with the Chern root identified with the bifundamental mass parameter \(m_{e}\). For a generic quiver (except for the loop/cyclic cases), we can put \(m_{e}\) to be zero by a shift of the Coulomb moduli parameters. The total tangent bundle is thus given by [14]

\[-T\mathfrak{M}_{\underline{G},\underline{k}}=\sum_{i\in\Gamma_{0}}\mathsf{V}_{i}+\sum_{e:i\to j}\mathsf{H}_{e:i\to j}=\sum_{i,j\in\Gamma_{0}}\frac{\mathsf{Y}_{i}^{\vee}c_{ij}^{+}\mathsf{Y}_{j}}{P_{12}}\,, \tag{5.78}\]

where we define the half \(q\)-Cartan matrix,

\[c_{ij}^{+}=\delta_{ij}-\sum_{e:i\to j}\mathsf{M}_{e}\,. \tag{5.79}\]

We remark that this total tangent bundle involves also the perturbative contributions in addition to the instanton part.

#### Contour integral

Given the tangent bundle, we obtain the contour integral form of the instanton partition function. Let us discuss the examples.

#### \(A_{p}\) quiver theory

We consider the linear quiver that consists of \(p\) gauge nodes. In this case, we can put all the bifundamental mass parameters to be zero without loss of generality. The bifundamental hypermultiplet contribution is given by

\[\mathsf{H}_{i\to i+1}=-\frac{\mathsf{Y}_{i}^{\vee}\mathsf{Y}_{i+1}}{P_{12}}=-\frac{\mathsf{N}_{i}^{\vee}\mathsf{N}_{i+1}}{P_{12}}+\mathsf{N}_{i}^{\vee}\mathsf{K}_{i+1}+Q_{12}^{\vee}\mathsf{K}_{i}^{\vee}\mathsf{N}_{i+1}-P_{12}^{\vee}\mathsf{K}_{i}^{\vee}\mathsf{K}_{i+1}\,, \tag{5.80}\]

where the first term is interpreted as the perturbative contribution.
Hence, applying the index functor to the instanton part, we obtain the contour integral formula for the instanton partition function, \[Z_{\underline{k}}=\frac{1}{\underline{k}!}\frac{[-\epsilon_{12} ]^{k}}{[-\epsilon_{1,2}]^{k}}\oint_{\mathsf{T}_{K}}\mathrm{d}\phi\,\frac{\prod _{i\in[p-1]}P_{i}(\underline{\phi}_{i+1})\widetilde{P}_{i+1}(\underline{\phi }_{i}+\epsilon_{12})}{\prod_{i\in[p]}P_{i}(\underline{\phi}_{i}) \widetilde{P}_{i}(\underline{\phi}_{i}+\epsilon_{12})}\frac{\prod_{i\in[p-1] }\prod_{j\in[k_{i}]}^{\ell[k_{i+1}]}\mathsf{S}(\phi_{i+1,I}-\phi_{i,J})}{\prod _{i\in[p]}\prod_{I\neq J}\mathsf{S}(\phi_{i,I}-\phi_{i,J})}\,, \tag{5.81}\] where we denote \(\underline{k}!=\prod_{i\in\Gamma_{0}}k_{i}!\) and \(k=\sum_{i\in\Gamma_{0}}k_{i}\). The gauge polynomials are given by \(P_{i}(z)=\prod_{\alpha\in[n_{i}]}[z-a_{i,\alpha}]\) and \(\widetilde{P}_{i}(z)=\prod_{\alpha\in[n_{i}]}[a_{i,\alpha}-z]\). \(\widehat{A}_{p}\) quiver theoryThe next example is a cyclic quiver \(\widehat{A}_{p}\) with \(p+1\) nodes: \(\raisebox{-1.0pt}{\includegraphics[scale=0.4]{fig/diagram_p_12}}\). In this case, we can put all the bifundamental mass parameters to be the same, \(m_{i\to i+1}=m\) Then, the contour integral form of the instanton partition function is given by \[Z_{\underline{k}}=\frac{1}{k!}\frac{[-\epsilon_{12}]^{k}}{[-\epsilon_{1,2}]^{k}}\oint_{\mathsf{T}_{K}}\mathrm{d}\underline{\phi}\prod_{i\in\mathbb{Z}_{ p+1}}\left[\frac{P_{i}(\underline{\phi}_{i+1}+m)\widetilde{P}_{i+1}( \underline{\phi}_{i}-m+\epsilon_{12})}{P_{i}(\underline{\phi}_{i})\widetilde{ P}_{i}(\underline{\phi}_{i}+\epsilon_{12})}\frac{\prod_{J\in[k_{i}]}^{I\in[k_{i}+1 ]}\mathsf{S}(\phi_{i+1,I}-\phi_{i,J}+m)}{\prod_{I\neq J}^{k_{i}}\mathsf{S}(\phi_ {i,I}-\phi_{i,J})}\right]\,. \tag{5.82}\] The case \(p=0\) involves a single node with a loop edge describing the hypermultiplet in the adjoint representation, which corresponds to four-dimensional \(\mathcal{N}=2^{*}\) theory. The instanton partition function is given by \[Z_{k}=\frac{1}{k!}\left(\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\mathsf{S}(m )\right)^{k}\oint_{\mathsf{T}_{K}}\mathrm{d}\underline{\phi}\,\frac{P( \underline{\phi}+m)\widetilde{P}(\underline{\phi}-m+\epsilon_{12})}{P( \underline{\phi})\widetilde{P}(\underline{\phi}+\epsilon_{12})}\prod_{I\neq J }^{k}\frac{\mathsf{S}(\phi_{IJ}+m)}{\mathsf{S}(\phi_{IJ})}\,, \tag{5.83}\] where \(m\) is the adjoint mass parameter. #### Fixed points The fixed points are similarly described for quiver gauge theory. For \(\underline{G}=\prod_{i\in\Gamma_{0}}\mathrm{U}(n_{i})\), we have a set of partitions characterizing the fixed points, \[\lambda=(\lambda_{i,\alpha})_{i\in\Gamma_{0},\alpha\in[n_{i}]}\in\mathfrak{M} ^{\mathsf{T}}\,, \tag{5.84}\] and we define \[\mathcal{X}_{\lambda}=\bigsqcup_{i\in\Gamma_{0}}\mathcal{X}_{\lambda_{i}}\,, \qquad\mathcal{X}_{\lambda_{i}}=\{x_{i,\alpha,k}=\mathrm{e}^{a_{i,\alpha}}q_{1 }^{k-1}q_{2}^{\lambda_{i,\alpha,k}}\}_{\alpha\in[n],k\in\mathbb{N}}\,, \tag{5.85}\] under the stability condition and \(|q_{1}|<1\). 
Then, the total tangent space at the fixed point \(\lambda\in\mathfrak{M}^{\mathsf{T}}\) is given by

\[-T_{\lambda}\mathfrak{M}_{\underline{G},\underline{k}}=\sum_{i,j\in\Gamma_{0}}\frac{P_{1}^{\vee}}{P_{2}}\mathsf{X}_{i}^{\vee}c_{ij}^{+}\mathsf{X}_{j}\,,\qquad\mathrm{ch}\,\mathsf{X}_{i}=\sum_{x\in\mathcal{X}_{\lambda_{i}}}x\,, \tag{5.86}\]

and thus the full partition function is given as follows,

\[Z=\sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}}\underline{\mathfrak{q}}^{|\lambda|}\,\mathbb{I}[-T_{\lambda}\mathfrak{M}_{\underline{G},\underline{k}}] \tag{5.87}\]

where we denote the instanton counting parameter,

\[\underline{\mathfrak{q}}^{|\lambda|}=\prod_{i\in\Gamma_{0}}\mathfrak{q}_{i}^{|\lambda_{i}|}\,. \tag{5.88}\]

#### 5.4.7 Supergroup gauge theory

We apply the formalism discussed above to supergroup gauge theory. The discussion of this part is mainly based on [10]. First of all, we consider the supercharacter of the supervector spaces instead of the ordinary characters,

\[\operatorname{sch}K =\operatorname{ch}K_{0}-\operatorname{ch}K_{1}=\sum_{I\in[k_{0}]}\operatorname{e}^{\phi_{I}^{0}}-\sum_{I\in[k_{1}]}\operatorname{e}^{\phi_{I}^{1}}\,, \tag{5.89a}\]
\[\operatorname{sch}N =\operatorname{ch}N_{0}-\operatorname{ch}N_{1}=\sum_{\alpha\in[n_{0}]}\operatorname{e}^{a_{\alpha}^{0}}-\sum_{\alpha\in[n_{1}]}\operatorname{e}^{a_{\alpha}^{1}}. \tag{5.89b}\]

In order to apply the localization formula, we then analyze the fixed points in the moduli space \(\mathfrak{M}_{n,k}\) under the equivariant actions. In the case of supergroup gauge theory, we still have the same form of the fixed point equations (5.46). Recalling that we should assign both the stability and co-stability conditions,

\[K_{0}=\mathbb{C}[B_{1},B_{2}]I(N_{0})\,,\qquad K_{1}=\mathbb{C}[B_{1},B_{2}]J(N_{1})^{\vee}\,, \tag{5.90}\]

the fixed points under the equivariant actions are parametrized by \(\lambda=(\lambda_{\alpha}^{\sigma})_{\sigma=0,1,\alpha\in[n_{\sigma}]}\) with \(|\lambda^{\sigma}|=\sum_{\alpha\in[n_{\sigma}]}|\lambda_{\alpha}^{\sigma}|=k_{\sigma}\). Hence, each subspace is given by \(K_{0}=\operatorname{Span}\{B_{1}^{i-1}B_{2}^{j-1}I_{\alpha^{0}}\}_{\alpha\in[n_{0}],(i,j)\in\lambda_{\alpha}^{0}}\) and \(K_{1}=\operatorname{Span}\{J_{\alpha^{1}}B_{1}^{i-1}B_{2}^{j-1}\}_{\alpha\in[n_{1}],(i,j)\in\lambda_{\alpha}^{1}}\) at the fixed point \(\lambda\in\mathfrak{M}_{n,k}^{\mathsf{T}}\). Therefore, the supercharacter is given by

\[\operatorname{sch}K\Big{|}_{\lambda}=\sum_{\alpha\in[n_{0}]}\sum_{(i,j)\in\lambda_{\alpha}^{0}}\operatorname{e}^{a_{\alpha}^{0}}q_{1}^{i-1}q_{2}^{j-1}-\sum_{\alpha\in[n_{1}]}\sum_{(i,j)\in\lambda_{\alpha}^{1}}\operatorname{e}^{a_{\alpha}^{1}}q_{1}^{-i}q_{2}^{-j}\,. \tag{5.91}\]

Namely, we assign the \(\operatorname{GL}(Q)\)-equivariant parameters \((\epsilon_{1,2})\) for the positive node and \((-\epsilon_{1,2})\) for the negative node. This assignment is similarly understood as the analytic continuation to obtain supergroup theory as discussed in Sec. 4.2, which is also consistent with the \(\Omega\)-background in the context of the supermatrix model (3.60).
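The supercharacter (5.91) is straightforward to evaluate explicitly. The following minimal sketch (with illustrative helper names and hypothetical sample values) sums the box weights of the even and odd partitions with opposite signs; setting all Coulomb parameters to zero and \(q_{1}=q_{2}=1\) reduces it to the supertrace counting \(k_{0}-k_{1}\), consistent with (5.28).

```python
import numpy as np

def super_character_K(lam0, lam1, a0, a1, q1, q2):
    """sch K at a fixed point of U(n0|n1) theory, cf. (5.91):
    even boxes enter with +e^{a^0_alpha} q1^{i-1} q2^{j-1},
    odd boxes with -e^{a^1_alpha} q1^{-i} q2^{-j}."""
    val = 0.0
    for a, lam in zip(a0, lam0):
        for i, row in enumerate(lam, 1):
            for j in range(1, row + 1):
                val += np.exp(a) * q1 ** (i - 1) * q2 ** (j - 1)
    for a, lam in zip(a1, lam1):
        for i, row in enumerate(lam, 1):
            for j in range(1, row + 1):
                val -= np.exp(a) * q1 ** (-i) * q2 ** (-j)
    return val

# Hypothetical sample: k = (k0|k1) = (3|2)
print(super_character_K([(2, 1)], [(2,)], a0=[0.2], a1=[-0.5], q1=0.6, q2=0.8))
# Degenerate limit a = 0, q1 = q2 = 1: reproduces k0 - k1 = 1
print(super_character_K([(2, 1)], [(2,)], a0=[0.0], a1=[0.0], q1=1.0, q2=1.0))
```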
#### Contour integral formula

We construct the observable sheaf as before, \(\mathsf{Y}=N-P_{12}K\). Obtaining the vector multiplet contribution associated with the tangent bundle,

\[\mathsf{V} =\frac{\mathsf{Y}^{\vee}\mathsf{Y}}{P_{12}}=-T\mathfrak{M}_{n,k}+\frac{N^{\vee}N}{P_{12}}\]
\[\xrightarrow{\text{sch}}\sum_{\sigma,\sigma^{\prime}=0,1}(-1)^{\sigma+\sigma^{\prime}}\left[\frac{N_{\sigma}^{\vee}N_{\sigma^{\prime}}}{P_{12}}-N_{\sigma}^{\vee}K_{\sigma^{\prime}}-Q_{12}^{\vee}K_{\sigma}^{\vee}N_{\sigma^{\prime}}+P_{12}^{\vee}K_{\sigma}^{\vee}K_{\sigma^{\prime}}\right]\,, \tag{5.92}\]

we have the contour integral form of the instanton partition function.

#### \(\boldsymbol{\frown}\) LMNS formula for supergroup gauge theory

The instanton partition function of \(\operatorname{U}(n_{0}|n_{1})\) gauge theory is given by the following contour integral over the Cartan torus \(\mathsf{T}_{K}\) with \(k_{01}=k_{0}+k_{1}\),

\[Z_{k} =\mathbb{I}[-T\mathfrak{M}_{n,k}]\]
\[=\frac{1}{k_{0,1}!}\frac{[-\epsilon_{12}]^{k_{01}}}{[-\epsilon_{1,2}]^{k_{01}}}\oint_{\mathsf{T}_{K}}\mathrm{d}\phi\,\frac{P_{0,1}(\underline{\phi}^{1,0})\widetilde{P}_{0,1}(\underline{\phi}^{1,0}+\epsilon_{12})}{P_{0,1}(\underline{\phi}^{0,1})\widetilde{P}_{0,1}(\underline{\phi}^{0,1}+\epsilon_{12})}\frac{\prod_{J\in[k_{1}]}^{I\in[k_{0}]}\mathcal{S}(\phi_{I}^{0}-\phi_{J}^{1})\mathcal{S}(\phi_{J}^{1}-\phi_{I}^{0})}{\prod_{I\neq J}^{k_{0}}\mathcal{S}(\phi_{IJ}^{0})\prod_{I\neq J}^{k_{1}}\mathcal{S}(\phi_{IJ}^{1})}\,. \tag{5.93}\]

This formula agrees with the contour integral formula of quiver gauge theory obtained in Sec. 5.4.6, which is consistent with the quiver gauge theory realization of supergroup gauge theory presented in Sec. 4.2. In this case, we should take the residues of the poles compatible with the stability condition for the even sector and with the co-stability condition for the odd sector. From the point of view of the JK residue prescription, we may use separate reference vectors for the even and the odd sectors. The formula (5.93) is originally given as an integral over \(\mathrm{U}(k_{0}|k_{1})\), which is evaluated on the maximal torus. Hence, it should be understood as an integral normalized by the supergroup volume, as discussed for the supermatrix models in Secs. 3.1 and 3.2.

#### Equivariant index formula

Recalling that we assign the stability and the co-stability conditions for the even and the odd nodes, respectively, we obtain the fixed point characters as in (5.94), where we define the corresponding sets as in (5.95). Imposing the fixed point configuration, the vector multiplet contribution is given as in (5.96). Applying the index functor, with care of the convergence conditions on \(q_{1,2}\), we obtain the partition function.

\(\boldsymbol{\frown}\) **Equivariant index formula for supergroup partition function.** The partition function of \(\mathrm{U}(n_{0}|n_{1})\) gauge theory is given by summation over the fixed point contributions as in (5.97), where the contribution at each fixed point is given in the five-dimensional convention by (5.98).

We remark on a similarity of this partition function to the supermatrix model discussed in Sec. 3. We will see further similarities from the underlying algebraic point of view. See Sec. 6.3.
#### Combinatorial formula

We consider the combinatorial formula, which gives rise to the finite rational expression,

\[\operatorname{sch}T_{\lambda}\mathfrak{M}_{n,k}=\sum_{\sigma,\sigma^{\prime}=0,1}(-1)^{\sigma+\sigma^{\prime}}\sum_{\alpha\in[n_{\sigma}],\beta\in[n_{\sigma^{\prime}}]}\mathrm{e}^{a_{\alpha}^{\sigma}-a_{\beta}^{\sigma^{\prime}}}\Xi_{\sigma\sigma^{\prime}}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})\,, \tag{5.99}\]

where the diagonal combinatorial factors are given by

\[\Xi_{00}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi(\lambda_{\alpha},\lambda_{\beta};q_{1,2})\,,\qquad\Xi_{11}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi(\lambda_{\beta},\lambda_{\alpha};q_{1,2})\,. \tag{5.100}\]

This is clear from the original expression of \(\Xi\) in terms of the polynomial ideals (5.50) and (5.67). See Sec. 5.4.4. The off-diagonal factors are given as follows (see [12, 13]),

\[\Xi_{01}(\lambda_{\alpha},\lambda_{\beta};q_{1,2}) =\operatorname{ch}\left(Q_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}})^{\vee}+Q_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}})^{\vee}-P_{12}^{\vee}Q_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}})^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}})^{\vee}\right)\]
\[=-\sum_{i=1}^{\check{\lambda}_{\alpha,1}}\sum_{j^{\prime}=1}^{\lambda_{\beta,1}}\left[q_{1}^{-\check{\lambda}_{\beta,j^{\prime}}-i}q_{2}^{-\lambda_{\alpha,i}-j^{\prime}}-q_{1}^{-i}q_{2}^{-j^{\prime}}\right]\]
\[\qquad+\sum_{(i,j)\in\lambda_{\alpha}}q_{1}^{-i}q_{2}^{-\lambda_{\beta,1}-j}+\sum_{(i^{\prime},j^{\prime})\in\lambda_{\beta}}q_{1}^{-\check{\lambda}_{\alpha,1}-i^{\prime}}q_{2}^{-j^{\prime}}\,, \tag{5.101a}\]
\[\Xi_{10}(\lambda_{\alpha},\lambda_{\beta};q_{1,2}) =\operatorname{ch}\left(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}}+\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}}-P_{12}^{\vee}(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\alpha}})(\mathfrak{l}_{\emptyset}/\mathfrak{l}_{\lambda_{\beta}})\right)\]
\[=-\sum_{i=1}^{\check{\lambda}_{\alpha,1}}\sum_{j^{\prime}=1}^{\lambda_{\beta,1}}\left[q_{1}^{\check{\lambda}_{\beta,j^{\prime}}+i-1}q_{2}^{\lambda_{\alpha,i}+j^{\prime}-1}-q_{1}^{i-1}q_{2}^{j^{\prime}-1}\right]\]
\[\qquad+\sum_{(i,j)\in\lambda_{\alpha}}q_{1}^{i-1}q_{2}^{\lambda_{\beta,1}+j-1}+\sum_{(i^{\prime},j^{\prime})\in\lambda_{\beta}}q_{1}^{\check{\lambda}_{\alpha,1}+i^{\prime}-1}q_{2}^{j^{\prime}-1}\,. \tag{5.101b}\]

In contrast to the diagonal part \(\Xi_{00(11)}\), further simplification does not occur for these off-diagonal ones. We remark that these off-diagonal factors are symmetric under \(\lambda_{\alpha}\leftrightarrow\lambda_{\beta}\),

\[\Xi_{01}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi_{01}(\lambda_{\beta},\lambda_{\alpha};q_{1,2})\,,\qquad\Xi_{10}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi_{10}(\lambda_{\beta},\lambda_{\alpha};q_{1,2})\,, \tag{5.102}\]

and we have the relation,

\[q_{12}\,\Xi_{01}(\lambda_{\alpha},\lambda_{\beta};q_{1,2})=\Xi_{10}(\lambda_{\alpha},\lambda_{\beta};q_{1,2}^{-1})\,. \tag{5.103}\]

Applying the index functor to this expression, we obtain the instanton partition function.
In addition to the Nekrasov factor for the diagonal part (5.71), we also define the off-diagonal combinatorial factor,

\[\mathsf{N}^{01}_{\lambda_{\alpha}\lambda_{\beta}}(z;\epsilon_{1,2}) =\prod_{i=1}^{\check{\lambda}_{\alpha,1}}\prod_{j^{\prime}=1}^{\lambda_{\beta,1}}\frac{[z-i\epsilon_{1}-j^{\prime}\epsilon_{2}]}{[z-(\check{\lambda}_{\beta,j^{\prime}}+i)\epsilon_{1}-(\lambda_{\alpha,i}+j^{\prime})\epsilon_{2}]}\]
\[\qquad\times\prod_{(i,j)\in\lambda_{\alpha}}[z-i\epsilon_{1}-(\lambda_{\beta,1}+j)\epsilon_{2}]\prod_{(i^{\prime},j^{\prime})\in\lambda_{\beta}}[z-(\check{\lambda}_{\alpha,1}+i^{\prime})\epsilon_{1}-j^{\prime}\epsilon_{2}]\,, \tag{5.104}\]
\[\mathsf{N}^{10}_{\lambda_{\alpha}\lambda_{\beta}}(z;\epsilon_{1,2}) =\prod_{i=1}^{\check{\lambda}_{\alpha,1}}\prod_{j^{\prime}=1}^{\lambda_{\beta,1}}\frac{[z+(i-1)\epsilon_{1}+(j^{\prime}-1)\epsilon_{2}]}{[z+(\check{\lambda}_{\beta,j^{\prime}}+i-1)\epsilon_{1}+(\lambda_{\alpha,i}+j^{\prime}-1)\epsilon_{2}]}\]
\[\qquad\times\prod_{(i,j)\in\lambda_{\alpha}}[z+(i-1)\epsilon_{1}+(\lambda_{\beta,1}+j-1)\epsilon_{2}]\prod_{(i^{\prime},j^{\prime})\in\lambda_{\beta}}[z+(\check{\lambda}_{\alpha,1}+i^{\prime}-1)\epsilon_{1}+(j^{\prime}-1)\epsilon_{2}]\,. \tag{5.105}\]

The combinatorial form of the \(k\)-instanton partition function for \(\mathrm{U}(n_{0}|n_{1})\) supergroup gauge theory is given as follows.

\(\boldsymbol{\frown}\) **Nekrasov partition function for \(\mathrm{U}(n_{0}|n_{1})\) gauge theory** The instanton partition function of \(\mathrm{U}(n_{0}|n_{1})\) gauge theory has the following combinatorial expression,

\[Z_{k}=\sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}_{n,k}}\frac{\prod_{\alpha\in[n_{0}],\beta\in[n_{1}]}\mathsf{N}^{01}_{\lambda_{\alpha}\lambda_{\beta}}(a_{\alpha}^{0}-a_{\beta}^{1};\epsilon_{1,2})\mathsf{N}^{10}_{\lambda_{\beta}\lambda_{\alpha}}(a_{\beta}^{1}-a_{\alpha}^{0};\epsilon_{1,2})}{\prod_{\sigma=0,1}\prod_{\alpha,\beta\in[n_{\sigma}]}\mathsf{N}_{\lambda_{\alpha}^{\sigma}\lambda_{\beta}^{\sigma}}(a_{\alpha}^{\sigma}-a_{\beta}^{\sigma};\epsilon_{1,2})}\,. \tag{5.106}\]

We see that the denominator factors are the standard gauge node contributions, whereas the numerator factors are analogous, but not identical to the bifundamental hypermultiplet contributions.

## 6 Non-perturbative aspects of supergroup gauge theory

In this section, we explore non-perturbative aspects of supergroup gauge theory based on the instanton partition function obtained in the previous section.

### Topological string theory approach

We have seen in Sec. 4.3 that string/M-theory provides a geometric description of \(\mathcal{N}=2\) gauge theory. Choosing the Calabi-Yau (CY) three-fold constructed from resolution of specific singularities, one can obtain the corresponding Seiberg-Witten geometry. This process is known as geometric engineering [13, 14]. This description can be further pursued at the level of the microscopic instanton counting: The A-model topological string amplitude of the corresponding non-compact toric CY three-fold computes the five-dimensional instanton partition function [11, 12, 13]. The primary example is the local \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) geometry, which corresponds to pure SU(2) SYM theory. In this case, we have the web diagram (6.1), which is dual to the toric diagram. This web diagram can be identified with the configuration of \((p,q)\)-branes in type IIB theory, obtained from the T-dual of the Hanany-Witten configuration discussed in Sec. 4.3.1.
In order to compute the topological string amplitude, we may apply a systematic approach, called the topological vertex [1]. The idea is to describe the toric CY three-fold by gluing the local patches, corresponding to each trivalent vertex in the web diagram. We denote the topological vertex function with generic boundary conditions parametrized by the partitions by (6.2), which is written in terms of the skew Schur functions together with the Littlewood-Richardson coefficients, the combinatorial framing factor, and the Weyl vector. The string coupling would be identified with the \(\Omega\)-background parameters at the self-dual point (unrefined situation). In order to reproduce generic \(\Omega\)-background parameters, we need the refinement of the topological vertex [1, 1]. From this point of view, recalling that the web diagram describes the brane configuration, we introduce the anti-topological vertex (anti-vertex for short) involving the negative coupling to realize supergroup gauge theory [13, 14], as in (6.3). We can check that this vertex-anti-vertex formalism reproduces the instanton partition function of supergroup gauge theory [14]. Moreover, this vertex formalism exhibits further interesting features. As discussed in Sec. 4.3.1, the brane realization is not unique for supergroup gauge theory. In fact, we can see that the same instanton partition function can be obtained from different configurations corresponding to the possible Dynkin diagrams (4.20). Another remark is the relation to non-supergroup gauge theory. From the combinatorial property of the Schur function, flipping the coupling yields the transposition of the partition, etc., which leads to the relation (6.4) up to the framing factor. We remark that such a reflection relation holds only for the unrefined case. Hence, we have a realization of \(\mathrm{U}(1|1)\) theory in terms of the ordinary vertices,

\[\text{[web diagram]} \tag{6.5}\]

This configuration gives rise to \(\mathrm{U}(1)\) theory with two fundamental matters, which is consistent with the gauging trick discussed in Sec. 4.3.2. Let us comment on an algebraic interpretation of the anti-vertex. It has been known that the refined topological vertex is understood as the intertwiner of the quantum toroidal algebra of \(\mathfrak{gl}_{1}\) [10], known as the Ding-Iohara-Miki algebra [14, 15]. In this context, the anti-vertex is realized as the intertwiner involving the negative level [12, 13]. See also [11] for a related realization of superalgebra from the CY geometry.

### Non-perturbative Schwinger-Dyson equation

The instanton partition function is given by summation over the fixed point configurations, which can be related to each other by the process of adding/removing instantons. We discuss the behavior of the instanton partition function under such a non-perturbative process, which gives rise to functional relations that we call the _non-perturbative Schwinger-Dyson equations_ [11, 12, 13, 14, 15].

#### 6.2.1 Adding/removing instantons

We consider the instanton-adding process by shifting the vector space, \(K\to K+V\) with \(V=\mathbb{C}^{v}\), where \(v\) is the number of instantons that we add to the configuration. Under this shift, the tangent bundle at the fixed point \(\lambda\in\mathfrak{M}_{n,k}^{\mathsf{T}}\) behaves as follows,

\[\delta_{V}T_{\lambda}\mathfrak{M}_{n,k}=\mathsf{Y}^{\vee}V+Q_{12}^{\vee}V^{\vee}\mathsf{Y}-P_{12}^{\vee}V^{\vee}V\,. \tag{6.6}\]

Hence, this shift defines a map, \(\delta_{V}:\,\mathfrak{M}_{n,k}\to\mathfrak{M}_{n,k+v}\).
Applying the index functor, we obtain

\[\delta_{V}Z_{\lambda}=\mathbb{I}[-\delta_{V}T_{\lambda}\mathfrak{M}_{n,k}]=\frac{Z_{\lambda+v}}{Z_{\lambda}}=\frac{1}{v!}\frac{[-\epsilon_{12}]^{v}}{[-\epsilon_{1,2}]^{v}}\oint\mathrm{d}\underline{\phi}\,\frac{1}{\mathcal{Y}_{\lambda}(\underline{\phi})\widetilde{\mathcal{Y}}_{\lambda}(\underline{\phi}+\epsilon_{12})}\prod_{I\neq J}^{v}\mathcal{S}(\phi_{IJ})^{-1}\,, \tag{6.7}\]

where we denote by \(Z_{\lambda+v}\) the instanton partition function contribution with the charge \(k+v\) that can be obtained from the configuration \(\lambda\) by adding \(v\) instantons, and we define the \(\mathcal{Y}\)-function with the line bundle \(\mathsf{x}\) whose Chern root is given by \(x\),

\[\mathcal{Y}_{\lambda}(x)=\mathbb{I}[\mathsf{Y}^{\vee}\mathsf{x}]\stackrel{{\eqref{eq:2.1}}}{{=}}\prod_{x^{\prime}\in\mathcal{X}_{\lambda}^{[\log]}}\frac{[x-x^{\prime}]}{[x-x^{\prime}-\epsilon_{1}]}\stackrel{{\eqref{eq:2.1}}}{{=}}\frac{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}}[x-x^{\prime}]}{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{-}\lambda}^{[\log]}}[x-x^{\prime}-\epsilon_{12}]}\,, \tag{6.8a}\]
\[\widetilde{\mathcal{Y}}_{\lambda}(x)=\mathbb{I}[\mathsf{x}^{\vee}\mathsf{Y}]\stackrel{{\eqref{eq:2.1}}}{{=}}\prod_{x^{\prime}\in\mathcal{X}_{\lambda}^{[\log]}}\frac{[x^{\prime}-x]}{[x^{\prime}-x+\epsilon_{1}]}\stackrel{{\eqref{eq:2.1}}}{{=}}\frac{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}}[x^{\prime}-x]}{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{-}\lambda}^{[\log]}}[x^{\prime}-x+\epsilon_{12}]}\,, \tag{6.8b}\]

with the set

\[\mathcal{X}_{\bullet}^{[\log]}=\{\log x\mid x\in\mathcal{X}_{\bullet}\}\,. \tag{6.9}\]

We have the relation for these two \(\mathcal{Y}\)-functions,

\[\mathcal{Y}(z)=\begin{cases}(-1)^{n}\widetilde{\mathcal{Y}}(z)&(4d)\\ (-1)^{n}\mathrm{e}^{-nz}\mathrm{e}^{\sum_{\alpha\in[n]}a_{\alpha}}\widetilde{\mathcal{Y}}(z)&(5d\ \&\ 6d)\end{cases}\,. \tag{6.10}\]

We denote \(\mathcal{Y}_{\lambda}(\underline{\phi})=\prod_{I\in[v]}\mathcal{Y}_{\lambda}(\phi_{I})\) and \(\mathrm{d}\phi=\prod_{I\in[v]}\mathrm{d}\phi_{I}\,/2\pi\mathrm{i}\). We focus on the simplest case, \(v=1\). In this case, we evaluate the contour integral surrounding the poles of \(1/\mathcal{Y}_{\lambda}(\phi)\) based on the finite product form of the \(\mathcal{Y}\)-function,

\[\delta_{V}Z_{\lambda} =\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\oint\frac{\mathrm{d}\phi}{2\pi\mathrm{i}}\frac{1}{\mathcal{Y}_{\lambda}(\phi)\widetilde{\mathcal{Y}}_{\lambda}(\phi+\epsilon_{12})}\]
\[=\frac{1}{[-\epsilon_{1,2}]}\sum_{x\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}}\frac{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{-}\lambda}^{[\log]}}[x-x^{\prime}-\epsilon_{12}][x^{\prime}-x]}{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}\setminus\{x\}}[x-x^{\prime}][x^{\prime}-x-\epsilon_{12}]}=\sum_{x\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}}\frac{-1}{\mathcal{Y}_{\lambda}(x)\widetilde{\mathcal{Y}}_{\lambda^{\prime}}(x+\epsilon_{12})}\,, \tag{6.11}\]

where the configuration \(\lambda^{\prime}\) is obtained by shift (adding an instanton).
The last equality is shown as follows,

\[\frac{1}{[-\epsilon_{1,2}]}\frac{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{-}\lambda}^{[\log]}}[x-x^{\prime}-\epsilon_{12}][x^{\prime}-x]}{\prod_{x^{\prime}\in\mathcal{X}_{\partial_{+}\lambda}^{[\log]}\setminus\{x\}}[x-x^{\prime}][x^{\prime}-x-\epsilon_{12}]} =\frac{1}{\mathcal{Y}_{\lambda}(x)\widetilde{\mathcal{Y}}_{\lambda}(x+\epsilon_{12})}\times\lim_{x^{\prime\prime}\to x}\frac{[x-x^{\prime\prime}][x^{\prime\prime}-x-\epsilon_{12}]}{[-\epsilon_{1,2}]}\]
\[=\frac{1}{\mathcal{Y}_{\lambda}(x)\widetilde{\mathcal{Y}}_{\lambda}(x+\epsilon_{12})}\]
\[\qquad\qquad\times\lim_{x^{\prime\prime}\to x}\frac{[x-x^{\prime\prime}][x^{\prime\prime}-x-\epsilon_{12}][x^{\prime\prime}-x-\epsilon_{1,2}]}{[-\epsilon_{1,2}][x^{\prime\prime}-x][x^{\prime\prime}-x-\epsilon_{12}]}\]
\[=\frac{-1}{\mathcal{Y}_{\lambda}(x)\widetilde{\mathcal{Y}}_{\lambda^{\prime}}(x+\epsilon_{12})}\,. \tag{6.12}\]

From this expression, we identify \(Z_{\lambda^{\prime}}/Z_{\lambda}=-1/\mathcal{Y}_{\lambda}(x)\widetilde{\mathcal{Y}}_{\lambda^{\prime}}(x+\epsilon_{12})\), from which we obtain

\[\operatorname*{Res}_{x^{\prime}\to x}\left[Z_{\lambda^{\prime}}\widetilde{\mathcal{Y}}_{\lambda^{\prime}}(x^{\prime}+\epsilon_{12})+Z_{\lambda}\frac{1}{\mathcal{Y}_{\lambda}(x^{\prime})}\right]=0\,. \tag{6.13}\]

Summing up all the configurations, we see that

\[\mathcal{T}(x):=\left\langle\widetilde{\mathcal{Y}}(x+\epsilon_{12})+\frac{\mathfrak{q}}{\mathcal{Y}(x)}\right\rangle \tag{6.14}\]

is a pole-free regular function in \(x\), where we define the gauge theory average of the observable by \(\langle\mathcal{O}\rangle=\frac{1}{Z}\sum_{\lambda\in\mathfrak{M}^{\mathsf{T}}}Z_{\lambda}\mathcal{O}_{\lambda}\). The combination of the \(\mathcal{Y}\)-functions yielding a pole-free function is called the (average of) \(qq\)-character (of \(A_{1}\) quiver in this case) [11, 12]. Indeed, the Seiberg-Witten curve is obtained in the classical limit \(\epsilon_{1,2}\to 0\) of this relation, \(y+\mathfrak{q}/y=\det(x-\phi)\), by identifying the characteristic polynomial with \(\mathcal{T}(x)\).22 We remark that in this limit, we can apply the saddle point analysis, so that the observable average can be replaced by the on-shell value with the saddle point configuration \(\lambda_{*}\), \(\langle\mathcal{Y}(x)\rangle\to\mathcal{Y}_{\lambda_{*}}(x)=y(x)\) and \(\langle 1/\mathcal{Y}(x)\rangle\to 1/\mathcal{Y}_{\lambda_{*}}(x)=1/y(x)\).

Footnote 22: Precisely speaking, we should convert \(\widetilde{\mathcal{Y}}\) to \(\mathcal{Y}\) using the relation (6.10) providing an additional polynomial factor, which can be interpreted as the Chern–Simons term contribution in five dimensions.

We may consider the removing process in a similar way. In this case, we consider the shift of the form, \(K\to K-V\). The tangent bundle behaves as

\[\delta_{-V}T_{\lambda}\mathfrak{M}_{n,k}=-\mathsf{Y}^{\vee}V-Q_{12}^{\vee}V^{\vee}\mathsf{Y}-P_{12}^{\vee}V^{\vee}V\,, \tag{6.15}\]

where \(\delta_{-V}:\,\mathfrak{M}_{n,k}\to\mathfrak{M}_{n,k-v}\), and the index functor yields

\[\delta_{-V}Z_{\lambda}=\mathbb{I}[-\delta_{-V}T_{\lambda}\mathfrak{M}_{n,k}]=\frac{Z_{\lambda-v}}{Z_{\lambda}}=\frac{1}{v!}\frac{[-\epsilon_{12}]^{v}}{[-\epsilon_{1,2}]^{v}}\oint\mathrm{d}\underline{\phi}\,\mathcal{Y}_{\lambda}(\underline{\phi})\widetilde{\mathcal{Y}}_{\lambda}(\underline{\phi}+\epsilon_{12})\prod_{I\neq J}^{v}\mathcal{S}(\phi_{IJ})^{-1}\,.
\tag{6.16}\] In the case of \(v=1\), we obtain

\[\delta_{-V}Z_{\lambda}=\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\oint\frac{\mathrm{d}\phi}{2\pi\mathrm{i}}\mathcal{Y}_{\lambda}(\phi)\widetilde{\mathcal{Y}}_{\lambda}(\phi+\epsilon_{12})=\sum_{x\in\mathcal{X}_{\partial_{-}\lambda}^{[\log]}}-\mathcal{Y}_{\lambda^{\prime}}(x)\widetilde{\mathcal{Y}}_{\lambda}(x+\epsilon_{12})\,, \tag{6.17}\]

where we denote the configuration obtained by removing an instanton by \(\lambda^{\prime}\). We can similarly obtain the \(qq\)-character from this expression as well.

#### 6.2.2 Supergroup analysis

We consider the adding/removing instanton process for supergroup gauge theory. Denoting the observable sheaf \(\mathsf{Y}=\mathsf{Y}_{0}\oplus\mathsf{Y}_{1}\), the character of even and odd part is given as follows,

\[\mathrm{ch}\,\mathsf{Y}_{0}\Big{|}_{\lambda}=\sum_{x\in\mathcal{X}_{\partial_{+}\lambda^{0}}}x-\sum_{x\in\mathcal{X}_{\partial_{-}\lambda^{0}}}xq_{12}\,,\qquad\mathrm{ch}\,\mathsf{Y}_{1}\Big{|}_{\lambda}=\sum_{x\in\mathcal{X}_{\partial_{+}\lambda^{1}}}xq_{12}-\sum_{x\in\mathcal{X}_{\partial_{-}\lambda^{1}}}x \tag{6.18}\]

where we define

\[\mathcal{X}_{\partial_{\pm}\lambda^{0}}=\{\mathrm{e}^{a_{\alpha}^{0}}q_{1}^{i-1}q_{2}^{j-1}\mid(i,j)\in\partial_{\pm}\lambda^{0}\}\,,\qquad\mathcal{X}_{\partial_{\pm}\lambda^{1}}=\{\mathrm{e}^{a_{\alpha}^{1}}q_{1}^{-i}q_{2}^{-j}\mid(i,j)\in\partial_{\pm}\lambda^{1}\}\,. \tag{6.19}\]

The \(\mathcal{Y}\)-function is then defined as follows,

\[\mathcal{Y}_{\lambda}(z)=\frac{\mathcal{Y}_{0,\lambda^{0}}(z)}{\mathcal{Y}_{1,\lambda^{1}}(z)}\,,\qquad\widetilde{\mathcal{Y}}_{\lambda}(z)=\frac{\widetilde{\mathcal{Y}}_{0,\lambda^{0}}(z)}{\widetilde{\mathcal{Y}}_{1,\lambda^{1}}(z)}\,, \tag{6.20}\]

where each factor is given by

\[\mathcal{Y}_{0,\lambda^{0}}(z) =\frac{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{0}}^{[\log]}}[z-x]}{\prod_{x\in\mathcal{X}_{\partial_{-}\lambda^{0}}^{[\log]}}[z-x-\epsilon_{12}]}\,,\qquad\mathcal{Y}_{1,\lambda^{1}}(z)=\frac{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{1}}^{[\log]}}[z-x-\epsilon_{12}]}{\prod_{x\in\mathcal{X}_{\partial_{-}\lambda^{1}}^{[\log]}}[z-x]}\,, \tag{6.21a}\]
\[\widetilde{\mathcal{Y}}_{0,\lambda^{0}}(z) =\frac{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{0}}^{[\log]}}[x-z]}{\prod_{x\in\mathcal{X}_{\partial_{-}\lambda^{0}}^{[\log]}}[x-z+\epsilon_{12}]}\,,\qquad\widetilde{\mathcal{Y}}_{1,\lambda^{1}}(z)=\frac{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{1}}^{[\log]}}[x-z+\epsilon_{12}]}{\prod_{x\in\mathcal{X}_{\partial_{-}\lambda^{1}}^{[\log]}}[x-z]}\,. \tag{6.21b}\]

Under the shift \(K\to K+V\) by \(V=\mathbb{C}^{v_{0}|v_{1}}\), we have the same expression for the tangent bundle as before (6.6). The resulting contour integral is given as follows,

\[\delta_{V}Z_{\lambda} =\frac{1}{v_{0,1}!}\frac{[-\epsilon_{12}]^{v_{01}}}{[-\epsilon_{1,2}]^{v_{01}}}\oint\mathrm{d}\underline{\phi}\,\frac{1}{\mathcal{Y}_{\lambda}(\underline{\phi})\widetilde{\mathcal{Y}}_{\lambda}(\underline{\phi}+\epsilon_{12})}\frac{\prod_{J\in[v_{1}]}^{I\in[v_{0}]}\mathcal{S}(\phi_{I}^{0}-\phi_{J}^{1})\mathcal{S}(\phi_{J}^{1}-\phi_{I}^{0})}{\prod_{I\neq J}^{v_{0}}\mathcal{S}(\phi_{IJ}^{0})\prod_{I\neq J}^{v_{1}}\mathcal{S}(\phi_{IJ}^{1})}\,.
\tag{6.22}\] The one-dimensional cases \(v=(1|0)\) and \((0|1)\) are simultaneously formulated by

\[\delta_{V}Z_{\lambda} =\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\oint\frac{\mathrm{d}\phi}{2\pi\mathrm{i}}\frac{1}{\mathcal{Y}_{\lambda}(\phi)\widetilde{\mathcal{Y}}_{\lambda}(\phi+\epsilon_{12})}=\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\oint\frac{\mathrm{d}\phi}{2\pi\mathrm{i}}\,\frac{\mathcal{Y}_{1,\lambda^{1}}(\phi)\widetilde{\mathcal{Y}}_{1,\lambda^{1}}(\phi+\epsilon_{12})}{\mathcal{Y}_{0,\lambda^{0}}(\phi)\widetilde{\mathcal{Y}}_{0,\lambda^{0}}(\phi+\epsilon_{12})}\]
\[=\frac{[-\epsilon_{12}]}{[-\epsilon_{1,2}]}\oint\frac{\mathrm{d}\phi}{2\pi\mathrm{i}}\,\frac{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{1}}^{[\log]}\cup\,\mathcal{X}_{\partial_{-}\lambda^{0}}^{[\log]}}[\phi-x-\epsilon_{12}][x-\phi]}{\prod_{x\in\mathcal{X}_{\partial_{+}\lambda^{0}}^{[\log]}\cup\,\mathcal{X}_{\partial_{-}\lambda^{1}}^{[\log]}}[\phi-x][x-\phi-\epsilon_{12}]}\,. \tag{6.23}\]

From this expression, we observe that adding an instanton to the positive node (positive instanton) is equivalent to removing an instanton from the negative node (negative instanton). Similarly, removing a positive instanton is equivalent to adding a negative instanton from this point of view. Therefore, we obtain the same \(qq\)-character expression in terms of the total \(\mathcal{Y}\)-functions,

\[\mathcal{T}(x)=\left\langle\widetilde{\mathcal{Y}}(x+\epsilon_{12})+\frac{\mathfrak{q}}{\mathcal{Y}(x)}\right\rangle=\left\langle\frac{\widetilde{\mathcal{Y}}_{0}(x+\epsilon_{12})}{\widetilde{\mathcal{Y}}_{1}(x+\epsilon_{12})}+\mathfrak{q}\frac{\mathcal{Y}_{1}(x)}{\mathcal{Y}_{0}(x)}\right\rangle\,, \tag{6.24}\]

which is again consistent with the \(\widehat{A}_{1}\) quiver realization in Sec. 4.2 by identifying \(\mathcal{Y}_{0,1}(x)\) with the \(\mathcal{Y}\)-functions of \(\widehat{A}_{1}\) quiver. Identifying the \(\mathcal{T}\)-function with the supercharacteristic function of the adjoint scalar, \(\mathcal{T}(x)=\text{sdet}(x-\Phi)\), we reproduce the Seiberg-Witten curve for \(\text{U}(n_{0}|n_{1})\) theory discussed in Sec. 4.3.4 in the classical limit \(\epsilon_{1,2}\to 0\).

#### 6.2.3 Geometry of \(qq\)-character

Let us comment on geometric representation theoretical perspectives of the \(qq\)-character. We reconsider the shift \(K\to K+V\) in the presence of the \(\mathcal{Y}\)-functions,

\[\delta_{V}\left(\mathsf{Y}^{\vee}W-T\mathfrak{M}_{n,k}\right)=-\mathsf{Y}^{\vee}V-Q_{12}^{\vee}V^{\vee}\mathsf{Y}-P_{12}^{\vee}V^{\vee}W+P_{12}^{\vee}V^{\vee}V\,, \tag{6.25}\]

where we define \(W=\mathbb{C}^{w}\) with the character \(\operatorname{ch}W=\sum_{\alpha\in[w]}\mathrm{e}^{\xi_{\alpha}}\), such that \(\mathbb{I}[\mathsf{Y}^{\vee}W]=\prod_{\alpha\in[w]}\mathcal{Y}(\xi_{\alpha})\). The generic \(qq\)-character is then obtained by the following index formula,

\[\mathsf{T}_{w}=\sum_{v=0}^{\infty}\mathfrak{q}^{v}\,\mathsf{T}_{w,v}\,, \tag{6.26}\]

where each contribution is given by

\[\mathsf{T}_{w,v} =\mathbb{I}[\mathsf{Y}^{\vee}(W-(1+Q_{12}^{\vee})V)-P_{12}^{\vee}V^{\vee}W+P_{12}^{\vee}V^{\vee}V]\]
\[=\frac{1}{v!}\frac{[-\epsilon_{12}]^{v}}{[-\epsilon_{1,2}]^{v}}\oint\mathrm{d}\underline{\phi}\,\frac{\mathcal{Y}(\underline{\xi})}{\mathcal{Y}(\underline{\phi})\mathcal{Y}(\underline{\phi}-\epsilon_{12})}\mathcal{S}(\underline{\xi}-\underline{\phi})\prod_{I\neq J}^{v}\mathcal{S}(\phi_{IJ})^{-1}\,. \tag{6.27}\]

We denote \(\mathcal{Y}(\underline{\xi})=\prod_{\alpha\in[w]}\mathcal{Y}(\xi_{\alpha})\), \(\mathcal{S}(\underline{\xi}-\underline{\phi})=\prod_{\alpha\in[w],I\in[v]}\mathcal{S}(\xi_{\alpha}-\phi_{I})\), etc.
We remark that the dual function \(\widetilde{\mathcal{Y}}\) is converted to \(\mathcal{Y}\) for convenience. Taking the pole at \(\phi_{I}=\xi_{\alpha}\) in the contour integral, we obtain the result [21, 22], \[\mathsf{T}_{w,v}=\sum_{\begin{subarray}{c}\mathbb{I}[\mathsf{J}]=[w]\\ \mathbb{J}]=v\end{subarray}}\prod_{\alpha\in\mathbb{I},\beta\in\mathbb{J}} \mathcal{S}(\xi_{\alpha}-\xi_{\beta})\frac{\prod_{\alpha\in\mathbb{I}} \mathcal{Y}(\xi_{\alpha})}{\prod_{\beta\in\mathbb{J}}\mathcal{Y}(\xi_{\beta}- \epsilon_{12})}\,. \tag{6.28}\] There are \(\binom{w}{v}\) contributions in \(\mathsf{T}_{w,v}\) for given \((w,v)\), which correspond to the fixed points in the cotangent bundle of the Grassmannian \(\operatorname{Gr}(v,w)\) given as a quiver variety of type \(A_{1}\). From the representation theoretical point of view, this corresponds to the degree-\(w\) tensor product of the fundamental representation of quantum affine algebra \(U_{q}(\widehat{\mathfrak{sl}_{2}})\) associated with \(A_{1}\) quiver [20, 21, 22]. In order to obtain the \(qq\)-character of the irreducible representation, we need to specialize the parameters \((\mathrm{e}^{\xi_{\alpha}})_{\alpha\in[w]}\to(\mathrm{e}^{\xi},\mathrm{e}^{ \xi}q_{1},\ldots,\mathrm{e}^{\xi}q_{1}^{w-1})\), known as the \(q\)-segment condition [20, 21]. See also [20]. The contribution to the \(qq\)-character \(\mathsf{T}_{w,v}\) shown in (6.27) has more direct interpretation in terms of the quiver variety of type \(A_{1}\), which we denote by \(\mathfrak{M}_{w,v}=\{(I,J)\in\operatorname{Hom}(W,V)\oplus\operatorname{Hom} (V,W)\mid IJ=0\}/\operatorname{GL}(V)\) with the dimension, \(\dim\mathfrak{M}_{w,v}=2v(w-v)\). Hence, it becomes empty when \(v>w\). Applying the same argument to Sec. 5.4.3, the tangent bundle is given by \(T\mathfrak{M}_{v,w}=W^{\vee}V+Q_{12}^{\vee}V^{\vee}W-(1+Q_{12}^{\vee})V^{ \vee}V\). Denoting \(c=1+Q_{12}^{\vee}\) (\(q\)-Cartan matrix of type \(A_{1}\), see the definition (6.33)), we obtain a geometric formula in the five-dimensional convention [21, 22], \[\mathsf{T}_{w,v}=q_{2}^{-\frac{1}{2}\dim\mathfrak{M}_{w,v}}\int_{\mathfrak{M }_{w,v}}\frac{\operatorname{ch}\wedge\mathsf{Y}W^{\vee}}{\operatorname{ch} \wedge\mathsf{Y}c^{\vee}V^{\vee}}\operatorname{ch}\wedge_{q_{2}}T^{\vee} \mathfrak{M}_{w,v}\operatorname{td}(T\mathfrak{M}_{w,v})\,. \tag{6.29}\] Moreover, we may rewrite this formula as follows, \[\mathsf{T}_{w,v}=\int_{\mathfrak{M}_{w,v}}\frac{\operatorname{ch}\wedge \mathsf{Y}W^{\vee}}{\operatorname{ch}\wedge\mathsf{Y}c^{\vee}V^{\vee}} \widehat{X}_{y}(T\mathfrak{M}_{w,v})\,, \tag{6.30}\] where we define another genus, an analog of \(\widehat{A}\) genus (see, e.g., [12]), by \[\widehat{X}_{y}(\mathbf{X})=\prod_{i\in[\operatorname{rk}\mathbf{X}]}\frac{x_ {i}(\mathrm{e}^{\frac{x_{i}}{2}}y^{-\frac{1}{2}}-\mathrm{e}^{-\frac{x_{i}}{2}}y ^{\frac{1}{2}})}{\mathrm{e}^{\frac{x_{i}}{2}}-\mathrm{e}^{-\frac{x_{i}}{2}}}\,. \tag{6.31}\] We remark that this genus is reduced to the \(L\)-genus when we take \(y^{\frac{1}{2}}=\mathrm{i}\), up to the normalization \((-1)^{\frac{1}{2}\,\mathrm{rk}\,\mathbf{X}}\). From this point of view, it is clear that, if there is no \(\mathcal{Y}\)-function insertion, the \(qq\)-character is reduced to the (normalized) \(\chi_{q2}\)-genus of the quiver variety \(\mathfrak{M}_{w,v}\), and thus the \(q\)-character (the limit \(q_{2}\to 1\) of \(qq\)-character) is reduced to the Euler characteristics. 
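As a concrete check of the counting quoted above, the short sketch below enumerates the splittings \([w]=\mathbb{I}\sqcup\mathbb{J}\) entering (6.28) and assembles \(\mathsf{T}_{w,v}\) symbolically for \(w=2\). The rational \(\mathcal{S}\)-function used here, \(\mathcal{S}(x)=(x+\epsilon_{1})(x+\epsilon_{2})/x(x+\epsilon_{1}+\epsilon_{2})\), is our assumption (its definition is not repeated in this section), and the \(\mathcal{Y}\)-function is kept as an abstract symbol.

```python
# illustrative sketch only: enumerate eq. (6.28) for the A_1 quiver with w = 2
import itertools
import sympy as sp

e1, e2 = sp.symbols('epsilon1 epsilon2')
e12 = e1 + e2
Y = sp.Function('Y')               # abstract observable Y-function

def S(x):                          # assumed 4d S-function
    return (x + e1) * (x + e2) / (x * (x + e12))

def T_wv(xi, v):
    """Sum over splittings [w] = I u J with |J| = v, cf. eq. (6.28)."""
    w = len(xi)
    total = sp.Integer(0)
    for J in itertools.combinations(range(w), v):
        I = [a for a in range(w) if a not in J]
        weight = sp.Mul(*[S(xi[a] - xi[b]) for a in I for b in J])
        total += weight * sp.Mul(*[Y(xi[a]) for a in I]) / sp.Mul(*[Y(xi[b] - e12) for b in J])
    return total

xi = sp.symbols('xi0 xi1')
for v in range(3):                 # one, two, and one terms for v = 0, 1, 2
    print(v, sp.simplify(T_wv(xi, v)))
```

For \(w=2\) one finds \(\binom{2}{v}\) terms for each \(v\), matching the counting of fixed points in \(\operatorname{Gr}(v,w)\) described above.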
On the other hand, another deformation of the \(q\)-character, a.k.a., the \(t\)-analog of \(q\)-character, is the generating function of the Betti numbers (analogous to the Poincare polynomial) of the fixed point set of the quiver variety under the \(S^{1}\) action [22, 23], which is also interpreted as another kind of deformation of Euler characteristics. Generic quiverWe can apply this formalism to generic quiver gauge theory. In this case, we start with the total tangent bundle (5.78), and consider the shift \(K_{i}\to K_{i}+V_{i}\) (\(i\in\Gamma_{0}\)) with generic \(\mathcal{Y}\)-function insertions, \[\delta_{V}\left(\sum_{\underline{i}\in\Gamma_{0}}\mathsf{Y}_{i}^ {\vee}W_{i}-T\mathfrak{M}_{\underline{n},\underline{k}}\right)=\sum_{i,j\in \Gamma_{0}}\left(-\mathsf{Y}_{i}^{\vee}c_{ij}^{+}V_{j}-Q_{12}^{\vee}V_{i}^{ \vee}c_{ij}^{+}Y_{j}+P_{12}^{\vee}V_{i}^{\vee}c_{ij}^{+}V_{j}-P_{12}^{\vee}V _{i}^{\vee}\delta_{ij}W_{j}\right)\,, \tag{6.32}\] where \(\operatorname{ch}W_{i}=\sum_{\alpha\in[w_{i}]}\mathrm{e}^{\xi_{i,\alpha}}\), such that \(\mathbb{I}\left[\sum_{i\in\Gamma_{0}}\mathsf{Y}_{i}^{\vee}W_{i}\right]=\prod_{ i\in\Gamma_{0}}\prod_{\alpha\in[w_{i}]}\mathcal{Y}_{i}(\xi_{i,\alpha})\). Defining the full \(q\)-Cartan matrix from the half one (5.79) as \[c_{ij}=c_{ij}^{+}+c_{ij}^{-}=(1+Q_{12}^{\vee})\delta_{ij}-\sum_{e:i\to j} \mathsf{M}_{e}-\sum_{e:j\to i}Q_{12}^{\vee}\mathsf{M}_{e}^{\vee}\,,\qquad c_{ ij}^{-}=Q_{12}^{\vee}c_{ji}^{+\vee}\,, \tag{6.33}\] and converting \(\widetilde{\mathcal{Y}}\) to \(\mathcal{Y}\) as before, we have the contour integral form of the \(qq\)-character, \[\mathsf{T}_{\underline{w}}=\sum_{\underline{v}}\underline{q}^{ \underline{v}}\,\mathsf{T}_{\underline{w},\underline{v}}\,, \tag{6.34}\] where each contribution is given by \[\mathsf{T}_{\underline{w},\underline{v}} =\mathbb{I}\left[\mathsf{Y}_{i}^{\vee}(W_{i}-c_{ij}V_{j})-P_{12} ^{\vee}V_{i}^{\vee}W_{i}+P_{12}^{\vee}V_{i}^{\vee}c_{ij}^{+}V_{j}\right]\] \[=\frac{1}{\underline{v}!}\frac{[-\epsilon_{12}]^{v}}{[-\epsilon_ {1,2}]^{v}}\oint\mathrm{d}\underline{\phi}\prod_{i\in\Gamma_{0}}\frac{\mathcal{ Y}_{i}(\underline{\xi}_{i})}{\mathcal{A}_{i}(\underline{\phi}_{i})}8(\underline{\xi}_{i}- \underline{\phi}_{i})\prod_{I\neq J}^{v_{i}}\mathcal{S}(\phi_{i,IJ})^{-1}\prod _{e:i\to j}\mathcal{S}(\underline{\phi}_{j}-\underline{\phi}_{i}+m_{e}) \tag{6.35}\] and we define the \(\mathcal{A}\)-function for \(\operatorname{ch}\mathsf{x}=\mathrm{e}^{x}\), \[\mathcal{A}_{i}(x)=\mathbb{I}\left[\sum_{j\in\Gamma_{0}}\mathsf{Y}_{j}^{\vee }c_{ji}\mathsf{x}\right]=\frac{\mathcal{Y}_{i}(x)\mathcal{Y}_{i}(x-\epsilon_ {12})}{\prod_{e:j\to i}\mathcal{Y}_{j}(x+m_{e})\prod_{e:i\to j}\mathcal{Y}_{j}( x-m_{e}-\epsilon_{12})}\,. \tag{6.36}\] Therefore, the \(\mathcal{Y}\)-function and the \(\mathcal{A}\)-function are interpreted as (exponentiated) weight and root vectors, and their operator analogs are discussed in the construction of \(q\)-deformed W-algebras [17, 18]. We can obtain the geometric formulas, (6.29) and (6.30), for generic quiver by replacing the \(q\)-Cartan matrix for generic one (6.33) [22, 18]. Supergroup caseThe geometric formalism presented above is also applicable to supergroup case. The representation of quiver \(\Gamma\) consists of the vector spaces assigned to each node and the linear maps between them, and we denote the category of representations of quiver \(\Gamma\) by \(\operatorname{Rep}(\Gamma)\) (see, e.g., [10]). 
Then, the _super-representation of quiver_ is similarly constructed by replacing the ordinary vector spaces with the supervector spaces [12, 13]. In this case, we obtain a contour integral formula for the \(qq\)-character similarly to the supergroup LMNS formula (5.93). ### Free field realization Similarly to the matrix model as discussed in Sec. 3.4, the gauge theory partition function has a similar free field realization. Let us discuss an operator formalism of gauge theory in this part. #### 6.3.1 Holomorphic deformation In the context of four-dimensional \(\mathcal{N}=2\) gauge theory, the holomorphic function, called the _prepotential_, plays a central role to characterize the supersymmetric Lagrangian. The ordinary SYM theory corresponds to the quadratic prepotential \(\mathcal{F}=\operatorname{tr}\Phi^{2}\) at UV. We now consider generic holomorphic deformation of the prepotential, \(\mathcal{F}\to\mathcal{F}+\sum_{n=1}^{\infty}t_{n}\operatorname{tr}\Phi^{n}\). Even after the deformation, we can still apply the localization computation, and the partition function is given as follows [11, 12], \[Z(t)=\sum_{\lambda\in\mathfrak{M}^{\mathsf{F}}}Z_{\lambda}(t)\,,\qquad Z_{ \lambda}(t)=Z_{\lambda}Z_{\lambda}^{\operatorname{pot}}(t) \tag{6.37}\] where we define the potential term \[Z_{\lambda}^{\operatorname{pot}}(t)=\exp\left(\sum_{n=1}^{\infty}t_{n} \mathcal{O}_{n,\lambda}\right)\,. \tag{6.38}\] We denote the \(\lambda\)-fixed point contribution of the chiral ring operator by \(\mathcal{O}_{n,\lambda}=\operatorname{tr}\Phi^{n}|_{\lambda}\) for 4d, and 5d and 6d analogues are obtained by the Wilson loop and the Wilson surface extending on the circle and the torus. This \(t\)-deformed partition function is a generating function of the chiral ring operators, \[\langle\mathcal{O}_{n}\rangle=\frac{\partial}{\partial t_{n}}\log Z(t)\Big{|} _{t\to 0}\,. \tag{6.39}\] From this point of view, the \(t\)-dependent part plays a similar role to the potential function in the matrix model as discussed in Sec. 3.4, and we see the Heisenberg algebra, \[[\partial_{n},t_{m}]=\delta_{n,m} \tag{6.40}\] on the Fock space, \(\mathsf{F}=\mathbb{C}\llbracket\partial_{n},t_{n}\rrbracket\,|0\rangle\), such that the vacuum state is defined as \(\partial_{n}\,|0\rangle=0\) (\(t\)-constant). The dual vacuum is then defined by \(\langle 0|\,t_{n}=0\). #### 6.3.2 \(Z\)-state In the operator formalism, the deformation parameter behaves as an operator acting on the Fock space. From this point of view, the \(t\)-extended partition function is also seen as an operator, and through the operator-state correspondence, we define the \(Z\)-state as follows, \[\ket{Z}=Z(t)\ket{0}\,. \tag{6.41}\] We also define \(\ket{Z_{\lambda}}=Z_{\lambda}(t)\ket{0}\), such that the total \(Z\)-state is given by \(\ket{Z}=\sum_{\lambda\in\mathfrak{M}^{\mathbb{T}}}\ket{Z_{\lambda}}\). From this \(Z\)-state, the undeformed partition function can be obtained as follows, \[Z=\bra{0}\ket{Z}=\bra{0}\ket{Z(t)}\ket{0}\,. \tag{6.42}\] Namely, the partition function is given as a correlator (of vertex operators as we will see soon) in this formalism. Such a connection between the BPS observables and the formalism of CFT-like theory is called the BPS/CFT correspondence [20], and the realization of the partition function as a correlator is interpreted as a consequence of such a correspondence. We remark that the expression (6.42) implies that the partition function is realized as a correlator on a sphere. 
One can similarly consider a torus correlator (a character on the Fock module), and it turns out that such a torus correlator computes the six-dimensional partition function in this context [17]. In the formalism of class \(\mathcal{S}\) theory [16], on the other hand, the torus correlator is associated with the loop structure in quiver gauge theory, typically found in affine quiver gauge theory. These two realizations are related to each other through the duality exchanging base and fiber of Seiberg-Witten geometry, and S-duality in type IIB string theory [14, 15]. #### 6.3.3 Vertex operators Let us consider the vertex operator realization of the \(Z\)-state. We focus on the five-dimensional convention. See [14] and [17] for four and six dimensional cases. For this purpose, we define the screening currents, \[S_{\sigma}(x)=\,:\exp\left(s_{0}^{\sigma}\log x+\tilde{s}_{0}^{\sigma}+\sum_{n \in\mathbb{Z}_{\neq 0}}s_{n}^{\sigma}x^{-n}\right):\,,\qquad\sigma=0,1\,, \tag{6.43}\] with the commutation relations, \[[s_{n}^{0},s_{m}^{0}]=-\frac{1}{n}\frac{1-q_{1}^{n}}{1-q_{2}^{-n}}(1+q_{12}^{- n})\delta_{n+m,0}\,,\qquad[s_{n}^{1},s_{m}^{1}]=-\frac{1}{n}\frac{1-q_{2}^{n}}{1-q _{1}^{-n}}(1+q_{12}^{-n})\delta_{n+m,0}\,, \tag{6.44a}\] \[[s_{n}^{0},s_{m}^{1}]=\frac{1}{n}(q_{1}^{n}+q_{2}^{-n})\delta_{n+ m,0}\,,\qquad[s_{n}^{1},s_{m}^{0}]=\frac{1}{n}(q_{1}^{-n}+q_{2}^{n})\delta_{n+m,0}\,,\] (6.44b) \[[\tilde{s}_{0}^{0},s_{n}^{0}]=-2b^{2}\delta_{n,0}\,,\qquad[\tilde{ s}_{0}^{1},s_{n}^{1}]=-2b^{-2}\delta_{n,0}\,,\qquad[\tilde{s}_{0}^{0},s_{n}^{1}]=[ \tilde{s}_{0}^{1},s_{n}^{0}]=2\delta_{n,0}\,, \tag{6.44c}\] with \(b^{2}=-\epsilon_{1}/\epsilon_{2}\). This parametrization is taken to be compatible with the matrix model notation (3.60). For \(|q_{1}|<1\), \(|q_{2}|>1\), we have the following OPEs, \[\frac{S_{0}(x)S_{0}(x^{\prime})}{:S_{0}(x)S_{0}(x^{\prime}):} =\frac{(x^{\prime}/x;q_{2}^{-1})_{\infty}(q_{12}^{-1}x^{\prime}/x;q _{2}^{-1})_{\infty}}{(q_{1}x^{\prime}/x;q_{2}^{-1})_{\infty}(q_{1}^{-1}x^{ \prime}/x;q_{2}^{-1})_{\infty}}x^{\prime-2b^{2}}\,, \tag{6.45a}\] \[\frac{S_{1}(x)S_{1}(x^{\prime})}{:S_{1}(x)S_{1}(x^{\prime}):} =\frac{(x^{\prime}/x;q_{1})_{\infty}(q_{12}x^{\prime}/x;q_{1})_{ \infty}}{(q_{1}x^{\prime}/x;q_{1})_{\infty}(q_{2}^{-1}x^{\prime}/x;q_{1})_{ \infty}}x^{\prime-2b^{-2}}\,,\] (6.45b) \[\frac{S_{0}(x)S_{1}(x^{\prime})}{:S_{0}(x)S_{1}(x^{\prime}):} =\frac{(-q_{2}xx^{\prime})}{(1-q_{1}x^{\prime}/x)(1-q_{2}x/x^{ \prime})}\,. \tag{6.45c}\] Based on the screening current, we have the following realization for the \(\lambda\)-fixed point contribution for \(\mathrm{U}(n)\) gauge theory [17],23 Footnote 23: In this case, we need infinitely many screening currents to construct five-dimensional gauge theory partition function. A similar construction is known for three-dimensional gauge theory, which involves a finite number of the screening currents [1, 1, 20, 16]. These theories are related through the Higgsing process. See Sec. 6.5. \[Z_{\lambda}(t)=\prod_{x\in\mathcal{X}_{\lambda}}^{\succ}S_{0}(x) \tag{6.46}\] The product \(\prod^{\succ}\) is the radial ordering product with the ordering in the set \(\mathcal{X}_{\lambda}\), \(\prod^{\succ}S_{0}(x)=S_{0}(x_{1})S_{0}(x_{2})\cdots\) for \(|x_{1}|>|x_{2}|>\cdots\) with \(|q_{1}|\ll|q_{2}^{-1}|<1\) and \(\mathrm{e}^{a_{\alpha}}\sim 1\). 
In order to discuss the total partition function obtained by summation over the fixed point contributions, we define the screening charge from the screening current, \[Q_{0}(x)=\sum_{k\in\mathbb{Z}}S_{0}(xq_{2}^{k})\,,\qquad Q_{1}(x)=\sum_{k\in \mathbb{Z}}S_{1}(xq_{1}^{k})\,. \tag{6.47}\] Then, the \(Z\)-state is constructed from the screening charge,24 Footnote 24: We also have the free field realization of the contour integral formula based on similar vertex operators. See [16, 16] for details. \[|Z\rangle=Z(t)\,|0\rangle\,\qquad Z(t)=\prod_{x\in\mathcal{X}_{\emptyset}}^{ \succ}Q_{0}(x)\,. \tag{6.48}\] We remark that in the expansion of the screening charge product, there are contributions of pseudo-fixed points, \(\lambda\in\mathfrak{M}^{\mathbb{Z}}\backslash\mathfrak{M}^{\mathsf{T}}\), where \(\mathfrak{M}^{\mathbb{Z}}=\{\mathbb{Z}\ni\lambda_{\alpha,i}\xrightarrow{i>1}0, \alpha\in[n],i\in\mathbb{N}\}\). However, we have \(Z_{\lambda}(t)=0\) for \(\lambda\in\mathfrak{M}^{\mathbb{Z}}\backslash\mathfrak{M}^{\mathsf{T}}\), and hence it does not contribute to the partition function. We have a similar realization of supergroup gauge theory partition function. In this case, we may use both the screening currents to obtain the \(Z\)-state [17, 18]. #### \(\boldsymbol{\frown}\)\(Z\)-state for supergroup gauge theory The \(Z\)-state for supergroup gauge theory is constructed by two types of screening charges, \[|Z\rangle=\prod_{x\in\mathcal{X}_{\emptyset}^{0}}^{\succ}Q_{0}(x)\prod_{x \in\hat{X}_{\emptyset}^{1}}^{\succ}Q_{1}(x)\,|0\rangle. \tag{6.49}\] In fact, this expression is analogous to the operator formalism of the supermatrix model discussed in Sec. 3.4. #### 6.3.4 \(q\)-Virasoro constraint Similarly to the Virasoro algebra discussed in Sec. 3.4, one can construct an operator that commutes with the screening charges in this case as well. We define the vertex operator, called the \(\mathsf{Y}\)-operator, which yields the \(\mathscr{Y}\)-function in gauge theory as follows, \[\mathscr{Y}(x)=\bra{0}\mathsf{Y}(x)\ket{Z}\,,\qquad\widetilde{\mathscr{Y}}(x)= \bra{Z}\mathsf{Y}(x)\ket{0}\,. \tag{6.50}\] Then, we define the \(\mathsf{T}\)-operator using the \(\mathsf{Y}\)-operator, \[\mathsf{T}(x)=\mathsf{Y}(x)+\mathsf{Y}^{-1}(xq_{12}^{-1})\,. \tag{6.51}\] Namely, this is an operator analog of the \(qq\)-character introduced in Sec. 6.2. We see that this \(\mathsf{T}\)-operator commutes with the screening charges, \([\mathsf{T}(x),Q_{\sigma}(x^{\prime})]=0\). Having the mode expansion, \(\mathsf{T}(x)=\sum_{n\in\mathbb{Z}}T_{n}x^{-n}\), they obey the following algebraic relation, \[[T_{n},T_{m}]=-\sum_{k=1}^{\infty}f_{k}(T_{n-k}T_{m+k}-T_{m-k}T_{n+k})-\frac{ (1-q_{1}^{n})(1-q_{2}^{n})}{1-q_{12}}(q_{12}^{n}-q_{12}^{-n})\delta_{n+m,0}\,, \tag{6.52}\] where the coefficients \(\{f_{k}\}_{k\in\mathbb{N}}\) are defined from the structure function, \[f(x)=\exp\left(\sum_{n=1}^{\infty}\frac{(1-q_{1})(1-q_{2})}{n(1+q_{12}^{n})}x^ {n}\right)=\sum_{k=0}^{\infty}f_{k}x^{k}\,. \tag{6.53}\] This structure function comes from the OPE between the \(\mathsf{Y}\)-operators, \(\mathsf{Y}(x)\mathsf{Y}(x^{\prime})/\colon\mathsf{Y}(x)\mathsf{Y}(x^{\prime}) \colon=f^{-1}(x^{\prime}/x)\). 
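The coefficients \(f_{k}\) can be generated directly from this definition; the small sketch below (ours, not part of the original construction) expands the exponential in (6.53), as printed, to low order.

```python
import sympy as sp

x, q1, q2 = sp.symbols('x q1 q2')
q12 = q1 * q2
N = 4                                   # truncation order of the series

exponent = sum((1 - q1) * (1 - q2) / (n * (1 + q12**n)) * x**n for n in range(1, N + 1))
f = sp.series(sp.exp(exponent), x, 0, N + 1).removeO().expand()

for k in range(N + 1):
    print(k, sp.simplify(f.coeff(x, k)))
# f_0 = 1 and f_1 = (1 - q1)(1 - q2)/(1 + q1*q2) fix the leading terms of the relation (6.52)
```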
The algebraic relation (6.52) is equivalent to the following relation for the \(\mathsf{T}\)-operator, \[f\left(\frac{x^{\prime}}{x}\right)\mathsf{T}(x)\mathsf{T}(x^{\prime})-f\left( \frac{x}{x^{\prime}}\right)\mathsf{T}(x^{\prime})\mathsf{T}(x)=-\frac{(1-q_{1 })(1-q_{2})}{1-q_{12}}\left(\delta\left(\frac{x^{\prime}}{x}q_{12}\right)- \delta\left(\frac{x}{x^{\prime}}q_{12}\right)\right)\,, \tag{6.54}\] where we define the multiplicative \(\delta\)-function, \(\delta(z)=\sum_{n\in\mathbb{Z}}z^{n}\). This is called the _quadratic relation_ (or the \(fTT\) relation) for the generating current of the \(q\)-deformed Virasoro algebra constructed in [10]. We have remarks on this relation. Taking the limit \(q_{1}\to 1\) or \(q_{2}\to 1\), the quadratic relation is simply reduced to \([\mathsf{T}(x),\mathsf{T}(x^{\prime})]=0\). Hence, the \(\mathsf{T}\)-operator becomes a commuting operator, which is then identified with the \(q\)-character, and also with the transfer matrix of the corresponding quantum integrable system. From this point of view, we have a connection with quantum integrable system in this limit. See Sec. 6.4 for a related discussion. Another remark is that, interpreting the \(\mathsf{T}\)-operator as the \(qq\)-character, the quadratic relation corresponds to the anti-symmetric tensor product of the fundamental representation of \(A_{1}\) algebra, namely \(\mathfrak{sl}(2)\). Therefore, the right hand side of (6.54) is interpreted as a contribution of the trivial representation. We can see more structures in generic tensor product. See [123] for more details. From this point of view, the gauge theory average of \(qq\)-character is identified with the \(\mathsf{T}\)-operator correlator, \(\mathscr{T}(x)=\bra{0}\mathsf{T}(x)\ket{Z}\). In order to obtain a degree-\(n\) polynomial average, we need a modification of the vacuum state. We introduce the degree-\(n\) vacuum, such that \(\mathbb{T}_{m}\left|n\right>=0\) for \(m>n\), and define the modified \(Z\)-state, \(\left|Z\right>=Z(t)\left|n\right>\). Recalling that the \(\mathsf{T}\)-operator commutes with the screening charges (hence with \(Z(t)\)), the \(qq\)-character average, \(\mathscr{T}(x)=\left<0\right|\mathsf{T}(x)\left|Z\right>\), becomes a degree-\(n\) polynomial in \(x\). This is an analogue of the Virasoro constraint that we call the \(q\)-Virasoro constraint. See also [12, 13, 14, 15, 16] for related discussions. From this point of view, we obtain the same \(q\)-Virasoro constraint for supergroup gauge theory since the \(\mathsf{T}\)-operator commutes with the both screening charges, \(Q_{\sigma}(x)\). Considering generic quiver gauge theory, we will obtain the constraint with quiver W-algebra. See [17] for details. ### Bethe/gauge correspondence Although we have focused on gauge theory on \(\mathbb{C}^{2}\), we may discuss its implication to further low dimensional theory. Taking the limit \(\epsilon_{2}\to 0\), while keeping \(\epsilon_{1}\) finite, a.k.a., _Nekrasov-Shatashvili (NS) limit_, the partition function asymptotically behaves as \(Z\approx\exp\left(\frac{1}{\epsilon_{2}}\widetilde{\mathcal{W}}+\cdots\right)\), where \(\widetilde{\mathcal{W}}\) is identified with the twisted superpotential of the corresponding two-dimensional \(\mathcal{N}=(2,2)\) theory. 
Then, the saddle point equation in this limit yields the twisted \(F\)-term condition, \[\exp\left(\frac{\partial\widetilde{\mathcal{W}}}{\partial\sigma} \right)=1\,, \tag{6.55}\] where we denote the scalar field in the twisted chiral multiplet by \(\sigma\). For example, for four-dimensional \(\mathrm{U}(n)\) gauge theory with \(2n\) flavors (\(n\) fundamental and \(n\) anti-fundamental hypermultiplets), the saddle point equation is given by \[-\mathfrak{q}\frac{a(x)d(x+\epsilon_{12})}{\mathscr{Y}(x)\mathscr{Y}(x+ \epsilon_{12})}=1\,, \tag{6.56}\] where \(a(x)\) and \(d(x)\) are the matter polynomials, \(a(x)=\prod_{f\in[n]}[x-m_{f}]\) and \(d(x)=\prod_{f\in[n]}[x-\widetilde{m}_{f}]\). Recalling the definition of the \(\mathscr{Y}\)-function (6.8), we write \(\mathscr{Y}(x)=\mathscr{Q}(x)/\mathscr{Q}(x-\epsilon_{1})\) with redefinition, \(\mathscr{Y}(x)\to d(x)\mathscr{Y}(x)\). Then, the saddle point equation is written in the following form in the limit \(\epsilon_{2}\to 0\),25 Footnote 25: In this case, the \(\mathscr{Q}\)-function involves infinitely many roots. We need to impose the Higgsing condition to obtain a finite polynomial \(\mathscr{Q}\)-function. See, e.g., [11, 12] and also Sec. 6.5 for a related discussion. \[\frac{a(x)}{d(x)}=-\mathfrak{q}\frac{\mathscr{Q}(x+\epsilon_{1}) }{\mathscr{Q}(x-\epsilon_{1})}\,, \tag{6.57}\] which can be identified with the Bethe equation of \(\mathfrak{sl}(2)\)-spin chain (choice of the index convention (5.36) corresponds to XXX/XXZ/XYZ chain) of length \(L=n\) and the roots of the \(\mathscr{Q}\)-function identified with the Bethe roots. Denoting the mass parameter by \(m_{\alpha}=\nu_{\alpha}+\mathrm{i}s_{\alpha}\) and put \(\widetilde{m}_{\alpha}=\bar{m}_{\alpha}\), the parameters \((\nu_{\alpha},s_{\alpha})\) are identified with the inhomogeneity and the spin of the site \(\alpha\in[L]\). In this context, the gauge coupling \(\tau\) with \(\mathfrak{q}=\exp(2\pi\mathrm{i}\tau)\) is identified with the twist boundary condition parameter. Applying this formalism to supergroup gauge theory, we obtain the Bethe equation of the form, \[\frac{a_{0}(x)}{a_{1}(x)}\frac{d_{1}(x)}{d_{0}(x)}=-\mathfrak{q} \frac{\mathscr{Q}_{0}(x+\epsilon_{1})}{\mathscr{Q}_{0}(x-\epsilon_{1})}\frac{ \mathscr{Q}_{1}(x+\epsilon_{1})}{\mathscr{Q}_{1}(x-\epsilon_{1})}\,, \tag{6.58}\] which implies positive and negative magnons carrying positive and negative excitations and the sites involving positive and negative spins [11, 12]. We remark that the situation that we now consider is different from the spin chain with superalgebra symmetry. In order to realize such a superspin chain, we need to consider quiver gauge theory that corresponds to the Dynkin diagram of the Lie superalgebra [10, 11, 13, 14]. The spin chain model is not a unique example to be discussed in this framework. For example, it has been known that the Seiberg-Witten curve of \(G\)-SYM theory is given by the spectral curve of \(\widehat{L^{G}}\)-Toda chain [14]. In fact, one can obtain quantum Toda chain from pure SYM theory by imposing the codimension-two surface defect. In the presence of such a defect, we need a modification of the instanton moduli space to the so-called (affine) Laumon space, which is equivalent to consider instantons on the partial orbifold space, \(\mathbb{C}\times\mathbb{C}/\mathbb{Z}_{n}\)[10, 11, 12]. 
In this context, the \(\Omega\)-background parameter \(\epsilon_{1}\) plays a role of the quantum parameter, and when \(\epsilon_{2}\) is also finite, we obtain the non-stationary quantum integrable system, which is reduced to the stationary system in the NS limit, \(\epsilon_{2}\to 0\). In this case, one can construct the quantum Toda Hamiltonian from the \(qq\)-character (\(q\)-character in the NS limit) in the presence of the surface defect. Applying this formalism to supergroup gauge theory, we then obtain a super-Toda chain, which is associated with the root system of the corresponding Lie superalgebra [13] (In general, one can construct the integrable Toda chain from the root system data). Further incorporating the hypermultiplet in the adjoint representation, namely \(\mathcal{N}=2^{*}\) theory, the Toda chain is promoted to the Calogero-Moser-Sutherland (CMS) system, and we obtain the "double" CMS system associated with the Lie superalgebra [10, 11, 12] from supergroup gauge theory with the surface defect. See [12] for details. ### Higgsing and intersecting defects As we discussed throughout this article, supergroup gauge theory inevitably violates the unitarity, and thus its physical realization seems not to be straightforward. In this part, we would demonstrate that supergroup theory can be engineered from physical setups by imposing the defect operators. In the context of supersymmetric gauge theory, there exist two branches in the moduli space of supersymmetric vacua, called the Coulomb and Higgs branches (In general, one can also consider the mixed branch). Although Seiberg-Witten theory of four-dimensional \(\mathcal{N}=2\) gauge theory describes the Coulomb branch, it has been known that there exists a root of Higgs branch locus, which is reached by tuning the fundamental mass parameter to the Coulomb moduli parameter as \(m_{f}=a_{\alpha}\)[10, 13]. In the presence of the \(\Omega\)-background, this condition is "quantized" to be \(m_{f}=a_{\alpha}+n_{\alpha,1}\epsilon_{1}+n_{\alpha,2}\epsilon_{2}\), and under this condition, the site \((n_{\alpha,1}+1,n_{\alpha,2}+1)\) cannot be included by the \(\alpha\)-th partition, \((n_{\alpha,1}+1,n_{\alpha,2}+1)\not\in\lambda_{\alpha}\) (the pit condition [13]).26 Namely, we have a restriction on the instanton configuration. In fact, such a restriction on a partition is also discussed in the context of representation theory of supergroup: Partitions with the pit condition parameterize irreducible representations of \(\mathrm{U}(n_{\alpha,1}|n_{\alpha,2})\). In particular, for \(n_{\alpha,2}=0\), partitions have at most \(n_{\alpha,1}\) rows, which yield irreducible representations of \(\mathrm{U}(n_{\alpha,1})\) group. From physical point of view, \(n_{\alpha,1,2}\) are interpreted as the numbers of fluxes or vortex defects in \(\mathbb{C}_{1}\) and \(\mathbb{C}_{2}\) direction, respectively. Hence, the situation with both \(n_{\alpha,1,2}\) realizes intersecting defects crossing at the origin [11, 12, 12, 13, 14, 15, 16]. See also [17]. Computing the partition function of this configuration, it is given in the form of (deformation of) supermatrix model, which implies that we have an emerging supergroup structure originated from physical (non-supergroup) theory by considering the intersecting defects. A similar realization of supergroup theory can be discussed in a higher-dimensional setup. 
Starting with an eight-dimensional setup on \(\mathbb{C}^{4}\), called the gauge origami, we incorporate D-branes extended in \(\mathbb{C}_{1}\times\mathbb{C}_{2}\) and \(\mathbb{C}_{2}\times\mathbb{C}_{3}\)[18, 19]. In this case, we obtain the non-stationary double CMS system at finite \(\epsilon_{2}\), and the stationary one in the limit \(\epsilon_{2}\to 0\)[18, 19]. This implies that supergroup structure emerges from intersecting defects at the \(\mathbb{C}_{2}\)-plane. ### Acknowledgement I would like to thank Nicolas Babinet, Heng-Yu Chen, Norton Lee, Fabrizio Nieri, Go Noshita, Vasily Pestun, Yuji Sugimoto for collaborations and fruitful discussions on this subject. I am in particular grateful to Go Noshita for careful reading of the manuscript and valuable comments. This work was in part supported by "Investissements d'Avenir" program, Project ISITE-BFC (No. ANR-15-IDEX-0003), EIPHI Graduate School (No. ANR-17-EURE-0002), and Bourgogne-Franche-Comte region.
2301.13176
Graphene Oxide Photoreduction Recovers Graphene Hot Electron Cooling Dynamics
Reduced graphene oxide (rGO) is a bulk-processable quasi-amorphous 2D material with broad spectral coverage and fast electronic response. rGO sheets are suspended in a polymer matrix and sequentially photoreduced while measuring the evolving optical spectra and ultrafast electron relaxation dynamics. Photoreduced rGO yields optical absorption spectra that fit with the same Fano lineshape parameters as monolayer graphene. With increasing photoreduction time, rGO transient absorption kinetics accelerate monotonically, reaching an optimal point that matches the hot electron cooling in graphene. All stages of rGO ultrafast kinetics are simulated with a hot-electron cooling model mediated by disorder-assisted supercollisions. While the rGO room temperature 0.31 ps$^{-1}$ electronic cooling rate matches monolayer graphene, subsequent photoreduction can rapidly increase the rate by ~10-12$\times$. Such accelerated supercollision rates imply a reduced mean-free scattering length caused by photoionized point-defects on the rGO sp$^2$ sub-lattice. For visible range excitations of rGO, photoreduction shows three increasing spectral peaks that match graphene quantum dot (GQD) transitions, while a broad peak from oxygenated defect edge states shrinks. These three confined GQD states donate their hot carriers to the graphene sub-lattice with a 0.17 ps rise-time that accelerates with photoreduction. Collectively, many desirable photophysical properties of 2D graphene are replicated through selectively reducing rGO scaffolded within a 3D bulk polymeric network.
Alden N. Bradley, Spencer G. Thorp, Gina Mayonado, Edward Elliott, Matt W. Graham
2023-01-30T18:43:21Z
http://arxiv.org/abs/2301.13176v1
# Graphene Oxide Photoreduction Recovers Graphene Hot Electron Cooling Dynamics ###### Abstract Reduced graphene oxide (rGO) is a bulk-processable quasi-amorphous 2D material with broad spectral coverage and fast electronic response. rGO sheets are suspended in a polymer matrix and sequentially photoevolved while measuring the evolving optical spectra and ultrafast electron relaxation dynamics. Photoreduced rGO yields optical absorption spectra that fit with the same Fano lineshape parameters as monolayer graphene. With increasing photoreduction time, rGO transient absorption kinetics accelerate monotonically, reaching an optimal point that matches the hot electron cooling in graphene. All stages of rGO ultrafast kinetics are simulated with a hot-electron cooling model mediated by disorder-assisted supercollisions. While the rGO room temperature 0.31 ps\({}^{-1}\) electronic cooling rate matches monolayer graphene, subsequent photoreduction can rapidly increase the rate by 10-12\(\times\). Such accelerated supercollision rates imply a reduced mean-free scattering length caused by photoionized point-defects on the rGO sp\({}^{2}\) sub-lattice. For visible range excitations of rGO, photoreduction shows three increasing spectral peaks that match graphene quantum dot (GQD) transitions, while a broad peak from oxygenated defect edge states shrinks. These three confined GQD states donate their hot carriers to the graphene sub-lattice with a 0.17 ps rise-time that accelerates with photoreduction. Collectively, many desirable photophysical properties of 2D graphene are replicated through selectively reducing rGO scaffolded within a 3D bulk polymeric network. \({}^{*}\) co-authors contributed equally. ## I Introduction Graphene oxides (GO) are a widely-used substitute for graphene's remarkable mechanical properties, but its highly amorphous lattice lacks desirable electronic properties such as high conductivity, fast photoresponse and broad spectral coverage. When GO is incorporated in certain polymeric networks, we show systematic photoreduction makes it more graphene-like while maintaining pristine optical-quality films. GO has oxygenated functional groups attached to the 2D carbon lattice via out-of-plane bonds that prevent GO sheets from aggregating in solution phase.[1; 2] GO can be made more graphene-like by chemical or photothermal reduction to make reduced graphene oxide (rGO). Conventionally, these graphene-like rGO layers aggregate and scatter light strongly, making their optical properties hard to compare against monolayer(ml) graphene. Using systematic reduction of isolated GO-in polymer composites, we show the emergence of spectral lineshapes and extract ultrafast photoelectron cooling dynamics that are closely analogous to that of ml-graphene. GO is often used as a bulk-processable substitute for graphene for wide-ranging applications, including electronic sensing, plasmonics, and desalination.[3; 4; 5; 6; 7; 8; 9] The large presence of oxygen in GO introduces an effective band gap (Fig. 1a inset), with a tunable energy determined by the carbon-to-oxygen ratio. Previous theoretical and experimental studies suggest bandgaps ranging from \(\sim\)0.6-3.1 eV for GO that can vanish nearly completely as GO is reduced.[10] GO samples reduced via pulsed Xe arc lamps effectively remove hydroxyl, epoxy, and carboxyl groups to increase the size of graphene-like \(sp^{2}\) regions. 
The amount of photoreduction changes the ratio of the oxygenated-\(sp^{3}\) to conjugated-\(sp^{2}\) sub-lattice regions.[11; 12; 13] Very selective growths and controlled reduction are required to realize desired optoelectronic applications for GO that have included broadband optical nonlinearity[14; 15], tunable photoluminescence,[16] and resonant energy transfer.[17] With widely-varying ratios of oxygen and carbon, the highly inhomogeneous and amorphous nature of the GO and rGO lattices makes a direct comparison to ml-graphene difficult. In rGO, individual \(sp^{2}\) graphene-like sublattice regions often become surrounded by \(sp^{3}\) oxidized domains, forming molecular-like confined regions often called graphene quantum dots (GQDs) or graphene nanoclusters. While the composition of rGO varies greatly, it can roughly be decomposed into three types of sub-lattice illustrated in Fig. 1b: (1) extended \(sp^{2}\) hybridized regions, (2) confined \(sp^{2}\) lattice nanoclusters or GQDs, and (3) oxidized or \(sp^{3}\) regions. Zhang _et al._ performed transient absorption on rGO in solution and found that the carbon (\(sp^{2}\)) and oxidized domains (\(sp^{3}\)) could be treated independently.[18; 19] Photoexcited carriers in the spatially-confined \(sp^{2}\) GQDs produce Frenkel excitons with energies tunable with the size of the GQD conjugation network.[20; 21] The local oxygenated functional groups at domain edges also create many optically active defect states within the lattice that are seen in photoluminescence studies.[22; 23; 24] While some of the mechanical and chemical properties of GO-based materials are analogous to graphene, the conditions necessary to replicate graphene-like electronic behavior in rGO are less clear. Past studies have compared the transient absorption (TA) response of GO and rGO prepared by chemical reduction in solution[25] and thin films.[26; 24] This study concerns the optical properties of GO and rGO embedded in a transparent polymer film over six controlled degrees of photoreduction. The TA relaxation resolves how the ultrafast hot electron cooling rate is modified at each stage of photoreduction using tunable probe energies ranging from 1.2 to 2.3 eV. Hot electron cooling in graphene is typically modeled with two rates associated with optical phonon scattering and disorder-assisted relaxation processes.[27; 28; 29] In addition to this graphene-like relaxation, prior rGO studies are dominated by a long, 10-200 ps relaxation component previously ascribed to electron trapping at defect sites.[30] The results obtained from the successive photoreduction of GO are modeled with first-principle models of absorption lineshapes and hot-electron cooling applied previously to graphene. In Section IV.A, the evolution of the absorption lineshape with photoreduction is modeled by competing contributions from a graphene-like Fano lineshape and GO-oxide-related absorption. Then Section IV.B applies a hot electron supercollision model to determine at what stage of photoreduction rGO most closely matches the dynamics of ml-graphene. Over most visible and UV excitation energies, Section IV.C shows the GO-sub-lattice and graphene quantum-dot states dominate both the photoluminescence and ultrafast response. Lastly, we resolve how photoreduction of GO impacts the ultrafast rate of acceptor-donor electron transfer from the photoexcited GQDs to graphene acceptor states.
## II Experimental Methods

The GO and rGO polymer samples were fabricated using commercially available chemically exfoliated graphene oxide sheets (Graphenea) containing \(\sim\)53% carbon and \(\sim\)44% oxygen. The sheets are dispersed in a N, N-dimethylacrylamide (DMAA) polymer with added PMMA sites to scaffold the GO and minimize aggregation. The mixture is cured between two 1 mm thick glass slides, resulting in a sample thickness of 220 microns. The sample is then photo-reduced via a pulsed Xenon arc lamp at a 1 Hz repetition rate. This low repetition rate was chosen to prevent gas bubbles from forming during the reduction process. Absorbance is measured via a Cary IR-UV-Vis spectrometer. Both excitation and emission photoluminescence are detected with a commercial fluorimeter (Horiba NanoLog). Both degenerate and non-degenerate pump-probe experiments are conducted with 140 fs pulses from a Ti:sapphire laser (Coherent Chameleon) and an optical parametric oscillator (APE OPO Compact). An optical parametric amplifier is used to tune the output wavelength. The beam is split into two parts, a strong pump and a weaker probe beam, with a power ratio of \(\sim\)10:1. The intensity of the pump beam is modulated using an acousto-optic modulator (AOM, Crystal Tech) at 500 kHz. The polarization of the pump and probe beams is linear and set parallel to each other. For the non-degenerate experiment, the pump beam is frequency doubled by a second harmonic generation unit (APE SHG) prior to modulation. Alternatively, a white-light supercontinuum is generated to provide a broadly tunable probe. Both beams are focused onto the sample by a single lens. The probe beam waist at the sample is approximately 80 microns. The transmitted probe beam is detected by photodiode lock-in amplification (Zurich Instruments, HFLI and MFLI) at 500 kHz modulation. To compare the rGO polymer physics to ml-graphene, similar measurements to the above were carried out using an ultrafast transient absorption (TA) microscopy setup with a 1 \(\mu\)m spot size. The ml-graphene was prepared by chemical vapor deposition (CVD) and wet-transferred to a thin silicon nitride grid. The above non-degenerate pump-probe scheme was used in a collinear geometry coupled to a 4\(f\)-confocal scanning microscope (Olympus BX51W). The absorption spectra of ml-graphene are taken on the same microscope by coupling in a tunable Xe-arc illumination source and detecting the full plane images on a camera (EMCCD, PI-ProEM) after background renormalization.

Figure 1: **(a)** Comparison of GO vs. rGO band with chemical structures. **(b)** Illustration of the three prominent sublattice types within the rGO structure (sp\({}^{2}\), sp\({}^{2}\) graphene quantum dot (GQD) and oxygenated sp\({}^{3}\)-lattice). **(c)** Linear and transient absorption spectra are measured at five stages of the photoreduction. With increasing photoreduction, NIR transmittance decreases to more closely approximate the (renormalized) CVD ml-graphene transmittance curve. Conversely, as-grown GO in solution (gray line) has a prominent \(\pi-\pi^{*}\) bandgap. (_inset_) Graphene band structure highlighting the M-saddle point transition. **(d)** Corresponding transient transmittance kinetics at \(E_{probe}\)=1.8 eV show carrier relaxation accelerates with reduction. (_inset_) The \(\tau_{2}\) lifetime decreases linearly with photoreduction.

## III Results

Spanning the UV to near-IR regions, Fig.
1c plots the absolute linear transmission of six graphene oxide (GO) samples in a polymer composite with increasing photothermal reduction times labeled from rGO\({}_{1}\) to rGO\({}_{5}\). Additionally plotted on a renormalized scale, we overlay the linear absorption spectra of both pristine monolayer (ml) graphene (_black line_), and the starting as-grown commercial GO solution (gray line, GO\({}_{solution}\)). The GO solution has a clear bandgap, peaking at the molecular \(\pi-\pi^{*}\) transition. Conversely, ml-graphene gives an expected Fano resonance lineshape peaked at 265 nm, red-shifted from the M-saddle-point transition labeled in Fig. 1c (_inset_).[31] The rGO\({}_{o}\) curve in Fig. 1c is the 'as-grown' GO after incorporation into a hybrid polyacrylic and PMMA polymer matrix described in the methods. The absolute absorbance increases monotonically with GO photothermal reduction time over the NIR and IR regions plotted (from 0.35 eV to 1.5 eV). Photoreduction of GO leads to a spectral lineshape that absorbs light more analogously to CVD monolayer graphene plotted in Fig. 1c. In the solution phase and most polymers, GO aggregates as it is reduced, resulting in colloidal mixtures that strongly scatter light. GO is incorporated in a polymer-sphere matrix scaffold that makes systematic photoreduction possible while maintaining pristine optical quality films. Thus, we are able to compare the absorption lineshapes, photoluminescence, and ultrafast hot electron cooling rates over a wide range of photoreduction. Interestingly, the more heavily reduced graphene oxide samples in Fig. 1c have a transmittance lineshape and slope similar to ml-graphene throughout the near-infrared (NIR) regions. In the supplementary Fig. S2, this absorption spectrum is extended out past 3 \(\mu\)m to the IR-region where the strong similarity to graphene absorption is maintained. Figure 1d plots the normalized transient transmission (\(\Delta T/T\), semi-log scale) kinetics of sequentially photoreduced GO/rGO samples acquired with a 1.8 eV degenerate pump and probe configuration. As the degree of reduction increases, the kinetic relaxation rate accelerates. The data shown in both Figs. 1 and 2 fits (_solid lines_) to a least-squares algorithm requiring three-exponents (\(\tau_{1}\), \(\tau_{2}\), and \(\tau_{3}\)) with pulse deconvolution for the 155 fs laser autocorrelation response. After GO is incorporated and stabilized in the polymer matrix, the relaxation dynamics accelerate monotonically with photoreduction time. In stark contrast, the as-grown solution of GO (gray line in Fig. 1d) has much longer TA relaxation dynamics at all timescales, bearing little resemblance to faster graphene. At a 1.8 eV visible probe energy, the GO polymer composite that received no reduction (highest oxygen content) has the longest TA relaxation kinetics with its \(\tau_{3}\) component comprising 21% of total decay amplitude. The inset of Fig. 1d shows the \(\tau_{2}\) lifetimes all decrease linearly from \(\sim\)1.2 to 0.9 ps with increasing lamp photoreduction time. All samples have a characteristic \(\tau_{2}\) time similar to graphene's characteristic \(\sim\)1 ps decay expected for 1.8 eV probe, suggesting all five samples exhibit graphene-like hot-electron cooling dynamics. 
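As an illustration of this fitting procedure, the sketch below constructs the Gaussian-deconvolved triexponential model used for the \(\Delta T/T\) fits; it is a minimal stand-in for the analysis, and every numerical value in it (amplitudes, lifetimes, instrument-response width, noise level) is a placeholder rather than a measured parameter.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def exp_conv_gauss(t, tau, sigma):
    """Single-sided exponential decay convolved with a Gaussian instrument response."""
    return 0.5 * np.exp(sigma**2 / (2 * tau**2) - t / tau) * \
           erfc(sigma / (np.sqrt(2) * tau) - t / (np.sqrt(2) * sigma))

def tri_exp(t, a1, tau1, a2, tau2, a3, tau3, sigma, t0):
    tt = t - t0
    return (a1 * exp_conv_gauss(tt, tau1, sigma) +
            a2 * exp_conv_gauss(tt, tau2, sigma) +
            a3 * exp_conv_gauss(tt, tau3, sigma))

# synthetic trace standing in for a measured dT/T kinetic (placeholder parameters)
np.random.seed(0)
t = np.linspace(-1.0, 20.0, 400)                       # pump-probe delay in ps
p_true = (0.5, 0.15, 0.4, 1.0, 0.1, 66.0, 0.066, 0.0)  # a1, tau1, a2, tau2, a3, tau3, IRF, t0
data = tri_exp(t, *p_true) + 0.002 * np.random.randn(t.size)

p0 = (0.4, 0.2, 0.4, 1.5, 0.1, 50.0, 0.07, 0.0)
p_fit, _ = curve_fit(tri_exp, t, data, p0=p0, maxfev=20000)
print(np.round(p_fit, 3))                              # recovers tau1, tau2, tau3
```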
By analogy with monolayer graphene, the \(\tau_{1}\) would be associated with relaxation by optical phonons, and \(\tau_{2}\) with disorder-assisted hot electron cooling.[29] The fitting parameters for the fast and long decays are constant at \(\tau_{1}=0.15\) ps and \(\tau_{3}=66\) ps, and all parameters are shown in Fig. 2c-d. Figure 2 plots how the kinetic relaxation rates depend on the selected probe energy (\(E_{pr}\)). Comparing Fig. 2a at \(E_{pr}\)=1.3 eV to Fig. 1d at 1.8 eV, a similar pattern with photoreduction emerges. However, the longest component, \(\tau_{3}\), is negligible for all five cases of photothermal reduction rGO\({}_{1-5}\). In Fig. 2d the slower \(\tau_{2}\) lifetime decreases linearly from 2.5 ps to 1 ps with increasing photoreduction time. \(\tau_{1}\) varies the least with photoreduction. Interestingly, the most reduced samples relax even faster compared to monolayer CVD-grown graphene (_black dashed line_). Figure 2a shows fits to a triexponential decay curve with lifetimes of \(\sim\)0.4 ps, 1-2.5 ps, and \(>\)30 ps for \(\tau_{1}\), \(\tau_{2}\) and \(\tau_{3}\), respectively. Regardless of the incident TA probe energy (1.2 to 1.8 eV), rGO samples relaxed progressively faster as the photoreduction time increased. Figure 2b shows that the TA dynamics of GO, rGO\({}_{3}\), and rGO\({}_{5}\) are slower at \(E_{pr}\)= 1.3 eV (closed circles, 2.6 eV pump) than in the \(E_{pr}\)= 1.2 eV (open circles, 2.2 eV pump) probe energy window. Interestingly, the most reduced sample, rGO\({}_{5}\), always decays more quickly than ml-graphene. This faster decay relative to graphene suggests that the photothermal reduction is ultimately damaging the sp\({}^{2}\) graphene sub-lattice by causing increased disorder and defect sites. This symmetry-breaking results in low energy disorder states that have been previously observed in conjugated carbon systems.[32; 33] This is further supported by the qualitative increase in lattice defect states that is evident from increased emission in the IR region of the PL spectra (see supplemental Fig. S2). Figures 2c-d contain the results of our exponential fitting lines shown in Fig. 1d and 2a-b (_solid lines_). The top panel shows the amplitude of the fast time component (\(\sim\)0.4 ps) at 1.2 eV, 1.3 eV, and 1.8 eV, which changes only moderately as the GO samples are reduced. The middle panel shows the amplitude of the second (\(\tau_{2}\approx 1-2.5\) ps, pink) and third (\(\tau_{3}>\)30 ps, blue) time components, which both decrease with reduction. Importantly, the slow time \(\tau_{3}\) component goes to zero in the limit of heavy reduction and closely resembles the ml-graphene relaxation. The bottom panel of Fig. 2c shows the \(\tau_{2}\) relaxation time of GO decreases roughly linearly with photoreduction time. At all probe energies, the \(\tau_{2}\) relaxation time decreases with reduction, with rGO\({}_{3,4,5}\) having lifetimes shorter than that of CVD graphene under the same optical conditions. The CVD ml-graphene (dashed line in Fig. 1-2) was fit to \(\tau_{2}=\)1.9 ps at 1.2 eV and 1.1 ps at 1.8 eV probe energies, respectively. In most heavily oxygenated rGO samples, the longest \(\tau_{3}\sim 61\) ps component comprises up to 16% of the total decay amplitude. Such samples contain many functional groups; however, the large band gap of the fully oxidized regions is well outside the spectral range of both pump and probe laser energies.
Instead, graphene quantum dots (GQD) create gapped sp\({}^{2}\) molecule-like regions with size-tunable bandgaps that are resonant with our probe beam.[22] For rGO\({}_{3,4,5}\) samples, Fig. 2d shows that the \(\tau_{3}\) time-component is zero for \(E_{probe}<1.3\) eV, suggesting only graphene-like sp\({}^{2}\) sublattice regions are relevant to the electronic dynamics throughout this near-infrared probe region.

Figure 2: **(a)** \(\Delta T/T\) relaxation kinetics at \(E_{probe}=1.3\) eV accelerate with sequential GO photoreduction. Fits show two exponential lifetimes, with only the most oxidized samples requiring a third lifetime of \(\tau_{3}=61\pm 2\) ps. **(b)** The \(\Delta T/T\) kinetics for \(E_{probe}=1.2\) eV (open circles) relax faster than at 1.3 eV (_closed circles_). The rGO\({}_{3}\) photoreduction stage most closely approximates the ml-graphene interband relaxation kinetics shown (_dashed line_). **(c)** For each probe energy, the \(\tau_{1}\) lifetimes (top) are roughly constant, whereas the \(\tau_{2}\) lifetimes (bottom) decrease linearly \(\sim 2.5\times\) with photoreduction to become even faster than ml-graphene. **(d)** Amplitudes (\(A_{1/2/3}\)) of each lifetime component suggest a composition change with increasing amplitude from sp\({}^{2}\) sub-lattice dynamics. The smallest \(A_{3}\) (blue) amplitude quickly decreases to zero as GO is reduced.

## IV Analysis and Discussion

### rGO Fano Lineshape Absorption Analysis

The transmission spectra in Fig. 1a, Fig. S2 and fitted absorption spectra in Fig. 3 all show lineshapes similar to ml-graphene throughout the NIR and IR spectral regions from \(\sim\)0.4 to 3.5 eV. The absorption maxima of both ml-graphene and rGO in Fig. 3 (black line) deviate from the tight-binding model prediction of the graphene van Hove singularity M-point resonance at \(\sim\)5.1 eV.[34] Instead, the graphene absorption is best fit by a Fano lineshape with a renormalized peak resonance energy, \(E_{r}\), that is red-shifted from the M-point by \(\cong 0.3-0.4\) eV.[35; 36] The asymmetric Fano lineshape accounts for the ratio of interference between the discrete (M-point) and continuum transition probabilities through the dimensionless Fano parameter, \(q\).[35] Thus, the tight binding model of the graphene absorption spectrum in Fig. 3 is renormalized for effective electron-hole interaction effects by fitting to the below asymmetric Fano lineshape, \[A_{Fano}(E)=A\left[\frac{\left(\frac{2}{\gamma}(E-E_{r})+q\right)^{2}}{1+\left(\frac{2}{\gamma}(E-E_{r})\right)^{2}}\right] \tag{1}\] where \(\gamma\) is the Lorentzian homogeneous linewidth and \(A\) is the amplitude scaling constant. Fig. 3 plots a hyperspectral measurement of CVD ml-graphene (black line) with its corresponding Fano lineshape fit (dashed line), given by equation 1 above. Table 1 gives the resulting Fano parameters and shows excellent agreement of this work's graphene values with the established literature values.[35; 37] This provides an essential calibration base to quantitatively compare against the lineshape fit of rGO absorption spectra. Figure 3 shows good agreement between the absorption spectra of rGO\({}_{1}\) and rGO\({}_{5}\), and the asymmetric Fano resonance after it is convolved with two Gaussian peaks at energies corresponding to the absorption of the \(n-\sigma^{*}\) and \(\pi-\pi^{*}\) transitions. This fitting analysis suggests that the absorption spectrum in rGO can be understood to contain a Fano resonance similar to that of CVD ml-graphene.
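The lineshape model can be written down compactly; the sketch below evaluates Eq. (1) with the Table 1 parameters and adds the two Gaussian components discussed above. It is our illustration only: the oxide-band amplitudes and widths are placeholders, and the components are simply summed here.

```python
import numpy as np

def fano(E, A, Er, gamma, q):
    """Asymmetric Fano resonance of Eq. (1)."""
    eps = 2.0 * (E - Er) / gamma
    return A * (eps + q)**2 / (1.0 + eps**2)

def gaussian(E, A, E0, fwhm):
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return A * np.exp(-(E - E0)**2 / (2.0 * sig**2))

E = np.linspace(0.5, 6.0, 1101)                        # probe energy grid (eV)
ml_graphene = fano(E, 1.0, 4.80, 1.69, -3.2)           # Table 1, this work; A is arbitrary
rgo5 = (fano(E, 1.0, 4.69, 1.68, -3.2)                 # graphene-like Fano term for rGO_5
        + gaussian(E, 0.35, 4.6, 1.0)                  # pi-pi* GQD band (placeholder amplitude/width)
        + gaussian(E, 0.15, 4.3, 1.6))                 # n-sigma* edge-defect band (placeholder)
print(E[np.argmax(ml_graphene)])                       # Fano maximum sits below the ~5.1 eV M-point
```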
The molecular-like \(\pi-\pi^{*}\) transitions are illustrated in Fig. 3 (_inset_), and show graphene quantum dot (GQD) states also contribute to the spectral weight and are centered near 4.6 eV.[38] At 4.3 eV, rGO also contains sub-gap defect states between the \(\pi\) and \(\pi^{*}\) states, which result from previously reported local oxygen-based disorder that creates edge defect state (\(n\)) to \(\sigma^{*}\) transitions.[39; 40; 41; 2; 30; 42] Due to the heterogeneous oxygen coverage, these local disorder edge states have a much broader absorption FWHM. As rGO\({}_{1}\) is further reduced, we observe in Fig. 3 that the peak area of the \(n-\sigma^{*}\) Gaussian decreases as oxygen is removed, resulting in fewer edge states. Neither of our most oxidized samples (GO\({}_{o}\) and GO\({}_{solution}\)) fit well to a Fano lineshape, suggesting only rGO samples have a graphene-like absorption lineshape in the IR and NIR regions. Table 1 contains a summary of the Fano fitting parameters, showing good agreement between the literature[35; 37] and our results for monolayer graphene and rGO\({}_{5}\). rGO\({}_{5}\) contains a large absorption from the linear dispersion near the K and K' points, where excited carriers couple strongly to the continuum, similar to monolayer graphene. For rGO\({}_{1}\), the Fano parameter \(q\) decreases significantly from monolayer graphene, suggesting electron-hole interaction effects are increasingly screened for transitions near the van Hove singularity. For GO and lightly reduced rGO, Table 1 shows the Fano parameter is many times larger than in highly reduced samples and monolayer graphene. This suggests the many edge states in more oxidized graphene couple strongly to continuum-like states. The inset of Fig. 3 shows a qualitative depiction of how the density of states changes from GO to rGO. As the samples are reduced, they contain larger area regions of non-interrupted \(sp^{2}\) carbon, leading to a more graphene-like distribution of continuum states, resulting in a better Fano lineshape fit. The two convolved Gaussians show the effect of reduction on the absorption spectra, with the amplitude of the \(n-\sigma^{*}\) transition decreasing significantly, suggesting the removal of oxygen functional groups. We also see that the absorption peak in rGO\({}_{1}\) shifts slightly to lower energy compared to rGO\({}_{5}\). This shift has been theoretically predicted by Roy et al.[22], who used DFT to calculate the band structure of GO at varying oxygen content, finding that the addition of oxygen decreases the band gap at the M-point. However, the underlying Fano resonance energy (\(E_{r}\) in Table 1) does not change with photoreduction. The very large \(q\) Fano parameter required to fit the most oxidized rGO\({}_{1}\) samples suggests the sp\({}^{2}\) hybridized regions are not extensively delocalized and retain a molecular-like character.

Figure 3: Under each linear absorption spectrum (_solid lines_) the deconvolved Fano resonance lineshape fit is plotted in _dashed lines_. Unlike pristine ml-graphene (_black_), the two rGO samples plotted also require two convolved Gaussians (_dash-dot_), suggesting molecular-like transitions labeled \(\pi\) to \(\pi^{*}\) and edge defect transitions, \(n\) to \(\sigma^{*}\) (see inset). The resulting Fano-Gaussian convolved fits (_dotted lines_) show the graphene sub-lattice Fano parameter, \(q\), increases with photoreduction, consistent with more lattice disorder.

\begin{table}
\begin{tabular}{l c c c}
Sample & \(E_{r}\) (eV) & \(\gamma\) (eV) & \(q\) \\ \hline
ml-graphene [CVD] & 4.80 & 1.69 & -3.2 \\
ml-graphene [exfoliated][35] & 4.73 & 1.30 & -3.3 \\
rGO\({}_{5}\) [highly reduced] & 4.69 & 1.68 & -3.2 \\
rGO\({}_{1}\) [barely reduced] & 4.62 & 2.16 & -50 \\
\end{tabular}
\end{table}

Table 1: Fano fitting parameters for data in Fig. 3 (_dashed lines_) show good agreement between our monolayer graphene data and established literature values.[35; 36] The Fano parameter \(q\) of rGO\({}_{5}\) best matches ml-graphene. Two convolved Gaussians for the GQD \(\pi-\pi^{*}\) and edge-state defect transitions are also required.

### Hot-electron cooling rates in reduced graphene oxide

Figure 4 fits the hot electron cooling TA kinetics in progressively reduced GO as the TA probe energy is increased from 1.2 (top) to 1.8 eV (bottom). Specifically, the hot-electron cooling rate (\(\tau_{SC}\)) is extracted. Unlike the exponential rate \(\tau_{2}\) from Fig. 2, \(\tau_{SC}\) is analogous to the recombination rate as the electrons cool near the Fermi energy, and is independent of probe energy (\(E_{probe}\)). To connect the above phenomenological exponential relaxation models of GO to this first-principle hot-electron cooling model, the fits in Figure 4 model our TA relaxation kinetics using a hot electron heat dissipation rate \(H=C_{e}(dT_{e}/dt)\), where \(C_{e}\) and \(T_{e}\) are the electronic heat capacity and temperature, respectively. The top panel of Fig. 4a contains first-principle hot electron cooling model fits (_solid lines_) to the normalized TA kinetics of the rGO samples. Hot electron cooling rates in rGO can be qualitatively understood by comparing to CVD ml-graphene kinetics (black dotted line). The lowest energy probe (\(E_{pr}\)=1.2 eV) in the top panel of Fig. 4a shows the hot electron cooling rate response of ml-graphene (dashed line) is identical to rGO\({}_{1}\), rGO\({}_{2}\) and rGO\({}_{3}\). Interestingly, rGO\({}_{4,5}\) dissipates heat even faster than CVD ml-graphene. The mechanism for fast energy dissipation or hot-electron cooling in graphene has been widely debated in the past. The optical phonon dissipation model [43; 28; 44] evolves on the sub-ps relaxation timescale of the \(\tau_{1}\) component. At longer relaxation times, the disorder-mediated acoustic phonon decay pathway, or supercollision (SC) hot electron cooling model, is the primary factor limiting cooling of the photoexcited hot electron temperature, \(T_{e}(t)\).[45] Experimental studies demonstrate the SC-model [45] successfully predicts graphene's photocurrent [29], optical [46] and electrical [47] heating response. However, the applicability of the SC-model to the more disordered lattices of GO and rGO has not been considered. To understand hot electron cooling in rGO, we apply the acoustic phonon SC-model illustrated in Fig. 4b (inset). In the SC model, hot electron cooling near the Fermi level occurs without crystal momentum conservation. Instead, higher-energy (\(\sim k_{B}T_{e}\)) acoustic phonons are emitted with the momentum imbalance, \(q_{recoil}\), accounted for by disorder-induced intrinsic lattice recoil.[45] This SC hot-electron cooling process is illustrated in Fig.
This SC hot-electron cooling process is illustrated in Fig. 4b (inset), and gives a faster hot electron cooling rate than a hot-phonon model; the cooling rate is given by [29; 45] \[\frac{dT_{e}}{dt}=-\frac{H}{\alpha T_{e}}=-\frac{A}{\alpha}\frac{T_{e}^{3}-T_{l}^{3}}{T_{e}}, \tag{2}\] where \(A/\alpha\) is the SC rate coefficient, and \(T_{l}\) and \(T_{e}\) are the lattice and electron temperatures, respectively. Solving Eq. 2, \(T_{e}(t)\cong\frac{T_{o}}{1+AT_{o}t/\alpha}\) when \(T_{e}(t)\gg T_{l}\), where \(T_{o}\) is the initial electron temperature. Since all data shown are at \(T_{l}=292\) K, the transient change in \(T_{e}(t)\) is small compared to \(T_{l}\), or \(T_{e}(t)-T_{l}\ll T_{l}\), such that we can approximate Eq. 2 by expanding the leading terms to arrive at the room-temperature hot electron temperature, \(T_{e}(t)\cong T_{l}+(T_{o}-T_{l})e^{-t/\tau_{SC}}\), with \(\tau_{SC}^{-1}=3AT_{l}/\alpha\). [29] The TA response is obtained using the hot electron (or hole) temperature (\(T_{e}\)) through analytically fitting to the transient interband optical conductivity, \(\Delta\sigma(E_{pr},t)=-e^{2}/4\hbar\left[f_{e/h}(T_{e}(t),E_{pr})-f_{e/h}(T_{l},E_{pr})\right]\). [34] The Fermi-Dirac hot-electron occupancy functions, \(f_{e/h}(T_{e}(t),E_{pr})\), at the probe energy (\(E_{pr}\)) are given in the Supplementary Materials, together with the resulting change in interband optical conductivity \(\Delta\sigma(t,E_{pr})\). [48; 44] In Fig. 4b the hot electron cooling rates (\(\tau_{SC}^{-1}\)) for rGO are extracted by fitting the data in Fig. 4a to the analytical SC-model solution (Eq. 2), allowing for two additional exponential components (\(\tau_{1}\) and \(\tau_{3}\)). This fast component, \(\tau_{1}\cong 0.34\) ps, averages over the initial electron thermalization and optic phonon emission timescale and is discussed elsewhere. [49; 50] Any molecular-like \(\pi-\pi^{*}\) transitions present are captured by \(\tau_{3}\sim 61\) ps. The accelerating TA relaxation kinetics in Fig. 4a are consistent with the idea that photoreduction of GO creates more disorder and defects on the graphene sublattice. Figure 4b shows an increase in the rate of hot electron cooling, \(\tau_{SC}^{-1}\). Unlike the earlier exponential fits, the rate \(\tau_{SC}^{-1}\) is independent of the probe energy and is the rate at which the hot-electron Fermi-Dirac distribution cools. The hot electron cooling time for the comparison monolayer CVD-grown graphene (dashed line in Fig. 4b) at 292 K is 3.1 ps. \(\tau_{SC}^{-1}\) increases by a factor of \(\sim\)6 as the samples are reduced. This suggests the xenon arc lamp used to reduce GO is largely destructive to the underlying sp\({}^{2}\) sub-lattice. At the highest level of photoreduction, Fig. 4b suggests the increased lattice disorder destroys the desired graphene-like extended lattice by creating too many point defects. The \(\tau_{SC}^{-1}=3AT_{l}/\alpha\) expression is a direct measure of lattice disorder through the relation \(\frac{A}{\alpha}\cong\frac{2}{3}\frac{\lambda}{k_{F}l}\frac{k_{B}}{h}\), where the mean free scattering path is \(k_{F}l\). [45] The electron-phonon coupling strength can be approximated as \(\lambda=\frac{D^{2}}{\rho s^{2}}\frac{2E_{F}}{\pi(\hbar v_{F})^{2}}\), where both the deformation potential \(D\) and the Fermi energy \(E_{F}\) are experimental variables that increase the hot electron cooling rate. Figure 4b shows that \(\frac{A}{\alpha}\cong 0.3\) ns\({}^{-1}\)K\({}^{-1}\) for rGO\({}_{1-3}\), which matches the monolayer CVD graphene values in the literature. [46] 
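To make the supercollision cooling model of Eq. 2 concrete, the following is a minimal sketch (not the authors' fitting code) that integrates Eq. 2 numerically and compares it with the linearised exponential solution quoted above. The initial hot-electron temperature is an illustrative assumption; the \(A/\alpha\) value is simply the one quoted above for rGO\({}_{1-3}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

A_over_alpha = 0.3e-3   # SC rate coefficient, 0.3 ns^-1 K^-1 expressed in 1/(ps K)
T_l = 292.0             # lattice temperature (K)
T_0 = 800.0             # assumed initial hot-electron temperature (K) -- illustrative only

def sc_cooling(t, T_e):
    # Eq. 2: dTe/dt = -(A/alpha) * (Te^3 - Tl^3) / Te
    return -A_over_alpha * (T_e**3 - T_l**3) / T_e

t_eval = np.linspace(0.0, 30.0, 301)   # delay axis in ps
sol = solve_ivp(sc_cooling, (0.0, 30.0), [T_0], t_eval=t_eval, rtol=1e-8)

# Linearised solution used in the text (valid when Te - Tl << Tl):
# Te(t) ~ Tl + (T0 - Tl) exp(-t / tau_SC), with tau_SC^-1 = 3 (A/alpha) Tl
tau_SC = 1.0 / (3.0 * A_over_alpha * T_l)
T_lin = T_l + (T_0 - T_l) * np.exp(-t_eval / tau_SC)

print("tau_SC = %.1f ps from 1/(3*(A/alpha)*Tl)" % tau_SC)           # a few ps, as in the text
print("max |ODE - linearised| = %.1f K" % np.max(np.abs(sol.y[0] - T_lin)))
```

The second print line makes the text's caveat explicit: with a large initial electron temperature the full Eq. 2 and the linearised exponential differ at early delays, and the single-exponential \(\tau_{SC}\) form only describes the tail where \(T_{e}-T_{l}\ll T_{l}\).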
However, further photoreduction increases \(\frac{A}{\alpha}\) up to \(6\times\), suggesting the graphene sub-lattice is being damaged. If the deformation potential is approximately constant, then \(A/\alpha\propto E_{F}/k_{F}l\), suggesting that photoreduction damage decreases the mean free scattering path through photoionization, which increases the number of sp\({}^{2}\) sub-lattice defect sites. Our fitted data in Fig. 4 confirm that acoustic-phonon supercollisions (SCs) best describe the rate-limiting heat dissipation kinetics in reduced graphene oxide. Furthermore, Fig. 4b shows how disorder from photodamage to the rGO lattice systematically increases the hot-electron cooling rate. This controlled change in lattice disorder provides new evidence of the predominant role of disorder-assisted SC in describing hot-electron cooling in graphene. ### Oxygenated sub-lattice contributions from graphene quantum dots Sections IV.A and B above both show that the rGO samples and ml-graphene have remarkably similar lineshapes and hot-electron cooling rates over optical energies ranging from 0.4 to 1.8 eV. This section focuses on the differences that arise in the visible and UV range, where GQDs and defect-edge states are also optically excited. Figure 5a plots the PL emission spectra of the least reduced, GO\({}_{o}\), and most reduced, rGO\({}_{5}\), samples after a 4.6 eV excitation. The main asymmetric peak appears to shift from \(\sim\)2.4 to 2.7 eV with photoreduction. The experimental emission spectra (dots) are fit (solid lines) using 4 convolved Gaussian peaks (dotted lines). All peak energies and FWHM spectral widths (except at 2.34 eV) are found to be approximately invariant to photoreduction. The peak at 2.7 eV in Fig. 5a corresponds to emission from the smallest graphene quantum dot states (labeled GQD\({}_{1}\)) via \(\pi^{*}-\pi\) orbital relaxation. At lower energies, both peaks centered near 1.55 eV and 1.80 eV grow with photoreduction, consistent with emission from larger graphene quantum dot states labeled GQD\({}_{2}\) and GQD\({}_{3}\), respectively. We observe an increase in the emission intensity from these three \(sp^{2}\) peaks with reduction, confirming they do not result from oxygen groups. Conversely, the emission at 2.3 eV represents the carrier recombination in \(sp^{3}\) oxygen (\(\sigma^{*}-n\)). The magnitude and width of this emission decrease with reduction as oxygen functional groups are removed. PL from GO and rGO in solution has been widely documented in the literature, showing that reduction of GO increases PL intensity at near-IR wavelengths while also blue-shifting the main peak.[38; 51; 52] In accordance with the literature, Fig. 5a shows an increase in PL intensity with reduction at peaks centered at 1.80 eV and 1.55 eV. The PL of the oxygenated GO lattice is known to emit broadly near 2.4 eV, with locally varying oxygen content responsible for the broader FWHM.[26; 41] In rGO, PL is dominated by \(\pi^{*}-\pi\) carrier recombination in regions of confined graphene quantum dots. As the reduction process removes oxygen, formerly isolated sp\({}^{2}\) carbon atoms join together to form conjugated carbon rings, and regions that already contained large-area conjugated sp\({}^{2}\) carbon structures increase in size. The observed decreasing area of the peak at 2.3 eV with photoreduction suggests this peak emission is likely due to edge states or oxygen defects at the boundaries of the sp\({}^{3}\) regions. 
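As an illustration of the four-Gaussian decomposition used for Fig. 5a, the short sketch below fits a sum of four Gaussians (GQD\({}_{1}\approx 2.7\) eV, \(n-\sigma^{*}\approx 2.34\) eV, GQD\({}_{2}\approx 1.80\) eV, GQD\({}_{3}\approx 1.55\) eV) and reports the peak areas whose evolution with reduction is discussed above. It is an illustrative sketch only: the stand-in spectrum, widths and amplitudes are assumptions, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(E, A, E0, w):
    return A * np.exp(-0.5 * ((E - E0) / w) ** 2)

def pl_model(E, *p):
    """Sum of 4 Gaussians: GQD1 (~2.7 eV), n-sigma* (~2.34 eV), GQD2 (~1.80 eV), GQD3 (~1.55 eV)."""
    return sum(gauss(E, *p[3 * i:3 * i + 3]) for i in range(4))

E = np.linspace(1.2, 3.2, 400)                      # emission energy grid (eV), placeholder
pl = pl_model(E, 1.0, 2.70, 0.20, 0.8, 2.34, 0.35,  # synthetic stand-in spectrum
              0.3, 1.80, 0.15, 0.2, 1.55, 0.12)
pl += np.random.normal(0.0, 0.01, E.size)

p0 = [1, 2.70, 0.2, 1, 2.34, 0.3, 0.3, 1.80, 0.15, 0.2, 1.55, 0.12]
popt, _ = curve_fit(pl_model, E, pl, p0=p0, maxfev=20000)
areas = [popt[3 * i] * popt[3 * i + 2] * np.sqrt(2 * np.pi) for i in range(4)]
print("peak areas (GQD1, n-sigma*, GQD2, GQD3):", np.round(areas, 3))
```

Tracking these areas across the rGO\({}_{1-5}\) series is what supports the statements above: the \(sp^{2}\) GQD areas grow with reduction while the \(n-\sigma^{*}\) area shrinks.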
The newly formed GQD in rGO are ascribed to the increas Figure 4: **(a)** TA relaxation kinetics of the six progressively reduced GO GO samples compared to ml-graphene (black dashed) at 1.2, 1.3, and 1.8 eV probe energies (top to bottom). Fitted lines now incorporate the SC-hot electron cooling model of Eq. 2. **(b)** SC-model hot electron cooling rates (\(\tau_{SC}^{-1}\)) extracted increase sharply for longest photoreduction times. For rGO\({}_{1-3}\), the disorder parameter \(A/\alpha\) is similar to ml-graphene rate (dashed), and expectantly is invariant to the laser probe energy of 1.2 eV (black) and 1.3 eV (pink).[29] ing PL at 2.7 eV, 1.80 eV and 1.55 eV peaks. DFT studies by Sk et al. [21] show how the bandgap energy of a GQDs changes with respect to its size and found that GQDs about 1.3 nm in mean diameter create Frenkel exciton states near 2.7 eV, while slightly larger 2 nm GQDs emit around 1.8 eV. rGO contains an ensemble of GQDs of various sizes separated by oxygenated regions. Reduction removes oxygen, gradually increasing the GQD size, evidenced by the increased PL in rGO at 1.55 and 1.80 eV. Figure 5a inset contains a qualitative depiction of the bands and energy levels in GO. The optical response of graphene is determined by the \(\pi\) and \(\pi^{*}\) states, which lie between the \(\sigma-\sigma^{*}\) gap in GO.[53; 41] Oxygen functional groups break the symmetry of the pristine graphene lattice, resulting in localized defect states that exist in the \(\pi-\pi^{*}\) gap. Since the gap between \(\sigma\) states is much larger than 2.4 eV, this emission is suggested as \(n-\sigma\) transition (dashed purple arrow). In both GO and rGO, emission at 2.7 eV dominates the PL spectra, which was shown to result from \(\pi\) states in isolated \(sp^{2}\) domains (gray dashed arrow).[38] Emission at lower energies comes from a broad range of GQD states and the local disorder states. Figure 5b shows the degenerate transient absorption response of the samples at 1.8 eV and 2.5 eV, respectively. At 1.8 eV, we observe a saturable absorption signal containing a long component that slowly goes away with reduction. At 2.5 eV, we see a reverse saturable absorption response, which decays extremely quickly in all samples. A similar transition has been previously observed by Bhattacharya et. al, who saw that a sign flip in the pump-probe response occurred near 2.3 eV.[54] Since the most reduced samples have the largest reverse saturable absorption response, we can rule out excited state absorption from oxygen groups as the cause of the sign flip. We attribute this sign-change to absorption from the interband transition in graphene, which has been previously documented to exhibit a sign flip for high pump fluences at this energy.[55; 48] We do not see a change in sign when probing the oxygen states at 1.8 eV, further confirming the sp\({}^{2}\) nature of the peak labeled GQD\({}_{2}\). Figure 5c shows the 1.2 eV probe energy pump fluence dependence. At low pump fluences, the TA response of all samples exhibits a linear dependence on the pump fluence. Above incident photon flux of \(\sim 4\times 10^{12}\) photon/cm\({}^{2}\), a sublinear trend is observed that is fit to the Eq. 2 hot electron cooling model TA response. The nonlinear saturation effect fits to the expected nonlinear Fermi-Dirac filling factor. Notably, the more oxidized GO\({}_{1}\) and rGO\({}_{2}\) samples have the most nearly linear behaviors, consistent with the expected smaller confined sp\({}^{2}\) sub-lattice regions. 
Conversely, Fig. 5c(inset) shows the pump power dependence for the differential transmission at 1.8 eV pump and probe. GO displaying the smallest response, which increases with reduction until rGO\({}_{3}\). The response saturates for the three most reduced samples as shown in the inset of Fig. 5b. This trend matches the absorption spectra at 1.8 eV, where the absorption increases monotonically with reduction, with the exception of the \(\Delta\)T/T response saturating for the most reduced samples. The pump dependence gives us insight into how the probe response changes with lattice temperature. At low pump powers, the 1.2 eV probe has the same magnitude for all samples, suggesting that even oxidized samples have large regions of graphene-like sp\({}^{2}\) hybridization. Figure 5: **(a)** The photoluminescence emission spectra of GO\({}_{o}\) (green line fit) and rGO\({}_{5}\) (red line fit) with 4 convolved Gaussian fit (dashed lines).Photoreduction increases the PL peaks from graphene quantum dots resonances, labeled GQD\({}_{1-3}\). Conversely, emission from the oxygenated sub-lattice \(n-\sigma\) defect edge state decreases as GO is reduced (see inset for corresponding transitions). **(b)**The degenerate TA response near the 1.8 eV GQD\({}_{2}\) resonance (top) vs. near the excited state absorption at 2.3 eV (bottom). (_inset_) TA response increasing with photoreduction. **(c)**\(\Delta T/T\) pump power photon fluence dependence of reduced GO samples using a 1.2 eV probe. Fit are to the graphene SC-hot electron cooling model in Eq. 2. Over a wide range of incident photon flux, the saturable absorption susceptibility, \(\Delta T/T\) is invariant to photoreduction suggesting only graphene hot electrons are probed below \(\sim\)1.2 eV. (_inset_) Conversely at 1.8 eV the \(\Delta T/T\) changes strongly, suggesting increasing GQD\({}_{2}\) states. The 1.8 eV data remains linear overall pump fluences but has a large dependence on the amount of reduction. While the 1.2 eV data probes graphene-like states, the 1.8 eV data primarily probes the confined GQD\({}_{2}\) states that lead to longer lifetimes and a strong dependence on photoreduction. The size and population of these GQD states depend heavily on the oxygen content. As shown in Figure 5c(inset), reduction increases the transient response, which suggests that reduction increases the population of sp\({}^{2}\) GQD\({}_{2}\) states that absorbs at 1.8 eV. This trend matches the increase in PL seen at 1.8 eV after photoreduction. ### Donor-acceptor electronic transfer in rGO Using non-degenerate TA spectroscopy, we can excite molecular-like GQDs at high energies and probe the electron transfer rate to graphene at lower energy states. Figure 6a shows the normalized TA relaxation at 2.5 eV pump 1.2 eV probe near time zero, which shows a clear delayed rise in the most oxidized samples. Conversely, the most reduced samples show a rise limited by the laser cross-correlation. This delayed rising kinetic edge is indicative of an acceptor-donor electron relationship. Charge transfer has been documented in GO, where photoexcited charges on a different molecular species are transferred to GO.[56; 57; 58] Figure 6b illustrates charge transfer between molecular GQDs and larger graphene-like regions. When the pump moved to longer energies (e.g. 1.8 eV in Fig. 1c), the delayed rise is no longer seen because the population of GQD donors is too small relative to graphene. 
Figure 6b depicts the charge transfer process that is responsible for the observed delayed rise. Carriers photoexcited in the confined GQD states are localized by the surrounding oxygen functional groups. In GO, the large density of oxygenated regions results in a weaker coupling between confined GQDs and graphene submetallic sublattice regions, leading to the observed delayed rise. In the photoreduced samples, carriers excited into \(sp^{2}\) GQD states are now closer to extendend graphene regions, and so the delayed acceptor-donor electron transfer is not observed to lower energy states. Figure 6c gives a qualitative description of the structure and acceptor-donor electron transfer process in rGO. Our graphene oxide begins with \(\sim\)44% oxygen content, these oxygen functional groups interrupt the delocalized \(\pi\)-orbitals and prohibit hopping between carbon sites. Reduction removes oxygen, which decreases the mean distance from a confined GQD donor and graphene-like \(sp^{2}\) sublattice region. Such changes to the effective percolation network of the \(sp^{2}\) sublattice have previously been shown to also increase GO carrier mobility and conductivity[59; 60]. The longer dynamics in GO are caused by excited carriers being more isolated by larger oxygenated regions as shown in Fig. 6c, which limit possible relaxation pathways. In rGO, some of the oxygen has been removed, recovering large-area graphene-like domains which decay more quickly than pristine graphene. ## V Conclusions The highly variable composition of the quasi-amorphous GO 2D lattice makes a systematic comparison against monolayer graphene a challenge. To help overcome this challenge, GO is suspended in a polymeric network scaffold where five successive photoreductions (rGO\({}_{1-5}\)) were possible without any evidence of inter-layer aggregation. Ultimately, this yielded optical quality rGO films with an absorption lineshape that fits to ml-graphene Fano resonance lineshape parameters. Likewise this step-wise photoreduction accelerates the hot electron relaxation kinetics monotonically over each of the variable probe energy windows studied from 1.2 to 2.5 eV. At intermediate photoreduction times or rGO\({}_{2-3}\), Fig. 4 shows that a hot electron cooling model of disorder-assisted supercollision matches the \(\tau_{SC}=\)3.1 ps hot electron cooling of monolayer graphene. Figure 4b shows the recovery of ultrafast hot electron relaxation rates similar to monolayer-graphene in moderately reduced samples(rGO\({}_{1-3}\) ), suggesting a largely uninterrupted \(sp^{2}\) bonded network analogous to graphene. Under extreme photoreduction or using UV-Vis optical Figure 6: **(a)** Normalized transient absorption kinetics shows a 170 delayed rise for GO that systematically accelerates with successive photoreduction. **(b)** This rise is assigned to an acceptor-donor relationship between the 2.5 eV pump of GQD states and the 1.2 eV probe of the accepting graphene states. **(c)** Band illustration of rGO depicts charge transfer described from confined GQDs to larger sp\({}^{2}\) graphene-like regions. excitation, the optical properties of rGO begin to deviate strongly from graphene. Owing to increasing local disorder and broken lattice symmetry, extreme photothermal reduction yields hot electron cooling rates that are faster than pristine graphene. Subsequent photoreduction accelerates the extracted hot electron cooling rate 10-12x, revealing how photodamage induces local disorder to mediate faster hot electron cooling. 
On longer, \(>\)50 ps timescales, rGO also exhibits a slower decay response than graphene owing to many isolated graphene quantum dot (GQD) regions and oxygenated edge trap states which serve to delay the ground state recovery. Using probe energies in the visible wavelength range at 1.8 eV, Figs. 1c and 4 show that photothermal reduction does not recover pristine graphene properties, as evidenced by the slower decay kinetics of all rGO samples relative to graphene. Isolated GQD regions and oxygenated-edge trap states each create further bottlenecks that slow the effective electronic relaxation. Fortunately, we find these long lifetimes of rGO are no longer observed below 1.3 eV optical excitations, as there are no discernible GQD sub-lattice states large enough to create a resonance at these energies. Collectively, these results show many of the desirable optoelectronic properties of 2D graphene can be replicated using selectively reduced graphene oxide suspended in a 3D bulk polymeric network. This study lends itself to large-scale processing of rGO thin films and to applications in high-speed optoelectronics and photonic switching. ###### Acknowledgements. This material is based upon work supported by the Office of the Under Secretary of Defense for Research and Engineering under award number FA9550-22-1-0276, and the DEVCOM Army Research Laboratory award number W56HZV-16-C-0147. **Supplementary Materials**: Details on sample characteristics, data modeling methods, and further absorption and PL spectral data show similar graphene-like properties out to the mid-IR region as far as 0.5 eV. **Data Availability Statement**: The data that support the findings of this study are available from the corresponding author upon reasonable request.
2310.05055
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis
Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis. Despite the growing body of work aiming to minimise demographic bias in AI, this problem remains challenging. A key reason for this challenge is the fairness generalisation gap: High-capacity deep learning models can fit all training data nearly perfectly, and thus also exhibit perfect fairness during training. In this case, bias emerges only during testing when generalisation performance differs across subgroups. This motivates us to take a bi-level optimisation perspective on fair learning: Optimising the learning strategy based on validation fairness. Specifically, we consider the highly effective workflow of adapting pre-trained models to downstream medical imaging tasks using parameter-efficient fine-tuning (PEFT) techniques. There is a trade-off between updating more parameters, enabling a better fit to the task of interest vs. fewer parameters, potentially reducing the generalisation gap. To manage this tradeoff, we propose FairTune, a framework to optimise the choice of PEFT parameters with respect to fairness. We demonstrate empirically that FairTune leads to improved fairness on a range of medical imaging datasets. The code is available at https://github.com/Raman1121/FairTune
Raman Dutt, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy Hospedales
2023-10-08T07:41:15Z
http://arxiv.org/abs/2310.05055v3
# FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis ###### Abstract Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis. Despite the growing body of work aiming to minimise demographic bias in AI, this problem remains challenging. A key reason for this challenge is the fairness generalisation gap: High-capacity deep learning models can fit all training data nearly perfectly, and thus also exhibit perfect fairness during training. In this case, bias emerges only during testing when generalisation performance differs across subgroups. This motivates us to take a bi-level optimisation perspective on fair learning: Optimising the learning strategy based on validation fairness. Specifically, we consider the highly effective workflow of adapting pre-trained models to downstream medical imaging tasks using parameter-efficient fine-tuning (PEFT) techniques. There is a trade-off between updating more parameters, enabling a better fit to the task of interest vs. fewer parameters, potentially reducing the generalisation gap. To manage this tradeoff, we propose _FairTune_, a framework to optimise the choice of PEFT parameters with respect to fairness. We demonstrate empirically that _FairTune_ leads to improved fairness on a range of medical imaging datasets. ## 1 Introduction The use of AI in healthcare applications is growing rapidly. Powerful new models enabled by large datasets (Mei et al., 2022; Ghesu et al., 2022; Irvin et al., 2019) are rapidly being developed, leading to highly performant automated diagnosis systems (Tiu et al., 2022) that are increasingly being deployed clinically in clinical practice (Esteva et al., 2021; Dutt et al., 2022; Vats et al., 2022). However, AI models have repeatedly been shown to exhibit unwanted biases towards various demographic subgroups (Seyved-Kalantari et al., 2021; Obermeyer et al., 2019; Larrazabal et al., 2020; Ricci Lara et al., 2022) - for example by providing substantially worse performance on disadvantaged subgroups defined by protected attributes such as gender, race, age, and socioeconomic status. This is obviously socially, ethically, and clinically problematic, especially in potentially life-and-death situations that arise in healthcare. The issue of biased and inequitable AI systems has prompted a growing body of research striving to analyze the origins of bias and develop interventions to mitigate model bias (Xu et al., 2023). Nevertheless, recent investigations cast doubt on the extent of progress achieved thus far. Notably, Zietlow et al. (2022) postulate that the majority of existing interventions aimed at promoting fairness prove ineffective when applied to deep models, which are commonly utilized for tasks involving images and text data. The reason behind this ineffectiveness lies in the nature of these interventions, such as those proposed by Sagawa et al. (2020) and Zhao et al. (2019), which impose constraints on the _training data_. For instance, they enforce equal performance across subgroups (Zhao et al., 2019). However, while such constraints can impact the training of shallow models typically employed for tabular data, deep models possess the capability to perfectly fit all training data, rendering these fairness constraints automatically satisfied and devoid of any influence on the model's learning process. 
We substantiate this well-documented challenge empirically in Figure 1, which illustrates that, in a typical medical image analysis scenario, the training data can be fitted flawlessly. Consequently, the model is already _intrinsically equitable within the training set_. The observed bias in real-world applications emerges during testing, primarily due to differential generalization across subgroups. Another recent study (Zong et al., 2023) empirically evaluated a wide range of fairness interventions designed to regularise deep model learning on a large suite of medical image analysis tasks. However, they found that prior progress was over-estimated. When subjected to a standardized hyperparameter tuning procedure for a fair evaluation, none of the existing fairness interventions exhibited a statistically significant enhancement in fair learning when compared to the conventional approach of supervised learning by empirical risk minimization (ERM). In this research paper, we introduce a novel approach to fair learning that addresses the challenge highlighted by Zietlow et al. (2022) and depicted in Figure 1. Our method is rooted in the concept of capacity control, and involves introducing a form of regularization during the learning process specifically tailored to _minimize bias in unseen data_. To accomplish this, we operate within the pre-train/fine-tune framework (Mei et al., 2022; Yosinski et al., 2014; Tang et al., 2022; Zong et al., 2023). This framework entails initializing models through pre-training on extensive external datasets like ImageNet (Deng et al., 2009), followed by fine-tuning on comparatively smaller medical imaging datasets. In this context, as we progressively update the model from its initial pre-trained state, the risk of overfitting to the nuances of the training set increases, leading to the generalization gap illustrated in Figure 1. Hence, the primary challenge lies in restraining the extent of model updates. In this regard, we will illustrate that employing parameter-efficient fine-tuning techniques, which involve the selective updating of a subset of network parameters (Dutt et al., 2023), can result in more equitable generalization. However, this approach poses a critical question: _"Which parameters should be updated to maximize fairness?"_ To tackle this question, we introduce our framework named _FairTune_, designed to search for the optimal parameter update mask. We seek the mask that, when applied to constrain the fine-tuning process, yields a high degree of fairness in the validation data. Our empirical findings consistently demonstrate that FairTune outperforms Empirical Risk Minimization (ERM) in terms of fairness across various medical imaging benchmarks. To summarise our contributions: **(1)** We directly corroborate the conjecture of Zietlow et al. (2022) that bias arises during train-test generalisation (Figure 1). **(2)** In contrast to existing fairness interventions, we introduce a new fair learning approach that regularises learning so as to optimise validation fairness (cf: existing methods that ineffectively target training fairness). **(3)** Our empirical findings across a diverse set of benchmarks consistently demonstrate that _FairTune_ reliably improves performance over ERM. ## 2 Related Work ### Fairness in Medicine Bias and unfairness have been widely reported in biomedical AI (Seyyed-Kalantari et al., 2021; Ricci Lara et al., 2022; Obermeyer et al., 2019). 
Biases can arise from a complex array of different underlying causes including dataset imbalance, label noise, and reliance on underlying spurious correlations. A particularly problematic manifestation is that of bias amplification (Lloyd, 2018; Hall et al., 2022), where biases that exist in the training set are amplified by the model's predictions during deployment. Measuring fairness is itself a complex problem, as many different fairness met Figure 1: Bias arises during train-test generalisation. Left (Training AUROC): High-capacity deep models can exhibit perfect group fairness during training because they can classify all the training data perfectly. Right (Validation AUROC): Bias arises because the disadvantaged subgroup has worse generalisation error than the privileged subgroup. Fine-tuning ViT-Base on the Papila dataset. rics have been proposed, with no consensus on a single preferred metric. For example, optimising for equal performance among demographic subgroups (Dwork et al., 2012; Verma and Rubin, 2018) is intuitive. But this can lead to the _levelling down_ phenomenon (Zietlow et al., 2022), where fairness is achieved by decreasing the performance of the advantaged group to match the disadvantaged group - potentially even including pathological solutions of reducing both groups' performance to zero. Achieving fairness by levelling down has been criticised as violating the ethical principles of benefcence and non-maleficence (Beauchamp, 2003; Chen et al., 2018; Ustun et al., 2019). We also remark that evaluating systems for fairness is itself complex (Zong et al., 2023; Verma and Rubin, 2018) as fair learning is inevitably a multi-objective problem that seeks to simultaneously achieve potentially conflicting goals of good overall performance and good fairness. ### Previous Attempts to Solve Fairness Fair machine learning has now been widely studied, with numerous methods being proposed that address bias reduction via both pre-processing (e.g., data re-balancing) and post-processing, as well as interventions aimed at guiding the learning algorithm to generate a fairer predictor. Due to the large volume of the proposed methods in the literature, we refer the readers to comprehensive surveys (Mehrabi et al., 2021; Caton and Haas, 2023; Zong et al., 2023) for a more in-depth exploration of the available techniques and their nuances. A crucial observation, however, is that a large family of methods (Sagawa et al., 2020; Zhao et al., 2019; Agarwal et al., 2018; Beutel et al., 2017; Diana et al., 2021; Jeong et al., 2023; Donahue et al., 2016; Yii et al., 2022; Donini et al., 2018; Dumoulin et al., 2016; Kim et al., 2019; Kleindessner et al., 2022; Lohaus et al., 2020; Martinez et al., 2020; Padala and Gujar, 2020; Wang et al., 2020; Zafar et al., 2017; Wu et al., 2022; Park et al., 2022) rely on imposing fairness constraints on the _training_ set. As suggested by Zietlow et al. (2022), these are ineffective in the deep learning regime where constraints are trivially satisfied by a classifier that achieves 100% training accuracy (Figure 1). Another family of methods endeavours to introduce various forms of regularization during model training, aiming to enhance generalization, such as achieving domain independence. While some of these studies initially reported promising outcomes, a recent exhaustive benchmarking study (Zong et al., 2023) has indicated that these assertions were premature. 
When evaluated across multiple benchmarks, existing methods consistently fall short of systematically outperforming a well-tuned supervised learning baseline for fairness (ERM). We are inspired by studies such as Zietlow et al. (2022); Zong et al. (2023) to design an algorithm that tunes how to regularise learning with the explicit objective of optimising for _validation fairness_. ### Parameter-Efficient Fine-Tuning Fine-tuning models that have been pre-trained on large datasets is common practice in deep learning (Yosinski et al., 2014; Kornblith et al., 2019). Leveraging a pre-trained initialization enables downstream tasks to be learned with significantly less data compared to training from scratch. Parameter-Efficient Fine-Tuning (PEFT) methods are a family of techniques geared towards improving the fine-tuning process. They achieve this by carefully selecting a small subset of parameters for updating during fine-tuning while keeping the majority frozen. The underlying concept is that this judicious choice of selective updates should facilitate effective adaptation to the target task (via the minority of updatable parameters) while guarding against overfitting (courtesy of the majority of frozen parameters). A growing number of PEFT methodologies have emerged, each distinguishing itself by its specific selection of parameters for updating. These selections may include biases (Ben Zaken et al., 2022), attention matrices (Touvron et al., 2022), or normalization layers (Basu et al., 2023). Alternatively, some methods introduce and learn specific sets of new parameters, such as low-rank adapters (Hu et al., 2022), all while maintaining the entire pre-trained backbone in a frozen state. PEFT techniques have gained wide popularity in mainstream NLP and computer vision applications, although their adoption in medical image analysis tasks remains nascent (Dutt et al., 2023; Ma and Wang, 2023; Wu et al., 2023; Zhang and Liu, 2023). In this work, we aim to demonstrate that PEFT (Parameter-Efficient Fine-Tuning) offers benefits beyond enhancing traditional generalization capabilities. Specifically, our findings will illustrate that PEFT can enhance fairness by narrowing the generalization gap, especially for disadvantaged subgroups, as depicted in Figure 1. Nevertheless, a central challenge persists across all existing PEFT methods, namely, they rely on heuristic approaches for partitioning parameters into frozen and updatable sets. Current methods do not offer a principled or learned method for establishing the optimal partition. This becomes particularly crucial, because the ideal PEFT assumption, i.e., the freeze/update partition, may be dataset dependent. For instance, larger datasets might accommodate a more extensive parameter update without suffering from overfitting compared to smaller datasets. The key novelty of this paper lies in our approach: instead of prescribing a specific PEFT update mask, we introduce a framework designed to autonomously determine the optimal PEFT mask that maximises validation fairness. ## 3 Methodology ### Fairness Metrics We focus on evaluating the fairness of binary classification of medical images. Given an image \(x\) we predict its diagnosis label \(y\) in a way that aims to be independent of any sensitive attribute \(s\) (age, sex, ethnicity, etc.) so that the trained model is fair and does not unduly disadvantage any particular demographic subgroup. 
There are a plethora of metrics to measure fairness such as equality of opportunity, equal odds, subgroup performance difference, and so on (Verma and Rubin, 2018). Each of these may be more appropriate for different social and economic situations. Our overall framework is agnostic to the choice of fairness metric used, as our contribution is an approach to optimise for any user-specified fairness metric. However, for most of our experiments, we will optimise the metric of most-disadvantaged group performance (Sagawa et al., 2020). In this setting we are given a loss function \(\mathcal{L}(\mathcal{D};\theta)\) (e.g., cross-entropy, or 1 - area under ROC curve) for model \(\theta\) on dataset \(\mathcal{D}\). We assume it can be evaluated for different subgroups \(s\) of the dataset \(\mathcal{D}\) as \(\mathcal{L}(\mathcal{D}_{s};\theta)\). Then the metric for fair learning is \[\mathcal{L}^{fair}=\max_{s\in S}\mathcal{L}(\mathcal{D}_{s};\theta). \tag{1}\] We will also report other metrics such as the fairness _gap_, estimated as the performance difference between the disadvantaged and privileged subgroups, (\(\max_{s}\mathcal{L}(\mathcal{D}_{s};\theta)-\min_{s}\mathcal{L}(\mathcal{D}_{ s};\theta)\)). ### Parameter-Efficient Fine-Tuning In PEFT, we fine-tune only a subset of parameters \(\phi\subset\theta\) such that \(|\phi|\ll|\theta|\). PEFT strategies can be interpreted as specifying a sparse binary mask \(\omega\) that determines what parts of \(\theta\) should be updated. Given parameters \(\theta_{0}\) of the pre-trained model and a change \(\Delta\phi\) to be applied to their values, the fine-tuning process can be described as \[\Delta\phi^{*}=\operatorname*{arg\,min}_{\Delta\phi}\mathcal{L}^{base}\left( \mathcal{D}^{train};\theta_{0}+\omega\odot\Delta\phi\right).\] where \(\mathcal{L}^{base}\) is a standard deep learning loss such as cross-entropy for classification. Different PEFT methods essentially correspond to different structures on the sparsity structure of the binary mask \(\omega\). For example, BitFit (Ben Zaken et al., 2022) solely updates bias parameters in a neural network. Attention Tuning (Touvron et al., 2022) enables updating all the attention matrices in a transformer, and so on. These methods are generally effective in reducing overfitting when learning large models on small datasets thanks to eliminating most parameter updates. We will show that they are also effective in improving generalisation fairness compared to conventional fine-tuning. There are two key outstanding challenges, however: (1) The optimal PEFT strategy (binary mask \(\omega\)) is dataset-dependent. For example, a sparser mask \(\omega\) may be preferred for a smaller target task with greater risk of overfitting, and a denser mask may be preferred for a task that is more different to the pre-training task and thus requires stronger adaptation. (2) The optimal PEFT strategy may depend on the ultimate generalisation objective. For example, a sparser mask \(\omega\) might be preferred for fair generalisation compared to conventional overall generalisation. We present a solution to both of these issues by introducing an algorithm to optimise the mask \(\omega\) with respect to a fair generalisation objective. ### Optimising PEFT for Fairness We begin with a pre-trained model \(\theta_{0}\), a dataset (\(\mathcal{D}\)) split into training, validation and test sets (\(\mathcal{D}^{train},\mathcal{D}^{val},\mathcal{D}^{test}\)). 
Each dataset \(\mathcal{D}=(\mathcal{X},\mathcal{Y},\mathcal{S})\) contains a set of images \(\mathcal{X}\), labels \(\mathcal{Y}\) and sensitive attribute metadata \(\mathcal{S}\). We also define a search space for PEFT masks \(\omega\in\Omega\). The goal is to find \(\omega\) that leads to the best fair generalisation (Sec 3.1) when conducting PEFT learning (Sec 3.2). **Bi-level Optimization (BLO):** We formalize our problem statement as a bi-level optimization problem consisting of an inner and an outer loop. In the inner loop, we fine-tune the pre-trained model on the medical dataset (\(\mathcal{D}^{train}\)) using a conventional loss \(\mathcal{L}^{base}\) and PEFT mask \(\omega\). In the outer loop, we search for the PEFT mask \(\omega\) which leads the inner loop to produce the fairest outcome on the validation set (\(\mathcal{D}^{val}\)), as measured by \(\mathcal{L}^{fair}\). More formally, we solve Equation 2. \[\omega^{*} =\operatorname*{argmin}_{\omega}\mathcal{L}^{fair}\left(\mathcal{ D}^{val};\Delta\phi^{*}\right) \tag{2}\] \[\text{such that}\quad\Delta\phi^{*} =\operatorname*{argmin}_{\Delta\phi}\mathcal{L}^{base}\left( \mathcal{D}^{train};\theta_{0}+\omega\odot\Delta\phi\right).\] There are a number of possible strategies for solving BLO problems such as Equation 2 including meta-gradient, evolutionary search, Bayesian Optimisation and others (Hospedales et al., 2021; Sinha et al., 2018; Liu et al., 2021). In practice, we adopt a hybrid approach with a gradient-free Tree-structured Parzen Estimator (TPE) (Bergstra et al., 2011) with successive halving (SH) strategy (Jamieson and Talwalkar, 2016) for optimising \(\omega^{*}\) in the outer loop (Akiba et al., 2019), and conventional long-horizon gradient-descent fine-tuning in the inner loop. We illustrate the process in Figure 2 and provide full details in Algorithm 1. Additional details on the HPO are given in Appendix A.5 Besides the selective-update mask \(\omega\), the learning rate \(\alpha\) also provides a coarse cue of how much to update. For example a suitably curtailed learning rate would prevent the most egregarious overfitting shown in Figure 1. We also optimise \(\alpha\) along with \(\omega\) within the same HPO process of Algorithm 1. ## 4 Experiments ### Experimental Setup **Architectures:** Our experiments adopted the Vision Transformer (ViT) implementation present in the _Pytorch Image Models_ package (Wightman, 2019). A ViT consists of several blocks and each block contains two normalization layers (LN1 and LN2), Multi-Head Self-Attention (MHSA) sub-block and MLP (MLP) sub-components. The normalization layers (LayerNormalization Ba et al. Figure 2: Illustration that shows how our approach optimises the structure of PEFT with respect to fairness. Hyperparameter optimisation (HPO) selects a mask that decides which components of a pre-trained model \(\theta\) are fine-tuned using PEFT. For each sampled mask, the fine-tuned model is evaluated on the validation set to compute the fairness loss \(\mathcal{L}^{fair}\), which is then reported to the HPO algorithm that decides what masks to sample and which is the final best option. (2016)) are present before and after MHSA. The MLP consists of two fully-connected layers with GELU non-linearity in between. The base variant of ViT consists of 12 such blocks. **Baselines:** We compare our approach with: (1) full training from scratch. 
(2) conventional full fine-tuning, as conducted in (Zong et al., 2023) (where it is referred to as ERM) where every layer of an ImageNet pre-trained model is adapted on the medical image task, (3) linear readout, where the ImageNet pre-trained feature extractor is frozen and only the classification head is learned, as conducted in (Azizi et al., 2021; Chen et al., 2020), (4) PEFT method Attention Tuning (Touvron et al., 2022) where only attention matrices are fine-tuned, (5) PEFT method Layer Norm Tuning (Basu et al., 2023) where only layer-norm parameters are fine-tuned, (6) FairPrune (Wu et al., 2022), a method that achieves fairness by pruning the model parameters post-training, and (7) Fair Supervised Contrastive Loss (FSCL) (Park et al., 2022) that inherits the properties of supervised contrastive learning and penalizes the usage of sensitive attribute information in representation for improving fairness. We remark that the thorough benchmark in (Zong et al., 2023) already dismissed a suite of algorithms designed for the purpose of fair learning as equal or worse than ERM/Fine-Tuning (Vapnik, 1999), including **DomainInd**Wang et al. (2020), **LAFTR**Madras et al. (2018), **CFair**Zhao et al. (2019), **LNL**Kim et al. (2019), **EnD**Tartaglione et al. (2021), **ODR**Sarhan et al. (2020), **GroupDRO**Sagawa et al. (2020), **SWaD**Cha et al. (2021), and **SAM**Foret et al. (2021). **Search Space:** We define a PEFT search space that consists of the choice to fine-tune or freeze each module within a 12-layer VIT, where each VIT layer consists of MHSA, MLP, and LN modules. Thus our main PEFT search space \(\Omega\) is the space of 36-bit binary marks \(\Omega\in\{0,1\}^{36}\). This search space contains Attention Tuning (Touvron et al., 2022), Layer Norm Tuning (Basu et al., 2023), linear readout, and full-fine tuning (Zong et al., 2023) as special cases. As ablations, we also consider 12-bit search spaces that solely search for the combination of layer-norm and attention layers to tune, as sparser alternatives to (Touvron et al., 2022; Basu et al., 2023). **Datasets:** Our experiments include seven frequently adopted medical image analysis datasets. The selection of datasets was based on five integral factors: a) the presence of sensitive attributes, b) the presence of different potential sources of bias, c) the representation of different anatomical regions (domains), d) varying size, and e) public availability for reproducibility. Following these factors, we included Fitzpatrick17K (Groh et al., 2021, 2022), HAM10000 (Tschandl, 2018), Papila (Kovalyk et al., 2022), OL3I (Zambrano Chaves et al., 2021), OASIS-1 (Marcus et al., 2007), Harvard-GF3300 (Luo et al., 2023), and CheXpert Irvin et al. (2019). Following the settings in Zong et al. (2023), the preprocessing steps for all datasets included the binarization of the sensitive attributes (skin type, age, and sex) and the classification label along with the removal of studies with missing information. More details on data preprocessing are presented in Appendix A.4. **Experimental Settings:** All experiments fine-tune an ImageNet-pre-trained ViT-Base model for 30 epochs with a linear warmup for 10 epochs and a cosine annealing learning rate scheduler (Loshchilov and Hutter, 2016). The batch size is set to 512 and the optimizer used is AdamW (Loshchilov and Hutter, 2018). We consider one sensitive attribute at a time and note the overall performance, the worst subgroup performance and the gap between the best and worst subgroup. 
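To make Sections 3.1-3.2 concrete, the following is a minimal sketch (not the released FairTune implementation) of the worst-group AUROC objective of Eq. (1) and of applying a 36-bit update mask to a ViT-Base. It assumes a timm-style VisionTransformer whose blocks expose `attn`, `mlp`, `norm1`/`norm2` attributes; grouping the two LayerNorms into one mask bit and always training the classification head are illustrative choices, not details confirmed by the paper.

```python
import numpy as np
import timm
import torch
from sklearn.metrics import roc_auc_score

def worst_group_auc(y_true, y_score, groups):
    """Eq. (1) with AUROC: return the most-disadvantaged subgroup AUC and the subgroup gap."""
    aucs = {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
            for g in np.unique(groups)}
    return min(aucs.values()), max(aucs.values()) - min(aucs.values())

def apply_peft_mask(model, mask):
    """mask: 36 bits = 12 blocks x (MHSA, MLP, LayerNorms); 1 = fine-tune, 0 = freeze."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.head.parameters():          # assumed: classification head is always trained
        p.requires_grad = True
    for i, block in enumerate(model.blocks):   # ViT-Base has 12 transformer blocks
        groups = [block.attn, block.mlp,
                  torch.nn.ModuleList([block.norm1, block.norm2])]
        for j, module in enumerate(groups):
            if mask[3 * i + j]:
                for p in module.parameters():
                    p.requires_grad = True
    return model

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
mask = np.random.randint(0, 2, size=36)        # a candidate mask proposed by the outer loop
model = apply_peft_mask(model, mask)
```

Setting every bit to 1 recovers full fine-tuning, setting only the MHSA (or LayerNorm) bits recovers Attention Tuning (or LN Tuning), and setting all bits to 0 recovers linear readout, which is why these baselines are special cases of the 36-bit search space.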
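The outer loop of Equation 2 (Sec. 3.3) can then be sketched with a gradient-free optimiser. The snippet below uses Optuna's TPE sampler and successive-halving pruner as stand-ins for the HPO described in this paper; `fine_tune` and `evaluate_min_group_auc` are hypothetical placeholder helpers (e.g. wrapping the mask application above and the worst-group AUC metric), and effective pruning would additionally require reporting intermediate scores from inside training.

```python
import optuna

def objective(trial):
    # Outer-loop variables: the 36-bit module mask and the learning rate.
    mask = [trial.suggest_int(f"m{i}", 0, 1) for i in range(36)]
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)

    # Inner loop: ordinary fine-tuning of the masked model on D_train (hypothetical helper).
    model = fine_tune(mask=mask, lr=lr, epochs=30)

    # Validation fairness objective: most-disadvantaged subgroup AUC, to be maximised
    # (hypothetical helper; intermediate trial.report(...) calls would enable pruning).
    return evaluate_min_group_auc(model, split="val")

study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=0),
    pruner=optuna.pruners.SuccessiveHalvingPruner(),
)
study.optimize(objective, n_trials=100)
best_mask = [study.best_params[f"m{i}"] for i in range(36)]
```

Because the outer optimiser is gradient-free, the validation objective can be the (non-differentiable) subgroup AUROC directly, which is the point made in the following paragraph.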
**PEFT Mask Search and Hyperparameter Optimisation:** For PEFT mask search, we rely on the Tree-structured Parzen Estimator (TPE) (Bergstra et al., 2011) for sampling hyperparameter values. We also employ a pruning strategy, _successive halving_ (Jamieson and Talwalkar, 2016), for early termination of unpromising trials. Our search space includes the binary mask (36 or 12 bits) along with the learning rate. For the large-scale CheXpert dataset we use random sensitive-attribute balanced 10% subsampling to accelerate the HPO, and then the full train set for actual training. Since the medical image analysis tasks are usually severely imbalanced, we use AUC rather than cross-entropy loss or accuracy as the meta-objective \(\mathcal{L}^{fair}\) for the search. Since we use a gradient-free outer-loop optimizer, it is not necessary for the meta-objective to be differentiable. As remarked in (Zong et al., 2023), existing fairness methods generally did not specify any validation criteria. They found that when conducting common fairness-driven HPO, the previously claimed differences between state-of-the-art methods vanished, and none outperformed ERM (full fine-tuning baseline, in our case). Thus we carefully ensure that all competitors optimise their learning rates based on \(\mathcal{L}^{fair}\), while only our _FairTune_ further optimises the PEFT mask based on the same criterion. Our main HPO objective is disadvantaged subgroup AUC (Sagawa et al., 2020), as discussed in Sec. 3.1. We will also experiment with overall AUC as an ablation for comparison. ### Main Results We report our main results in Table 1 in terms of overall test AUROC, most disadvantaged subgroup AUC, and the gap between advantaged and disadvantaged subgroups. From the table we can draw several conclusions: (1) Fine-tuning improves on both training from scratch and linear readout of the frozen features. As discussed in (Zong et al., 2023), this is a very strong baseline which many state-of-the-art purpose-designed fair learning methods were not able to surpass when combined with proper hyperparameter tuning. (2) Nevertheless, the state-of-the-art PEFT methods Attention Tuning (Touvron et al., 2022) and LN Tuning (Basu et al., 2023) surpass this baseline, although they are not purpose-designed for fairness at all. We attribute this to their reducing the base architecture's adaptation capacity, limiting its ability to overfit to the advantaged subgroup, and thus limiting the generalisation gap (Figure 1). (3) Recent fairness interventions FairPrune (Wu et al., 2022) and FSCL (Park et al., 2022) also underperform overall. (4) Finally, our FairTune generally achieves the best test performance overall by all metrics, with consistently good performance across all the benchmarks and sensitive attributes. (5) The second best in terms of AUC gap is training from scratch, but this corresponds to unacceptably poor performance overall, an example of the leveling-down phenomenon to be avoided (Zietlow et al., 2022). As a qualitative illustration of how FairTune influences the fine-tuning process, we repeat the initial illustrative experiment in Figure 1, but now with the FairTune-discovered mask for the same Papila dataset. From the results in Figure 3, we can see that the FairTune-discovered mask leads to much fairer fine-tuning with much better performance for the disadvantaged subgroup, and reduced gaps between the groups compared to the vanilla fine-tuning shown in Figure 1. 
We further show in the Appendix that FairTune leads to strong performance also when using (1) alternative fairness metrics (Table 3), (2) self-supervised pre-trained ViT-Base (He et al., 2022) (Table 4), and (3) overlapping sensitive attributes (Table 5). ### Ablation Study on the Design of FairTune We analyse alternative design choices for FairTune in Table 2, reporting test score average across Fitzpatrick17K, HAM10000, Papila, OL3I, OASIS-1 datasets. We first ask _What is the impact of our choice of meta-objective \(\mathcal{L}^{fair}\)_, as introduced in Sec 3.1? Comparing the results for FairTune (Min AUC) and FairTune (Overall AUC), we see that the min-group performance and corresponding AUC gap are clearly improved. This directly demonstrates the value of tuning model adaptation Figure 3: FairTune leads to stable fine-tuning with reduced differences between the best and worst performing subgroups compared to conventional fine-tuning from Figure 1. capacity with a fairness-specific objective rather than general purpose validation objectives. We next ask _What is the impact of our PEFT search space_, as introduced in Sec 3.2? We compare our full 36-bit search space (which includes full fine-tuning, Attention Tuning, LayerNorm tuning, and Linear Readout as special cases), with two smaller alternative 12-bit search spaces that correspond to searching for the subset of attention and layer-norm parameters to update. Between the two search spaces, AttentionTuning is better overall, but also introduces a larger AUC gap. However, the full 36-bit FairTune space is better than both of these subspaces. Nevertheless, all FairTune variants are better than the Fine-Tune baseline, in terms of Min AUC demonstrating the value of tuning model adaptation capacity. We report the full set of results in Table 6 in the Appendix. 
\begin{table} \begin{tabular}{l|c|c c c c c c c c} \hline \hline \multirow{2}{*}{**Metric**} & \multirow{2}{*}{**Att.**} & \multirow{2}{*}{**Metric**} & \multicolumn{2}{c}{**FairTune**} & \multicolumn{2}{c}{**FairTune**} & \multicolumn{2}{c}{**FairTune**} & \multicolumn{2}{c}{**FairTune**} \\ & & & **12-bit, Attention** & **12-bit, LayerNorm** & **36-bit, Full** & **36-bit, Full** \\ \cline{3-10} & & & **Min Group AUC** & **Min Group AUC** & **Overall AUC** & **Min Group AUC** \\ \hline Overall (\(\uparrow\)) & 78.5 & 82.4 & 81.0 & 83.5 & **84.7** \\ Min (\(\uparrow\)) & 75.1 & 79.5 & 78.0 & 80.4 & **82.2** \\ Gap (\(\downarrow\)) & 3.7 & 4.6 & 7.1 & 5.7 & 6.4 & 5.0 & **3.4** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the design of FairTune (test scores averaged over the Fitzpatrick17K, HAM10000, Papila, OL3I and OASIS-1 datasets). ### Analysis of Masks We finally study what FairTune has learned by analysing the estimated PEFT masks, using the same subset of datasets as for our earlier analysis. We split the analysis by normalization, attention layers and MLP components. For each block, we visualize the proportion of the number of times (over the datasets and sensitive attributes) the given component was selected for fine-tuning. We further compare the masks derived from the different optimization objectives: a) Optimizing for overall performance, and b) optimizing the performance of the most disadvantaged subgroup. From the plots, we can observe that: (1) The strategies selected are non-trivial without a simple preference for either one layer type or initial vs. later layers, as expected by prior intuitively motivated work (Touvron et al., 2022; Basu et al., 2023). This demonstrates the value of automated selection of layers for updating. (2) Furthermore, the high variability of selection for some blocks over datasets/attributes, as indicated by probabilities close to 0.5, shows the importance of _learning_ dataset/attribute-specific fair tuning strategies, rather than relying on any single task-agnostic recipe. (3) The overall performance and min-subgroup performance objectives lead to substantially different masks, explaining their differing empirical performance earlier. (4) While there is substantial dataset/attribute specificity, there are some general common trends. For example, the min-subgroup objective consistently leads to freezing the first normalization layer, as well as the last four MLP layers. 
Meanwhile, a clear difference between the overall and min-subgroup objectives is the comparatively increased tendency of the overall objective to unfreeze the last four MLP layers.

## 5 Discussion

**Potential Limitations.** The improvement in downstream fairness performance comes at a computational cost, as it requires us to try various configurations of the masks, each of which corresponds to a model re-training. For example, our full FairTune pipeline takes 48 GPUh on the Fitzpatrick17k dataset, compared to about 1h for unoptimized training, and 14 GPUh for our HPO-tuned fine-tuning baseline. As pointed out in Zong et al. (2023), even for conventional models, proper HPO is required for optimising fairness. So the cost of a well-tuned model is inevitably much larger than a single training run. Conveniently, the cost per HPO iteration can be substantially lower in our PEFT regime than in typical train-from-scratch HPO (Feurer and Hutter, 2019), and it could be further alleviated by using efficient techniques such as ASHA (Li et al., 2020) or PASHA (Bohdal et al., 2023) that support parallelization. Future work could also study gradient-based meta-learning (Hospedales et al., 2021) to more efficiently search higher-dimensional masks.

## 6 Conclusion

We provide an empirical demonstration to show that controlling the capacity of deep neural networks, particularly through the use of Parameter-Efficient Fine-Tuning methods, can lead to improved fairness on downstream tasks. Building on this finding, we introduce a framework, _FairTune_, that is fairness metric-agnostic and provides a guidance-free selection of model components to be fine-tuned. Through extensive ablation studies involving different datasets, sensitive attributes and fine-tuning strategies, we established that our framework leads to consistent gains against standard fine-tuning baselines and vanilla PEFT approaches. Finally, the analysis of the selected masks has shown that non-trivial, scenario-dependent strategies are learned, demonstrating the need for our proposed algorithm.

Figure 4: Frequency of selecting a specific component for fine-tuning across different scenarios.
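The per-component selection frequencies shown in Figure 4 can be reproduced with a few lines of array arithmetic. The sketch below is hypothetical (the component labels, mask shapes and random placeholder masks are ours, not the authors'); it only illustrates how selection proportions over datasets and sensitive attributes are aggregated per block and component.

```python
# Hypothetical sketch of the Figure-4-style mask-frequency analysis: one binary
# mask per (dataset, sensitive attribute) run and per optimization objective,
# with one entry per tunable component in each transformer block.
import numpy as np

components = ["norm1", "attention", "norm2", "mlp"]   # assumed component labels
n_blocks, n_runs = 12, 8                               # assumed depth and number of dataset/attribute runs

rng = np.random.default_rng(0)                         # random placeholder masks, for illustration only
masks = {
    "overall":   rng.integers(0, 2, size=(n_runs, n_blocks, len(components))),
    "min_group": rng.integers(0, 2, size=(n_runs, n_blocks, len(components))),
}

for objective, m in masks.items():
    freq = m.mean(axis=0)                              # selection frequency per (block, component)
    print(objective)
    for j, name in enumerate(components):
        print(f"  {name:9s}", np.round(freq[:, j], 2))
```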
2305.01085
Serial Exchanges in Random Bases
It was conjectured by Kotlar and Ziv that for any two bases $B_1$ and $B_2$ in a matroid $M$ and any subset $X \subset B_1$, there is a subset $Y$ and orderings $x_1 \prec x_2 \prec \cdots \prec x_k$ and $y_1 \prec y_2 \prec \cdots \prec y_k$ of $X$ and $Y$, respectively, such that for $i = 1, \dots ,k$, $B_1 - \{ x_1, \dots ,x_i\} + \{y_1, \dots ,y_i \}$ and $B_2 - \{ y_1, \dots ,y_i\} + \{x_1, \dots ,x_i \}$ are bases; that is, $X$ is serially exchangeable with $Y$. Let $M$ be a rank-$n$ matroid which is representable over $\mathbb{F}_q.$ We show that for $q>2,$ if bases $B_1$ and $B_2$ are chosen randomly amongst all bases of $M$, and if a subset $X$ of size $k \le \ln(n)$ is chosen randomly in $B_1$, then with probability tending to one as $n \rightarrow \infty$, there exists a subset $Y\subset B_2$ such that $X$ is serially exchangeable with $Y.$
Sean McGuinness
2023-05-01T20:39:57Z
http://arxiv.org/abs/2305.01085v1
# Serial exchanges in random bases ###### Abstract It was conjectured by Kotlar and Ziv [11] that for any two bases \(B_{1}\) and \(B_{2}\) in a matroid \(M\) and any subset \(X\subset B_{1},\) there is a subset \(Y\) and orderings \(x_{1}\prec x_{2}\prec\cdots\prec x_{k}\) and \(y_{1}\prec y_{2}\prec\cdots\prec y_{k}\) of \(X\) and \(Y\), respectively, such that for \(i=1,\ldots,k,\) \(B_{1}-\{x_{1},\ldots,x_{i}\}+\{y_{1},\ldots,y_{i}\}\) and \(B_{2}-\{y_{1},\ldots,y_{i}\}+\{x_{1},\ldots,x_{i}\}\) are bases; that is, \(X\) is _serially exchangeable_ with \(Y\). Let \(M\) be a rank-\(n\) matroid which is representable over \(\mathbb{F}_{q}.\) We show that for \(q>2,\) if bases \(B_{1}\) and \(B_{2}\) are chosen randomly amongst all bases of \(M\), and if a subset \(X\) of size \(k\leq\ln(n)\) is chosen randomly in \(B_{1},\) then with probability tending to one as \(n\to\infty,\) there exists a subset \(Y\subset B_{2}\) such that \(X\) is serially exchangeable with \(Y.\) AMS Subject Classifications (2012): 05D99, 05B35. ## 1 Introduction Let \(\mathscr{B}(M)\) denote the set of bases in a matroid \(M.\) For convenience, if we create a new basis from a basis \(B\) by deleting \(X\subset B\) and adding elements of \(Y,\) then we denote the resulting basis by \(B-X+Y\). In the case where \(X=\{x_{1},\ldots,x_{k}\}\) and \(Y=\{y_{1},\ldots,y_{k}\},\) we will often write the new basis as \(B-x_{1}-\cdots-x_{k}+y_{1}+\cdots+y_{k}.\) For example, if \(X=\{e\}\) and \(Y=\{f\},\) then the new basis is just \(B-e+f.\) For all positive integers \(k,\) we let \([k]\) denote the set \(\{1,\ldots,k\}.\) For positive integers \(a,b\) where \(a\leq b,\) we let \([b\backslash a]\) denote the set \([b]-[a].\) Let \(B_{i}\in\mathscr{B}(M),\ i=1,2\). For elements \(x\in B_{1}\) and \(y\in B_{2},\) we say that \(x\) and \(y\) are **symmetrically exchangeable** with respect to \(B_{i},\ i=1,2\) if \(B_{1}^{\prime}=B_{1}-x+y\) and \(B_{2}^{\prime}=B_{2}-y+x\) are bases. We have the following well-known **symmetric exchange property**: for any element \(x\in B_{1}\) there exists \(y\in B_{2}\) for which \(x\) and \(y\) are symmetrically exchangeable. For bases \(B_{i},\ i=1,2\) and subsets \(X_{i}\subseteq B_{i},\ i=1,2\) we write \(B_{1},X_{1}\to B_{2},X_{2}\) if \(B_{2}-X_{2}+X_{1}\) is a basis (and we write \(B_{1},X_{1}\gets B_{2},X_{2}\) if \(B_{1}-X_{1}+X_{2}\) is a basis). We write \(B_{1},X_{1}\leftrightarrow B_{2},X_{2}\) if we have both \(B_{1},X_{1}\to B_{2},X_{2}\) and \(B_{1},X_{1}\gets B_{2},X_{2}\). It was shown by Greene [7] and Woodall [15] that the symmetric exchange property can be generalized to sets: **1.1 Theorem** (Greene, Woodall) For every non-empty subset \(X_{1}\subseteq B_{1}\) there exists a subset \(X_{2}\subseteq B_{2}\) such that \(B_{1},X_{1}\leftrightarrow B_{2},X_{2}\). For \(i=1,2,\) let \(B_{i}\) be a basis in a matroid \(M\) and let \(X_{i}\) be a subset of \(B_{i}.\) If for some ordering \(x_{11}\prec x_{12}\prec\cdots\prec x_{1k}\) of \(X_{1}\) and some ordering \(x_{21}\prec x_{22}\prec\cdots\prec x_{2k}\) of \(X_{2}\), \(B_{1}-\{x_{11},\ldots,x_{1i}\}+\{x_{21},\ldots,x_{2i}\}\) and \(B_{2}-\{x_{21},\ldots,x_{2i}\}+\{x_{11},\ldots,x_{1i}\}\) are bases for \(i=1,\ldots,k,\) then we write \(B_{1},X_{1}\underset{M}{\rightleftarrows}B_{2},X_{2}\) (dropping \(M\) when it is implicit). In this case, we say that \(X_{1}\) is **serially exchangeable** with \(X_{2}\) with respect to \(B_{i},\ i=1,2.\) We refer to such orderings of \(X_{i},\ i=1,2\) as **serial orderings**.
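To make the definition of serial exchangeability concrete, here is a minimal brute-force sketch for vector matroids over a prime field \(\mathbb{F}_{p}\). It is illustrative only: the rank routine, the helper names and the small \(\mathbb{F}_{3}\) example at the end are ours and are not taken from the paper.

```python
# Brute-force check of serial exchangeability for bases of GF(p)^n given as
# lists of length-n tuples.  Illustrative sketch only; p is assumed prime.
from itertools import combinations, permutations
import numpy as np

def rank_mod_p(vectors, p):
    """Rank over GF(p) of the matrix whose rows are `vectors`."""
    A = np.array(vectors, dtype=np.int64) % p
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, c]), p - 2, p)) % p      # scale pivot to 1 (Fermat inverse)
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
    return r

def is_basis(vectors, p, n):
    return len(set(vectors)) == n and rank_mod_p(list(vectors), p) == n

def serially_exchangeable(B1, B2, X1, X2, p):
    """True iff X1 (a subset of B1) is serially exchangeable with X2 (a subset of B2)."""
    n, k = len(B1), len(X1)
    for o1 in permutations(X1):
        for o2 in permutations(X2):
            if all(is_basis([v for v in B1 if v not in o1[:i]] + list(o2[:i]), p, n) and
                   is_basis([v for v in B2 if v not in o2[:i]] + list(o1[:i]), p, n)
                   for i in range(1, k + 1)):
                return True
    return False

# A tiny GF(3) example: B1 is the standard basis, B2 another basis, X1 a 2-subset of B1.
p, n = 3, 3
B1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
B2 = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
X1 = [B1[0], B1[1]]
print(any(serially_exchangeable(B1, B2, X1, list(X2), p) for X2 in combinations(B2, 2)))
```

Since every matroid has the 2-serial exchange property, the search in this example is guaranteed to succeed.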
For a matroid \(M\) and positive integer \(k,\) we say that \(M\) has the **\(k\)-serial exchange property** if for any two bases \(B_{1},B_{2}\in\mathscr{B}(M)\) and any subset \(X_{1}\subseteq B_{1}\) where \(|X_{1}|=k,\) there is a subset \(X_{2}\subseteq B_{2}\) where \(B_{1},X_{1}\rightleftarrows B_{2},X_{2}.\) By the symmetric exchange property, all matroids have the 1-serial exchange property. In [11], Kotlar and Ziv made the following interesting conjecture: **1.2 Conjecture** (Kotlar, Ziv [11]) For all matroids \(M\) and for all integers \(k\leq r(M)\), \(M\) has the \(k\)-serial exchange property. In [11], it was shown that all matroids have the 2-serial exchange property. Furthermore, in [10], it was shown that for matroids of rank at least three, for any two bases \(B_{1},B_{2},\) there exist 3-subsets \(A_{i}\subseteq B_{i},\ i=1,2\) such that \(A_{1}\) is serially exchangeable with \(A_{2}.\) In [12], it was shown that for the set of matroids \(\mathscr{M}_{q}\) representable over \(\mathbb{F}_{q},\) it suffices to prove Conjecture 1.2 for matroids \(M\in\mathscr{M}_{q}\) having rank at most \((k+2)q^{2k}.\) In addition, it is proven that all binary matroids have the 3-serial exchange property. Conjecture 1.2 implies the following well-known conjecture of Gabow [6] and Cordovil, Moreira [3]: **1.3 Conjecture** (Gabow, Cordovil and Moreira) Let \(B_{i}\), for \(i=1,2\), be disjoint bases of a matroid \(M\) having rank \(n\). Then there exist orderings \(b_{1}\prec b_{2}\prec\cdots\prec b_{n}\) and \(b_{n+1}\prec b_{n+2}\prec\cdots\prec b_{2n}\) of the elements of \(B_{1}\) and \(B_{2}\) respectively, such that any \(n\) consecutive elements in the cyclic ordering \(b_{1}\prec b_{2}\prec\cdots\prec b_{n}\prec b_{n+1}\prec\cdots\prec b_{2n}\) form a basis in \(M\). In other words, the above conjecture asserts that for disjoint bases \(B_{1}\) and \(B_{2}\), the base \(B_{1}\) is serially exchangeable with \(B_{2}\). This conjecture is known to be true for transversal matroids (see [4, 6]) and strongly base orderable matroids. In [5, 14], this conjecture was proven for graphic matroids. In [2], it was verified for sparse paving matroids. Conjecture 1.2 remains open even in the case \(k=3\) and there appears to be little prospect for a significant advance. However, there are some interesting questions that arise when bases are chosen randomly. In a recent paper, Sauermann [13] proved that Rota's basis conjecture (see [9]) holds for almost all bases chosen in a matroid representable over a finite field. That is, if \(n\) bases are chosen randomly from a rank-\(n\) matroid representable over a finite field, then one can find \(n\) independent transversals of these bases with probability \(1-o(1)\) as \(n\to\infty\). In light of the conjecture above, this result leads to the following natural question: **1.4 Question** Suppose one randomly chooses bases \(B_{1}\) and \(B_{2}\) from a matroid and then randomly chooses a \(k\)-subset \(X\subseteq B_{1}\). What is the probability that there is a \(k\)-subset \(Y\subseteq B_{2}\) such that \(X\) is serially exchangeable with \(Y\)? In this paper, we address this question for matroids representable over a finite field, with the exception of binary matroids. The next theorem is the main result of the paper. **1.5 Theorem** Let \(B_{1}\) and \(B_{2}\) be randomly chosen bases in \(\mathbb{F}_{q}^{n}\) and let \(X\) be a randomly chosen \(k\)-subset in \(B_{1}\) where \(k\leq\ln(n)\).
If \(q>2\), then the probability that there exists a \(k\)-subset \(Y\subseteq B_{2}\) for which \(X\) is serially exchangeable with \(Y\) tends to one as \(n\to\infty\). ## 2 Notation For an \(m\times n\) matrix \(A=[a_{ij}]\) and subsets \(S\subseteq[m]\), \(T\subseteq[n]\), we denote by \(A_{S,T}\) the submatrix of \(A\) induced by the elements \(a_{ij}\) where \(i\in S\) and \(j\in T\). In the model we use, each base will have equal probability of being chosen. Rather than just choosing the bases themselves, we shall select _ordered_ bases at random. Given that each base has the same number of orderings, it will suffice to prove Theorem 1.5 for ordered bases. Each ordered base \(B\) of \(M\) corresponds to an \(n\)-tuple \((\mathbf{b}_{1},\ldots,\mathbf{b}_{n})\in\mathbb{F}_{q}^{n}\) such that \(\{\mathbf{b}_{1},\ldots,\mathbf{b}_{n}\}\) has rank \(n.\) Let \(B_{1}\) and \(B_{2}\) be randomly chosen (ordered) bases. Let \(\mathbf{U}_{1},\ldots,\mathbf{U}_{n}\) be random variables with values in \(\mathbb{F}_{q}^{n}\) such that the \(n\)-tuple \((\mathbf{U}_{1},\ldots,\mathbf{U}_{n})\) of random variables corresponds to \(B_{1}\). For \(i=1,\ldots,n,\) let \(\mathbf{U}_{i}=(u_{1i},u_{2i},\ldots,u_{ni})\) where the \(u_{ji}\)'s are also random variables having values in \(\mathbb{F}_{q}.\) Similarly, let \(\mathbf{V}_{i},\ i=1,\ldots,n\) be random variables having values in \(\mathbb{F}_{q}^{n}\) where the \(n\)-tuple \((\mathbf{V}_{1},\ldots,\mathbf{V}_{n})\) corresponds to \(B_{2}.\) For \(i=1,\ldots,n,\) let \(\mathbf{V}_{i}=(v_{1i},v_{2i},\ldots,v_{ni}).\) Let \(k\) be a positive integer where we will assume \(k\leq\ln(n)\) and let \(\ell=\lfloor\frac{n}{k}\rfloor.\) For \(i=1,\ldots,\ell\) let \(\mathscr{V}_{i}=\{\mathbf{V}_{(i-1)k+1},\ldots,\mathbf{V}_{ik}\};\) that is, \(\mathscr{V}_{i}\) corresponds to the \(k\)-subset of elements in \(B_{2}\) in positions \((i-1)k+1,\ldots,ik.\) After picking \(B_{1}\) and \(B_{2},\) we shall choose a set of \(k\) elements from \(B_{1}\) at random. By symmetry, we may assume that these elements are the first \(k\) elements in the (ordered) basis \(B_{1}.\) We let \(\mathscr{U}_{1}=\{\mathbf{U}_{1},\ldots,\mathbf{U}_{k}\},\) which corresponds to the random set of \(k\) elements chosen from \(B_{1}.\) To prove Theorem 1.5 it will suffice to show that the probability that for some \(i,\)\(B_{1},\mathscr{U}_{1}\underset{M}{\hookrightarrow}B_{2},\mathscr{V}_{i},\) tends to one as \(n\to\infty.\) To do this, we define random variables \(X_{1},\ldots,X_{\ell}\) and \(Y_{1},\ldots,Y_{\ell}\) which depend on the randomly selected \(B_{1}\) and \(B_{2}\) as follows: for \(j=1,\ldots,\ell,\) \[X_{j}=\left\{\begin{array}{ll}1&\mbox{if }B_{1},\mathscr{U}_{1}\to B_{2}, \mathscr{V}_{j}\\ 0&\mbox{otherwise}\end{array}\right.\ \ \mbox{and}\ \ Y_{j}=\left\{\begin{array}{ll}1& \mbox{if }B_{1},\mathscr{U}_{1}\gets B_{2},\mathscr{V}_{j}\\ 0&\mbox{otherwise}\end{array}\right.\] It should be noted that the random variables \(X_{1},\ldots,X_{n}\) are not necessarily independent and the same applies to \(Y_{1},\ldots,Y_{n}.\) ## 3 Lower bounds for probabilities In this section, we calculate lower bounds for the probabilities \(\mathbb{P}(X_{i}=1\big{|}\ X_{1},\ldots X_{i-1})\) and \(\mathbb{P}(Y_{i}=1\big{|}\ Y_{1},\ldots,Y_{i-1})\). The following lemma is a well-known fact and we include its proof for completeness. 
**3.1 Lemma** The probability that a randomly chosen \(k\times k\) matrix over \(\mathbb{F}_{q}\) is non-singular is \[\alpha_{k}=\prod_{i=1}^{k}(1-\frac{1}{q^{i}}).\] Proof.: We first calculate the number of \(k\times k\) nonsingular matrices \(A\) there are over \(\mathbb{F}_{q}.\) To do this, we fill in the rows of \(A\) one at a time. There are \(q^{k}-1\) ways of filling in row \(1.\) Filling in the second row, there are \(q^{k}\) possible selections with the caveat that no selection can be a multiple of the first row. Thus there are \(q^{k}-q\) ways to fill in the second row. Suppose now that we have filled in rows \(1,\ldots,i.\) Again, there are \(q^{k}\) choices for row \(i+1\) with the caveat that no such selection can be a linear combination of rows \(1,\ldots,i.\) Thus there are \(q^{k}-q^{i}\) selections for row \(i+1\). Continuing, we see that in all there are \(\prod_{i=1}^{k}(q^{k}-q^{i-1})\) non-singular matrices. Since altogether there are \(q^{k^{2}}\) possible matrices \(A\), the probability that \(A\) is non-singular is \[\frac{\prod_{i=1}^{k}(q^{k}-q^{i-1})}{q^{k^{2}}}=\frac{q^{\frac{k^{2}-k}{2}}\prod_{i=1}^{k}(q^{i}-1)}{q^{k^{2}}}=\prod_{i=1}^{k}\left(\frac{q^{i}-1}{q^{i}}\right)=\prod_{i=1}^{k}(1-\frac{1}{q^{i}}).\] From _Euler's Theorem_ (see [8, Theorem 353]), we have \(\prod_{n=1}^{\infty}(1-x^{n})=1-x-x^{2}+x^{5}+x^{7}-x^{12}-x^{15}\cdots\). In particular, this theorem implies that for \(q>1\), \(\alpha=\prod_{i=1}^{\infty}(1-\frac{1}{q^{i}})>1-\frac{1}{q}-\frac{1}{q^{2}}.\) We say that a \(k\times k\) matrix \(A\) has **sequential full rank** if for \(i=1,\ldots,k,\) the submatrix \(A_{[i],[i]}\) has full rank. **3.2 Observation** There are \((q-1)^{k}q^{k^{2}-k}\) matrices \(A\) over \(\mathbb{F}_{q}\) for which \(A\) has sequential full rank. Moreover, the probability that a randomly selected \(k\times k\) matrix has sequential full rank equals \((1-\frac{1}{q})^{k}\). Proof.: Let \(A=(a_{ij})\) be any \(k\times k\) matrix over \(\mathbb{F}_{q}\). We shall fill in the entries for \(A_{[i],[i]}\) in the order \(i=1,2,\ldots,k\) so as to obtain a matrix with sequential full rank. The matrix \(A_{[1],[1]},\) which has only one entry, can be any non-zero element of \(\mathbb{F}_{q}.\) Thus there are \(q-1\) possible entries. Next, we fill in the remaining \(3\) entries of \(A_{[2],[2]},\) one-by-one. The entries \(a_{12}\) and \(a_{21}\) can be any element of \(\mathbb{F}_{q}\), but to assure that \(det(A_{[2],[2]})\neq 0,\) there are only \(q-1\) possible elements for the remaining entry \(a_{22}.\) Thus there are \((q-1)^{2}q^{2}\) possible matrices \(A_{[2],[2]}.\) Continuing inductively, suppose there are \((q-1)^{j}{q^{j^{2}-j}}\) possible matrices for \(A_{[j],[j]}\) having sequential full rank. We now fill in the remaining \(2(j+1)-1=2j+1\) entries of \(A_{[j+1],[j+1]}\) one-by-one. The \(2j\) entries other than \(a_{(j+1)(j+1)}\) can be any element in \(\mathbb{F}_{q}.\) Assuring that \(det(A_{[j+1],[j+1]})\neq 0\) means that there are only \(q-1\) possible values for the last entry \(a_{(j+1)(j+1)}.\) Note that here we use the assumption that \(det(A_{[j],[j]})\neq 0.\) Thus there are \((q-1)^{j}{q^{j^{2}-j}}\cdot(q-1)q^{2j}=(q-1)^{j+1}{q^{(j+1)^{2}-(j+1)}}\) selections for \(A_{[j+1],[j+1]}.\) It follows by induction that there are \((q-1)^{k}{q^{k^{2}-k}}\) \(k\times k\) matrices \(A\) having sequential full rank.
Given that there are \({q^{k^{2}}}\) possible \(k\times k\) matrices, the probability a randomly chosen \(k\times k\) matrix has sequential full rank equals \(\frac{(q-1)^{k}{q^{k^{2}-k}}}{{q^{k^{2}}}}=(1-\frac{1}{q})^{k}.\) Let \(\beta_{k}=(1-\frac{1}{q})^{k}.\) Let \(\overline{U}=(u_{ij})\) be the \(n\times n\) (random) matrix whose \(i\)'th column corresponds to \(\mathbf{U}_{i},\) and let \(\overline{V}=(v_{ij})\) be the \(n\times n\) (random) matrix whose \(i\)'th column corresponds to \(\mathbf{V}_{i}.\) Such matrices \(\overline{U}\) and \(\overline{V}\) of full rank correspond in a one-to-one fashion with the ordered bases \(B_{1}\) and \(B_{2}.\) For \(i=0,\ldots,\ell,\) let \(n_{i}=ik\) and let \(J_{i}=[n_{i}\backslash n_{i-1}]=\{n_{i-1}+1,\ldots,n_{i}\}.\) **3.3 Lemma** For \(i=1,\ldots,\ell,\)\(\mathbb{P}(X_{i}=1\big{|}\ X_{1},\ldots,X_{i-1})\geq\alpha_{k}.\) Proof.: It suffices to prove the lemma when \(B_{2}\) is given. Since there is an isomorphism mapping any base into another, we may assume that \(B_{2}\) is the standard basis for \(\mathbb{F}_{q}^{n};\) that is, we may assume that for \(i=1,\ldots,n,\)\(\mathbf{V}_{i}=\mathbf{e}_{i}.\) We shall choose \(B_{1}\) by choosing the corresponding rank-\(n\) matrix \(\overline{U}\) randomly amongst all \(n\times n\) matrices of full rank. We do so by first choosing the first \(k\) columns of \(\overline{U}\) (that is, \(\overline{U}_{[n],[k]})\) randomly among all \(n\times k\) rank-\(k\) matrices. We select the entries for \(\overline{U}_{[n],[k]}\) by selecting the entries for \(\overline{U}_{J_{i},[k]},\) in the order \(i=1,2,\ldots,\ell\) and then selecting the remaining entries for the rest of \(\overline{U}_{[n],[k]}.\) Since \(B_{2}\) is the standard basis, we have \(B_{1},\mathscr{U}_{1}\to B_{2},\mathscr{V}_{i}\) if and only if \(\overline{U}_{J_{i},[k]}\) has full rank. Thus \(X_{i}=1\) if and only if \(\overline{U}_{J_{i},[k]}\) has full rank.Consequently, for each \(i,\) the probability \(\mathbb{P}(X_{i}=1\big{|}\ X_{1},\ldots X_{i-1}),\) will be at least the probability that \(\overline{U}_{J_{i},[k]}\) has full rank. By Lemma 3.1, this probability equals \(\alpha_{k}=\prod_{i=1}^{k}(1-\frac{1}{q^{i}}).\) Thus \(\mathbb{P}(X_{i}=1\big{|}\ X_{1},\ldots,X_{i-1})\geq\alpha_{k}.\) **3.4 Lemma** For \(i=1,\ldots,\ell,\)\(\mathbb{P}(Y_{i}=1\big{|}\ Y_{1},\ldots,Y_{i-1})\geq\alpha_{k}.\) Proof.: It suffices to prove the lemma when \(B_{1}\) is given. Since for any two bases in \(\mathbb{F}_{q}^{n}\) there is an isomorphism mapping one to the other, we may assume that \(B_{1}\) is the standard basis; that is, \(\mathbf{U}_{i}=\mathbf{e}_{i},\ i=1,\ldots,n.\) We now construct our basis \(B_{2}\) by choosing the matrix \(\overline{V}\) randomly among all \(n\times n\) matrices of full rank. We do that by choosing \(\overline{V}_{[n],J_{1}},\overline{V}_{[n],J_{2}},\ldots,\overline{V}_{[n],J_ {\ell}}\) in this order, and then selecting the remaining entries for \(\overline{V}.\) Assume that we are given \(\overline{V}_{[n],[n_{i-1}]};\) that is, \(\overline{V}_{[n],J_{1}},\ldots,\overline{V}_{[n],J_{i-1}}\) have be selected where \(rank(\overline{V}_{[n],[n_{i-1}]})=n_{i-1}.\) Among all \(n\times k\) matrices of rank \(k,\) we choose \(\overline{V}_{[n],J_{i}}\) as follows: First, we randomly choose elements for \(\overline{V}_{[k],J_{i}},\) with the restriction that it have full rank. 
This will insure that \(B_{1},\mathscr{U}_{1}\gets B_{2},\mathscr{V}_{i}.\) Next, we fill in the remaining columns of \(\overline{V}_{[n],J_{i}}\) with the restriction that the matrix \(\overline{V}_{[n],[n_{i}]}\) have rank \(n_{i}.\) As was seen in the proof of Lemma 3.1, there are \(q^{\frac{k^{2}-k}{2}}\prod_{j=1}^{k}(q^{i}-1)\) possible \(k\times k\) matrices having full rank. Thus we have as many choices for \(\overline{V}_{[k],J_{i}}\). Let \(r_{0}=rank(\overline{V}_{[k],[n_{i-1}]})\) and for \(j=1,\ldots,k,\) let \(r_{j}=rank(\overline{V}_{[k],[n_{i-1}+j]}),\) noting that \(r_{k}=k.\) For \(p=1,\ldots,k\) let \(\mathbf{w}_{p}\) be the \(p\)'th column vector in \(\overline{V}_{[n],J_{i}}\). Furthermore, let \(\mathbf{w}_{[k],p}\in\mathbb{F}_{q}^{k}\) be the \(p\)'th column vector of \(\overline{V}_{[k],J_{i}}\) and let \(\mathbf{w}_{[n\setminus k],p}\) be the \(p\)'th column vector of \(\overline{V}_{[n\setminus k],J_{i}}.\) That is, \(\mathbf{w}_{[k],p}\) is the vector \(\mathbf{w}_{p}\) restricted to its first \(k\) components, and \(\mathbf{w}_{[n\setminus k],p}\) is the vector \(\mathbf{w}_{p}\) restricted to its last \(n-k\) components. We shall assume that \(\mathbf{w}_{[k],p},\ p=1,\ldots,k\) have been selected so that \(rank(\overline{V}_{[k],[J_{i}]})=k\). It remains to select \(\mathbf{w}_{[n\setminus k],p},\ p=1,\ldots,k\) in such a way that \(rank(\overline{V}_{[n],[n_{i}]})=n_{i}.\) Suppose \(\mathbf{w}_{[k],1}\) is in the column space of \(\overline{V}_{[k],[n_{i-1}]};\) that is, \(r_{1}=rank(\overline{V}_{[k],[n_{i-1}+1]})=r_{0}.\) Given that \(\overline{V}_{[k],[n_{i-1}]}\) has rank \(r_{0},\) there are exactly \(q^{n_{i-1}-r_{0}}\) linear combinations of the columns of \(\overline{V}_{[k],[n_{i-1}]}\) which equal \(\mathbf{w}_{[k],1}\). Thus there are \(q^{n-k}-q^{n_{i-1}-r_{0}}\) selections for \(\mathbf{w}_{[n\setminus k],1}\) for which the matrix \(\overline{V}_{[n],[n_{i-1}+1]}\) has rank \(n_{i-1}+1.\) Suppose instead that \(\mathbf{w}_{[k],1}\) does not belong to the column space of \(\overline{V}_{[k],[n_{i-1}]}.\) Then choosing any vector \(\mathbf{w}_{[n\setminus k],1}\in\mathbb{F}_{q}^{n-k}\) will result in the matrix \(\overline{V}_{[n],[n_{i-1}+1]}\) having rank \(n_{i-1}+1.\) Thus there are \(q^{n-k}\) selections for \(\mathbf{w}_{[n\setminus k],1}\) in this case. In general, let \(p\in[k]\) and assume that the vectors \(\mathbf{w}_{[n\setminus k],1},\ldots,\mathbf{w}_{[n\setminus k],p-1}\) have been selected so that \(rank(\overline{V}_{[n],[n_{i-1}+p-1]})=n_{i-1}+p-1.\) We will now determine the number of selections for \(\mathbf{w}_{[n\setminus k],p}.\) If \(r_{p}=r_{p-1}=r,\) then there are exactly \(q^{n_{i-1}+p-1-r}\) linear combinations of the columns of \(\overline{V}_{[k],[n_{i-1}+p-1]}\) which equal \(\mathbf{w}_{[k],p}\). Thus there are \(q^{n-k}-q^{n_{i-1}+p-1-r}\) selections for \(\mathbf{w}_{[n\setminus k],p}\) for which \(\overline{V}_{[n],[n_{i-1}+p]}\) has rank \(n_{i-1}+p.\) If \(r_{p}=r_{p-1}+1,\) then \(\mathbf{w}_{[k],p}\) is not in the column space of \(\overline{V}_{[k],[n_{i-1}+p-1]}\) and hence one can choose any vector \(\mathbf{w}_{[n\setminus k],p}\in\mathbb{F}_{q}^{n-k}\) and \(\overline{V}_{[n],[n_{i-1}+p]}\) will have rank \(n_{i-1}+p.\) Thus there are \(q^{n-k}\) selections for \(\mathbf{w}_{[n\setminus k],p}\) in this case. 
Given that \(rank(\overline{V}_{[k],J_{i}})=k,\) there are exactly \(k_{0}=k-r_{0}\) integers \(p\in[k]\) for which \(r_{p}=rank(\overline{V}_{[k],[n_{i-1}+p]})>r_{p-1}=rank(\overline{V}_{[k],[n_{ i-1}+p-1]})\) and we let \(p_{1}<\cdots<p_{k_{0}}\) be such integers. Let \(p_{0}=0\). For \(j=1,\ldots,k_{0}\), let \(\beta_{j}=p_{j}-p_{j-1}-1\). Let \(\beta_{k_{0}+1}=k-p_{k_{0}}.\) For \(j=0,\ldots,k_{0}\), let \(s_{j}=r_{p_{j}}.\) We see that \(s_{j}=r_{0}+j,\ j=1,\ldots,k_{0}.\) Also, \(k_{0}+\beta_{1}+\beta_{2}+\cdots+\beta_{k_{0}+1}=k\) and hence \(\beta_{1}+\ldots\beta_{j}+j-1+(k-s_{j-1})+\beta_{j+1}+\cdots+\beta_{k_{0}+1}=k\). Observing that \(\beta_{1}+\cdots+\beta_{j-1}+j-1=p_{j-1}\), it follows from the previous equation that \[k-s_{j-1}+p_{j-1}=k-\beta_{j}-\cdots-\beta_{k_{0}+1}.\] and hence \[s_{j-1}-p_{j-1}=\beta_{j}+\cdots+\beta_{k_{0}+1}.\] Suppose \(\beta_{j}=p_{j}-p_{j-1}-1>0\) for some \(j\in[k_{0}].\) Then each of the vectors \(\mathbf{w}_{[k],p_{j-1}+1},\cdots,\mathbf{w}_{[k],p_{j-1}+\beta_{i}}\) belong to the column space of \(\overline{V}_{[k],n_{i-1}+p_{j-1}}.\) Thus the number of choices for the vectors \(\mathbf{w}_{[n\setminus k],p_{j-1}+1},\cdots,\mathbf{w}_{[n\setminus k],p_{j- 1}+\beta_{i}}\) is \[\prod_{u=1}^{\beta_{j}}(q^{n-k}-q^{n_{i-1}+p_{j-1}+u-1-s_{j-1}}) =\prod_{u=1}^{\beta_{j}}(q^{n-k}-q^{(n_{i-1}-k)+k+p_{j-1}+u-1-s_{j -1}})\] \[=\prod_{u=1}^{\beta_{j}}(q^{n-k}-q^{(n_{i-1}-k)+k-\beta_{j}- \cdots-\beta_{k_{0}+1}+(u-1)}).\] Let \(N\) be the number of ways one can choose \(\overline{V}_{[n],J_{i}}\) so that \(\overline{V}_{[k],J_{i}}\) has full rank and the matrix \(\overline{V}_{[n],[n_{i}]}\) has rank \(n_{i}\). By the above we have, \[N \geq\prod_{i=0}^{k-1}(q^{k}-q^{i})\prod_{p\in[k]-\{p_{1},\ldots,p_ {k_{0}}\}}(q^{n-k}-q^{n_{i-1}+p-1-r_{p-1}})\prod_{p\in\{p_{1},\ldots,p_{k_{0}} \}}q^{n-k}\] \[=\prod_{i=0}^{k-1}(q^{k}-q^{i})\left(\prod_{\beta_{j}>0}\prod_{u=1 }^{\beta_{j}}(q^{n-k}-q^{n_{i-1}+p_{j-1}+u-1-s_{j-1}})\right)q^{k_{0}(n-k)}\] \[=\prod_{i=0}^{k-1}(q^{k}-q^{i})\left(\prod_{\beta_{j}>0}\prod_{u=1 }^{\beta_{j}}\left(q^{n-k}-q^{(n_{i-1}-k)+k-\beta_{j}-\cdots-\beta_{k_{0}+1}+( u-1)}\right)\right)q^{k_{0}(n-k)}\] Given that \(k-\beta_{1}-\cdots-\beta_{k_{0}+1}=k_{0}\), one sees that \[\prod_{\beta_{j}>0}\prod_{u=1}^{\beta_{j}}(q^{n-k}-q^{(n_{i-1}-k)+k-\beta_{j}- \cdots-\beta_{k_{0}+1}+(u-1)})=\prod_{j=k_{0}}^{k-1}(q^{n-k}-q^{(n_{i-1}-k)+j}).\] Thus \[N\geq\prod_{i=0}^{k-1}(q^{k}-q^{i})\cdot\prod_{j=k_{0}}^{k-1}(q^{n-k}-q^{n_{i-1 }-k+j})\cdot q^{k_{0}(n-k)}.\] On the other hand, given \(\overline{V}_{[n],n_{i-1}}\), the total number of ways in which \(\overline{V}_{[n],J_{i}}\) can be chosen so that \(rank(\overline{V}_{[n],n_{i}})=n_{i}\) is \(\prod_{j=0}^{k-1}(q^{n}-q^{n_{i-1}+j}).\) Thus we have \[\mathbb{P}(Y_{i}=1\big{|}\ Y_{1},\ldots,Y_{i-1}) \geq\frac{N}{\prod_{j=0}^{k-1}(q^{n}-q^{n_{i-1}+j})}=\frac{N}{q^{ k^{2}}\prod_{j=0}^{k-1}(q^{n-k}-q^{n_{i-1}-k+j})}\] \[\geq\frac{\prod_{i=0}^{k-1}(q^{k}-q^{i})\cdot\prod_{j=k_{0}}^{k-1 }(q^{n-k}-q^{n_{i-1}-k+j})\cdot q^{k_{0}(n-k)}}{q^{k^{2}}\prod_{j=0}^{k-1}(q^{ n-k}-q^{n_{i-1}-k+j})}\] \[=\frac{q^{k_{0}(n-k)}\prod_{i=0}^{k-1}(q^{k}-q^{i})}{q^{k^{2}} \prod_{j=0}^{k_{0}-1}(q^{n-k}-q^{n_{i-1}-k+j})}\] \[\geq\prod_{i=0}^{k-1}\left(\frac{q^{k}-q^{i}}{q^{k}}\right)=\prod _{i=1}^{k}\left(1-\frac{1}{q^{i}}\right)=\alpha_{k}.\] ### The Chernoff bound Let \(Z_{1},\ldots,Z_{n}\) be \(n\) independent \(0,1\) random variables where for all \(i\), \(\mathbb{P}(Z_{i}=1)=p.\) Then the sum \(Z=\sum_{i=1}^{n}Z_{i}\) has a 
binomial \(B(n,p)\) distribution and we have the following well-known concentration inequality due to Chernoff (see [1]): **3.5 Theorem** (Chernoff Bound): For all \(0\leq t\leq np\), we have \[\mathbb{P}(|Z-np|>t)<2e^{-\frac{t^{2}}{3np}}.\] As a direct consequence of this bound, we have **3.6 Lemma** For \(Z\sim Bin(n,p)\), \(\mathbb{P}(Z>(1-\epsilon)np)>1-2e^{-\frac{\epsilon^{2}np}{3}}\). Let \(X=\sum_{i=1}^{\ell}X_{i}\) and \(Y=\sum_{i=1}^{\ell}Y_{i}\). By Lemmas 3.3 and 3.4, it follows that the probability distributions for \(X\) and \(Y\) dominate the probability distribution for a random variable \(Z\sim Bin(\ell,\alpha_{k})\). That is, for all \(a>0\), \(\mathbb{P}(X\geq a)\geq\mathbb{P}(Z\geq a)\) and \(\mathbb{P}(Y\geq a)\geq\mathbb{P}(Z\geq a)\). Thus by Lemma 3.6 we have the following: **3.7 Lemma** By fixing \(\epsilon\) in the above lemma, and given \(k\leq\ln(n)\), we have \(\ell=\lfloor\frac{n}{k}\rfloor\to\infty\) as \(n\to\infty.\) Thus \(\mathbb{P}(X\geq(1-\epsilon)\ell\alpha_{k})\to 1\) and \(\mathbb{P}(Y\geq(1-\epsilon)\ell\alpha_{k})\to 1\) as \(n\to\infty\). We define random variables \(Z_{i},\ Z_{i}^{\prime},\ i\in[\ell]\) where \[Z_{i}=\left\{\begin{array}{ll}1&\mbox{if }\ B_{1},\mathscr{U}_{1}\leftrightarrow B_{2},\mathscr{V}_{i}\\ 0&\mbox{otherwise}\end{array}\right.\ \ \ Z_{i}^{\prime}=\left\{\begin{array}{ll}1&\mbox{if }\ B_{1},\mathscr{U}_{1}\underset{M}{\rightleftarrows}B_{2},\mathscr{V}_{i}\\ 0&\mbox{otherwise}\end{array}\right.\] Let \(Z=Z_{1}+\cdots+Z_{\ell}\) and \(Z^{\prime}=Z_{1}^{\prime}+\cdots+Z_{\ell}^{\prime}\). Recall that \(\alpha=\prod_{i=1}^{\infty}(1-\frac{1}{q^{i}})\). **3.8 Observation** For \(\epsilon>0\), \(\mathbb{P}(Z\geq(2(1-\epsilon)\alpha-1)\ell)>1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). Furthermore, if \(q>2\), then there is a constant \(c>0\) such that \(\mathbb{P}(Z\geq c\ell)\to 1\), as \(n\to\infty\). Proof.: By Lemma 3.7, we have for \(\epsilon>0\), \(\mathbb{P}(X\geq(1-\epsilon)\ell\alpha_{k})>1-2e^{-\frac{\epsilon^{2}\ell\alpha_{k}}{3}}\). Since \(\alpha=\prod_{i=1}^{\infty}(1-\frac{1}{q^{i}})<\alpha_{k}\), it follows that \(\mathbb{P}(X\geq(1-\epsilon)\ell\alpha)>1-2e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). Likewise, we have \(\mathbb{P}(Y\geq(1-\epsilon)\ell\alpha)>1-2e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). Thus \[\mathbb{P}(X\geq(1-\epsilon)\ell\alpha\mbox{ and }Y\geq(1-\epsilon)\ell\alpha)=1-\mathbb{P}(X<(1-\epsilon)\ell\alpha\mbox{ or }Y<(1-\epsilon)\ell\alpha)\] \[\geq 1-\mathbb{P}(X<(1-\epsilon)\ell\alpha)-\mathbb{P}(Y<(1-\epsilon)\ell\alpha)\] \[\geq 1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\] Thus it follows that \(\mathbb{P}(X+Y\geq 2(1-\epsilon)\ell\alpha)\geq 1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). Noting that \(Z\geq X+Y-\ell\), it follows that \(\mathbb{P}(Z+\ell\geq(1-\epsilon)2\ell\alpha)\geq 1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\) and thus \(\mathbb{P}(Z\geq(2(1-\epsilon)\alpha-1)\ell)\geq 1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). By Euler's theorem, we have \(\alpha>1-\frac{1}{q}-\frac{1}{q^{2}}\). Thus when \(q>2\), we have \(\alpha>\frac{5}{9}\) and \(\mathbb{P}(Z\geq(\frac{1}{9}-\frac{10}{9}\epsilon)\ell)>1-4e^{-\frac{\epsilon^{2}\ell\alpha}{3}}\). In particular, choosing \(\epsilon<\frac{1}{10}\), there is a constant \(c>0\) such that \(\mathbb{P}(Z\geq c\ell)\to 1\), as \(n\to\infty\). We shall exploit the following observation.
**3.9 Observation** Suppose \(D_{i},\ i=1,2\) are bases in a matroid and \(X_{i}\subseteq D_{i},\ i=1,2\) are subsets where \(D_{1},X_{1}\to D_{2},X_{2}.\) Then for any ordering \(x_{21}\prec x_{22}\prec\cdots\prec x_{2k}\) of the elements of \(X_{2},\) there is an ordering \(x_{11}\prec x_{12}\prec\cdots\prec x_{1k}\) of the elements of \(X_{1}\) such that for \(i=1,\ldots,k,\) \(D_{2}-\{x_{21},\ldots,x_{2i}\}+\{x_{11},\ldots,x_{1i}\}\) is a basis. Proof.: Let \(D_{2}^{\prime}=D_{2}-X_{2}+X_{1},\) which by assumption is a basis. Assume that the elements of \(X_{2}\) are ordered as \(x_{21}\prec x_{22}\prec\cdots\prec x_{2k}\). Since \(|D_{2}-x_{21}|<|D_{2}^{\prime}|,\) there exists an element \(x_{11}\in D_{2}^{\prime}\) for which \(D_{2}^{1}=D_{2}-x_{21}+x_{11}\) is a basis. We note that \(x_{11}\in X_{1}.\) Next, since \(|D_{2}^{1}-x_{22}|<|D_{2}^{\prime}|,\) there exists \(x_{12}\in D_{2}^{\prime}\) for which \(D_{2}^{2}=D_{2}^{1}-x_{22}+x_{12}\) is a basis. Again, we note that \(x_{12}\in X_{1}\). Now suppose that for some \(1\leq i<k\) we have that \(D_{2}^{i}=D_{2}-\{x_{21},\ldots,x_{2i}\}+\{x_{11},\ldots,x_{1i}\}\) is a basis, where \(\{x_{11},\ldots,x_{1i}\}\subseteq X_{1}.\) Since \(|D_{2}^{i}-x_{2(i+1)}|<|D_{2}^{\prime}|,\) there exists \(x_{1(i+1)}\in D_{2}^{\prime}\) such that \(D_{2}^{i+1}=D_{2}^{i}-x_{2(i+1)}+x_{1(i+1)}\) is a basis, and furthermore, we must have \(x_{1(i+1)}\in X_{1}.\) Continuing, we generate the elements \(x_{11},x_{12},\ldots,x_{1k}\) and this will be the desired ordering of the elements of \(X_{1}.\) Recall that \(\beta_{k}=\left(1-\frac{1}{q}\right)^{k}.\) We have the following lemma: **3.10 Lemma** Let \(S\subseteq[\ell]\) where \(s=|S|.\) Then \[\mathbb{P}(Z_{i}^{\prime}=0,\ \forall i\in S\ \big{|}\ Z_{i}=1,\forall i\in S)\leq\ (1-\beta_{k})^{s}.\] In addition, \(\mathbb{P}(Z^{\prime}=0\ \big{|}\ Z=s)\leq(1-\beta_{k})^{s}.\) Proof.: Assume that \(Z_{i}=1,\forall i\in S.\) Then \(B_{1},\mathscr{U}_{1}\to B_{2},\mathscr{V}_{i}\) and \(B_{1},\mathscr{U}_{1}\gets B_{2},\mathscr{V}_{i},\) for all \(i\in S.\) Let \(i\in S.\) It follows from Observation 3.9 that for some ordering \(V_{i_{1}}\prec V_{i_{2}}\prec\cdots\prec V_{i_{k}}\) of \(\mathscr{V}_{i}\) we have, for \(j=1,\ldots,k,\) \(B_{1}-\{U_{1},\ldots,U_{j}\}+\{V_{i_{1}},\ldots,V_{i_{j}}\}\) is a basis. In light of this, we may assume that for each \(i\in S,\) we have already re-ordered the vectors in \(\mathscr{V}_{i}\) so that for \(j=1,\ldots,k,\) \(B_{1}-\{U_{1},\ldots,U_{j}\}+\{V_{n_{i-1}+1},\ldots,V_{n_{i-1}+j}\}\) is a basis. Let \(\overline{W}\) be the matrix obtained by concatenating \(\overline{U}\) with \(\overline{V}\) and let \(\overline{W}^{\prime}\) be the matrix obtained by performing row operations on \(\overline{W}\) so that \(\overline{V}\) is reduced to the identity matrix \(I_{n}\) and \(\overline{U}\) is reduced to a matrix \(\overline{U}^{\prime}=(u_{ij}^{\prime}).\) Note that the row operations do not change the independence of subsets of columns in the column space of \(\overline{W};\) that is, a set of columns in \(\overline{W}\) is linearly independent if and only if the corresponding subset of columns in \(\overline{W}^{\prime}\) is linearly independent. Let \(i\in S.\) Since \(B_{1},\mathscr{U}_{1}\to B_{2},\mathscr{V}_{i},\) it follows that \(\overline{U}^{\prime}_{J_{i},[k]}\) has full rank.
Moreover, if \(\overline{U}^{\prime}_{J_{i},[k]}\) has sequential full rank, then we see that for \(j=1,\ldots,k,\) replacing the columns of \(I_{n}\) indexed by \(\{n_{i-1}+1,\ldots,n_{i-1}+j\}\) with the first \(j\) columns of \(\overline{U}^{\prime}_{[n],[k]}\) will yield a matrix of full rank. Thus if \(\overline{U}^{\prime}_{J_{i},[k]}\) has sequential full rank, then for \(j=1,\ldots,k,\) \(B_{2}-\{V_{n_{i-1}+1},\ldots,V_{n_{i-1}+j}\}+\{U_{1},\ldots,U_{j}\}\) is a basis. In this case, we see that \(Z^{\prime}_{i}=1.\) By Observation 3.2, the probability that \(\overline{U}^{\prime}_{J_{i},[k]}\) has sequential full rank is \(\beta_{k}=(1-\frac{1}{q})^{k}\). Thus \(\mathbb{P}(Z^{\prime}_{i}=1\big{|}\ Z_{i}=1)\geq\beta_{k},\) and this bound is independent of the other random variables. We now see that \(\mathbb{P}(Z^{\prime}_{i}=0,\ \forall i\in S\ \big{|}\ Z_{i}=1,\forall i\in S)\leq(1-\beta_{k})^{s}.\) The second assertion follows from the first statement. ### Proof of Theorem 1.5 Assume that \(q>2.\) By Observation 3.8, there is a constant \(c>0\) such that \(\mathbb{P}(Z\geq c\ell)\to 1,\) as \(\ell\rightarrow\infty.\) To complete the proof of Theorem 1.5, it suffices to show that as \(n\rightarrow\infty,\) \(\mathbb{P}(Z^{\prime}>0)\to 1.\) We have by Lemma 3.10 that \[\mathbb{P}(Z^{\prime}=0)=\sum_{s=0}^{\ell}\mathbb{P}(Z^{\prime}=0,Z=s)=\sum_{s=0}^{\ell}\mathbb{P}(Z^{\prime}=0\ \big{|}\ Z=s)\mathbb{P}(Z=s)\] \[\leq\sum_{s=0}^{\ell}(1-\beta_{k})^{s}\mathbb{P}(Z=s)=\sum_{s<c\ell}(1-\beta_{k})^{s}\mathbb{P}(Z=s)+\sum_{s\geq c\ell}(1-\beta_{k})^{s}\mathbb{P}(Z=s)\] \[\leq\mathbb{P}(Z<c\ell)+(1-\beta_{k})^{c\ell}\mathbb{P}(Z\geq c\ell).\] We have that \(\lim_{n\rightarrow\infty}\mathbb{P}(Z<c\ell)=0.\) We claim that \(\lim_{n\rightarrow\infty}(1-\beta_{k})^{c\ell}=0.\) It suffices to prove this when \(k=\ln(n)\) (since \(\beta_{k^{\prime}}>\beta_{k}\) and \(\frac{n}{k^{\prime}}>\frac{n}{k}\) if \(k^{\prime}<k\)). Let \(\delta=\ln(1-\frac{1}{q}).\) Observing that \(-1<\delta<0\), we have \[(1-\beta_{k})^{c\ell}=(1-(1-\frac{1}{q})^{k})^{c\ell}=(1-e^{\delta k})^{c\ell}\] \[=(1-e^{\delta\ln(n)})^{c\ell}=(1-n^{\delta})^{c\ell}=\left((1-\frac{1}{n^{-\delta}})^{n^{-\delta}}\right)^{\frac{cn^{1+\delta}}{\ln(n)}}\] Since \(\lim_{n\rightarrow\infty}(1-\frac{1}{n^{-\delta}})^{n^{-\delta}}=e^{-1},\) it follows that \(\lim_{n\rightarrow\infty}\left((1-\frac{1}{n^{-\delta}})^{n^{-\delta}}\right)^{\frac{cn^{1+\delta}}{\ln(n)}}=0.\) Thus \(\lim_{n\rightarrow\infty}(1-\beta_{k})^{c\ell}=0.\) It now follows from the above that as \(n\rightarrow\infty,\) \(\mathbb{P}(Z^{\prime}>0)\to 1.\) This completes the proof. ## 4 Discussion Given that Theorem 1.5 does not apply to binary matroids, the following is a natural problem: **4.1 Problem** Let \(M\) be a rank-\(n\) binary matroid and let \(k\) be a fixed integer. Suppose one chooses randomly two bases \(B_{1}\) and \(B_{2}\) and then chooses randomly a \(k\)-subset \(X\subseteq B_{1}.\) Is the probability of finding a \(k\)-subset \(Y\subseteq B_{2}\) which is serially exchangeable with \(X\) tending to one as \(n\to\infty\)?
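As a small empirical companion to the constants that drive the proof, the following sketch (ours, not part of the paper; it assumes \(q\) is prime) estimates by direct sampling the probability \(\alpha_{k}\) of Lemma 3.1 that a random \(k\times k\) matrix over \(\mathbb{F}_{q}\) is non-singular, and the probability \(\beta_{k}=(1-\frac{1}{q})^{k}\) of Observation 3.2 that it has sequential full rank. It also illustrates why the argument above requires \(q>2\): for \(q=2\) the limiting constant \(\alpha=\prod_{i\geq 1}(1-2^{-i})\approx 0.289\) is below \(\tfrac{1}{2}\), so the lower bound \((2(1-\epsilon)\alpha-1)\ell\) of Observation 3.8 is no longer positive.

```python
# Numerical sanity check (not from the paper) of alpha_k = prod_{i=1}^k (1 - q^{-i})
# and beta_k = (1 - 1/q)^k by sampling random k x k matrices over GF(q), q prime.
import numpy as np

rng = np.random.default_rng(0)

def rank_mod_q(A, q):
    A = A.copy() % q
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, c]), q - 2, q)) % q
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % q
        r += 1
    return r

def sequential_full_rank(A, q):
    return all(rank_mod_q(A[: j + 1, : j + 1], q) == j + 1 for j in range(A.shape[0]))

for q, k, trials in [(2, 4, 20000), (3, 4, 20000)]:
    samples = rng.integers(0, q, size=(trials, k, k))
    nonsing = np.mean([rank_mod_q(A, q) == k for A in samples])
    seq = np.mean([sequential_full_rank(A, q) for A in samples])
    alpha_k = np.prod([1 - q ** (-i) for i in range(1, k + 1)])
    beta_k = (1 - 1 / q) ** k
    print(f"q={q}, k={k}: non-singular {nonsing:.3f} vs alpha_k {alpha_k:.3f}; "
          f"sequential full rank {seq:.3f} vs beta_k {beta_k:.3f}")
```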
2308.07408
Cu-substituted lead phosphate apatite as an inversion-asymmetric Weyl semimetal
Based on symmetry arguments and the latest density functional results for the copper-substituted lead phosphate apatite (`LK-99'), we show that, at the non-interacting level, the material is an inversion-asymmetric Weyl semimetal. A pair of Weyl nodes with opposite chiralities emerge at different energies in the vicinity of the time-reversal-invariant $\Gamma$ and ${\rm A}$ points of the 3D Brillouin zone. These are characterized by unusual Weyl charges of $C_{\rm W} = \pm 2$ and are connected by two branches of topologically protected Fermi arc states on surfaces parallel to the principal $c$-axis. We further study important effects of the atomic spin-orbit coupling on the band structure and the electronic properties of the material in general. Possible implications of the proposed band topology on the strong correlation physics are also discussed.
Benjamin T. Zhou, Marcel Franz
2023-08-14T18:53:36Z
http://arxiv.org/abs/2308.07408v1
# Cu-substituted lead phosphate apatite as an inversion-asymmetric Weyl semimetal ###### Abstract Based on symmetry arguments and the latest density functional results for the copper-substituted lead phosphate apatite ('LK-99'), we show that, at the non-interacting level, the material is an inversion-asymmetric Weyl semimetal. A pair of Weyl nodes with opposite chiralities emerge at different energies in the vicinity of the time-reversal-invariant \(\Gamma\) and A points of the 3D Brillouin zone. These are characterized by unusual Weyl charges of \(C_{\rm W}=\pm 2\) and are connected by two branches of topologically protected Fermi arc states on surfaces parallel to the principal \(c\)-axis. We further study important effects of the atomic spin-orbit coupling on the band structure and the electronic properties of the material in general. Possible implications of the proposed band topology on the strong correlation physics are also discussed. _Introduction._-- Achieving room-temperature superconductivity at ambient pressure is one of the ultimate goals of modern condensed matter research. In recent experiments on copper-substituted lead phosphate apatite Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O [1; 2], known also as 'LK-99', tentative signatures of strong diamagnetism and low-resistance states have been observed. These results hint at the possibility of room-temperature superconductivity and have ignited world-wide interest in this family of materials [3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. To illuminate physical mechanisms behind the observed phenomena, a microscopic theoretical understanding of the electronic properties of lead apatite is of critical importance. Following this line of thought several first-principle density functional theory (DFT) studies have been performed recently on LK-99 [13; 14; 15; 16], which suggest that the stable crystal has a distorted trigonal prismatic structure with six-fold coordinated Cu atoms (Fig. 1a). The relevant electronic states consist of two isolated bands stemming from the \(d_{xz},d_{yz}\) orbitals of Cu with an overall bandwidth of \(\sim 0.1\) eV and the Fermi level lying in the middle of the bands suggesting metallic behavior. The two isolated bands further exhibit interesting band crossing features at the time-reversal-invariant \(\Gamma=(0,0,0)\) and A \(=(0,0,\pm\pi/c)\) points of the three-dimensional hexagonal prismatic Brillouin zone (BZ), while bandgaps are found in the rest of the BZ (Fig. 1b). With the electronic band structure and the relevant atomic orbital compositions at hand, an important next step is to understand the symmetry and topological properties of the bands as well as the role of electron-electron interactions which are expected to be strong compared to the bandwidth. In this Letter, we uncover an important topological aspect of the LK-99 band structure - we show that the band crossings near \(\Gamma\) and A are Weyl points with opposite Weyl charges. Using a combined approach of symmetry analysis and microscopic modeling, we show that the emergence of Weyl points results from a combination of quadratic band touching enforced by the three-fold rotation symmetry (\(\mathcal{C}_{3z}\)) on the \(\left|d_{xz}\right\rangle\pm i\left|d_{yz}\right\rangle\) doublet, and a finite band splitting along the \(\Gamma-\text{A}\) line caused by broken mirror symmetries. Together, these endow the crossing points with an unusual Weyl charge of \(C_{\rm W}=\pm 2\) (Fig. 1c-d). 
In accord with the bulk-boundary correspondence the two Weyl nodes with opposite charges are connected by a pair of topologically protected surface Fermi arcs which we visualize by a direct calculation. We further study the important role of atomic spin-orbit coupling (SOC) on the band topology and other electronic properties of the system, as well as discuss possible implications of the Weyl physics on the role of strong correlations in this narrow-band system. _Effective spinless Weyl Hamiltonian._-- For simplicity, we first consider a spinless model to capture the spin-polarized bands obtained by DFT. As shown in Ref. [13], Figure 1: (a) Schematic of the six-fold coordinated copper (Cu) and lead (Pb) atoms arranged in a triangular lattice. Mirror reflection is broken by the rotations of the triangles formed by oxygen (O) atoms. (b) Hexagonal prismatic 3D Brillouin zone. (c) Energy bands obtained from two-band tight-biding model Eq. (2) with parameters obtained by fitting DFT bands in Ref. [13]. (d) Chern number \(\mathcal{C}\) as a function of \(k_{z}\). the stable crystalline structure of LK-99 exhibits an approximate \(C_{3v}\) point group symmetry. Given the relevant \(d_{xz},d_{yz}\)-orbitals forming the two isolated bands, the doublet \(\left|d,\pm 1\right\rangle=\left|d_{xz}\right\rangle\pm i\left|d_{yz}\right\rangle\) associated with orbital angular momenta \(m_{z}=\pm 1\) forms the two-dimensional irreducible representation \(E\) of \(C_{3v}\). In addition, rotations in the triangles formed by oxygen (O) atoms break the vertical mirror plane \(\sigma_{v}\) and reduce the point group from \(C_{3v}\) to the chiral \(C_{3}\) point group, which contains only the three-fold rotation \(\mathcal{C}_{3z}\equiv e^{-i\frac{2\pi}{3}\tau_{z}}\) under the basis of \(\left|d,\pm 1\right\rangle\) but no improper rotations. The \(C_{3z}\)-symmetry together with the spinless time-reversal \(\mathcal{T}^{\prime}=\tau_{x}\mathcal{K}\) dictates that up to lowest-order terms the spinless effective \(\mathbf{k}\cdot\mathbf{p}\) Hamiltonians near time-reversal-invariant \(\Gamma=\mathbf{Q}_{+}=(0,0,0)\), \(\mathrm{A}=\mathbf{Q}_{-}=(0,0,\pm\pi/c)\) points can be written as \[H_{W}^{\pm}(\mathbf{p})=v_{\pm}[(p_{x}^{2}-p_{y}^{2})\tau_{x}-2p_{x}p_{y}\tau_{y}] \pm v_{z}p_{z}\tau_{z}, \tag{1}\] where momentum \(\mathbf{p}=\mathbf{k}-\mathbf{Q}_{\pm}\) is measured from \(\mathbf{Q}_{\pm}\) points and \(v_{\pm}\) denotes the velocity resulting from electron hopping within the \(ab\)-plane with \(v_{+}\neq v_{-}\) in general. The \(v_{z}p_{z}\) term originates from the breaking of mirror symmetry \(\sigma_{v}\) due to rotations in O-triangles and causes the band splitting along the \(\sigma_{v}\)-invariant \(\Gamma\)-A line (Fig. 1b), the \(\tau_{\alpha=x,y,z}\) operate on the orbital subspace formed by \(\left|d,\pm 1\right\rangle\). It is worth noting that the leading inter-orbital term is quadratic in the in-plane momentum \(\mathbf{p}_{||}=(p_{x},p_{y})\) and has the same form as the well-known quadratic band touching in bilayer graphene [17; 18]. The latter is known to generate a \(2\pi\) Berry phase around the origin and results in a nonzero Chern number \(\mathcal{C}=\mathrm{sgn}(\Delta)\) in the quadratic band when a mass term \(\Delta\) is introduced to gap out the band touching [19]. In Eq. (1), the mass term \(\Delta\) is given by the \(v_{z}p_{z}\) term which changes sign as \(p_{z}\) goes across the band touching points. 
This implies that the Chern number must change from \(\pm 1\) to \(\mp 1\) across the band touching at \(\mathbf{Q}_{\pm}\), which indicates the presence of monopoles of charge \(C_{\mathrm{W}}=\pm 2\)[20; 21; 22]. To confirm this result, we construct a symmetry-based realistic two-band tight-binding model in the Bloch basis \(\left|\mathbf{k},d,\pm 1\right\rangle\): \[H_{\mathrm{TB}}(\mathbf{k})=\begin{pmatrix}h_{++}(\mathbf{k})&h_{+-}(\mathbf{k})\\ h_{-+}(\mathbf{k})&h_{--}(\mathbf{k})\end{pmatrix}, \tag{2}\] where \(m,m^{\prime}=\pm 1\) denotes the orbital index for \(\left|d,\pm 1\right\rangle\) and details of the matrix elements \(h_{mm^{\prime}}(\mathbf{k})\) are presented in the Supplemental Material (SM) [23]. The energy bands obtained from \(H_{\mathrm{TB}}(\mathbf{k})\) are shown in Fig. 1b, and are in excellent agreement with DFT bands. They also capture the effective quadratic Weyl Hamiltonian in Eq. (1) near \(\mathbf{Q}_{\pm}\). The Chern number \(\mathcal{C}\) as a function of \(k_{z}\) is calculated using the eigenstates of \(H_{\mathrm{TB}}(\mathbf{k})\) as presented in Fig. 1d, which clearly shows the sudden change in \(\mathcal{C}\) at \(k_{z}=0\) (\(k_{z}=\pm\pi/c\)) and signifies Weyl charges of \(C_{\mathrm{W}}=\pm 2\). _Topologically protected Fermi arcs.-_ As mandated by the bulk-boundary correspondence principle, Weyl points of opposite charges must be connected by topologically protected gapless states, known as Fermi arcs [20; 21; 22], which live on surfaces where the projected Weyl charges do not cancel. As the Weyl nodes of opposite charges in LK-99 are located along the \(\Gamma-\mathrm{A}\) line, Fermi arcs are expected to emerge on the side surfaces parallel to the \(c\)-axis. We demonstrate this explicitly by solving the tight-binding model in Eq. (2) in a slab geometry, infinite in the \(xz\)-plane with open boundaries terminated at \(y=0\) and \(y=L_{y}\) along the \(y\)-direction (see SM [23] for details). The resulting energy spectrum at \(k_{z}=\pi/c\), presented in Fig. 2a, clearly shows two branches of Fermi arc states associated with Weyl charge of \(C_{\mathrm{W}}=\pm 2\), separated from the bulk continuum and emanating from the projected Weyl point at \((k_{x},k_{z})=(0,\pi/c)\). To demonstrate that these states are predominantly localized on the surfaces, we further calculate the local density of states on the surfaces at \(y=0,L_{y}\) at energy \(E=5\) meV (indicated by dashed line in Fig. 2a) throughout the entire surface Brillouin zone defined by conserved momenta \(k_{x}\) and \(k_{z}\) (Fig. 2b). It is evident that the states connecting the two Weyl points are predominantly localized on the surface (brightness in color scale indicates the density of states). Note that due to the energy difference \(\sim 50\) meV between Weyl points at \(\Gamma\) and A (Fig. 1b), the bulk Fermi surface around \(\Gamma\) is larger than that around A when the Fermi level is close to A (Fig. 2a). Hence the Weyl point \(\mathbf{Q}_{+}\) is embedded in the bulk bands as shown in Fig. 2b. Figure 2: (a) Energy spectrum at \(k_{z}=\pi/c\) as a function of \(k_{x}\) of an infinite slab with number of sites \(N_{y}=200\) along the \(y\)-direction. (b) Local density of states on the surfaces at \(y=0\) and \(y=N_{y}\) in the \(k_{x}-k_{z}\) plane obtained at energy \(E=5\) meV (dashed line in (a)). Color bar indicates the magnitude of local density of states on logarithmic scale. _Effects of atomic spin-orbit coupling.--_ We note that the emergence of Weyl points at the time-reversal-invariant \(\Gamma\) and A points in the spinless models (Eq. 1-2) above is a result of the combination of spinless time-reversal \(\mathcal{T}^{\prime}\) and three-fold rotation \(\mathcal{C}_{3z}\). However, the physical time reversal \(\mathcal{T}=i\sigma_{y}\tau_{x}\mathcal{K}\) (\(\sigma_{\alpha=x,y,z}\): spin Pauli matrices) involves the spin degrees of freedom and the
However, the physical time reversal \(\mathcal{T}=i\sigma_{y}\tau_{x}\mathcal{K}\) (\(\sigma_{\alpha=x,y,z}\): spin Pauli matrices) involves the spin degrees of freedom and the Figure 2: (a) Energy spectrum at \(k_{z}=\pi/c\) as a function of \(k_{x}\) of an infinite slab with number of sites \(N_{y}=200\) along the \(y\)-direction. (b) Local density of states on the surfaces at \(y=0\) and \(y=N_{y}\) in the \(k_{x}-k_{z}\) plane obtained at energy \(E=5\) meV (dashed line in (a)). Color bar indicates the magnitude of local density of states on logarithmic scale. true Kramers doublets are formed by two distinct pairs: \(\{\left|d,+1,\uparrow\right\rangle,\left|d,-1,\downarrow\right\rangle\}\) and \(\{\left|d,+1,\downarrow\right\rangle,\left|d,-1,\uparrow\right\rangle\}\), which indicates that the two-fold degeneracy at \(\Gamma\) and A within the same spin sector can generally be lifted when \(\mathcal{T}^{\prime}\) is broken while \(\mathcal{T}\) is respected. This happens if we include atomic spin-orbit coupling, which indeed seems relevant for LK-99 as evidenced by the energy difference of a few meVs found between spin-up and spin-down DFT bands [13]. This suggests that the Weyl points, while being topological objects and immune to weak perturbations, are not necessarily pinned at the time-reversal-invariant points by any symmetry and can therefore be shifted by an appropriate perturbation. It can be shown (see SM [23] for details) that in the relevant subspace spanned by \(\{\left|d,m=\pm 1,\sigma=\uparrow,\downarrow\right\rangle\}\), the atomic SOC takes the simple form of \[H_{\text{SOC}}=\frac{\lambda}{2}\tau_{z}\otimes\sigma_{z}, \tag{3}\] where \(\lambda\) characterizes the SOC strength. Fig. 3a shows that SOC causes an energy level splitting of \(\Delta E=\lambda\) at \(\Gamma\) and A between \(\left|d,m=+1,\sigma\right\rangle\) and \(\left|d,m=-1,\sigma\right\rangle\) for a given spin \(\sigma\). Note that because \(H_{\text{SOC}}\) in Eq. (3) is diagonal in the spin basis the two spin sectors remain decoupled in the presence of SOC. However, due to the splitting induced by \(\lambda\neq 0\), the Weyl points from each spin sector are shifted away from their high-symmetry positions to generic points \(k_{z,0}\neq 0,\pi/c\) along the \(\Gamma-\text{A}\) line as shown in Fig. 3a,b. Notably, for large enough \(\lambda\) the Weyl points with opposite charges can collide and annihilate, rendering the system topologically trivial (Fig. 3d). In the present model this occurs for \(\lambda_{\text{c}}\simeq 20\) meV. Given the spin splitting of \(<10\) meV found in the DFT calculations [13], our results suggest that LK-99 is within the \(\lambda<\lambda_{\text{c}}\) regime where the nontrivial Weyl physics remains valid in the presence of atomic SOC. On the other hand, we note that even in the strong SOC limit with \(\lambda>\lambda_{\text{c}}\) where the system is topologically trivial, the strong SOC effect shown in Fig. 3b has important consequences on the electronic properties. In fact, the form of the SOC splitting shown in Fig. 3b is reminiscent of the well-known spin-valley locking [24; 25; 26] or Ising SOC [27; 28] in atomic layers of transition-metal dichalcogenides (TMDs). 
In particular, in the high hole (electron) doping limit where only the \(K\)-valleys (\(H\)-valleys) are accessed, the strong spin splitting induced by SOC leads to similar coupled spin-valley physics in TMDs which can give rise to long spin and valley life times as the flip of spin and valley indices must occur simultaneously. These results suggest that LK-99 in the strong SOC limit may have potential applications in spintronics and valleytronics [29; 30]. Figure 3: (a) Energy bands with atomic SOC strength \(\lambda=6\) meV in the 3D BZ (upper panel) and along the A\({}^{\prime}\) - \(\Gamma\) - A line (lower panel) with A\({}^{\prime}=(0,0,-\pi/\text{c})\). Weyl points from spin-up and spin-down sectors are shifted. (b) Upper panel: schematic of Weyl point locations under finite SOC. Lower panel: Chern number as a function of \(k_{z}\) with the same \(\lambda\) as in (a). (c) Energy bands with critical atomic SOC strength \(\lambda_{\text{c}}=20\) meV in the 3D BZ (upper panel) and along the A\({}^{\prime}\) - \(\Gamma\) - A line (lower panel). Weyl crossings within the same spin sectors disappear. The remaining band crossing points between different spin sectors at \(\Gamma\), A are due to Kramers degeneracy, and are topologically trivial. (d) Upper panel: schematic of annihilation of Weyl points with opposite charges for each spin sector at \(\lambda_{\text{c}}=20\) meV. Lower panel: Chern number as a function of \(k_{z}\) becomes uniformly zero at \(\lambda_{\text{c}}\). _Implications for diamagnetism, low-resistivity and correlation physics.--_ Having established the Weyl physics in the above sections, we now discuss its possible implications for the recent experimental observations interpreted as signatures of high-\(T_{\rm c}\) superconductivity in LK-99. First, we note that strong diamagnetism need not originate exclusively from the Meissner effect in a superconductor - a well-known alternative is atomically thin graphene, where the orbital diamagnetic susceptibility in principle diverges when the Fermi level lies at the charge neutrality point with either linear or quadratic band touching [31; 32; 33; 34; 35]. The origin of this effect can be traced back to the formation of zero-energy Landau levels in monolayer and bilayer graphene under applied magnetic field [36; 17; 37]: electrons in occupied states need to increase their energy to join the zero-energy Landau level, which then increases the total energy of the system. Therefore, it is energetically favorable for electrons to partially expel the external field via their orbital motion. Similar physics is also known to occur in systems with three-dimensional Dirac spectrum where zero-energy Landau levels can also emerge [38; 39; 40]. As we discussed above, bulk Weyl fermions described by Eq. (1) can be regarded as multiple copies of bilayer graphene where zero-energy Landau level is expected to emerge under applied magnetic field. This suggests that a similar mechanism of Landau diamagnetism could be
In the context of Weyl semimetals, the currents carried by Fermi arcs, while not completely immune to dissipation due to scattering processes involving bulk channels [44], have been shown to be remarkably disorder-tolerant and often lead to ultra-high electron mobility when the surface transport via Fermi arcs dominates [45; 46]. In fact, as shown in Fig. 2b, Fermi arcs can cover a large part of the surface Brillouin zone. This indicates a large number of Fermi arc channels, which could potentially explain the low-dissipation transport reported in LK-99 by some groups [5]. To fully address the issues raised above, detailed calculations of the orbital diamagnetic effects and mesoscopic transport in LK-99 must be performed. While this lies beyond the scope of the current work we note that a lattice model is a necessary starting point for the study of these effects. Our tight-binding model Eq. (2) captures the key features of the LK-99 band topology and could thus enable these calculations as well as serve as a basis for the study of strong correlation effects. On the other hand, while our proposed Weyl physics suggests alternative explanations for the reported experimental signatures of superconductivity, it is important to note that the nontrivial band topology established in this work does not rule out the possibility of high-\(T_{\rm c}\) superconductivity in this family of materials. On the contrary, Weyl physics opens even richer possibilities for the superconducting states as the interplay between correlations and topology often leads to exotic phases of matter. In particular, distinct from conventional superconductors with trivial normal-state band topology, the existence of protected surface states opens a new possibility of superconducting instability nucleated at the surface. In fact, recent studies have suggested the possibility that the \(T_{\rm c}\) of surface superconductivity in a Weyl semimetal can actually exceed that in the bulk [47; 48; 49; 50]. If the high \(T_{\rm c}\) superconductivity were eventually confirmed in LK-99, our results would be compatible with possible high-\(T_{\rm c}\) superconductivity driven by the Fermi arc states. Should the possible electron pairing occur alternatively in the bulk, the nontrivial band topology, particularly the Berry curvature generated by the Weyl points, also impose important topological constraints on the pairing symmetry which usually implies unconventional (non-\(s\)-wave) pairing [51; 52]. Even in the strong SOC limit with \(\lambda>\lambda_{\rm c}\) where the bulk topology becomes trivial, the strong SOC splitting has consequences for the superconducting states as the breaking of spin SU(2) symmetry in the non-centrosymmetric superconductor would necessarily imply mixing between spin-singlet and spin-triplet pairing [53; 54], which often leads to topological superconductivity [55; 56] and has potential applications for superconducting spintronics [57]. _Conclusions.-_ Much remains unknown about the LK-99 family of materials studied in recent experiments. Perhaps most importantly it is not yet clear whether Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O is the relevant stable crystal structure present in experimental samples. Indeed, a recent DFT study [58] suggested Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) to be a more likely candidate. Nevertheless, the latter compound shows very similar electronic structure near the Fermi level and our analysis and conclusions remain applicable with minor modifications. 
On the theory side some DFT results suggest the narrow bands to be spin-polarized [13; 58], making the material an unlikely candidate for a high-temperature superconductor. Estimates of the copper on-site repulsion parameter \(U\) further indicate that LK-99 could be in a very strongly correlated limit with \(U\gg w\), the bandwidth [14]. It is therefore unclear how electrons in narrow bands avoid forming a large-gap Mott insulator at integer filling suggested by the chemical formula. Clearly, more experimental work is needed to gain insight into these issues. On the other hand, our analysis of the SOC effects on band topology remain applicable to spin-polarized bands: since the atomic SOC in Eq. (3) is block-diagonal in spinor basis, the SOC effect in each set of spin-polarized bands remains the same as those shown in Fig. 3. We close by noting that results presented in this work are relevant whether or not the material studied in Refs. [1; 2] ends up being a superconductor, or even contains LK-99 as a major ingredient. They establish an interesting band topology in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O with two second-order Weyl points located very close to the Fermi level. By contrast most other Weyl semimetals known today exhibit a large number of Weyl points that tend to be buried among trivial bands far away from the Fermi level, making experimental observation of fundamental phenomena (e.g. Fermi arcs, chiral anomaly) difficult. In this sense Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O/(OH)\({}_{2}\) may furnish a convenient realization of the minimal model Weyl semimetal which could facilitate experimental studies simply not feasible with existing materials. _Note._ - During the preparation of this manuscript we became aware of recent preprints (Ref. [59; 60]) which reported minimal tight-binding models of LK-99 and Ref. [60] also mentioned the existence of Weyl points in the bulk energy band structure. _Acknowledgement._ - The authors are indebted to J. Berlinsky, D. A. Bonn, A. Damascelli, C. Felser, A. Hallas, P. Kim and V. Pathak for insightful discussions and correspondence. The work presented here was supported by NSERC, CIFAR and the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program.
2310.12529
$4d$ steady gradient Ricci solitons with nonnegative curvature away from a compact set
In this paper, we analyze the asymptotic behavior of a noncompact $\kappa$-noncollapsed steady gradient Ricci soliton $(M, g)$ with nonnegative curvature operator away from a compact set $K$ of $M$. In particular, we prove that any $4d$ noncompact $\kappa$-noncollapsed steady gradient Ricci soliton $(M^4, g)$ with nonnegative sectional curvature must be a Bryant Ricci soliton up to scaling if it admits a sequence of rescaled flows of $(M^4, g)$ which converges subsequently to a family of shrinking quotient cylinders.
Ziyi Zhao, Xiaohua Zhu
2023-10-19T07:05:45Z
http://arxiv.org/abs/2310.12529v3
# \(4d\) Steady gradient Ricci solitons with nonnegative curvature away from a compact set ###### Abstract. In this paper, we analyze the asymptotic behavior of a noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton \((M,g)\) with nonnegative curvature operator away from a compact set \(K\) of \(M\). In particular, we prove that any \(4d\) noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton \((M^{4},g)\) with nonnegative sectional curvature must be a Bryant Ricci soliton up to scaling if it admits a sequence of rescaled flows of \((M^{4},g)\) which converges subsequently to a family of shrinking quotient cylinders. Key words and phrases: Steady Ricci soliton, Ricci flow, ancient \(\kappa\)-solution, Bryant Ricci soliton 2000 Mathematics Subject Classification: Primary: 53E20; Secondary: 53C20, 53C25, 58J05 \(\ddagger\) partially supported by National Key R&D Program of China 2020YFA0712800 and NSFC 12271009.

**Definition 0.1**.: _A steady gradient Ricci soliton \((M,g)\) with defining function \(f\) is called asymptotically cylindrical if the following two conditions hold, where \(\rho(x)\) denotes the distance of \(x\) from a fixed base point of \(M\):_

_(i) The scalar curvature satisfies \(\frac{C_{1}}{\rho(x)}\leq R(x)\leq\frac{C_{2}}{\rho(x)}\) as \(\rho(x)>>1\), where \(C_{1}\), \(C_{2}\) are two positive constants._

_(ii) Let \(p_{i}\) be an arbitrary sequence of marked points going to infinity. Consider the rescaled metrics_ \[g_{p_{i}}(t)=r_{i}^{-1}\phi_{r_{i}t}^{*}(g), \tag{0.1}\] _where \(r_{i}R(p_{i})=1+o(1)\) as \(i\to\infty\) and \(\phi_{t}\) is a family of transformations generated by \(-\nabla f\). Then the flow \((M,g_{p_{i}}(t);p_{i})\) converges in the Cheeger-Gromov sense to a family of shrinking cylinders \((\mathbb{S}^{n-1}\times\mathbb{R},\ \bar{g}(t))\), \(t\in(0,1)\). The metric \(\bar{g}(t)\) is given by_ \[\bar{g}(t)=(1-t)g_{\mathbb{S}^{n-1}(1)}+ds^{2}, \tag{0.2}\] _where \(\mathbb{S}^{n-1}(1)\) is the unit sphere in the Euclidean space._

In [5], Brendle proved that any steady (gradient) Ricci soliton with positive sectional curvature must be isometric to the Bryant Ricci soliton up to scaling if it is asymptotically cylindrical. Later, Deng-Zhu found that Brendle's result still holds if one of the two conditions in Definition 0.1 is satisfied for \(\kappa\)-noncollapsed steady Ricci solitons with nonnegative curvature operator [16, 17]. In a very recent paper [29], the above results of Brendle and Deng-Zhu have been improved by replacing the global curvature condition with nonnegativity of the curvature operator away from a compact set of \(M\) together with a \(P\)-curvature pinching condition. Thus it is natural to ask: does Brendle's result remain true if condition (ii) is satisfied only for a single sequence? In this paper, we give a positive answer for \(4d\) \(\kappa\)-noncollapsed steady Ricci solitons with nonnegative sectional curvature. Let \((M^{n},g)\) be a noncompact \(\kappa\)-noncollapsed steady Ricci soliton with curvature operator \(\mathrm{Rm}\geq 0\) (sectional curvature \(\mathrm{Km}\geq 0\) for \(n=4\)) and \(\mathrm{Ric}>0\) away from a compact set \(K\) of \(M\). Let \(p_{i}\to\infty\) be any sequence in \(M\) and \(g_{p_{i}}(t)\) the rescaled flow of the Ricci soliton \(g\) as in Definition 0.1. Then \((M,g_{p_{i}}(t);p_{i})\) converges to a splitting flow in the Cheeger-Gromov sense, \[\bar{g}(t)=h(t)+ds^{2},\ \mathrm{on}\ N\times\mathbb{R}, \tag{0.3}\] where \(h(t)\) (\(t\in(-\infty,0]\)) is an ancient \(\kappa\)-solution on an \((n-1)\)-dimensional manifold \(N\); see Proposition 1.2. The following alternative principle is the main result in this paper.
**Theorem 0.2**.: _Let \((M^{4},g)\) be a \(4d\) noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton with \(\mathrm{Km}\geq 0\) and \(\mathrm{Ric}>0\) away from a compact set \(K\) of \(M\). Let \(p_{i}\to\infty\) be any sequence in \(M\) and \(\bar{g}(t)=h(t)+ds^{2}\) the splitting limit flow of \((M,g_{p_{i}}(t);p_{i})\) as in (0.3). Then either every \(h(t)\) is a family of \(3d\) shrinking quotient spheres, or every \(h(t)\) is a \(3d\) noncompact ancient \(\kappa\)-solution._

We note that both cases in Theorem 0.2 do occur, as the following examples show. For any \(2n\geq 4\) and each \(Z_{k}\)-group, Appleton [2] has constructed an example of a noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton with \(\mathrm{Rm}>0\) on \(M\setminus K\), where every split ancient \(\kappa\)-solution \(h(t)\) is a family of shrinking quotient spheres of dimension \((2n-1)\) by the \(Z_{k}\)-group. In each of Lai's examples [22] of noncompact \(\kappa\)-noncollapsed steady gradient Ricci solitons with \(\mathrm{Rm}>0\) on \(M\), all split ancient \(\kappa\)-solutions \(h(t)\) are noncompact. We also note that \(3d\) noncompact ancient \(\kappa\)-solutions have recently been classified by Brendle [6] and Bamler-Kleiner [3], independently. Namely, any such solution is isometric to either a family of shrinking quotient cylinders or a Bryant Ricci soliton flow. As an application of Theorem 0.2, we prove

**Corollary 0.3**.: _Let \((M^{4},g)\) be a \(4d\) noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton with nonnegative sectional curvature. Suppose that there exists a sequence of rescaled flows \((M,g_{p_{i}}(t);p_{i})\) of \((M,g)\) which converges subsequently to a family of shrinking quotient cylinders. Then \((M,g)\) is isometric to the \(4d\) Bryant Ricci soliton up to scaling. Moreover, if \((M,g)\) has nonnegative sectional curvature and positive Ricci curvature, the result is still true under the mere assumption of the existence of a \(3d\) compact split limit flow \((N,h(t))\) as in (0.3)._

Our proof of Theorem 0.2 depends on a deep classification result for \(3d\) compact \(\kappa\)-solutions proved by Brendle-Daskalopoulos-Sesum [7] (also see Theorem 3.2). We expect, however, that Theorem 0.2 and Corollary 0.3 are both true in any dimension. The paper is organized as follows. In Section 1, we prove a splitting result for any limit flow of a sequence of rescaled flows arising from a \(\kappa\)-noncollapsed steady gradient Ricci soliton \((M,g)\) with \(\mathrm{Rm}\geq 0\) on \(M\setminus K\); see Proposition 1.2. In Section 2, we first obtain a decay estimate for the curvature and then study the level set geometry of \((M,g)\) under the assumption that a compact split ancient \(\kappa\)-solution \((N,h(t))\) exists; see Lemma 2.2, Proposition 2.7, etc. All results in this section hold in any dimension. In Section 3, we focus on \(4d\) steady Ricci solitons to obtain a diameter estimate of \((N,h(0))\) for all \(3d\) split flows \((N,h(t))\); see Proposition 3.6. Theorem 0.2 and Corollary 0.3 are proved in Section 4.

## 1. A splitting theorem

A complete Riemannian metric \(g\) on \(M\) is called a gradient Ricci soliton if there exists a smooth function \(f\) (which is called a defining function) on \(M\) such that \[R_{ij}(g)+\rho g_{ij}=\nabla_{i}\nabla_{j}f, \tag{1.1}\] where \(\rho\in\mathbb{R}\) is a constant. The gradient Ricci soliton is called expanding, steady or shrinking according to whether \(\rho>0\), \(\rho=0\) or \(\rho<0\), respectively.
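To fix ideas, we recall the standard flat example (included here only for orientation; it plays no role in the arguments below). On Euclidean space \((\mathbb{R}^{n},g_{\mathrm{Euc}})\) the function \(f(x)=\frac{\rho}{2}|x|^{2}\) satisfies \[R_{ij}(g_{\mathrm{Euc}})+\rho\,\delta_{ij}=\rho\,\delta_{ij}=\nabla_{i}\nabla_{j}f,\] so (1.1) holds for every constant \(\rho\in\mathbb{R}\); this Gaussian soliton is thus simultaneously expanding, steady and shrinking. In the steady case \(\rho=0\), which is the case studied in this paper, (1.1) reads \(\mathrm{Ric}(g)=\nabla^{2}f\).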
These three types of Ricci solitons correspond to three different blow-up solutions of Ricci flow [20]. In case of steady Ricci solitons, we can rewrite (1.1) as \[2\operatorname{Ric}(g)=\mathscr{L}_{X}g, \tag{1.2}\] where \(\mathscr{L}_{X}\) is the Lie operator along the gradient vector field (VF) \(X=\nabla f\) generalized by \(f\). Let \(\{\phi_{t}^{*}\}_{t\in(-\infty,\infty)}\) be a \(1\)-ps of transformations generated by \(-X\). Then \(g(t)=\phi_{t}^{*}(g)\) (\(t\in(-\infty,\infty)\)) is a solution of Ricci flow. Namely, \(g(t)\) satisfies \[\frac{\partial g}{\partial t}=-2\operatorname{Ric}(g),\ g(0)=g. \tag{1.3}\] For simplicity, we call \(g(t)\) the soliton Ricci flow of \((M,g)\). By (1.2), we have \[\langle\nabla R,\nabla f\rangle=-2\operatorname{Ric}(\nabla f,\nabla f), \tag{1.4}\] where \(R\) is the scalar curvature of \(g\). It follows \[R+|\nabla f|^{2}=\operatorname{Const}.\] Since \(R\) is alway positive ([28, 11]), the above equation can be normalized by \[R+|\nabla f|^{2}=1. \tag{1.5}\] We recall that an ancient \(\kappa\)-solution is a \(\kappa\)-noncollapsed solution of Ricci flow (1.3) with \(\operatorname{R_{m}}(\cdot,t)\geq 0\) defined for any \(t\in(-\infty,T_{0}]\). The following result is a version of Perelman's compactness theorem for higher dimensional ancient \(\kappa\)-solutions. **Theorem 1.1**.: _Let \((M,g_{i}(t);p_{i})\) be any sequence of \(n\)-dimensional ancient \(\kappa\)-solutions on a noncompact manifold \(M\) with \(R\left(p_{i},0\right)=1\). Then \((M,g_{i}(t);\)\(p_{i})\) subsequently converge to a splitting flow \((N\times\mathbb{R},\bar{g}(t);p_{\infty})\) in Cheeger-Gromov sense. Here_ \[\bar{g}(t)=h(t)+ds^{2}, \tag{1.6}\] _and \((N,h(t))\) is an \((n-1)\)-dimensional ancient \(\kappa\)-solution._ The convergence of \((M,g_{i}(t);p_{i})\) comes from [15, Theorem 3.3]. The splitting property in (1.6) can be also obtained by Hamilton's argument [20, Lemma 22.2] with help of Perelman's asymptotic volume ratio estimate for \(\kappa\)-solutions [21, Proposition 41.13]. In fact, for a sequence of rescaling Ricci flows arising from a steady Ricci soliton, we can improve Theorem 1.1 under a weaker curvature condition as follows. **Proposition 1.2**.: _Let \((M^{n},g)\) be a noncompact \(\kappa\)-noncollapsed steady gradient Ricci soliton with \(\mathrm{Rm}\geq 0\) away from \(K\). Let \(p_{i}\to\infty\) and \((M,g_{p_{i}}(t);p_{i})\) a sequence of rescaling flows with \(R_{p_{i}}\left(p_{i},0\right)=1\) as in (0.1). Then \((M,g_{p_{i}}(t);\)\(p_{i})\) subsequently converge to a splitting flow \((N\times\mathbb{R},\bar{g}(t);p_{\infty})\) as in Theorem 1.1. Moreover, for \(n=4\), \(\mathrm{Rm}\geq 0\) can be weakened to \(\mathrm{Km}\geq 0\) away from \(K\)._ Proof.: Since \(\mathrm{Km}\geq 0\) on \(M\setminus K\), we have the Harnack estimate by (1.4), \[\frac{d}{dt}R(x,t)\geq 0,\text{ on }M\setminus K. \tag{1.7}\] Then according to the proof of Theorem 1.1 (to see Lemma 3.5-3.7 for details there), for the convergence part in the proposition, we need only to show that the following asymptotic scalar curvature estimate, \[\mathrm{limsup}_{x\to\infty}R(x)d^{2}(o,x)=\infty, \tag{1.8}\] where \(o\in M\) is a fixed point. As a consequence, the rescaled flow \((M,g_{p_{i}}(t);p_{i})\) has locally uniformly curvature estimate, and so \((M,g_{p_{i}}(t);p_{i})\) subsequently converges to a limit ancient \(\kappa\)-solution \((M_{\infty},\bar{g}(t);p_{\infty})\). 
We note that (1.8) is true for any ancient \(\kappa\)-solution by the Perelman's result of asymptotic zero volume ratio [26, 21] (cf. [15, Corollary 2.4]). In our case, we have only \(\mathrm{Rm}\geq 0\) away from \(K\). We will use a different argument to prove (1.8) below. On contrary, we suppose that (1.8) is not true. Then there exists a constant \(C>0\), such that \[R(x)\leq\frac{C}{d^{2}(o,x)}=o(\frac{1}{d(o,x)}). \tag{1.9}\] In particular, the scalar curvature decays to zero uniformly. Due to a result in [12, Theorem 2.1], we know that there are two constants \(c_{1},c_{2}>0\) such that \[c_{1}\rho(x)\leq f(x)\leq c_{2}\rho(x). \tag{1.10}\] Thus by [16, Theorem 6.1] with the help of (1.9) and (1.10), we get \[R(x)\geq\frac{C_{0}}{d(o,x)},\] for some constant \(C_{0}\). But this is a contradiction with (1.9). Hence (1.8) is true. In the following, our goal is to show that \(\bar{g}(t)\) is of form (1.6). First we prove the volume ratio estimate, \[\mathrm{AVR(g)}=\lim_{\mathrm{r}\to\infty}\frac{\mathrm{Vol(B(p,r))}}{\mathrm{ r}^{n}}=0. \tag{1.11}\] By (1.8), we can use the Hamilton's argument in [20, Lemma 22.2] to find sequences of points \(q_{i}\to\infty\) and number \(s_{i}>0\) such that \(\frac{s_{i}}{d(q_{i},o)}\to 0\), \[R(q_{i})s_{i}^{2}\to\infty, \tag{1.12}\] and \[R(x)\leq 2R(q_{i}),\ \forall\ x\in B(q_{i},s_{i}). \tag{1.13}\] Consider a sequence of the rescaled flows \((M,g_{q_{i}}(t);q_{i})\), \(t\in(-s_{i},0]\), such that \(R_{q_{i}}\left(q_{i},0\right)=1\), where \(R_{q_{i}}(\cdot,t)\) is the scalar curvature of \(g_{q_{i}}(t)\). Then by (1.7), \(R_{q_{i}}(x,t)\leq 2\) whenever \(t\in(-s_{i},0]\) and \(d_{q_{i}}(q_{i},x)\leq R(q_{i})^{\frac{1}{2}}s_{i}\), where \(d_{q_{i}}(q_{i},\cdot)\) is the distance function from \(q_{i}\) w.r.t \(g_{q_{i}}(t)\). It follows that \((M,g_{q_{i}}(t);q_{i})\) with \(t\in(-s_{i},0]\) converges subsequently to a limit ancient \(\kappa\)-solution \((M_{\infty},g_{\infty}(t);q_{\infty})\). Moreover, by (1.12) and the curvature condition \(\mathrm{Km}\geq 0\) on \(M\setminus K\), one can construct a geodesic line on \((M_{\infty},g_{\infty}(0);q_{\infty})\) (cf. [23, Theorem 5.35]). Thus, by Cheeger-Gromoll splitting theorem, \((M_{\infty},g_{\infty}(t);q_{\infty})\) is in fact a splitting ancient flow \((N^{\prime}\times\mathbb{R},h^{\prime}(t)+ds^{2};q_{\infty})\), where \((N^{\prime},h^{\prime}(t);q_{\infty})\) is an \((n-1)\)-dimensional \(\kappa\)-noncollapsed ancient solution. Clearly, \((N^{\prime},h^{\prime}(0);q_{\infty})\) can not be flat since \(R_{\infty}(q_{\infty},0)=1\), and so \((M_{\infty},g_{\infty}(t);q_{\infty})\) is a non-flat ancient solution. Hence, by [21, Proposition 41.13], the asymptotic volume ratio of \((M_{\infty},g_{\infty}(t);q_{\infty})\) must be zero. This will imply (1.11) by the volume monotone since the (1.11) is invariant under the rescaling. Next we let \[r(p_{i})=\sup\{\rho|\ \mathrm{Vol}(B(p_{i},\rho))\geq\frac{\omega}{2}\rho^{n}\}. \tag{1.14}\] We prove \[C_{0}^{-1}r(p_{i})\leq R^{-\frac{1}{2}}(p_{i})\leq C_{0}r(p_{i}). \tag{1.15}\] In fact, for the first inequality in (1.15), by the volume comparison, there is \(C_{1}(D)>0\) for any \(D>0\) such that \[\mathrm{Vol}(B(x,r(p_{i})))\geq C_{1}^{-1}r(p_{i})^{n},\ \forall\ x\in B(p_{i}, Dr(p_{i})).\] Then by [15, Lemma 3.5], there is \(C_{0}(D)>0\) such that \[R\leq C_{0}^{2}r(p_{i})^{-2},\forall\ x\in B(p_{i},\frac{D}{2}r(p_{i})). \tag{1.16}\] Thus we only need to prove the second inequality. We use the above argument in the proof of (1.11). 
Suppose, on the contrary, that there is a sequence \(p_{i}\to\infty\) (still denoted by \(\{p_{i}\}\)) such that \[\lim_{i\to\infty}\frac{R^{-1/2}(p_{i})}{r(p_{i})}=\infty. \tag{1.17}\] On the other hand, by (1.16) and (1.7), we have \[R(x,t)\leq C_{0}r(p_{i})^{-2},\ \forall\ x\in B(p_{i},\frac{D}{2}r(p_{i})),t\in(-\frac{D}{2},0].\] Then the rescaled flow \(\big{(}M,r(p_{i})^{-2}g(r(p_{i})^{2}t);p_{i}\big{)}\) converges subsequently to a limit ancient solution \((M^{\prime}_{\infty},g^{\prime}_{\infty}(t);p^{\prime}_{\infty})\). Note that \(r(p_{i})<\infty\) for each \(p_{i}\) by (1.11). Moreover, by the volume comparison, it follows that \[\lim_{i\to\infty}\frac{r(p_{i})}{d(p_{i},o)}=0. \tag{1.18}\] Hence, by (1.18) and the curvature condition \(\mathrm{Km}\geq 0\) on \(M\setminus K\), one can construct a geodesic line on \((M^{\prime}_{\infty},g^{\prime}_{\infty}(0);p^{\prime}_{\infty})\) (cf. [23, Theorem 5.35]), and so \((M^{\prime}_{\infty},g^{\prime}_{\infty}(t);p^{\prime}_{\infty})\) is a splitting ancient flow \((\hat{N}\times\mathbb{R},\hat{h}(t)+ds^{2};p_{\infty})\), where \(\hat{h}(t)\) is an \((n-1)\)-dimensional ancient \(\kappa\)-solution. As a consequence, by (1.17), we have \[R_{\infty}(p^{\prime}_{\infty},0)=0. \tag{1.19}\] By the strong maximum principle and (1.19), \((\hat{N},\hat{h}(0))\) is flat, and so is \((M^{\prime}_{\infty},g^{\prime}_{\infty}(0))\). Then by the injectivity radius estimate (cf. [15, Lemma 3.6]), one can show that \((M^{\prime}_{\infty},g^{\prime}_{\infty}(0))\) must be isometric to the Euclidean space. In particular, \(\mathrm{Vol}(\mathrm{B}_{g^{\prime}_{\infty}(0)}(p^{\prime}_{\infty},1))=\omega\). But this is impossible by (1.14). Hence we finish the proof of (1.15). Finally, by (1.18) and (1.15), we have \[\lim_{i\to\infty}\frac{R(p_{i})^{-1}}{d^{2}(p_{i},o)}=0. \tag{1.20}\] Then, replacing the rescaled flow \(\big{(}M,r(p_{i})^{-2}g(r(p_{i})^{2}t);p_{i}\big{)}\) by \((M,g_{p_{i}}(t);p_{i})\) in the above argument, the limit ancient solution \((M_{\infty},\bar{g}(t);p_{\infty})\) splits off a line just as \((M^{\prime}_{\infty},g^{\prime}_{\infty}(0);p^{\prime}_{\infty})\) does. Thus \(\bar{g}(t)\) is of the form (1.6). In the case \(n=4\), we note that both of the split \(3d\) \(\kappa\)-noncollapsed ancient flows \(h^{\prime}(t)\) and \(\hat{h}(t)\) in the above arguments are non-negatively curved under \(\mathrm{K}_{\mathrm{m}}\geq 0\) away from \(K\). Thus both \(h^{\prime}(t)\) and \(\hat{h}(t)\) are again ancient \(\kappa\)-solutions. Hence the proofs above work for \(4d\) steady Ricci solitons when the assumption \(\mathrm{R}_{\mathrm{m}}\geq 0\) is replaced by \(\mathrm{K}_{\mathrm{m}}\geq 0\) away from \(K\). According to the proof of Proposition 1.2, we also get the following curvature comparison.

**Lemma 1.3**.: _Let \((M^{n},g)\) be a noncompact \(\kappa\)-noncollapsed steady Ricci soliton as in Proposition 1.2. Let \(\{p_{i}\}\to\infty\) be any sequence in \((M^{n},g)\). Then for any \(q_{i}\in B_{g_{p_{i}}}(p_{i},D)\), there is a \(C_{0}(D)>0\) such that_ \[C_{0}^{-1}R(p_{i})\leq R(q_{i})\leq C_{0}R(p_{i}). \tag{1.21}\]

Proof.: We note that the rescaled flow \((M,g_{p_{i}}(t);p_{i})\) converges to a splitting ancient solution \((M_{\infty},\bar{g}(t)=h(t)+ds^{2};p_{\infty})\). Then by (1.15) and (1.16) in the proof of Proposition 1.2, we get the second inequality of (1.21) immediately. Thus we only need to prove the first inequality.
By contradiction, there exists a sequence of points \(q_{i}\in B_{g_{p_{i}}}(p_{i},D)\) for some \(D>0\) such that \[\frac{R(q_{i})}{R(p_{i})}\to 0,\text{ as }i\to\infty. \tag{1.22}\] Then \[R_{\bar{g}(0)}(q_{\infty})=0,\] where \(q_{\infty}\) is a limit of \(\{q_{i}\}\) from the convergence of \((M,g_{p_{i}}(t);p_{i})\). By the strong maximum principle, it follows that \(\bar{g}(0)\) is a flat metric, which contradicts to \(R_{\bar{g}(0)}(p_{\infty})=1\). Thus (1.21) is proved. ## 2. Compact case of \((N,h(t))\) In this section, we assume that there exists a sequence of \(p_{i}\to\infty\) on an \(n\)-dimensional steady Ricci soliton \((M^{n},g)\) such that the corresponding split ancient \(\kappa\)-solution \(h(t)\) of \((n-1)\)-dimension in Proposition 1.2 satisfies \[\operatorname{Diam}(\operatorname{h}(0))\leq\operatorname{C}. \tag{2.1}\] We will use the method in [16, 17] to study level set geometry of \((M^{n},g)\) under the condition (2.1). All results in this section holds for any dimension. Firstly we show that \((M,g)\) has a convexity property in sense of geodesics. **Lemma 2.1**.: _Suppose that there exists a sequence of \(p_{i}\to\infty\) such that the split \((n-1)\)-dimensional ancient \(\kappa\)-solution \((N,h(t))\) in Proposition 1.2 satisfies (2.1). Then there exists a compact set \(K^{\prime}\)\((K\subset K^{\prime})\) such that for \(x_{1},x_{2}\in M\setminus K^{\prime}\) the minimal geodesic curve connecting \(x_{1}\) and \(x_{2}\), \(\sigma(s)\subset M\setminus K\), where \(K\) is the compact set in Proposition 1.2._ Proof.: By the convergence of \((M,g_{p_{i}}(t);p_{i})\) together with (2.1), it is easy to see that one can choose a point \(p\in\{p_{i}\}\) such that \(B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})\) divides \(M\) into three parts with a compact part \(\Sigma_{p}\) which contains \(K\) as follows, \[M=B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})\cup\Sigma_{p}\cup M^{\prime}, \tag{2.2}\] where \(B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})\cap K=\emptyset\) and \(M^{\prime}=M\setminus(B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}}\cup\Sigma_{p})\) is a noncompact set of \(M\). Set \[K^{\prime}=\Sigma_{p}\cup B_{g}(p,10CR(p)^{-\frac{1}{2}}).\] We need to verify \(K^{\prime}\) chosen as required in the lemma. On contrary, there will exist two points \(x_{1},x_{2}\in M\setminus K^{\prime}\) and another point \(x\in\sigma(s)\cap K\), where \(\sigma(s)\) is the minimal geodesic curve connecting \(x_{1}\) and \(x_{2}\). Then \(\sigma(s)\) will pass through \(B_{g}(p,10CR(p)^{-\frac{1}{2}})\) at least twice. Denote \(q_{1}\) to be the first point and \(q_{2}\) to be the last point in \(B_{g}(p,10CR(p)^{-\frac{1}{2}})\) respectively, which intersects with \(\sigma(s)\). Let \(\sigma^{\prime}\) be the part of \(\sigma(s)\) between \(q_{1}\) and \(q_{2}\). 
Thus by the triangle inequality, we have \[d_{g}(q_{1},q_{2}) =\operatorname{Length}(\sigma^{\prime})=\operatorname{d_{g}(q_{1 },x)}+\operatorname{d_{g}(x,q_{2})}\] \[\geq d_{g}(q_{1},o)+d_{g}(o,q_{2})-2d_{g}(o,x)\] \[\geq 2d_{g}(o,p)-d_{g}(q_{1},p)-d_{g}(q_{2},p)-2C^{\prime} \tag{2.3}\] \[\geq 2d_{g}(o,p)-20R(p)^{-\frac{1}{2}}-2C^{\prime}.\] On the other hand, by the estimate (1.20), we see that for any small \(\delta\) it holds \[R(p_{i})^{-\frac{1}{2}}\leq\delta d_{g}(p_{i},o), \tag{2.4}\] as long as \(i>>1.\) By (2.3), it follows \[d_{g}(q_{1},q_{2})\geq 2d_{g}(o,p)-20\delta d_{g}(o,p)-2C^{\prime}\geq d_{g}( o,p).\] However, \[d_{g}(q_{1},q_{2}) \leq d_{g}(q_{1},p)+d_{g}(p,q_{2})\leq 20R(p)^{-\frac{1}{2}}\] \[\leq 20\delta d_{g}(o,p)\leq\frac{1}{2}d_{g}(o,p).\] Thus we get a contradiction! The lemma is proved. ### Curvature decay estimate By Lemma 1.3 and Lemma 2.1, we prove **Lemma 2.2**.: _Let \((M^{n},g)\) be the steady Ricci soliton in Proposition 1.2 with \(\operatorname{Ric}>0\) away from \(K\). Suppose that there exists a sequence of \(p_{i}\to\infty\) such that the split \((n-1)\)-dimensional ancient \(\kappa\)-solution \((N,h(t))\) in Proposition 1.2 satisfies (2.1). Then the curvature of \((M^{n},g)\) decays to zero uniformly. Namely,_ \[\lim_{x\to\infty}R(x)=0. \tag{2.5}\] Proof.: First we prove \[\lim_{p_{i}\to\infty}R(p_{i})=0. \tag{2.6}\] On contrary, we assume that \(R(p_{i})\geq c\) for some constant \(c>0\). We consider a sequence of functions \(f_{pi}=f-f(p_{i})\) on Riemannian manifolds \((M,g_{p_{i}}(0);p_{i})\). By (1.5), it is easy to see \[|\nabla f_{p_{i}}|_{g_{p_{i}}}\leq c^{-\frac{1}{2}}.\] Thus for any \(D>0\) it holds \[|f_{p_{i}}(x)|\leq 2c^{-\frac{1}{2}}D,\ \forall x\in B_{g_{p_{i}}(p_{i},D)}.\] By the regularity of Laplace equation, \[\Delta_{g_{p_{i}}}f_{p_{i}}=R(g_{p_{i}(0)}),\] \(f_{i}\) converges subsequently to a smooth function \(f_{\infty}\) on \(N\times\mathbb{R}\) which satisfies the gradient steady Ricci soliton equation, \[\mathrm{Ric}(\bar{\mathrm{g}}(0))=\nabla^{2}\mathrm{f}_{\infty}.\] Note that \(\bar{g}(0)=h(0)+ds^{2}\) is a product metric. Hence \((N,h(0))\) is also a steady gradient Ricci soliton, On the other hand, by the maximum principle, \((N,h(0);p_{\infty})\) should be Ricci-flat. However, by the normalization of \[R(g_{p_{i}}(0))(p_{i})=1,\] \(R(h(0))(p_{\infty})\) is also \(1\). This is a contradiction! (2.6) is proved. By (2.6) and Lemma 1.3, we get \[\lim_{i\to\infty}\sup_{B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})}R(x)=0. \tag{2.7}\] Next we use (2.7) to derive (2.5). Recall that the set of equilibrium points of \((M,g,f)\) is given by \[S:=\{x\big{|}|\ \nabla f|(x)=0\}.\] In general, \(S\) may be not empty. But we have **Claim1**: There is no any equilibrium point away from a compact set \(\hat{K}\) of \(M\) which containing \(K^{\prime}\). Here \(K^{\prime}\) is the set of \(M\) determined in Lemma 2.1. If \(S\) is not empty and **Claim1** is not true, there will be two equilibrium points \(x_{1}\) and \(x_{2}\) and a compact set \(\hat{K}\) which containing \(K^{\prime}\) such that \(x_{1},x_{2}\in M\setminus\hat{K}\). Then by Lemma 2.1, there is a minimal geodesic curve \(\sigma(s)\) connecting \(x_{1}\) and \(x_{2}\) such that \(\sigma(0)=x_{1}\) and \(\sigma(T)=x_{2}\) and \(\sigma(s)\subset M\setminus K\). 
Note \[\frac{d}{ds}(\langle\nabla f,\sigma^{\prime}\rangle)(\sigma(s)))=\nabla^{2}f( \sigma^{\prime},\sigma^{\prime})(s)=\mathrm{Ric}(\sigma^{\prime},\sigma^{ \prime}).\] Thus we get \[0 =\langle\nabla f,\sigma^{\prime}\rangle)(\sigma(t))-\langle \nabla f,\sigma^{\prime}\rangle)(\sigma(0))\] \[=\int_{0}^{T}\mathrm{Ric}(\sigma^{\prime},\sigma^{\prime}) \mathrm{ds}>0,\] which is a contradiction! Hence, **Claim1** is true. By (1.20), we can choose a subsequence of \(\{p_{i}\}\), still denoted by \(\{p_{i}\}\) such that \[B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})\cap B_{g}(p_{j},10CR(p_{i+1})^{-\frac{1}{ 2}})=\emptyset,\ \forall\ i,j>>1. \tag{2.8}\] Then as in (2.2), there are a compact set \(\bar{K}\) and a sequence of compact set \(\{K_{i}\}\)\((i\geq i_{0})\) of \(M\) such that \(\check{K}\subset\bar{K}\) and \[\partial K_{i}\subset\partial B_{g}(p_{i},10CR(p_{i})^{-\frac{1}{2}})\cup \partial B_{g}(p_{i+1},10CR(p_{i+1})^{-\frac{1}{2}}),\] and \(M\) is decomposed as \[M=\bar{K}\cup_{i\geq i_{0}}(K_{i}\cup(B_{g}(p_{i+1},10CR(p_{i+1})^{-\frac{1}{2} })), \tag{2.9}\] **Claim2**: For any \(q_{i}\in K_{i}\), there exists \(t_{i}>0\) such that \[q_{i}^{t_{i}}=\phi_{t_{i}}(q_{i})\in B_{g}(p_{i},10R(p_{i})^{-\frac{1}{2}}C) \cup B_{g}(p_{i+1},10R(p_{i+1})^{-\frac{1}{2}}C). \tag{2.10}\] On contrary, we see that \(\phi_{t}(q_{i})\subset K_{i}\) for all \(t\geq 0\). Since \(K_{i}\) is compact, there exists a \(c^{\prime}>0\) by **Claim1** such that \[\mathrm{Ric}\geq\mathrm{c^{\prime}g},\mathrm{c^{\prime}}^{-1}\leq|\nabla \mathrm{f}|\leq\mathrm{c^{\prime}}.\] It follows \[\frac{d}{dt}R(\phi_{t}(q_{i})) =-\langle\nabla R,\nabla f\rangle(\phi_{t}(q_{i}))=2\mathrm{Ric}( \nabla\mathrm{f},\nabla\mathrm{f})\] \[\geq 2\mathrm{c^{\prime}}^{-1}>0,\ \forall\ t\geq 0. \tag{2.11}\] As a consequence, \[R(\phi_{t}(q_{i}))\geq 2\mathrm{c^{\prime}}^{-1}t\to\infty,\ \text{as}\ t\to\infty.\] This is impossible since \(R(\cdot)\) is uniformly bounded. Hence, **Claim2** is true. By **Claim2** and (2.11), for any \(q_{i}\in K_{i}\) we see \[R(q_{i})\leq R(q_{i}^{t_{i}})\] \[\leq\max\{R(x)|\ x\in B_{g}(p_{i},10R(p_{i})^{-\frac{1}{2}}C)\cup B _{g}(p_{i+1},10R(p_{i+1})^{-\frac{1}{2}}C)\}.\] Thus we get (2.5) from (2.7) and (2.9) immediately. **Remark 2.3**.: _The steady Ricci soliton in Lemma 2.2 has a uniform curvature decay to zero. Then \(|\nabla f(x)|\to 1\) as \(\rho(x)\to\infty\) by (1.5). Moreover, by [17, Lemma 2.2] (or [12, Theorem 2.1]), \(f\) satisfies (1.10). Hence, the integral curve \(\gamma(s)\) generated by \(\nabla f\) extends to the infinity as \(s\to\infty\)._ ### Estimate of level sets By Lemma 2.2 and (1.5), there exists a point \(p_{0}\in M\) such that \[R_{max}=\sup_{p\in M}R(p)=R(p_{0})=1.\] For any positive \(c<1\), we set \[S(c)=\{p\in M|\ R(p)\geq R_{max}-c\}.\] Then \(S(c)\) is a compact set. Moreover, by Remark 2.3 there exists a \(c_{0}\) such that \(\hat{K}\subset S(c_{0})\) and \(\nabla f\neq 0\) on \(S(c_{0})\setminus\hat{K}\). Thus VF \(\hat{X}=\frac{\nabla f}{|\nabla f|}\) is well-defined on \(S(c)\setminus\hat{K}\) for any \(c\geq c_{0}\). By [12, Lemma 2.2, 2.3], it is known that there exists a \(t_{q}\) such that \(\phi_{t_{q}}(q)\in S(c_{0})\) for any \(q\in M\setminus S(c_{0})\). Consequently, for any integral curve of \(\hat{X}=\frac{\nabla f}{|\nabla f|}\), \(\Gamma(s):[0,\infty)\to M\), we can reparametrize \(s\) such that \(\Gamma(0)=p\in S(c_{0})\setminus\hat{K}\), and so \(\Gamma(s)\subset M\setminus\hat{K}\) is a smooth curve for any \(s>0\). 
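Before stating the next lemma, we record an elementary observation that motivates it (it is implicitly contained in the proof below): along any such integral curve \(\Gamma(s)\) of \(\hat{X}\), \[\frac{d}{ds}f(\Gamma(s))=\langle\nabla f,\hat{X}\rangle(\Gamma(s))=|\nabla f|(\Gamma(s)),\] and \(|\nabla f|(\Gamma(s))\to 1\) as \(s\to\infty\) by Remark 2.3, since \(\Gamma(s)\) tends to infinity. Thus, for large \(s\), \(\Gamma(s)\) behaves like a ray along which \(f\) increases at an asymptotically unit rate; Lemma 2.4 makes this precise in terms of distances.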
**Lemma 2.4**.: _Let \((M^{n},g)\) be an \(n\)-dimensional steady soliton as in Lemma 2.2 and \(\Gamma(s)\) any integral curve of \(\hat{X}\) with \(\Gamma(0)=p\in S(c_{0})\setminus\hat{K}\). Then for any \(\epsilon\), there exists a uniform constant \(C=C(\epsilon)>0\) such that_ \[(1-\epsilon)(s_{2}-s_{1})\leq d(\Gamma(s_{2}),\Gamma(s_{1}))\leq(s_{2}-s_{1}), \ \forall s_{2}>s_{1}>C. \tag{2.12}\] _In particular,_ \[(1-\epsilon)s\leq d(\Gamma(s),p)\leq s,\ \forall\ s>C. \tag{2.13}\] Proof.: Firstly by Remark 2.3, we note that for any \(\epsilon>0\) there exists a compact set \(S^{\prime}\) such that \[|\nabla f|(x)>1-\epsilon,\ \forall x\in M\setminus S^{\prime}. \tag{2.14}\] Moreover, (2.14) holds whenever \(f(x)>L\). Since \(\Gamma(s)\subset M\setminus\hat{K}\), \(|\nabla f|(\Gamma(s))\geq c_{0}>0\) by (1.4) for all \(s\geq 0\). It follows \[f(\Gamma(s))-f(\Gamma(0))=\int_{0}^{s}\frac{d}{dt}f(\Gamma(t))dt=\int_{0}^{s} |\nabla f|(\Gamma(t))dt\geq cs.\] Thus there exists a uniform constant \(C=\frac{L}{c}+1\) such that (2.14) holds as long as \(s>C\). Let \(\gamma:[0,D]\to M\) be a minimal geodesic from \(\Gamma(s_{1})\) to \(\Gamma(s_{2})\), where \(D=d(\Gamma(s_{1}),\Gamma(s_{2}))\). Then by \[\frac{d}{dr}\langle\nabla f,\gamma^{\prime}(r)\rangle=\nabla^{2}f(\gamma^{ \prime}(r),\gamma^{\prime}(r))\geq 0,\] we obtain \[f(\Gamma(s_{2}))-f(\Gamma(s_{1}))=\int_{0}^{D}\langle\nabla f,\gamma^{\prime} (r)\rangle dr\leq D\langle\nabla f,\gamma^{\prime}(D)\rangle.\] This implies \[f(\Gamma(s_{2}))-f(\Gamma(s_{1}))\leq d(\Gamma(s_{1}),\Gamma(s_{2})). \tag{2.15}\] On the other hand, by (2.14), we have \[f(\Gamma(s_{2}))-f(\Gamma(s_{1})) =\int_{s_{1}}^{s_{2}}\langle\nabla f,\Gamma^{\prime}(r)\rangle dr \tag{2.16}\] \[=\int_{s_{1}}^{s_{2}}|\nabla f|(\Gamma(r))dr\geq(1-\epsilon)(s_{2 }-s_{1}).\] Thus the first inequality in (2.12) follows from (2.15) and (2.16) immediately. Note that \[d(\Gamma(s_{1}),\Gamma(s_{2}))\leq\operatorname{Length}(\Gamma(\mathrm{s}))|_{ \mathrm{s}_{1}}^{\mathrm{s}_{2}}=\mathrm{s}_{2}-\mathrm{s}_{1}. \tag{2.17}\] Hence, the second inequality in (2.12) also holds. (2.13) is a direct consequence of (2.12) by the triangle inequality. As in Lemma 2.4, we let \(\Gamma_{i}(s)\) be an integral curve of \(\hat{X}\) through \(p_{i}\) with \(\Gamma_{i}(0)\in S(c_{0})\) and \(\Gamma_{i}(s_{i})=p_{i}\). For any \(D>0\), we set \[\hat{\Gamma}_{i}(s)=\Gamma_{i}(R(p_{i})^{-\frac{1}{2}}s+s_{i}),\ s\in[-D,D].\] Then it is easy to see \[|\frac{d\hat{\Gamma}_{i}(s)}{ds}|_{g_{p_{i}}(0)}=1,\ s\in[-D,D].\] Thus \(\hat{\Gamma}_{i}(s)\) is an integral curve of \(\hat{X}_{i}=\frac{\nabla_{i}f}{|\nabla_{i}f|}\) through \(p_{i}\), where \(\nabla_{i}\) is the gradient operator w.r.t. the metric \((M,g_{p_{i}}(t);p_{i})\). With help of Lemma 2.4, we prove that the splitting line obtained by Proposition 1.2 is actually a limit of a family of integral curves of \(\hat{X}_{i}\) under the condition in Lemma 2.2. **Lemma 2.5**.: _Let \((M^{n},g)\) be the steady soliton in Lemma 2.2 and \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\) the splitting limit flow of \((M,g_{p_{i}}(t);p_{i})\). Then \(\hat{\Gamma}_{i}(s)\) converges locally to a geodesic line on \(N\times\mathbb{R}\) w.r.t. the metric \((M,g_{p_{i}}(t);p_{i})\)._ Proof.: Since \(X_{i}=\nabla_{i}f\) is convergent w.r,t. the metrics \((M,g_{p_{i}}(t);p_{i})\) (cf. [16, Lemma 4.6]), \(\hat{X}_{i}\) also converges subsequently to a VF \(\hat{X}_{\infty}\) on \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\). 
Thus \(\hat{\Gamma}_{i}(s)\) converges to an integral curve \(\hat{\Gamma}_{\infty}(s)\) of \(\hat{X}_{\infty}\) on \(N\times\mathbb{R}\), where \(s\in(-\infty,\infty)\). It remains to show that \(\hat{\Gamma}_{\infty}(s)\) is a line. Since \(p_{i}\to\infty\), we have \(s_{i}\to\infty\). Then by (1.11) and (1.15), for any number \(D>0\), it holds \[s_{i}-\operatorname{D}R(p_{i})^{-\frac{1}{2}}\to\infty.\] By applying (2.12) to each \(\hat{\Gamma}_{i}(s^{\prime})\), we get \[(1-\epsilon)DR(p_{i})^{-\frac{1}{2}}\leq d(\hat{\Gamma}_{i}(-D),\hat{\Gamma}_{i} (0))\leq DR(p_{i})^{-\frac{1}{2}}\] and \[(1-\epsilon)DR(p_{i})^{-\frac{1}{2}}\leq d(\hat{\Gamma}_{i}(D),\hat{\Gamma}_{i} (0))\leq DR(p_{i})^{-\frac{1}{2}}.\] It follows \[2(1-\epsilon)DR(p_{i})^{-\frac{1}{2}}\leq d(\hat{\Gamma}_{i}(-D),\hat{\Gamma}_ {i}(D))\leq 2DR(p_{i})^{-\frac{1}{2}},\] and consequently, \[2(1-\epsilon)D\leq d_{g_{p_{i}}}(\hat{\Gamma}_{i}(-D),\hat{\Gamma}_{i}(D))\leq 2D.\] Thus by taking the limit of \(\hat{\Gamma}_{i}(s)\) as well as \(\epsilon\to 0\), we obtain \[d_{g_{\infty}}\left(\hat{\Gamma}_{\infty}(-D),\hat{\Gamma}_{\infty}(D)\right) =2D. \tag{2.18}\] Note that \(2D\) is the number of length of \(\hat{\Gamma}_{\infty}(s)\) between \(\hat{\Gamma}_{\infty}(-D)\) and \(\hat{\Gamma}_{\infty}(D)\). Hence, \(\hat{\Gamma}_{\infty}(s)\) must be a minimal geodesic connecting \(\hat{\Gamma}_{\infty}(-D)\) and \(\hat{\Gamma}_{\infty}(D)\). Since \(D\) is arbitrary, \(\hat{\Gamma}_{\infty}(s)\) can be extended to a geodesic line. Now we begin to prove main results in this subsection. **Lemma 2.6**.: _Let \((M^{n},g)\) be the steady soliton in Lemma 2.2 and \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\) the splitting limit flow of \((M,g_{p_{i}}(t);p_{i})\), which satisfies (2.1). Then \(f^{-1}(f(p_{i}))\subseteq B_{g_{p_{i}}}(p_{i},200C)\) when \(i>>1\)._ Proof.: On contrary, there will exist a \(q^{\prime}_{i}\in\partial B_{g_{p_{i}}}(p_{i},100C)\cap f^{-1}(f(p_{i}))\) and a minimal geodesic \(\bar{\gamma}_{i}\subset f^{-1}(f(p_{i}))\) connecting \(p_{i}\) and \(q^{\prime}_{i}\) w.r.t. the induced metric \(\bar{g}_{p_{i}}\) on \(f^{-1}(f(p_{i}))\) such that \[\bar{\gamma}_{i}\subset B_{g_{p_{i}}}(p_{i},100C).\] Then \[\operatorname{Length}_{\bar{g}_{p_{i}}}(\bar{\gamma}_{i})\geq\mathrm{d}_{g_{ p_{i}}}(\mathrm{p_{i}},\mathrm{q^{\prime}_{i}})=100\mathrm{C}. \tag{2.19}\] On the other hand, according to the proofs in [16, Lemma 4.3-Proposition 4.5], the part \(\Sigma_{i}=f^{-1}(f(p_{i}))\cap B_{g_{p_{i}}}(p_{i},100C)\) of level set \(f^{-1}(f(p_{i}))\), which contains \(\bar{\gamma}_{i}\), converges subsequently to an \((n-1)\)-dimensional open manifold \((\Sigma_{\infty},h^{\prime};p_{\infty})\) w.r.t. the induced metric \(\bar{g}_{p_{i}}\). As a consequence, the minimal geodesic \(\bar{\gamma}_{i}\) converges subsequently to a minimal geodesic \(\bar{\gamma}\) in \(\Sigma_{\infty}\). Thus by (2.19), we get \[\operatorname{Length}_{h^{\prime}}(\bar{\gamma})\geq 100\mathrm{C}. \tag{2.20}\] Next we show that \((\Sigma_{\infty},h^{\prime})\) is an open set of \((N,h(0))\). Then it follows \[\operatorname{Diam}(N,h(0))\geq\operatorname{Diam}(\Sigma_{\infty},h^{\prime })\geq 100C,\] which contradicts to (2.1). The lemma will be proved. Let \(\hat{X}_{i}=\frac{\nabla_{i}f}{|\nabla_{i}f|}\). 
By Lemma 2.2, (2.14) and Shi's estimates we can calculate that \[\sup_{B(p_{i},2D)_{gp_{i}}}|\nabla_{i}\hat{X}_{i}|_{g_{p_{i}}} =\sup_{B(p_{i},2D)_{gp_{i}}}R(p_{i})^{-\frac{1}{2}}(\frac{|\text{ Ric}|}{|\nabla f|}+\frac{|\text{Ric}(\nabla\text{f},\nabla\text{f})|}{| \nabla f|^{3}})\] \[\leq CR(p_{i})^{\frac{1}{2}}\to 0,\] and \[\sup_{B(p_{i},2D)_{gp_{i}}}|\nabla_{i}^{m}\hat{X}_{i}|_{g_{p_{i}}}\leq C(m)\sup _{B(p_{i},2D)_{gp_{i}}}|\nabla_{i}^{m-1}\text{Ric}(\text{g}_{\text{p}_{i}})|_{ \text{g}_{\text{p}_{i}}}\leq\text{C}^{\prime}.\] Thus \(\hat{X}_{i}\) converges subsequently to a parallel vector field \(\hat{X}_{\infty}\) on \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\). Moreover, \[\sup_{B(p_{i},2D)_{gp_{i}}}|\hat{X}_{i}|_{g_{p_{i}}}=1.\] Hence, \(\hat{X}_{\infty}\) is a non-trivial parallel vector field on \(N\times\mathbb{R}\). \(\hat{X}_{\infty}\) is also perpendicular to \((\Sigma_{\infty},h^{\prime})\). In fact, for any \(V\in T\Sigma_{\infty}\) with \(|V|_{h^{\prime}}=1\), by [16, Proposition 4.5], there is a sequence of \(V_{i}\in T\Sigma_{i}\) such that \(R(p_{i})^{-\frac{1}{2}}V_{i}\to V\). Thus \[h^{\prime}(V,\hat{X}_{\infty})=\lim_{i\to\infty}g_{p_{i}}(R(p_{i})^{-\frac{1} {2}}V_{i},\hat{X}_{i})=\lim_{i\to\infty}g(V_{i},\frac{\nabla f}{|\nabla f|})=0.\] By Lemma 2.5, we have already known that \(\hat{X}_{\infty}\) generates a geodesic line \(\hat{\Gamma}_{\infty}\) through \(p_{\infty}\) on \(N\times\mathbb{R}\). Note that \((N,h(0))\) is compact by (2.1). \(\hat{X}_{\infty}\) must be tangent to the splitting line direction of \(N\times\mathbb{R}\), and consequently, \((\Sigma_{\infty},h^{\prime};p_{\infty})\subset(N,h(0);p_{\infty})\). Namely, \((\Sigma_{\infty},h^{\prime})\) is an open set of \((N,h(0))\). The proof is complete. By Lemma 2.6, we prove **Proposition 2.7**.: _Let \((M^{n},g)\) be the steady soliton in Lemma 2.2 and \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\) the splitting limit flow of \((M,g_{p_{i}}(t);p_{i})\), which satisfies (2.1). Then there exists \(C_{0}(C)>0\) such that for any \(q_{i}\in f^{-1}(f(p_{i}))\) the splitting limit flow \((h^{\prime}(t)+ds^{2},N^{\prime}\times\mathbb{R};q_{\infty})\) of rescaled flow \((M,g_{q_{i}}(t);q_{i})\) satisfies_ \[\operatorname{Diam}(h^{\prime}(0))\leq C_{0}. \tag{2.21}\] Proof.: The convergence part comes from Proposition 1.2. We need to check (2.21). In fact, by Lemma 2.6 and Lemma 1.3, there are \(C_{1},C_{2}>0\) such that for any \(D>0\) such that \[B_{g_{q_{i}}}(q_{i},D)\subset B_{g_{p_{i}}}(q_{i},C_{1}D)\subset B_{g_{p_{i}} }(p_{i},C_{1}D+C_{2}),\] where \(g_{q_{i}}=R(q_{i})g\). Similarly, we have \[B_{g_{q_{i}}}(q_{i},D)\supset B_{g_{p_{i}}}(q_{i},C_{1}^{-1}D)\supset B_{g_{p_{i} }}(p_{i},C_{1}^{-1}D-2C_{2}).\] Then it is easy to see that the splitting Ricci flow \((h^{\prime}(t)+ds^{2},N^{\prime}\times\mathbb{R};q_{\infty})\) of \((M,g_{q_{i}}(t);q_{i})\) is isometric to \((h(t)+ds^{2},N\times\mathbb{R};p_{\infty})\) up to scaling. As a consequence, we get \[\mathrm{Diam}(\mathrm{h}^{\prime}(0))\leq(\mathrm{C}_{1}+10)\mathrm{Diam}( \mathrm{h}(0))\leq(\mathrm{C}_{1}+10)\mathrm{C}.\] The proposition is proved. **Remark 2.8**.: _By Lemma 2.6 and Proposition 2.7, the argument in the proof Lemma 2.6 also implies that both of \(N\) and \(N^{\prime}\) are diffeomorphic to each level set \(f^{-1}(f(p_{i}))\) when \(i>>1\). In fact, the submanifolds \((f^{-1}(f(p_{i})),\bar{g}_{p_{i}})\) converge to \((N,h(0))\) (also to \((N^{\prime},h^{\prime}(0))\)) in the Cheeger-Gromov sense._ ## 3. 
\(4d\) steady Ricci solitons In this section, we first recall recent works on compact \(3d\) ancient \(\kappa\)-solitons by Angenent-Brendle-Daskalopoulos-Sesum and Brendle-Daskalopoulos-Sesum [1, 7], then we estimate the diameter of \((N,h(0))\) for all split limit flow \((N,h(t))\) of rescaled flow sequence. As we know, Perelman model ancient solution is of type II, which is defined on \(S^{3}\) with \(Z_{2}\times O(2)\)-symmetry for any \(t\in(-\infty,0)\)[27]. According to [20], we have the definition, **Definition 3.1**.: _An ancient solution with \(\mathrm{K}_{\mathrm{m}}\geq 0\) is called type I if it satisfies_ \[\sup_{M\times(-\infty,0]}(-t)R(x,t)<\infty.\] _Otherwise, it is called type II, i.e., it satisfies_ \[\sup_{M\times(-\infty,0]}(-t)R(x,t)=\infty.\] Fix \(p_{0}\in S^{3}\). We normalize the Perelman solution by \[R_{max}(-1)=R(p_{0},-1)=1. \tag{3.1}\] For simplicity, we denote it by \((S^{3},g_{Pel}(t);p_{0})\), \(t\in(-\infty,0)\). The asymptotic behavior of \((S^{3},g_{Pel}(t);p_{0})\) has been computed in [1] as follows, \[\mathrm{Diam}(\mathrm{g_{Pel}(t)})\geq 2.1\sqrt{(-t)\log(-t)},\] \[R_{max}\leq 1.1\frac{\log(-t)}{-t}, \tag{3.2}\] \[R_{min}\geq\frac{C}{-t}.\] Here \(-t\geq L\) for some large \(L>10000C>10000\). Thus \[\mathrm{Diam}(\mathrm{g_{Pel}(t)})\mathrm{R}^{\frac{1}{2}}(\mathrm{q},t)\] is strictly increasing as \(t\to-\infty\), and \[\lim_{t\to-\infty}\mathrm{Diam}(\mathrm{g_{Pel}(t)})\mathrm{R}^{\frac{1}{2}}( \mathrm{q},t)=\infty,\ \forall\ \mathrm{q}\in\mathrm{S}^{3}. \tag{3.3}\] In particular, there exists a constant \(C_{Diam}\) such that \(\mathrm{Diam}(\mathrm{g_{Pel}(t)})\geq\mathrm{C_{Diam}}\), when \(-t\geq 2L\). Usually, we call all \(3d\) ancient \(\kappa\)-solitons of type II on \(S^{3}\) as Perelman (ancient) solutions. The following classification result of 3d compact ancient \(\kappa\)-solutions of type II was proved in [7]. **Theorem 3.2**.: _Any 3d compact simply connected ancient \(\kappa\)-solution of type II coincides with a reparametrization in space, a translation in time, and a parabolic rescaling of Perelman solution \((S^{3},g_{Pel}(t);p_{0})\)._ By Theorem 3.2, for any simply connected compact \(3d\)\(\kappa\)-solution \((M,h(t);q)\) of type II and a point \(q\in M\), there exist constant \(\lambda\), a time \(T\), \(p\in S^{3}\) and a diffeomorphism \(\Psi\) from \(S^{3}\) to \(M\) such that \(\Psi(p)=q\) and \[(\Psi^{-1}(M),\lambda\Psi^{*}(h(\lambda^{-1}t)),\Psi^{-1}(q))=(S^{3},g_{Pel}(t -T);p_{0}). \tag{3.4}\] We note that the Perelman's solution \((S^{3},g_{Pel}(t);p_{0})\) is \(Z_{2}\times O(2)\)-symmetric. Then the isometric subgroup of \((S^{3},g_{Pel}(t);p_{0})\) must be as \(Z_{2}\times G\), where \(G\) is a subgroup of \(O(2)\). Thus \(G\) fixes the minimal geodesic connecting two tips of the Perelman solution. It follows that any quotient of Perelman solution, which is also an ancient \(\kappa\)-solutions, satisfies the above asymptotic behavior (3.2). Hence, by the classification Theorem 3.2, we get **Proposition 3.3**.: _Let \((M,h(t))\) be a 3d compact ancient \(\kappa\)-solutions of type II and \(p\in M\), which satisfies_ \[R(p,0)=1 \tag{3.5}\] _and_ \[\mathrm{Diam}(\mathrm{h}(0))=\mathrm{C}^{\prime}>10\mathrm{C_{Diam}}. \tag{3.6}\] _Then for any \(q\in M\), \(\mathrm{Diam}(\mathrm{h}(\mathrm{t}))\mathrm{R}^{\frac{1}{2}}(\mathrm{q},\mathrm{t})\) is strictly decreasing for \(t\leq 0\). 
Moreover, there exists a \(T(C^{\prime})\) such that_ \[\mathrm{Diam}(\mathrm{h}(\mathrm{T}(\mathrm{C}^{\prime})))\mathrm{R}^{\frac{1} {2}}(\mathrm{q},\mathrm{T}(\mathrm{C}^{\prime}))=2\mathrm{C}^{\prime}. \tag{3.7}\] By Theorem 3.2 and Proposition 3.3, we are able to classify the split ancient \(\kappa\)-solutions of dimension \(3\) when the \(4d\) noncompact \(\kappa\)-noncollapsed steady Ricci soliton in Theorem 0.2 admits a split noncompact ancient \(\kappa\)-solution \((N,h(t))\). We need the following definition introduced by Perelman (cf. [27]). **Definition 3.4**.: _For any \(\epsilon>0\), we say a pointed Ricci flow \(\left(M_{1},g_{1}(t);p_{1}\right),t\in[-T,0]\), is \(\epsilon\)-close to another pointed Ricci flow \(\left(M_{2},g_{2}(t);p_{2}\right),t\in[-T,0]\), if there is a diffeomorphism onto its image \(\bar{\phi}:B_{g_{2}(0)}\left(p_{2},\epsilon^{-1}\right)\to M_{1}\), such that \(\bar{\phi}\left(p_{2}\right)=p_{1}\) and \(\left\|\bar{\phi}^{*}g_{1}(t)-g_{2}(t)\right\|_{C^{[\epsilon^{-1}]}}<\epsilon\) for all \(t\in\left[-\min\left\{T,\epsilon^{-1}\right\},0\right]\), where the norms and derivatives are taken with respect to \(g_{2}(0)\)._ By Proposition 1.2 together with the above definition, we get immediately, **Proposition 3.5**.: _Let \((M^{n},g)\) be the steady Ricci soliton in Proposition 1.2. Then for any \(\epsilon>0\), There exists a compact set \(D(\epsilon)>0\), such that for any \(p\in M\setminus D\), \((M,g_{p}(t);p)\) is \(\epsilon\)-close to a splitting flow \((h_{p}(t)+ds^{2};p)\), where \(h_{p}(t)\) is an \((n-1)\)-dimensional ancient \(\kappa\)-solution._ We note that for a given \(p\) and a number \(\epsilon>0\) the \(\epsilon\)-close splitting flow \((h_{p}(t)+ds^{2};p)\) may not be unique in Proposition 3.5. Due to [22], we introduce a function on \(M\) for each \(\epsilon\) by \[F_{\epsilon}(p)=\inf_{h_{p}}\{\mathrm{Diam}(h_{p}(0))\in(0,\infty)\}. \tag{3.8}\] For simplicity, we always omit the subscribe \(\epsilon\) in \(F_{\epsilon}(p)\) below. By estimating (3.8), we prove **Proposition 3.6**.: _Let \((M^{4},g)\) be a noncompact \(\kappa\)-noncollapsed steady Ricci soliton in Theorem 0.2. Suppose that there exists a sequence of pointed rescaled Ricci flows \((M,g_{p_{i}}(t);p_{i})\) converges subsequently to a splitting Ricci flow \((h(t)+ds^{2};p_{\infty})\) for some noncompact ancient \(\kappa\)-solution \(h(t)\). Then for any limit flow \((h^{\prime}(t)+ds^{2};q_{\infty})\) of rescaled Ricci flows \((M,g_{q_{i}}(t);q_{i})\), \(h^{\prime}(t)\) is a noncompact ancient \(\kappa\)-solution._ Proof.: We argue by contradiction. Suppose that there exists a limit flow \((h^{\prime}(t)+ds^{2};q_{\infty})\) converged by rescaled flows \((M,g_{q_{i}}(t);q_{i})\), which satisfies (2.1). Then by Proposition 2.7, there exists a uniform constant \(C_{3}(C)>0\), such that \[F(p_{i}^{\prime})\leq C_{3} \tag{3.9}\] for all \(i\), and all \(p_{i}^{\prime}\in f^{-1}(f(q_{i}))\). Fix \(C^{\prime}=\max\{100C_{Diam},10C_{3}\}\) and \(T(C^{\prime})\) as in Proposition 3.3. We choose an \(\epsilon>0\) such that \(\epsilon^{-1}>\max\{10T(C^{\prime}),100C^{\prime}\}\). Thus for the sequence of \((M,g_{p_{i}}(t);p_{i})\) in Proposition 3.6, we can choose a point \(p_{i_{0}}\in\{p_{i}\}\) such that \[F(p_{i_{0}})>C^{\prime}\geq 100C_{Diam}. \tag{3.10}\] Let \(\Gamma_{1}\) be the integral curve of \(\hat{X}\) passing through \(p_{i_{0}}\) with \(\Gamma_{1}(0)=p_{i_{0}}\), which tends to the infinity by Lemma 2.4. 
We claim: \[F(\Gamma_{1}(s))>\frac{1}{2}C^{\prime},\ \forall\ s\geq 0. \tag{3.11}\] Define \[s_{0}=\sup\{s\geq 0|F(s^{\prime})\geq C^{\prime}\ \text{for all}\ s^{\prime}\in[0,s]\}.\] If \(s_{0}=\infty\), \(F(s)>C^{\prime}\) for all \(s\geq 0\). Then (3.11) is obvious true in this case. Thus we may consider the case \(s_{0}<\infty\), i.e., \(F(s_{0})=C^{\prime}\) for some \(s^{\prime}=s_{0}\). It follows that there exists a \(3d\) compact ancient \(\kappa\)-solution \(h_{\Gamma_{1}(s_{0})}(t)\) such that \[(M,R(\Gamma_{1}(s_{0}))g(R(\Gamma_{1}(s_{0}))^{-1}t);\Gamma_{1}(s _{0}))\] \[\overset{\epsilon-\text{close}}{\sim}(N\times\mathbb{R},h_{ \Gamma_{1}(s_{0})}(t)+ds^{2};\Gamma_{1}(s_{0})). \tag{3.12}\] Since the diameter of \(h_{\Gamma_{1}(s_{0})}(0)\) is large, \(h_{\Gamma_{1}(s_{0})}(t)\) can not be a family of shrinking quotient spheres. Hence, by Theorem 3.2, it must be a quotient of Perelman solution after a reparametrization. By Proposition 3.3, we see that \(\text{Diam}(\text{h}_{\Gamma_{1}(s_{0})}(\text{t}))\text{R}_{\text{h}}^{\frac {1}{2}}(\Gamma_{1}(\text{s}_{0}),\text{t})\) is strictly decreasing for \(t\in(-\epsilon^{-1},0]\). By (3.10), it follows \[\text{Diam}(\text{h}_{\Gamma_{1}(s_{0})}(\text{t}))\text{R}_{\text{h}}^{\frac {1}{2}}(\Gamma_{1}(\text{s}_{0}),\text{t})>\text{C}^{\prime},\ \text{t}\in(-\epsilon^{-1},0]. \tag{3.13}\] Moreover, by the choice of \(T(C^{\prime})\), we have \[\text{Diam}(\text{h}(\text{T}(\text{C}^{\prime}))\text{R}_{\text{h}}^{\frac{ 1}{2}}(\Gamma_{1}(\text{s}_{0}),-\text{T}(\text{C}^{\prime}))=2\text{C}^{\prime}.\] Let \(t_{1}=\min\{-1000,-T(C^{\prime})\}\geq-\frac{\epsilon^{-1}}{2}\). Thus \[\text{Diam}(\text{h}_{\Gamma_{1}(s_{0})}(\text{t}_{1}))\text{R}_{\text{h}}^{ \frac{1}{2}}(\Gamma_{1}(\text{s}_{0}),\text{t}_{1})\geq 2\text{C}^{\prime}. \tag{3.14}\] Recall that \(\{\phi_{t}\}_{t\in(-\infty,\infty)}\) is the flow of \(-\nabla f\) with \(\phi_{0}\) the identity and \((g(t),\Gamma_{1}(s))\) is isometric to \((g,\phi_{t}(\Gamma_{1}(s)))\). Then \[\phi_{t}(\Gamma_{1}(s))=\Gamma_{1}\left(s-\int_{0}^{t}|\nabla f|\left(\phi_{ \mu}(\Gamma_{1}(s))\right)d\mu\right)\] Let \(T=tR\left(\Gamma_{1}\left(s_{0}\right)\right)^{-1}<0\) and \[s=s_{0}-\int_{0}^{T}\left|\nabla f\right|\left(\phi_{\mu}\left(\Gamma_{1}\left(s _{0}\right)\right)\right)d\mu. \tag{3.15}\] Set \[s_{1}=s_{0}-\int_{0}^{T_{1}}\left|\nabla f\right|\left(\phi_{\mu}\left(\Gamma_{1 }\left(s_{0}\right)\right)\right)d\mu,\] where \(T_{1}=t_{1}R\left(\Gamma_{1}\left(s_{0}\right)\right)^{-1}.\) Since the scalar curvature \(R\) of \(\left(M,g\right)\) decays to \(0\) uniformly by Proposition 2.2, we may assume \(\left|\nabla f\right|\geq\frac{1}{2}\) along \(\Gamma_{1}\). Thus \[s_{1}-s_{0}\geq 500R^{-1}\left(\Gamma_{1}\left(s_{0}\right)\right)\geq 500R_{ max}^{-1}=500. \tag{3.16}\] Note that \(\phi_{T}\left(\Gamma_{1}\left(s_{0}\right)\right)=\Gamma\left(s\right)\) and \(\left(g\left(T\right),\Gamma_{1}\left(s_{0}\right)\right)\) is isometric to \(\left(g,\Gamma_{1}\left(s\right)\right)\) for all \(s\in\left[s_{0},s_{1}\right]\). 
Then \[\left(M,R(\Gamma_{1}(s))g;\Gamma_{1}(s)\right) \cong\left(M,R(\Gamma_{1}(s_{0}),T)g(T);\Gamma_{1}(s_{0})\right) \tag{3.17}\] \[\cong\left(M,\frac{R(\Gamma_{1}(s_{0}),T)}{R(\Gamma_{1}(s_{0}))} R(\Gamma_{1}(s_{0}))g(T);\Gamma_{1}(s_{0})\right).\] Since \(R(\Gamma_{1}(s_{0}),T)\leq R(\Gamma_{1}(s_{0}))\) by (1.4), we get from (3.12), \[\overset{\epsilon-\text{close}}{\sim}(N\times\mathbb{R},\frac{R(\Gamma_{1}( s_{0}),T)}{R(\Gamma_{1}(s_{0}))}h_{\Gamma_{1}(s_{0})}(t)+ds^{2};\Gamma_{1}(s_{0})). \tag{3.18}\] On the other hand, there is another \(3d\) compact ancient \(\kappa\)-solution \(h_{\Gamma_{1}(s_{1})}(t)\) corresponding to the point \(\Gamma_{1}(s)\) such that \[(M,R(\Gamma_{1}(s))g;\Gamma_{1}(s))\overset{\epsilon-\text{close}}{\sim}(h_ {\Gamma_{1}(s)}(0)+ds^{2},\Gamma_{1}(s)). \tag{3.19}\] Hence, combining (3.18) and (3.19), we derive \[h_{\Gamma_{1}(s)}(0)\overset{\epsilon-\text{close}}{\sim}\frac{R(\Gamma_{1}( s_{0}),T)}{R(\Gamma_{1}(s_{0}))}h_{\Gamma_{1}(s_{0})}(t). \tag{3.20}\] By the convergence of \((M,g_{p}(t);p)\), we have \[\frac{R(\Gamma_{1}(s_{0}),T)}{R(\Gamma_{1}(s_{0}))}\overset{\epsilon-\text{ close}}{\sim}R_{h}(\Gamma_{1}(s_{0}),t),\ \forall\ t\in[t_{1},0],\] and so, \[\text{Diam}(\frac{\text{R}(\Gamma_{1}(s_{0}),\text{T})}{\text{R}(\Gamma_{1}(s _{0}))}\text{h}_{\Gamma_{1}(s_{0})}(\text{t}))\overset{\epsilon-\text{close}}{ \sim}\text{Diam}(\text{R}_{\text{h}}(\Gamma_{1}(s_{0}),\text{t})\text{h}_{ \Gamma_{1}(s_{0})}(\text{t}))).\] Then by (3.20), the monotonicity (3.13) implies that \[F(\Gamma_{1}(s))\geq C^{\prime}-2\epsilon>\frac{1}{2}C^{\prime},\ \forall\ s\in\left[s_{0},s_{1}\right]. \tag{3.21}\] Moreover, by (3.14), \[F\left(\Gamma_{1}(s_{1})\right)>2C^{\prime}-2\epsilon>C^{\prime}. \tag{3.22}\] By (3.22) together with (3.21) and (3.16), we can repeat the above argument to obtain (3.11). On the other hand, the curve \(\Gamma_{1}(s)\) passes through level sets \(f^{-1}(f(q_{i}))\) because of \(\lim_{s\to\infty}f(\Gamma_{1}(s))=\infty\). Thus for each \(q_{i}\) (\(i>>1\)) there exists \(p_{i}^{\prime}\in f^{-1}(f(q_{i}))\) such that \(p_{i}^{\prime}=\Gamma_{1}(s_{i})\) for some \(s_{i}\). By (3.9), \(F(p_{i}^{\prime})\leq C_{3}\), which contradicts with (3.11). Hence, the proposition is proved. ## 4. Proofs of the main results In this section, we prove Theorem 0.2 and Corollary 0.3. Firstly, we consider a special case: there is a uniform constant \(C\) such that all split ancient \(\kappa\)-solution \(h(t)\) in Proposition 1.2 satisfies (2.1). By generalizing the argument in Section 3 we prove **Proposition 4.1**.: _Let \((M^{4},g)\) be a \(4d\) noncompact \(\kappa\)-noncollapsed gradient steady Ricci soliton with \(\operatorname{Km}\geq 0\) and \(\operatorname{Ric}>0\) away from a compact set \(K\) of \(M\). Suppose that there is a uniform constant \(C\) such that all split ancient \(\kappa\)-solution \(h(t)\) in Proposition 1.2 satisfies (2.1). Then all \(h(t)\) must be a family of shrinking quotient spheres._ By a result of Ni [25], it suffices to prove all \(h(t)\) is a compact \(\kappa\)-noncollapsed ancient solution of type I. In other words, we shall exclude the existence of \(\kappa\)-noncollapsed ancient solutions of compact type II. The proof is based on two lemmas [1, Lemma 2.1, Lemma 2.2]. For the reader's convenience, we give a sketch proof of those lemmas below. 
**Lemma 4.2**.: _Let \((N^{3},h(t))\) be a \(3d\) compact \(\kappa\)-solution of type II.1 Fix \(p\in N\), we consider \(t_{k}\to-\infty\) and a sequence of points \(x_{k}\in N\) such that \(\ell(x_{k},t_{k})<\frac{3}{2}\), where \(\ell\) denotes the reduced distance from \((p,0)\). Then the rescaled manifold by dilating the manifold \((N,h(t_{k}))\) around the point \(x_{k}\) by the factor \(\frac{1}{\sqrt{-t_{k}}}\) converges to a \(3d\) noncompact shrinking gradient Ricci soliton._ Footnote 1: By Theorem 3.2, \((N,h(t))\) coincides with a quotient of Perelman solution \((S^{3},g_{Pel}(t);p_{0})\). Proof.: By Perelman's argument in [26, Section 11], the rescaled manifold converges in the Cheeger-Gromov sense to a \(3d\)\(\kappa\)-noncollapsed shrinking gradient Ricci soliton \((N^{\prime},h^{\prime}(t))\) with nonnegative curvature operator. If \((N^{\prime},h^{\prime}(t))\) is compact, it must be a quotient of \(3d\) round sphere by a result of Perelman [27, Section 1] (also see [24, Corollary 4]). In particular, the sectional curvatures of \((N,h(t_{k}))\) must lie in the interval \([\frac{c-\epsilon_{k}}{-t_{k}},\frac{c+\epsilon_{k}}{-t_{k}}]\), where \(\epsilon_{k}\to 0\) as \(k\to\infty\). Then by curvature pinching estimates, \((N,h(t))\) has constant sectional curvature for each \(t\)[18] (see also [9, 8]). It follows that \((N,h(t))\) is also a family of shrinking round quotient spheres, which contradicts with the type II condition. Hence, \((N^{\prime},h^{\prime}(t))\) must be non-compact. The lemma is proved. **Lemma 4.3**.: _Let \((N^{3},h(t))\) be a \(3d\) compact \(\kappa\)-solution of type II. Then for any sequence of times \(t_{k}\to-\infty\), it holds_ \[\mathrm{R}_{\mathrm{max}}(\mathrm{t}_{\mathrm{k}})\mathrm{Diam}(\mathrm{h}( \mathrm{t}_{\mathrm{k}}))^{2}\to\infty,\] _where \(\mathrm{R}_{\mathrm{max}}(\mathrm{t})=\max\{\mathrm{R}(\mathrm{h}(\cdot, \mathrm{t})\}\). In particular,_ \[\lim_{t\to-\infty}\mathrm{R}_{\mathrm{max}}(\mathrm{t})\mathrm{Diam}(\mathrm{ h}(\mathrm{t}))^{2}\to\infty. \tag{4.1}\] Proof.: (4.1) is true according to (3.3) and the classification Theorem 3.2. In the following, we give a direct proof of (4.1) by Lemma 4.2. In fact, by a result of Perelman [26, Section 11], for any sequence of times \(t_{k}\to-\infty\), we can always find a sequence of points \(x_{k}\in N\) such that \(\ell(x_{k},t_{k})\leq\frac{3}{2}\) for each \(k\). Then by Lemma 4.2, the rescaled flow \((N,(-t_{k})^{-1}h((-t_{k})t);x_{k})\) converges to a noncompact shrinking Ricci soliton with non-negative curvature operator. It follows \[\mathrm{Diam}((-\mathrm{t}_{\mathrm{k}})^{-1}\mathrm{h}(\mathrm{t}_{\mathrm{ k}}))=(-\mathrm{t}_{\mathrm{k}})^{-\frac{1}{2}}\mathrm{Diam}(\mathrm{h}( \mathrm{t}_{\mathrm{k}}))\to\infty. \tag{4.2}\] Moreover, such a limit soliton is non-flat [21, Proposition 39.1]. Thus there exists a uniform constant \(\delta>0\), such that \((-t_{k})R(x_{k},t_{k})\geq\delta\) for all \(k>>1\). 
Hence, by (4.2), we derive \[\mathrm{R}_{\mathrm{max}}(\mathrm{t}_{\mathrm{k}})\mathrm{Diam}( \mathrm{h}(\mathrm{t}_{\mathrm{k}}))^{2} \geq\mathrm{R}(\mathrm{x}_{\mathrm{k}},\mathrm{t}_{\mathrm{k}}) \mathrm{Diam}(\mathrm{h}(\mathrm{t}_{\mathrm{k}}))^{2}\] \[\geq\delta(-t_{k})^{-1}\mathrm{Diam}(\mathrm{h}(\mathrm{t}_{ \mathrm{k}}))^{2}\] \[\to\infty.\] Proof of Proposition 4.1.: If the proposition is false, by the Ni's result [25] (also see [7]), there will exist a sequence of rescaled flow \((M,g_{p_{i}}(t),p_{i})\) converges subsequently to a splitting Ricci flow \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\), where \((N,h(t))\) is a \(3d\) compact ancient \(\kappa\)-solution of type II. Choose \(t_{k}\to-\infty\). Then for each \(t_{k}\) there exists \(q_{ik}\in f^{-1}(f(p_{i}))\) such that \[R(q_{ik},R(p_{i})^{-1}t_{k})=\max\{R(x,R(p_{i})^{-1}t_{k})|\ x\in f^{-1}(f(p_{ i}))\}.\] It follows that for \(x\in f^{-1}(f(p_{i}))\) it holds \[R(q_{ik})^{-1}R(q_{ik},R(p_{i})^{-1}t_{k})\geq R(q_{ik})^{-1}R(x,R(p_{i})^{-1} t_{k}). \tag{4.3}\] For each \(k\), we know that rescaled flow \((M,g_{q_{ik}}(t);q_{ik})\) converges subsequently to a splitting Ricci flow \((N_{k}\times\mathbb{R},h_{k}(t)+ds^{2};q_{\infty k})\). Moreover, by Remark 2.8, submanifold \((f^{-1}(f(p_{i})),\bar{g}_{q_{ik}};q_{ik})\) converges subsequently to \((N_{k},h_{k}(0),q_{\infty k})\) w.r.t. the induced metric, where each \((N_{k},h_{k}(0))\) is isometric \((N,h(0))\) up to rescaling. We note that \(\phi_{t}(q_{i}),\phi_{t}(p_{i})\in f^{-1}(f(\phi_{t}(p_{i})))\) for any \(t\leq 0\) when \(i>>1\). Then by Lemma 1.3, there is a constant \(C_{0}(C)>1\), where the constant \(C\) is determined in (2.1) such that \[\frac{R(\phi_{t}(q_{ik}))}{R(\phi_{t}(p_{i}))}\in[C_{0}^{-1},C_{0}],\ \forall t\leq 0. \tag{4.4}\] In particular, \(\frac{R(q_{ik})}{R(p_{i})}\) converge subsequently to a constant \(\lambda_{k}\in[C_{0}^{-1},C_{0}]\). Thus by (4.3) with help of the convergence of \((M,g_{q_{ik}}(t);q_{ik})\), we get \[R_{h_{k}}(q_{\infty k},\lambda_{k}t_{k})\geq R_{h_{k}}(x,\lambda_{k}t_{k}),\ \forall x\in N. \tag{4.5}\] Since each \(h_{k}(\lambda_{k}t_{k})\) is isometric to \(h(t_{k})\) up to scaling, we derive \[R_{h}(q_{\infty k},t_{k})=\max\{R_{h}(x,t_{k})|\ x\in N\}.\] Hence, by Lemma 4.3, \[\lim_{k\to\infty}\mathrm{Diam}(\mathrm{h}(\mathrm{t}_{k}))\mathrm{R}_{h}^{ \frac{1}{2}}(\mathrm{q}_{\infty k},\mathrm{t}_{k})=\lim_{k\to\infty}\mathrm{ Diam}(\mathrm{h}(\mathrm{t}_{k}))\mathrm{R}_{h,\max}^{\frac{1}{2}}(\mathrm{t}_{k})=\infty.\] As a consequence, there is \(t_{0}\in\{t_{k}\}\) for some \(k_{0}\) such that \[\mathrm{Diam}(\mathrm{h}(\mathrm{t}_{0}))\mathrm{R}_{h}^{\frac{1}{2}}( \mathrm{q}_{\infty k_{0}},\mathrm{t}_{0})>100\mathrm{CC}_{0}^{\frac{1}{2}}. \tag{4.6}\] For simplicity, we let \(q_{i}=q_{ik_{0}}\) and \(q_{\infty}=q_{\infty k_{0}}\). Set \(T=t_{0}R^{-1}(p_{i})\) and choose \(\epsilon<-\frac{1}{100t_{0}}\). Then \((M,g_{p_{i}}(t);q_{i})\) is \(\epsilon\)-close to \((N\times\mathbb{R},h(t)+ds^{2};q_{\infty})\) when \(i>>1\). Thus by (4.6), it holds \[\mathrm{Diam}(\bar{g}(T))R^{\frac{1}{2}}(q_{i},T)=\mathrm{Diam}(\bar{g}_{p_{i }}(t_{0}))R_{p_{i}}^{\frac{1}{2}}(q_{i},t_{0})>90CC_{0}^{\frac{1}{2}}, \tag{4.7}\] as long as \(i>>1\), where \(\bar{g}(T)\) is the induced metric of \(g\) on \(f^{-1}(f(\phi_{T}(p_{i})))\) and \(\bar{g}_{p_{i}}(t)\) is the induced metric of \(g_{p_{i}}(t)\) on \(f^{-1}(f(p_{i}))\), respectively. 
On the other hand, since \[\frac{R(\phi_{T}(p_{i}))}{R(\phi_{T}(q_{i}))}\in[C_{0}^{-1},C_{0}],\] we have \[\frac{R(p_{i},T)}{R(q_{i},T)}\in[C_{0}^{-1},C_{0}]. \tag{4.8}\] Hence combining (4.7) and (4.8), we obtain \[\mathrm{Diam}(\bar{g}(T))R^{\frac{1}{2}}(p_{i},T)\] \[=(\frac{R(p_{i},T)}{R(q_{i},T)})^{\frac{1}{2}}\mathrm{Diam}(\bar {g}(T))R^{\frac{1}{2}}(q_{i},T)\] \[>90C,\ i>>1.\] This means \[\operatorname{Diam}(R(\phi_{T}(p_{i}))\bar{g}(T)) =\operatorname{Diam}(\bar{g}(T))R^{\frac{1}{2}}(\phi_{T}(p_{i}))\] \[=\operatorname{Diam}(\bar{g}(T))R^{\frac{1}{2}}(p_{i},T) \tag{4.9}\] \[>90C,\ i>>1.\] Let \((\tilde{N},\tilde{h}(t))\) be a \(3d\) split ancient flow of limit of \((M,g_{(\phi_{T}(p_{i}))};\phi_{T}(p_{i}))\). Then \((\tilde{N},\tilde{h}(0))\) is a limit of submanifolds \((f^{-1}(f(\phi_{T}(p_{i}))),R(\phi_{T}(p_{i}))\bar{g}(T);\phi_{T}(p_{i}))\) (see Remark 2.8). Thus by (4.9), we get \[\operatorname{Diam}(\tilde{N},\tilde{h}(0))\geq 90C.\] It follows \[F(\phi_{T}(p_{i}))>50C,\ i>>1,\] and so \[\limsup_{i\to\infty}F(\phi_{T}(p_{i}))\geq 50C. \tag{4.10}\] On the other hand, by the condition of proposition, we see \[\limsup_{p\to\infty}F(p)<2C. \tag{4.11}\] Hence, we get a contradiction between (4.11) and (4.10). The proposition is proved. Now we are able to prove Theorem 0.2 by Proposition 4.1 together with Proposition 3.6. Proof of Theorem 0.2.: Case 1: \[\limsup_{p\to\infty}F_{\epsilon}(p)<C.\] for any \(\epsilon<1\). Then (2.1) holds for all split ancient \(\kappa\)-solution \(h(t)\). Thus by Proposition 4.1, all \(h(t)\) must be a family of shrinking quotient spheres. Case 2: \[\limsup_{\epsilon\to 0}\limsup_{p\to\infty}F_{\epsilon}(p)=\infty.\] In this case, by taking a diagonal subsequence, there is a sequence of pointed flows \((M,g_{q_{i}}(t);q_{i})\), which converges subsequently to a splitting Ricci flow \((N^{\prime}\times\mathbb{R},h^{\prime}(t)+ds^{2};q_{\infty})\) for some noncompact ancient \(\kappa\)-solution \(h^{\prime}(t)\). Then by Proposition 3.6, \(h(t)\) is a noncompact \(\kappa\)-solution for any splitting limit flow \((N\times\mathbb{R},h(t)+ds^{2};p_{\infty})\). Proof of Corollary 0.3.: By the assumption, the split \(3\)-dimensional ancient flows \((N,h(t))\) of limit of \((M,g_{p_{i}}(t),p_{i})\) is a family of shrinking quotient spheres. Namely, \((N,h(0))\) is a quotient of round sphere. We claim: \((M,g)\) has positive Ricci curvature on \(M\). On contrary, \(\operatorname{Ric}(g)\) is not strictly positive. We note that (2.6) is still true in the proof of Lemma 2.2 without \(\operatorname{Ric}(g)>0\) away from a compact set of \(M\). Then as in the proof of [16, Lemma 4.6], we see that \(X_{i}=R(p_{i})^{-\frac{1}{2}}\nabla f\to X_{\infty}\) w.r.t. \((M,g_{p_{i}}(t),p_{i})\), where \(X_{\infty}\) is a non-trivial parallel vector field. Thus according to the argument in the proof of [16, Theorem 1.3], the universal cover of \((N,h(t))\) must split off a flat factor \(\mathbb{R}^{d}\)\((d\geq 1)\). However, the universal cover of \(N\) is \(S^{3}\). This is a contradiction! Hence, we prove \(\operatorname{Ric}(g)>0\) on \(M\). Now we can apply Theorem 0.2 to see that any split \(3\)-dimensional ancient flow \((N^{\prime},h^{\prime}(t))\) of limit of \((M,g_{q_{i}}(t),q_{i})\) is a family of shrinking quotient spheres. We claim: \((N^{\prime},h^{\prime}(t))\) is in fact a family of shrinking spheres. By Lemma 2.2, the scalar curvature of \((M,g)\) decays to zero uniformly. 
Then \((M,g)\) has a unique equilibrium point \(o\) by the fact that \(\operatorname{Ric}(g)>0\). Thus the level set \(\Sigma_{r}=\{f(x)=r\}\) is a closed manifold for any \(r>0\), and it is diffeomorphic to \(S^{3}\) (cf. [16, Lemma 2.1]). On the other hand, as in the proof of Lemma 2.6, the level set \((\Sigma_{f(q_{i})},\bar{g}_{q_{i}};q_{i})\) converges subsequently to \((N^{\prime},h^{\prime}(0);q_{\infty})\) w.r.t. the induced metric \(\bar{g}_{q_{i}}\) on \(\Sigma_{f(q_{i})}\) by \(g_{q_{i}}\). Since each \(\Sigma_{f(q_{i})}\) is diffeomorphic to \(S^{3}\), \(N^{\prime}\) is also diffeomorphic to \(S^{3}\). Thus \((N^{\prime},h^{\prime}(t))\) is a family of shrinking spheres. By the above claim, condition (ii) in Definition 0.1 is satisfied. Thus by [17, Lemma 6.5], \((M,g)\) is asymptotically cylindrical. It follows from [5] that \((M,g)\) is isometric to the Bryant Ricci soliton up to scaling. In addition, since \(\operatorname{Ric}(g)>0\), we see that every level set \(\Sigma_{r}=\{f(x)=r\}\) is diffeomorphic to \(S^{3}\), and so is \(N\). Then by Theorem 0.2, together with the existence of a \(3d\) compact split limit flow, every \(3d\) split limit flow \((N^{\prime},h^{\prime}(t))\) must be a family of shrinking spheres. Thus \((M,g)\) is also isometric to the Bryant Ricci soliton up to scaling. The proof of the corollary is complete. \(\square\)
2302.05244
A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld
Building open-ended agents that can autonomously discover a diversity of behaviours is one of the long-standing goals of artificial intelligence. This challenge can be studied in the framework of autotelic RL agents, i.e. agents that learn by selecting and pursuing their own goals, self-organizing a learning curriculum. Recent work identified language as a key dimension of autotelic learning, in particular because it enables abstract goal sampling and guidance from social peers for hindsight relabelling. Within this perspective, we study the following open scientific questions: What is the impact of hindsight feedback from a social peer (e.g. selective vs. exhaustive)? How can the agent learn from very rare language goal examples in its experience replay? How can multiple forms of exploration be combined, and take advantage of easier goals as stepping stones to reach harder ones? To address these questions, we use ScienceWorld, a textual environment with rich abstract and combinatorial physics. We show the importance of selectivity from the social peer's feedback; that experience replay needs to over-sample examples of rare goals; and that following self-generated goal sequences where the agent's competence is intermediate leads to significant improvements in final performance.
Laetitia Teodorescu, Xingdi Yuan, Marc-Alexandre Côté, Pierre-Yves Oudeyer
2023-02-10T13:49:50Z
http://arxiv.org/abs/2302.05244v5
# A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld ###### Abstract Building open-ended agents that can autonomously discover a diversity of behaviours is one of the long-standing goals of artificial intelligence. This challenge can be studied in the framework of autotelic RL agents, i.e. agents that learn by selecting and pursuing their own goals, self-organizing a learning curriculum. Recent work identified language as a key dimension of autotelic learning, in particular because it enables abstract goal sampling and guidance from social peers for hindsight relabelling. Within this perspective, we study the following open scientific questions: _What is the impact of hindsight feedback from a social peer (e.g. selective vs. exhaustive)? How can the agent learn from very rare language goal examples in its experience replay? How can multiple forms of exploration be combined, and take advantage of easier goals as stepping stones to reach harder ones?_ To address these questions, we use ScienceWorld, a textual environment with rich abstract and combinatorial physics. We show the importance of selectivity from the social peer's feedback; that experience replay needs to over-sample examples of rare goals; and that following self-generated goal sequences where the agent's competence is intermediate leads to significant improvements in final performance.1 Footnote 1: The anonymized code can be found at this url. ## 1 Introduction We are interested in the problem of building and training open-ended autonomous agents, exploring on their own and mastering a wide diversity of tasks once trained. This can be approached within the autotelic reinforcement learning framework (Colas et al., 2022). An autotelic agent (autotelos, one's own goals) is an intrinsically-motivated, goal-conditioned agent equipped with a goal-sampler that uses the agent's previous experience to propose goals for learning and exploration. This goal sampler allows the agent to use previously mastered skills as stepping stones to achieve new ones, and to form a self-curriculum for exploration. This developmental framework is general and is linked to goal-exploration processes like Go-explore (Ecoffet et al., 2021) and adversarial goal generation (Florensa et al., 2018; Campero et al., 2021). Autotelic agents have already been shown to efficiently explore sensorimotor spaces (Pere et al., 2018) and to be able to build their own goal curriculum using learning-progress-based task sampling (Colas et al., 2019). Recent work has shown the potential of language to drive autotelic learning (Colas et al., 2022), both due to its compositional structure and its ability to convey cultural knowledge. For example, post-episode language feedback by a social peer (\(\mathcal{SP}\)) can be internalized to identify and imagine future relevant goals (Colas et al., 2020), acting as a cognitive tool (Clark, 2006). It has also been shown to help scaffold the agent's exploration through abstraction (Mu et al., 2022). It can be reused once the agent is trained for instruction-following (Colas et al., 2022). Exploring in language space directly is akin to learning to plan at a higher, abstract level; the skills learned at this level can be executed by lower-level modules in an embodied environment (Shridhar et al., 2020; Ahn et al., 2022; Huang et al., 2022). Additionally, language conveys our morals and values, and be used as a tool to align autotelic agents towards human preferences (Sigaud et al., 2021). 
The common topic in previous work (Colas et al., 2020; Mirchandani et al., 2021; Mu et al., 2022) on language-based autotelic agents has been to study various forms of exploration mechanisms, some of them relying on language. However, these works did not investigate how to deal with the exploration challenges posed by linguistic spaces themselves, featuring immense action and goal spaces. More precisely, _how specific should \(\mathcal{SP}\)'s feedback be? How to deal with very hard goals that will be very rarely seen compared to easy ones? How to use easy goals as stepping stones to achieve hard ones?_ These are the questions we focus on in this work. For studying this, we place ourselves in ScienceWorld (Wang et al., 2022), a text world (Cote et al., 2019) with very rich dynamics (thermodynamics, biology, electricity) allowing for complex goals such as freezing or boiling water, which requires getting water first, thus defining an optional dependency amongst goals (ScienceWorld is rich enough that goals can be accomplished in multiple ways). To tackle these challenges, we identify the main drivers of discovery in autotelic agents. In general, there are four ways in which they can discover novel things through goal exploration in an environment. The agent can **discover from failure**: it can target a goal it misses, and the social peer will relabel this trajectory with the actually achieved goals (if any), an idea exploited in hindsight experience replay (HER) (Andrychowicz et al., 2017). The agent can **discover from babbling**: it is equipped with standard RL exploration mechanisms, such as stochastic policies or epsilon-greedy action sampling. This will induce randomness in goal-conditioned trajectories and allow the agent to stumble onto novel states. The agent can **discover from stepping-stone exploration**: after a goal has been reached or the episode has timed out, the agent can perform random actions or follow another goal from there, a process first investigated by Go-explore methods. Here exploration bonuses (like pseudo-count rewards (Bellemare et al., 2016)) can be used to make this process more efficient. And finally, the agent can **discover from imagination**: combine known goals to create novel ones, leading to the discovery of novel states and affordances (which can in turn be reused as goals in further exploration). To tackle the exploration challenges posed by large linguistic spaces and nested goals, we especially focus on the first three drivers of discovery. In particular, we show how different methods for learning from further exploration interact together to help the agent navigate the goal hierarchy. In this work, we study specific challenges posed by autotelic exploration in large language spaces: 1. _How should the_ \(\mathcal{SP}\) _provide hindsight feedback (relabeling) to the agent in very large linguistic spaces? Should it be selective or exhaustive?_ We show the social peer must give targeted hindsight feedback to the agent to avoid populating the replay buffer with a too wide diversity of detailed goals that prevents making non-trivial discoveries. 2. _In the presence of goals with very different difficulty and occurrences, what is the influence of different goal sampling distributions on the efficiency of learning diverse and complex goals?_ We show that the agent needs to bias replay transition sampling towards transitions in trajectories where rare, hard goals are accomplished. 3. 
_How do methods for learning from stepping-stone exploration influence learning in the goal hierarchy?_ We find that sampling goal sequences according to the agent's estimated intermediate competence, and then exploring randomly, significantly improves aggregate competence and reduces variance over seeds.

Figure 1: Left: overview of the autotelic agent architecture. At the beginning of an episode, a goal \(g\) is sampled from the list \(\mathcal{G}_{a}\) maintained in the modular replay buffer. At each timestep \(t\), ScienceWorld emits an observation \(o_{t}\) (see Section 2.2). The agent combines \(o_{t}\) and \(g\) to decide on an action \(a_{t}\). \(o_{t}\) and \(g\) are also used by the agent to compute the reward \(r_{t}\). When the episode ends (\(r_{t}\neq 0\) or \(t=T\)), either the environment is reset, a new goal is sampled, or random exploration steps are taken. Right: overview of different exploration methods. The agent is conditioned on a goal and tries to achieve it. In the goal-chain configuration, on achieving a goal or at the end of the allowed \(T\) timesteps, the agent samples a new goal with a 0.5 probability. In the go-explore configuration, on achieving a goal or at the end of the \(T\) timesteps, the agent performs 5 timesteps of random exploration.

## 2 Problem setting

We are interested in studying the behavior of autonomous agents that freely explore their environments to uncover their possibilities. Such agents are especially needed in environments that are reward-less, or that have sparse or deceptive rewards.

### Definitions

**Reward-less POMDP** Formally, we define a reward-less partially observable Markov decision process (POMDP) (Sutton and Barto, 2018) as \((S,\mathcal{A},\mathcal{T},\Omega,\mathcal{O})\), where \(S\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{T}\) is the transition function, \(\Omega\) is the observation space, and \(\mathcal{O}\) is the observation function. We also define a trajectory as a sequence of state-action pairs, represented by \(\tau=[(s_{0},a_{0}),\ldots,(s_{t},a_{t}),\ldots,(s_{T},a_{T})]\), where \(t\) is a timestep and \(T\) the length of the trajectory.

**Autotelic agents** In this work, we consider a certain kind of autonomous agents that are driven by an intrinsically-motivated goal-exploration process: autotelic agents. These agents operate without external reward by iteratively targeting goals and trying to reach them, using their own internal goal-achievement (reward) function to measure success. In the process, they observe their own behavior and learn from it. We define \(\mathcal{G}_{a}\subseteq\mathcal{G}\) as the subset of goals experienced by the agent so far during its learning process. We additionally define the agent's goal-conditioned internal reward function as \(R:S\times A\times\mathcal{G}\rightarrow\{0,1\}\). Ideally, we would want to maximize the agent's performance over the entire goal space \(\mathcal{G}\), i.e., to find the goal-conditioned policy \(\pi\) that maximizes the expected sum of internal rewards on all goals. However, these goals are not known in advance and have to be discovered by the agent through structured exploration. This means that this objective cannot be computed and used directly by the agent. Rather, it can be used _a posteriori_ by the experimenter as a measure to characterize what the agent discovered and learned.

**Social peer** Since \(\mathcal{G}\) can be quite large, it would be desirable to guide the exploration without forcing it.
We consider that the agent interacts with a social peer (\(\mathcal{SP}\)) that gives feedback on the agent's trajectories, i.e., the \(\mathcal{SP}\) gives a list of achieved goals at the end of an episode (this list may not be exhaustive and reflects a model of relevance from the perspective of the \(\mathcal{SP}\)). Formally, we define the social peer as \(\mathcal{SP}:\left(S\times A\right)^{T}\rightarrow\left(\mathcal{G}_{\mathcal{SP}}\right)^{m}\), which takes in a trajectory and outputs a set of \(m\) goals accomplished within this trajectory, where \(\mathcal{G}_{\mathcal{SP}}\subseteq\mathcal{G}\) are goals relevant to \(\mathcal{SP}\). Then, those accomplished goals can be added to the agent's discovered goals \(\mathcal{G}_{a}\). In principle, goal relabelling can be implemented in many ways, including leveraging pre-trained large language models. In this study, for simplicity, the \(\mathcal{SP}\) labels objects presented in trajectories as goals (see Section 2.2). In practice, we are maximizing the agent's performance to achieve the goals it has discovered:

\[\sum_{g\in\mathcal{G}_{a}}\mathbb{E}_{\tau\sim\pi(\cdot|g)}\Big{[}\sum_{(s_{t},a_{t})\in\tau}\gamma^{t}\,R(s_{t},a_{t},g)\Big{]} \tag{1}\]

while \(\mathcal{G}_{a}\) converges towards \(\mathcal{G}_{\mathcal{SP}}\) as trajectories get relabelled by \(\mathcal{SP}\). Here, \(\gamma\) denotes the discount factor.

### ScienceWorld: a text-based environment

ScienceWorld (Wang et al., 2022) is one of the text world frameworks (Jansen, 2022), coming with procedurally-generated household environments and an associated elementary-school science-related benchmark. Unlike many other interactive environments that facilitate RL research, in ScienceWorld the observation space \(\Omega\) and the action space \(\mathcal{A}\) are textual. Specifically, at every timestep, the **observation** consists of three channels (please refer to Figure 2 for an in-game example):

* obs: a raw observation describing the immediate effect of the agent's actions. We show an example in Figure 2, highlighted in green.
* look: the description of the agent's surroundings. It is composed of a textual rendering of the underlying object tree, with receptacle-content hierarchy. We show an example in Figure 2, highlighted in pink.
* inv: a description of the agent's inventory. We show an example in Figure 2, highlighted in blue.

Figure 2: A ScienceWorld trajectory illustrating an agent putting a glass cup full of water on the stove. For accessibility purposes, we omit repetitive text (the agent receives obs, look, and inv at every step, as described in Section 2.2). Italic and boldface denote _action_ and **relevant object**.

The (text-based) **action** space is combinatorial and extremely large. Actions are templated phrases built by composing predicates (from a list of 25 possible unary or binary predicates) with compatible attributes, leading to 200k possible actions at each timestep on average. To alleviate this issue, ScienceWorld provides a shorter list of valid actions \(\mathcal{A}_{t}\) at each timestep \(t\), which makes the action space choice-based. Nevertheless, the size of \(\mathcal{A}_{t}\) is much larger than in typical RL environments (on average 800 while in the kitchen, increasing to around 1500 as the agent gets more competent and creates new objects), and these actions are not guaranteed to have an effect on the state, which poses great challenges for RL agents to discover new experiences.
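To make the choice-based interface concrete, here is a minimal, self-contained sketch of one interaction step. The `TextEnv` stub below is ours and only mimics the shape of the interface; it is not the actual ScienceWorld API, whose class and method names differ.

```python
import random

class TextEnv:
    """Toy stand-in for a ScienceWorld-like text environment (illustrative only)."""

    def __init__(self):
        self.look = ("This room is called the kitchen. In it, you see: "
                     "a sink, a stove, a freezer, a table.")
        self.inv = "In your inventory, you see: an orange."
        self.obs = "You move to the kitchen."

    def observation(self):
        # The three textual channels described above: obs, look and inv.
        return {"obs": self.obs, "look": self.look, "inv": self.inv}

    def valid_actions(self):
        # In ScienceWorld this list contains hundreds of templated action strings.
        return ["open freezer", "activate sink", "look at stove", "eat orange"]

    def step(self, action):
        # Executing a valid action is not guaranteed to change the state.
        self.obs = f"You {action}."
        return self.obs

env = TextEnv()
channels = env.observation()                 # obs / look / inv channels
action = random.choice(env.valid_actions())  # choice-based action selection from A_t
print(action, "->", env.step(action))
```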
We use elements in room descriptions to represent **goals**. For instance, examples of valid goals for the trajectory shown in Figure 2 could be the agent (which is always true) or a substance called sodium chloride. This simplified goal representation facilitates re-labelling and building the goal-conditioned reward function. As outlined in Section 1, language-based autotelic agents require a reward function to be able to score trajectories against the original goal. Our uni-modal reward function operates in language space, which enables sub-string matching: a goal \(g\) is valid if it can be found verbatim in the look feedback provided by the environment. While conceptually straightforward, this goal representation is rather expressive. For instance, if the goal targets an immovable object that can only be found in a certain room, this amounts to a navigation goal; if the goal targets an object that is found in a closed receptacle, accomplishing the goal requires opening the receptacle; if the goal targets an object that does not exist in the environment, then the goal amounts to making this object, which can imply a long action sequence.

### A song of ice and fire

ScienceWorld tasks are hard exploration problems, as outlined above when considering the number of valid actions per step. In this work, we restrict ourselves to the kitchen to maintain manageable exploration, and we focus on a subset of ScienceWorld tasks: freezing and boiling water. We study the agent's progress through a self-organized curriculum of nested goals of varying difficulty, guiding the agent towards the most difficult ones. The entire tech tree and its dependency structure is presented in Figure 3. In fact, due to the non-linear design of ScienceWorld, some of the goals can be achieved without following the goal dependency structure; nevertheless, this structure is useful to guide the agent's exploration. In this work, we will in particular look at how achieving the first, easier goals can allow the agent to master the harder ones, e.g. how the agent can create its own curriculum for learning. Importantly, this paper is not about solving the ScienceWorld benchmark itself, but about understanding how multi-goal agents can explore their environment in the presence of large linguistic action spaces and nested goals of very different difficulty.

Figure 3: Goal hierarchy for goals in \(\mathcal{G}_{\mathcal{SP}}\). Goals are object descriptions in the environment (see main text). Light-colored arrows indicate the first goal is helpful for achieving the second one, dark-colored arrows indicate the first goal is necessary to achieve the second one. _Hard goals_ are the last goals in the hierarchy, inside the dashed area. For instance, if the goal does not specify that steam must be made on the stove, it can be obtained in some other way. The same goes for creating water; if the goal does not specify in which way this must be done, any container will work: either closing the drain once the sink is open, or pouring the contents of the table (the glass cup) into the sink.

## 3 Autotelic agents in ScienceWorld

### Base autotelic agent

Autotelic agents can be implemented with either on-policy or off-policy methods. We adopt the strongest-performing system on the original ScienceWorld benchmark, the deep reinforcement relevance network (DRRN) (He et al., 2016), as our internal goal-conditioned policy \(\pi(.|g)\). The DRRN is an off-policy deep RL agent equipped with a replay buffer.
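Before detailing the policy network, the sub-string goal check described above can be sketched in a few lines; the function name and the example goal strings are illustrative choices of ours.

```python
def goal_reward(look_text: str, goal: str) -> int:
    """Internal reward R: 1 iff the goal string appears verbatim in the look channel."""
    return 1 if goal in look_text else 0

look = ("This room is called the kitchen. In it, you see: "
        "a sink, which is turned on. In the sink is: a substance called water.")

print(goal_reward(look, "a substance called water"))  # 1: this goal is achieved
print(goal_reward(look, "a substance called ice"))    # 0: freezing has not happened yet
```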
At inference time, the three observation channels (i.e., obs, look and inv) as well as the goal \(g\) are tokenized using a pre-trained sentencepiece 3 tokenizer, they are encoded by a GRU (Cho et al., 2014) and the output representations are concatenated as a vector. In parallel, all valid actions in \(\mathcal{A}_{t}\) are tokenized and encoded with another GRU, to produce a list of valid action encodings. The goal-state encoding is concatenated to all valid action encodings and passed through a 1-layer MLP with ReLU activation to produce \(Q(s_{t},a)\) for all \(a\in\mathcal{A}_{t}\). The action is either sampled from the resulting distribution (at training time) or the argmax is taken (at evaluation time). Footnote 3: [https://github.com/google/sentenceepiece](https://github.com/google/sentenceepiece) **Goal sampling** At the beginning of every episode, a linguistic goal \(g\) is sampled from \(\mathcal{G}_{a}\), i.e. the set of goals experienced by the agent so far. In the basic case, we assume goal sampling is done uniformly. The agent conditions its policy on that goal towards achieving it. An episode terminates either when the goal is achieved or after \(T\) timesteps. Then, the social peer (\(\mathcal{SP}\)) relabels the trajectory with goals they find relevant, i.e. contained in \(\mathcal{G}_{\mathcal{SP}}\), that were effectively achieved during that particular trajectory. The relabelling process is a discrete, linguistic version of hindsight experience replay (HER) (Andrychowicz et al., 2017). The resulting relabelled trajectories, with their associated internal reward, are then pushed into the agent's replay buffer, and any goals discovered in this way are added to \(\mathcal{G}_{a}\). **Goal-modular replay buffer** We develop a trajectory-based, modular replay buffer. Specifically, we store each trajectory, paired with accomplished goals (labeled by \(\mathcal{SP}\)) in an individual slot. During experience replay (in standard deep Q-learning), we use a multi-step strategy to control the transition sampling. First, we sample a goal from the replay buffer using a certain distribution \(w(g)\) (uniform unless specified otherwise). Given the goal, we sample a trajectory which has 0.5 probability being a positive example to the goal. If the sampled trajectory is a positive example, we sample a transition from it, with a probability of 0.5 that the transition has a reward. In preliminary experiments, we observe that the above procedure (especially controlling the amount of reward the agent sees) improves sample efficiency. **Learning** To learn the goal-conditioned policy \(\pi(.|g)\), we minimize the temporal-difference (TD) loss over transitions in the replay buffer. Given a transition \(\tau_{t}=(s_{t},a_{t},s_{t+1},r_{t},\mathcal{A}_{t},\mathcal{A}_{t+1})\), the TD loss is given by: \[TD(\tau_{t})=l(Q(s_{t},a_{t}),(r_{t}+\gamma\max_{a^{\prime}e_{\mathcal{A}_{t+ 1}}}Q(s_{t+1},a^{\prime}))), \tag{2}\] where \(\gamma\) is the discount factor and \(r_{t}\) is the internal reward given by \(R(s_{t},a_{t},g)\) when \(\tau_{t}\) was first collected. \(Q(s_{t},a_{t})\) is the Q-value for taking action \(a_{t}\) in state \(s_{t}\) and is predicted by the DRRN. The function \(l\) is the smooth-L1 loss: \[l(x,y)=\begin{bmatrix}|x-y|-0.5&\text{if }|x-y|>1;\\ 0.5\left(x-y\right)^{2}&\text{otherwise.}\end{bmatrix} \tag{3}\] To ensure exploration at training time, we add an entropy penalty term which is computed over the Q function with respect to a given \(s_{t}\). 
The entropy term \(H\) is also normalized by \(\text{log}(|\mathcal{A}_{t}|)\) to account for varying numbers of valid actions across timesteps. Therefore, the final loss is: \[L(\tau_{t})=TD(\tau_{t})+H(s_{t},(a)_{a\in\mathcal{A}_{t}}). \tag{4}\] Note there is no separate target network as no particular instability or over-optimism was found in our preliminary experiments. ### Discovery from stepping-stone exploration In this section we present shortly the 2 configurations we use to study the impact of discovery from serendipity in this work: go-explore and goal-chain. Both these mechanisms are explicitly designed to allow the agent to overcome hard-exploration problems and to master nested sets of goals, where achieving the first one is a stepping stone towards mastering the second one. One of the aims of this work is to study this effect on our ScienceWorld goals. **Go-explore** This mechanism is very similar to the policy-based version of go-explore (Ecoffet et al., 2021). That is, after sampling a goal \(g\) and the policy rollout is terminated (either by completing the goal or completing \(T\) timesteps), an additional num_steps_exploration actions, set to 5 in what follows, are sampled uniformly from the set of valid actions at each timestep. **Goal-chain** This mechanism works in a similar way as go-explore but is more deliberate: after goal \(g\) is achieved or \(T\) timesteps have been achieved, with probability \(p\) (0.5 in what follows) another goal \(g^{i}\) is sampled and used to condition the policy. Both go-explore and goal-chain can be combined in a single agent. ## 4 Experimental results ScienceWorld is a challenging environment. Baseline agents are barely able to master the simplest tasks of the benchmark (Wang et al., 2022). The large action space and amount of irrelevant state changes the agent can elicit are obstacles to goal discovery, and text environments present challenges for optimization. To answer our questions, we define a set of configurations for agents that we study in what follows: * **base**: Our baseline agent, as described in section 3.1; * **go-explore**: The **base** agent equipped with go-explore; * **chain**: The **base** agent equipped with goal-chain; * **go-explore-chain**: The base agent equipped with go-explore and goal-chain; * **no-feedback**: A non-autotelic goal-conditioned agent that gets its goals uniformly from \(\mathcal{G}_{\mathcal{SP}}\), and without relabeling from \(\mathcal{SP}\): it only learns when stumbling upon a targeted goal by normal exploration. It is not using the goal-modular replay buffer (i.e., transition-based instead of trajectory-based). * **unconstrained**: A **base** agent where the \(\mathcal{SP}\) relabels all possible goals (\(\mathcal{G}_{\mathcal{SP}}=\mathcal{G}\)); * **uniform-transition**: A base agent, with weights \(w_{g}\) for replay goal-sampling proportional to the total number of transitions for this goal in the replay buffer; * **metacognitive** An autotelic agent using both go-explore and goal-chain, which samples its goals for an episode according to recorded intermediate competence of these goals; * **extrinsic-impossible**: A non-autotelic agent that gets its goals uniformly from \(\mathcal{G}_{\mathcal{SP}}\), which contains an additional 100 impossible _nonsense_ goals. We train our agent on 800k steps with an episode length of \(T=30\). 
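For clarity, the interplay between the two mechanisms of Section 3.2 and the hindsight relabelling step can be summarized with the following simplified sketch; `agent.rollout`, `env.valid_actions`, `env.step` and `social_peer.relabel` are hypothetical helpers standing in for the actual training loop.

```python
import random

P_CHAIN = 0.5        # goal-chain: probability of chaining one more goal
N_EXPLORE_STEPS = 5  # go-explore: random steps appended after the rollout

def collect_episode(agent, env, social_peer, known_goals,
                    use_goal_chain=True, use_go_explore=True):
    """One data-collection episode combining goal-chain and go-explore (sketch)."""
    goal = random.choice(known_goals)           # uniform goal sampling (base agent)
    trajectory = agent.rollout(env, goal)       # stops on success or after T steps

    if use_goal_chain and random.random() < P_CHAIN:
        next_goal = random.choice(known_goals)  # pursue a second goal from the end state
        trajectory += agent.rollout(env, next_goal)

    if use_go_explore:
        for _ in range(N_EXPLORE_STEPS):        # short burst of random valid actions
            action = random.choice(env.valid_actions())
            trajectory.append(env.step(action))

    achieved = social_peer.relabel(trajectory)  # hindsight relabelling with goals in G_SP
    known_goals.extend(g for g in achieved if g not in known_goals)
    return trajectory, achieved                 # pushed into the goal-modular replay buffer
```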
We use as evaluation metrics the aggregate scores on all our goals as well as evaluation on _hard goals_, that we define as goals where the agent needs to get some water first (see Figure 3). Table 1 presents the results. We notice the variance is very high in all but the **metacognitive** configuration: this is due to the compounding effects of goal discovery. An agent that stumbles on the easiest goals by chance early in training will be heavily advantaged in its goal-discover compared to an agent that only sees the goal later. What is the role of \(\mathcal{SP}\) relabels in autotelic learning? (\(\clubsuit\) vs \(\clubsuit\)) The dynamics of the relabelling process is of paramount importance for any learning to take place at all. In Table 1, third section, we present the evaluation scores for a set of experiments with respectively an absent or a talkative \(\mathcal{SP}\): in the **no-feedback** configuration the trajectories are simply input with the original goal that was targeted and the associated sparse reward; whereas for the **unconstrained** configuration, any goal that the agent accomplishes is given by the social peer as a relabel of the current trajectory (\(\mathcal{G}_{\mathcal{SP}}==\mathcal{G}\)). As in other configurations, goals are given if they are accomplished at any point in time. Since the goal space includes any possible descriptive changes on currently observed objects and that most actions result in such changes, the number of relabelled goals per episode in the **unconstrained** experiments is extraordinary (this agent discovers on the order of 50k goals). Both **no-feedback** and **unconstrained** configurations result in almost-null evaluation scores. In the **no-feedback** configuration, sparseness of reward is to blame. If we let a random agent explore the room for 800k timesteps, it will encounter goals in \(\mathcal{G}_{\mathcal{SP}}\) only a handful of times, and none of the hard ones (see Table 2.) For the **unconstrained** configuration, there are such an important number of discovered goals (e.g., containers inside other containers inside other containers, leading to a combinatorial explosion of possible goals). 
This means that trajectories leading to goals in \(G_{\mathcal{SP}}\) are drowned out in the set of other, non-relevant trajectories, and are only sampled a handful of times for replay; the agent's network almost never sees any \begin{table} \begin{tabular}{l|l||c|c|c} \hline \hline & Configuration Name & go-explore & goal-chain & eval score (_all_) & eval score (_hard goals_) \\ \hline \hline \multirow{4}{*}{\(\clubsuit\)} & **base** & \(\times\) & \(\times\) & \(71.89\pm 16.51\) & \(50.52\pm 47.52\) \\ & **go-explore** & ✓ & \(\times\) & \(\mathbf{80.17\pm 12.37}\) & \(\mathbf{59.12\pm 44.47}\) \\ & **chain** & \(\times\) & ✓ & \(63.60\pm 19.48\) & \(38.24\pm 45.01\) \\ & **go-explore-chain** & ✓ & ✓ & \(77.77\pm 8.81\) & \(55.48\pm 43.63\) \\ \hline \multirow{4}{*}{\(\clubsuit\)} & **no-feedback** & \(\times\) & \(\times\) & \(4.18\pm 3.28\) & \(0.00\pm 0.00\) \\ & **unconstrained** (stopped at 400k timesteps) & \(\times\) & \(\times\) & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) \\ \hline \multirow{2}{*}{\(\clubsuit\)} & **uniform-transition** & \(\times\) & \(\times\) & \(50.29\pm 8.80\) & \(17.20\pm 36.44\) \\ \hline \multirow{2}{*}{\(\clubsuit\)} & **metacognitive** & ✓ & ✓ & \(\mathbf{87.31\pm 4.97}\) & \(\mathbf{76.36\pm 36.52}\) \\ & **extrinsic-impossible** & \(\times\) & \(\times\) & \(67.71\pm 16.18\) & \(42.00\pm 45.17\) \\ \hline \hline \end{tabular} \end{table} Table 1: Main results of our agents on ScienceWorld. We report in the leftmost column the configuration name, in the next two if it uses go-explore or goal-chain. The configurations are clustered by which question they answer. We then report aggregate eval scores on all our goals and aggregate eval scores on our hard goals (defined in Figure 3). All eval scores are computed over the last 10 evals for stability, and are averaged across 10 random seeds. See main text for description of the configurations and commentary. reward on them, to tell nothing of the optimization process. Lessons learned: for a multi-task agent to learn correctly in this setting, it needs to have relevant feedback from its social peer, i.e. feedback that is not too descriptive and that is relevant to the goals the social peer wants to instill in the agent. How to correctly prioritize hard goals in replay for an agent to be able to learn them? (\(\clubsuit\) vs \(\clubsuit\)) Another important feature of off-policy multi-goal text agents is their ability to learn a distribution of goals of varying difficulty from their replay buffer. Because the data is collected online, the replay buffer will contain many more exemplars of trajectories for the easy goals compared to the hard goals. With a vanilla replay buffer with uniform replay probabilities over transitions, the transitions corresponding to a difficult goal will hardly get replayed, drowned in the abundance of transitions from easier goals. The ratio of easy-goal transitions to hard-goal transitions only gets worse as the agent achieves mastery of the easy goals and more such transitions fill the buffer. This motivates the use of the modular goal buffer described in section 3.1. To empirically validate our choice, we compare the modular buffer with one mimicking the functioning of a basic replay buffer: to do so, instead of sampling goals to replay uniformly, we sample goals with a weight proportional to the number of transitions in all trajectories corresponding to this goal (the **uniform-transition** configuration). 
This configuration achieves lower performance and plateaus sooner compared to the **base** configuration which serves as our baseline. Additionally, we investigate whether other replay distribution over goals are important for learning. We investigate difficulty-based sampling (the weight of a goal is given by agent competence over the last 50 attempts of the goal). We also investigate the intermediate difficulty configuration, where the weight given to a goal in goal-sampling \(w_{g}\) depends on empirical competence \(c\) on this goal using the following formula: \[w_{g}=f_{c}(c)=\alpha\text{ exp}\bigg{(}\frac{(c-0.5)^{2}}{2\sigma^{2}}\bigg{)}+\beta, \tag{5}\] \(\alpha\) and \(\beta\) are set to \(1.0\) and \(0.2\) respectively. We provide the results in the first row of the Table 1. We hardly see any difference between these different goal sampling configurations. Lessons learned: in a multitask agent with tasks of varying difficulty, where exemplars for these tasks are present at very different rates in the replay buffer, it is important to have a replay mechanism that samples often enough transitions for rare goals. What is the role of goal distributions when sampling goals for exploration? (\(\clubsuit\) vs \(\clubsuit\)) We introduce the **metacognitive** agent configuration (so-called because it uses knowledge of its own competence to target goals): in this configuration the **base** agent is equipped with go-explore, goal-chain and samples its goals based on intermediate competence (as defined in Equation 5). The agent performs significantly better than all our other configurations, and also exhibits very low variance (as much as 3 times as low as other agents). The difference is even more apparent if we look at final performance on _hard goals_ compared with our **base** configuration (see Figure 1, second column). For an autotelic agent, focusing on goals on which it experiences intermediate difficulty allows it to target goals on which there is good learning opportunity, as goals that are too easy are already mastered and benefit less from further exploration, whereas goals that are too hard are still unreachable for the agent. We finally describe experiments highlighting the interplay of hindsight relabelling with goal sampling. In the **extrinsic-impossible** configuration, the agent is given the list of 14 usual goals of interest plus an additional 100 _nonsense_ goals, consisting of the phrase a substance called followed by various made-up words. We see that, contrary to intuition, the extrinsic-impossible configuration works rather well: the final evaluation score is similar (different with no significance) from the base configuration. This highlights a very important property of goal-exploration processes: for non-trivial exploration, a very good baseline is having random goal sampling that pushes the agent to have diverse behavior. As long as the agent discovers meaningful states in the environment and the \(\mathcal{SP}\)'s behavior is helpful, diverse goal-conditioned behavior can be learned; the performance remains lower than in autotelic configurations nevertheless. We see here that the dynamics of goal sampling are also important in the agent's exploration, and ultimately, learning. \begin{table} \begin{tabular}{l|c} \hline \hline Goal & \#occurences \\ \hline \hline _a freezer. 
The freezer door is open._ & 4975 \\ _In the freezer is: nothing._ & 4118 \\ _a sink, which is turned on._ & 473 \\ _In the sink is: nothing._ & 68 \\ \hline \end{tabular} \end{table} Table 2: Occurences of goals for a random agent run for 800k timesteps with environment reset every 30 timesteps, similar to the interaction setup of our **base** configuration. We record goals at each timestep as done by \(\mathcal{SP}\). All omitted goals have 0 occurrences over the whole random run. An agent that samples goals of intermediate competence creates its own curriculum where goals that are stepping stones are targeted first and further exploration proceeds from then on. On the other hand, an agent can be given unfeasible goals as long as they lead to diverse enough behavior. ## 5 Related work Autotelic agents, goal-exploration processes, novelty searchAutotelic agents were born from the study of intrinsic motivation and curiosity in humans (Oudeyer and Kaplan, 2007) and the application of these models in developmental robotics at first (Baranes and Oudeyer, 2013) and machine learning more recently (Forestier et al., 2022; Colas et al., 2019). They are very close conceptually to other goal-exploration processes such as Go-explore (Ecoffet et al., 2021). The latter was developed to tackle hard-exploration challenges and stems from insights from novelty search (Lehman and Stanley, 2011): exploration in environments with sparse or deceptive rewards can be driven by the search for novelty alone. The ability of autotelic agents to self-organize a curriculum (Elman, 1993) of goals for training is a form of automatic curriculum learning (Portelas et al., 2021) and has been studied by adversarial goal generation approaches (Florensa et al., 2018; Campero et al., 2021). Language-conditioned agents, language for goal-explorationBuilding language-instructable agents has been one of the aims of AI research since its inception and is still a very active area of research today in machine learning (Anderson et al., 2018; Luketina et al., 2019) and robotics (Tellex et al., 2020); notable recent breakthroughs were achieved through use of large-scale pre-trained foundation models for planning (Ahn et al., 2022; Huang et al., 2022) and multi-modal grounding (Fan et al., 2022; Jiang et al., 2022). Language has been found to be beneficial for goal-exploration as well, by enabling abstraction (Mu et al., 2022; Tam et al., 2022), combination of different abstraction levels (Mirchandani et al., 2021) and goal imagination (Colas et al., 2022) supported by systematic generalization (Bahdanau et al., 2019). Go-explore has also been studied in the context of text environments (Madotto et al., 2021); albeit in very simple text environments with comparatively few valid actions compared to ScienceWorld and not in a multi-goal setting, as well as having distinct exploration and policy learning phases. Interactive text environmentsText games are of particular importance to research at the intersection of RL and NLP, and thus for the study of language-informed agents. (Cote et al., 2019) introduced TextWorld, the first such text environment, followed by IF environments (Hausknecht et al., 2020). These tasks are notoriously difficult for RL agents due to the large action space and abstract state space. Methods for exploration have been proposed in these contexts such as reducing the action space with LM action generation (Yao et al., 2020) or using novelty bonuses to counter deceptive rewards (Ammanabrolu et al., 2020). 
These works however did not investigate multigoal contexts (IF games being quite linear in nature) and the necessity to balance tasks of varying difficulty. ScienceWorld (Wang et al., 2022) was explicitly introduced to investigate language model's abilities to act as agents in an interactive environment, but it also features more complexity and openness than other procedural text games and is thus a perfect testbed for language-based autotelic agents. ## 6 Conclusion and further work In this work, we have presented a breakdown of the architecture of an autotelic agent, studied on a hard to explore part of the ScienceWorld text environment. Autotelic RL is a framework for implementing autonomous, open-ended, multi-task agents, and we have focused on the necessary internal components for these agents to perform efficient exploration and task learning. In particular, we have highlighted the need for a replay buffer that over-samples rare tasks and a social peer that provides appropriate interaction. This interaction comes in the form of relevant feedback of the agent's behavior but does not necessarily imply, in the case of the social peer directly giving goals to the agent, that the goals can be feasible: only that they lead to interesting interactions with the environment. The agent can shoot for the moon, all that matters is that it goes on to do something interesting and gets relevant feedback. Additionally, letting the agent sample and chain goals of intermediate competence for itself leads increased mastery of the hardest goals in ScienceWorld. Overall, we are excited by the challenges and opportunities posed by textual autotelic agents. We identify some important directions for future work. First, we only consider one environment variation; distributions of environments could be considered, and generalization could be studied: this can be challenging for current text agents. Second, more advanced forms of automatic curriculum setting could be implemented, such as ones using learning progress to sample goals (Colas et al., 2019). Third, goal sampling in this work has been limited to be taken from the list of achieved goals; a truly open-ended autotelic agent should be able to create its own novel goals based on previous achievements. Last but not least, it is worth exploring to integrate a pre-trained large language model (Brown et al., 2020) into various parts of the pipeline, such that the \(\mathcal{SP}\) can alleviate the constraints of string-matching, and being able to imagine relevant but unseen goals, by leveraging commonsense knowledge from the language model. ## Acknowledgements Experiments presented in this paper were carried out using the HPC resources of IDRIS under the allocation 2022-[A0131011996] made by GENCI.
2301.03365
Localized Bounded Below Approximate Schauder Frames are Finite Unions of Approximate Riesz Sequences
Based on the truth of Feichtinger conjecture by Marcus, Spielman and Srivastava \textit{[Ann. of Math. (2), 2015]} and from the localized version by Gr\"{o}chenig \textit{[Adv. Comput. Math., 2003]}, we introduce the notion of localization of approximate Schauder frames (ASFs) and approximate Riesz sequences (ARSs). We show that localized bounded below ASFs are finite unions of ARSs.
K. Mahesh Krishna
2023-01-01T05:14:24Z
http://arxiv.org/abs/2301.03365v1
Localized bounded below approximate Schauder frames are finite unions of approximate Riesz sequences

K. Mahesh Krishna Post Doctoral Fellow Statistics and Mathematics Unit Indian Statistical Institute, Bangalore Centre Karnataka 560 059, India Email: [email protected] Date: November 6, 2021

**Abstract**: Based on the truth of the Feichtinger conjecture by Marcus, Spielman and Srivastava _[Ann. of Math. (2), 2015]_ and on the localized version by Grochenig _[Adv. Comput. Math., 2003]_, we introduce the notion of localization of approximate Schauder frames (ASFs) and approximate Riesz sequences (ARSs). We show that localized bounded below ASFs are finite unions of ARSs.

**Keywords**: Feichtinger conjecture, Frame, Riesz sequence, Localization.

**Mathematics Subject Classification (2020)**: 42C15, 46A45, 46B45.

## 1. Introduction

In the early years of the 21st century, Prof. Feichtinger formulated the following conjecture based on his extensive work on Gabor/Weyl-Heisenberg frames (see [13] for the history and [4, 14, 16, 17, 18, 19, 22, 25, 26, 27, 28, 36] for the general theory).

**Conjecture 1.1**.: [8] **(Feichtinger Conjecture/Marcus-Spielman-Srivastava Theorem)** _Let \(\{\tau_{n}\}_{n}\) be a frame for a Hilbert space \(\mathcal{H}\) such that_

\[0<\inf_{n\in\mathbb{N}}\|\tau_{n}\|.\]

_Then \(\{\tau_{n}\}_{n}\) can be partitioned into a finite union of Riesz sequences for \(\mathcal{H}\)._

The first breakthrough supporting Conjecture 1.1 occurred when Grochenig proved it for intrinsically localized frames [23]. Shortly afterwards, it was verified for certain classes of \(\ell^{1}\)-self-localized frames, wavelet frames, Gabor frames, frames of translates, frames formed by reproducing kernels, and exponential frames/frames of exponentials [2, 3, 6, 29, 30, 35, 39]. Conjecture 1.1 received great attention after the establishment of its equivalence with the Kadison-Singer conjecture [9, 10, 12]. Finally, the Feichtinger conjecture was solved fully by Marcus, Spielman, and Srivastava in 2013 through their resolution of Weaver's conjecture [5, 33, 34, 37, 38]. In this paper, we formulate a Banach space version of Conjecture 1.1 and prove it for bounded below intrinsically localized ASFs (Theorem 2.7).

## 2. Localized bounded below ASFs are finite unions of ARSs

We consider the following most general notion of approximate Schauder frames. In the entire paper, \(\mathcal{X}\) is a separable Banach space and \(\mathcal{X}^{*}\) is its dual.

**Definition 2.1**.: [7, 11, 21] _Let \(\{\tau_{n}\}_{n}\) be a collection in \(\mathcal{X}\) and \(\{f_{n}\}_{n}\) be a collection in \(\mathcal{X}^{*}\). The pair \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) is said to be an **approximate Schauder frame** (we write ASF) for \(\mathcal{X}\) if the **frame operator**_

\[S_{f,\tau}:\mathcal{X}\ni x\mapsto S_{f,\tau}x\coloneqq\sum_{n=1}^{\infty}f_{n}(x)\tau_{n}\in\mathcal{X}\]

_is a well-defined bounded linear invertible operator._

We use the following notion of 'bounded below' for ASFs.

**Definition 2.2**.: _An ASF \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) for \(\mathcal{X}\) is said to be **bounded below** if_

\[\inf_{n\in\mathbb{N}}|f_{n}(\tau_{n})|>0.\]

Motivated by the definition of localization of frames [20, 24], we introduce the following notion.
**Definition 2.3**.: _An ASF \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) for \(\mathcal{X}\) is said to be **intrinsically/self localized** if there exist \(s>1\) and \(A>0\) such that_

\[|f_{n}(\tau_{m})|\leq\frac{A}{(1+|n-m|)^{s}},\quad\forall n,m\in\mathbb{N}.\]

Two notions of Riesz sequences for Banach spaces exist in the literature, see [1, 15] and [32]. Here we define another.

**Definition 2.4**.: _An ASF \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) for \(\mathcal{X}\) is said to be an **approximate Riesz sequence** (we write ARS) if there exists a finite partition \(Q_{1},\ldots,Q_{N}\) of \(\mathbb{N}\) such that_

\[\mathbb{N}=\bigcup_{j=1}^{N}Q_{j}\]

_and for each \(1\leq j\leq N\),_

\[\inf_{n\in Q_{j}}\left(|f_{n}(\tau_{n})|-\sum_{m\in Q_{j},m\neq n}|f_{n}(\tau_{m})|\right)>0.\]

Note that for Hilbert spaces, if \(\{f_{n}\}_{n}\) is determined by \(\{\tau_{n}\}_{n}\) (Riesz representation), then Definition 2.4 is equivalent (due to positivity) to the definition of Riesz sequence (see [23]). We now formulate the following conjecture (some others are formulated in [31]).

**Conjecture 2.5**.: _Every bounded below ASF can be partitioned into a finite union of ARSs._

We now prove Conjecture 2.5 for intrinsically localized ASFs with the help of the following result.

**Theorem 2.6**.: [23]

* _For every_ \(s>1\)_,_ \[D_{s}\coloneqq\sup_{x\in\mathbb{R}}\sum_{n=1}^{\infty}\frac{1}{(1+|n-x|)^{s}}<\infty.\]
* _For every_ \(s>1\)_, there exists a_ \(C_{s}>0\) _(which does not depend on_ \(\delta\)_) such that_ \[\sup_{m\in\mathbb{N}}\sum_{n\in\mathbb{N},n\neq m}\frac{1}{(1+|n-m|)^{s}}\leq\frac{C_{s}}{\delta^{s}},\] _whenever_ \[\inf_{n,m\in\mathbb{N},n\neq m}|n-m|\geq\delta.\]

**Theorem 2.7**.: _Conjecture 2.5 holds for intrinsically localized bounded below ASFs._

Proof.: Our proof is strongly motivated by [23]. Let \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) be an intrinsically localized bounded below ASF for \(\mathcal{X}\). Define

\[C\coloneqq\inf_{n\in\mathbb{N}}|f_{n}(\tau_{n})|>0.\]

Since \((\{f_{n}\}_{n},\{\tau_{n}\}_{n})\) is intrinsically localized, there exist \(s>1\) and \(A>0\) such that

\[|f_{n}(\tau_{m})|\leq\frac{A}{(1+|n-m|)^{s}},\quad\forall n,m\in\mathbb{N}.\]

Let \(C_{s}\) be the constant as in Theorem 2.6. Choose a natural number \(M\) such that

\[\frac{AC_{s}}{M^{s}}\leq\frac{C}{2}.\]

We now partition \(\mathbb{N}\) into \(Q_{1},\ldots,Q_{N}\) such that

\[\mathbb{N}\coloneqq\bigcup_{j=1}^{N}Q_{j}\]

and for each \(1\leq j\leq N\),

\[\inf_{n,m\in Q_{j},n\neq m}|n-m|\geq M.\]

(Note that there are infinitely many partitions of \(\mathbb{N}\) of this type.) Let \(1\leq j\leq N\). Then using (ii) in Theorem 2.6,

\[\sup_{n\in Q_{j}}\sum_{m\in Q_{j},m\neq n}|f_{n}(\tau_{m})|\leq A\sup_{n\in Q_{j}}\sum_{m\in Q_{j},m\neq n}\frac{1}{(1+|n-m|)^{s}}\leq A\frac{C_{s}}{M^{s}}\leq\frac{C}{2}.\]

Therefore for each fixed \(1\leq j\leq N\),

\[\inf_{n\in Q_{j}}\left(|f_{n}(\tau_{n})|-\sum_{m\in Q_{j},m\neq n}|f_{n}(\tau_{m})|\right)\geq\inf_{n\in Q_{j}}|f_{n}(\tau_{n})|-\sup_{n\in Q_{j}}\sum_{m\in Q_{j},m\neq n}|f_{n}(\tau_{m})|\geq C-\frac{C}{2}=\frac{C}{2}>0.\]

Since \(j\) was arbitrary, we get the theorem.
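As an illustration only, the following small numerical experiment mirrors the partition argument above: it takes the extreme case \(|f_{n}(\tau_{m})|=A/(1+|n-m|)^{s}\) for \(n\neq m\) and \(|f_{n}(\tau_{n})|=C\), splits a truncated index set into the residue classes modulo \(M\) (which have separation \(M\)), and checks that the quantity in Definition 2.4 remains positive. The constants below are arbitrary choices of ours.

```python
# Worst-case model: |f_n(tau_m)| = A / (1 + |n - m|)**s off the diagonal, |f_n(tau_n)| = C.
A, C, s = 1.0, 1.0, 2.0   # localization constants and diagonal lower bound
N = 2000                  # truncation of the index set {0, ..., N-1}
M = 4                     # each class Q_j = {j, j + M, j + 2M, ...} has separation M

def off_diagonal_row_sum(n, Q):
    return sum(A / (1 + abs(n - m)) ** s for m in Q if m != n)

classes = [list(range(j, N, M)) for j in range(M)]
worst = min(C - max(off_diagonal_row_sum(n, Q) for n in Q) for Q in classes)
# A positive value means each residue class satisfies the condition of Definition 2.4.
print(f"min_j inf_n ( |f_n(t_n)| - sum |f_n(t_m)| ) = {worst:.3f}")
```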
2307.01824
Multi-Channel Feature Extraction for Virtual Histological Staining of Photon Absorption Remote Sensing Images
Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN (MC-GAN) model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. Applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features, as it reveals a substantial visual and quantitative concurrence between the virtually stained and the gold standard chemically stained hematoxylin and eosin (H&E) images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications.
Marian Boktor, James E. D. Tweel, Benjamin R. Ecclestone, Jennifer Ai Ye, Paul Fieguth, Parsin Haji Reza
2023-07-04T16:55:59Z
http://arxiv.org/abs/2307.01824v1
Multi-Channel Feature Extraction for Virtual Histological Staining of Photon Absorption Remote Sensing Images ###### Abstract Accurate and fast histological staining is crucial in histopathology, impacting diagnostic precision and reliability. Traditional staining methods are time-consuming and subjective, causing delays in diagnosis. Digital pathology plays a vital role in advancing and optimizing histology processes to improve efficiency and reduce turnaround times. This study introduces a novel deep learning-based framework for virtual histological staining using photon absorption remote sensing (PARS) images. By extracting features from PARS time-resolved signals using a variant of the K-means method, valuable multi-modal information is captured. The proposed multi-channel cycleGAN (MC-GAN) model expands on the traditional cycleGAN framework, allowing the inclusion of additional features. Experimental results reveal that specific combinations of features outperform the conventional channels by improving the labeling of tissue structures prior to model training. Applied to human skin and mouse brain tissue, the results underscore the significance of choosing the optimal combination of features, as it reveals a substantial visual and quantitative concurrence between the virtually stained and the gold standard chemically stained hematoxylin and eosin (H&E) images, surpassing the performance of other feature combinations. Accurate virtual staining is valuable for reliable diagnostic information, aiding pathologists in disease classification, grading, and treatment planning. This study aims to advance label-free histological imaging and opens doors for intraoperative microscopy applications. ## I Introduction Histopathology is a branch of pathology that utilizes microscopic examination of chemically stained thin tissue sections to investigate and diagnose diseases by analyzing cellular and structural abnormalities. Traditional histology has several limitations including time-consuming procedures, tissue alteration, limited staining options, and variability [1, 2]. Consequently, there is a growing need for imaging techniques that can directly visualize cells and tissues in their native states without the need for chemical labeling. As such, label-free microscopy techniques have gained significant attention in histopathology as they offer non-invasive methods to visualize cells and tissues [3, 4, 5, 6, 7] through leveraging the intrinsic properties of biological samples. Label-free techniques acquire informative contrast from unstained samples while preserving their integrity for subsequent analysis. These label-free contrasts can be extracted and transformed into familiar formats, such as conventional histopathology stains, through the process of virtual staining. Label-free contrasts from various optical microscopes have been coupled with deep learning algorithms to intelligently present the rich information in meaningful ways. Examples of such microscopes include quantitative phase imaging [8], reflectance confocal microscopy [3], photoacoustic microscopy [9, 10], and autofluorescence microscopy [7]. The specificity level to biomolecules varies for each microscope, and as the microscope becomes more specific towards biomolecules of interest, the dependence on the deep learning model for inference decreases. For instance, autofluorescence shows promise in capturing extranuclear elements, however, important structures like cell nuclei do not exhibit measurable autofluorescence [11]. 
The lack of direct measurement of nuclear contrast may cause deep learning to estimate the coloring of nuclear structures. Autofluorescence is a form of radiative relaxation and re-emittance of lower energy photons following the absorption of light. However, a portion of the absorbed energy can also undergo relaxation via temperature and pressure. These are termed non-radiative relaxation pathways and they provide additional contrast and hold significant value in capturing critical structures, including cell nuclei (DNA) [12]. One modality, Photon Absorption Remote Sensing (PARS) microscopy, previously known as Total-Absorption Photoacoustic Remote Sensing [12, 13], can simultaneously measure the radiative (e.g. autofluorescence) and non-radiative (temperature and pressure) relaxation of biomolecules. By leveraging the unique absorption spectra of biomolecules like hemoglobin, DNA, and lipids, PARS enables direct selective imaging without the need for extensive systematic modifications [14, 15, 16]. PARS has emerged as a powerful method that utilizes a non-contact, all-optical architecture to visualize the intrinsic absorption contrast of biological tissue structures. In PARS, a pulse of light is used to excite the sample and induce both radiative and non- radiative relaxations processes which are then measured [13]. The optical emissions generated from the radiative relaxation can be directly measured while the non- radiative relaxation results in nanosecond-scale variations in the specimen's optical properties. These non-radiative modulations can be measured as forward or backscattered intensity fluctuations using a secondary co-focused detection laser [13]. Furthermore, optical scattering contrast is also measured using the pre-excitation scattering intensity of the detection laser. All three contrasts are simultaneously measured from a single excitation event and are therefore intrinsically co-registered. Previously, the PARS microscope has demonstrated its efficacy in generating virtual H&E stains using a pix2pix image-to-image translation model [10]. However, pix2pix demands accurate alignment of PARS and true H&E images at the pixel level. Pixel-level registration is not always achievable due to alterations in the tissue during the staining process. Poorly aligned pairs weaken the colorization performance and lead to distorted or blurred translations [17]. To avoid such problem, here we choose to employ a more flexible image-to-image translation model known as cycleGAN, short for Cycle-Consistent Generative Adversarial Network [18]. While cycleGAN can be trained with unpaired data, in this study, registered pairs are used as they help enhance the colorization results achieved by cycleGAN [17], primarily through mitigating hallucination artifacts. Previous PARS virtual staining efforts used both the non-radiative and radiative channels from a single 266 nm UV excitation wavelength [10, 17, 19]. While these combined contrasts are highly analogous to the nuclear and connective tissue contrasts highlighted with H&E, some structures like red blood cells are not strongly captured with UV excitation alone. An additional excitation wavelength can be used to target hemoglobin or red blood cells to provide better separation between them and comparably sized features, such as nuclei. Providing additional information to the model may help it understand the separability between different structures and enhance its ability to learn the statistical relationship between PARS and H&E domains. 
Accordingly, an additional 532 nm excitation laser is employed in this study to help acquire red blood cell contrast, resulting in the non-radiative time-domain (TD) signals exhibiting two excitation peaks. The first peak, at a wavelength of 266 nm, is used to specifically target DNA/RNA, while the second peak, at 532 nm, primarily targets hemoglobin (red blood cells). Moreover, recently Pellegrino et al. [20, 21] showed that multiple features may be extracted from a single peak time-resolved non-radiative relaxation signal. These features are extracted based on signal shape and may relate to more specific tissue structures [20, 21]. Hence, we choose to expand the number of input channels to the cycleGAN model through intelligently extracting features from the non-radiative TD signals to improve the distinction between different structures. To extract such features, we utilize the K-means method presented in [20]. These features are subsequently employed to reconstruct feature images, which together with the conventional non-radiative and radiative images form an array of image components. Such components can serve as inputs for the virtual staining cycleGAN model in different arrangements. The novelty of this work resides in the use of such features to train a cycleGAN model. CycleGAN conventionally allows single- or three-channel input data. To effectively utilize the extracted features in virtual staining, this study introduces multi-channel cycleGAN (MC-GAN) that extends the existing cycleGAN model used in previous work [17] to support more than three channels. This approach allows for the extraction of multiple features, providing a better understanding of the potential of the TD signals in improving virtual staining. The key contribution of this work is to enhance the performance of the colorization model by incorporating multiple features for labeling structures. Another contribution of this work is the introduction of a comprehensive pipeline that enables the extraction and selection of the most effective features to enhance the process of colorization. ## II Methods ### _Sample Preparation and Data Acquisition_ The study utilized a dataset obtained from thin sections of formalin fixed paraffin embedded (FFPE) human skin tissues and mouse brain tissues. The human tissue samples are provided by the Alberta Precision Laboratories in Calgary, Alberta, Canada. The collection of these samples adhered to approved protocols established with the Research Ethics Board of Alberta (Protocol ID: HREBA.CC-18-0277) and the University of Waterloo Health Research Ethics Committee (Protocol ID: 40275). The requirement for patient consent was waived by the ethics committee, as samples are anonymized to remove all patient information, and tissues are archival samples not required for patient diagnostics. All experiments involving human tissues are conducted in compliance with the relevant guidelines and regulations of the government of Canada, such as "Ethical Conduct for Research Involving Humans (TCPS 2)". The mouse brain samples were prepared at the National Institute of Health in Bethesda, Maryland, United States. These samples adhered to approved protocols under the University of Waterloo Health Research Ethics Committee (Protocol ID: 44595). The preparation of unstained tissue sections involves several steps. Initially, tissue is resected and immediately fixed in a 10% neutral buffered fixative for up to 48 hours. 
Next, the sample is submerged in a series of alcohol exchanges to dehydrate the tissue. Dehydration is followed by a series of rinses in xylene, a tissue clearing agent which removes any fat residues. The dehydrated and cleared tissue is then soaked in molten paraffin to allow for the complete infiltration of paraffin into the tissue. Finally, tissues are embedded in paraffin and allowed to solidify at room temperature, forming an FFPE tissue block. Tissue sections are prepared by using a microtome to slice thin sections of about 3-5 \(\upmu\)m from the FFPE block. Thin sections are then transferred to a glass microscope slide by means of a water bath and allowed to dry. Slides are heated to 60 \({}^{\circ}\)C for 60 minutes in a laboratory to evaporate excess paraffin. After this process, the thin sections are ready for imaging using the PARS system. Once the PARS dataset is acquired, the thin tissue slides are stained using H&E. The stained slides are completed with a mounting medium and a coverslip. The stained sections are imaged using a transmission mode brightfield microscope, thereby generating the corresponding H&E ground truth dataset. To create datasets for training the virtual staining GAN model, a collection of PARS images and corresponding H&E images is prepared as follows. Thin, unstained tissue sections are scanned using the PARS system, as outlined in Section II.B. After imaging, specimens are stained with H&E, then scanned using a brightfield microscope. This results in matched pairs of PARS and H&E images from the same samples. In Figure 1, a comparison between the conventional histochemical staining process and our proposed virtual staining process is presented. For training the virtual staining model, samples first undergo the PARS imaging pathway, then the standard staining procedure. This is necessary for generating ground truth data. However, this step is not required once the model has been trained.

Figure 1: Virtual Staining of PARS images via deep learning. The top pipeline shows the standard workflow to generate images of histochemical stains. The bottom pipeline shows the steps for virtual staining. The same tissue section is used in the two workflows for the deep model training and performance analysis.

### PARS Imaging

While this paper does not focus on the PARS system design, a brief overview of the system is important for understanding the data collection process. For a more detailed exploration of the system architecture and image formation process, refer to "Automated Whole Slide Imaging for Label-Free Histology using Photon Absorption Remote Sensing Microscopy" by Tweel _et al._[19]. Briefly, the experimental setup is depicted in Figure 2. This architecture features two excitation sources: a 266 nm UV laser (Wedge XF, RPMC) and a 532 nm visible laser (Wedge XF, RPMC). The detection source is a 405 nm OBIS-LS laser (OBIS LS 405, Coherent). The excitation and detection sources are combined using a dichroic mirror and focused onto the sample using a 0.42 NA UV objective lens (NPAL-50-UV-YSTF, OptoSigma). This configuration provides a maximum spatial resolution of approximately 400 nm. The detection light, containing the optical scattering and non-radiative relaxation, is collected after transmission through the sample using a second objective lens (100X Mitutoyo Plan Apo, Mitutoyo). The same lens also collects the radiative emissions from the sample. To separate the non-radiative detection light from the radiative relaxation, a spectral filter (NF405-13, Thorlabs) is employed.
Then, the radiative emission amplitudes are directly measured using a photodiode (APD130A2, Thorlabs). Concurrently, the non-radiative detection light is directed to another photodiode (APD130A2, Thorlabs) where the optical scattering and the absorption-induced intensity modulations are captured. PARS images are generated through the following process. The sample is scanned over the stationary objective lens using mechanical stages. The 266 nm and 532 nm excitation sources are pulsed continuously at a rate of 50 kHz, with energies of \(\sim\)150-200 pJ and \(\sim\)2.5 nJ for the 266 nm and 532 nm sources, respectively. Concurrently, the continuous wave detection laser operates at \(\sim\)3-5 \(\mu\)W during imaging. The 532 nm pulses are synchronized to occur \(\sim\)500 ns after the 266 nm pulses. The stage motion is then tuned to ensure the 266 nm excitation event occurs every 250 nm. During each excitation event, the optical scattering, and radiative and non-radiative relaxation signals are collected from the corresponding photodiodes, for each excitation wavelength. Additionally, a position signal is recorded from the scanning stages. The radiative signals are condensed into a single feature by extracting the amplitude of the measured signal. The optical scattering is extracted by averaging the transmitted detection prior to excitation. The post-excitation time-resolved non-radiative relaxation modulations are stored in their entirety for further processing. To generate a baseline model for comparisons, the non-radiative signal reconstruction method used in previous embodiments is employed [10]. This method involves signal extraction by integrating the post-excitation modulation energy of the non-radiative signals. The baseline is used to contrast the efficacy of intelligently extracting features (explained in Section II.D) from the non-radiative signals for enhancing the virtual staining process.

Figure 2: Simplified PARS histology optical architecture. Component labels are defined as follows: mirror, \(M\); dichroic mirror, _DM_; variable beam expander, _VBE_; collimator, _Col_; condenser lens, _Cond_.; spectral filter, _SF_; beam sampler, _BS_; photodiode, PD; objective lens, OL; harmonic beam splitter, _HBS_; beam trap, _BT_.

### _Image Registration and Preprocessing_

In this study, we choose to use registered PARS and H&E pairs for training a cycleGAN for virtual staining, as discussed in Section I. However, the PARS and H&E images are not inherently co-registered. This is attributed to the H&E staining procedure and the two different acquisition processes. Subsequently, additional steps, including field-of-view matching and registration, are required to align the PARS and H&E images properly for model training and performance analysis. The preprocessing and registration process follows the workflow previously described by Boktor et al. [10]. This includes extracting small fields from the PARS and H&E images, then coarsely matching them for one-to-one registration. The control point registration tool from the MATLAB Image Processing Toolbox is used for registration, with PARS as the reference images and H&E as the registered images. Control points are manually selected and refined to minimize distortions. A non-rigid geometric transformation [22] is then fitted between the two images, and the transformation is applied to the H&E images, resulting in co-registered pairs of PARS and H&E images. Following registration, PARS images are preprocessed as follows.
Each image is normalized, then the contrast is enhanced by saturating the highest and lowest 1% of pixel values. Finally, a color reversal is performed to match the PARS images with the grayscale ground truth images. The same preprocessing is applied to all datasets used in this study. ### _Time-Domain Feature Extraction_ To extract material-specific information from non-radiative TD signals, a method is needed to identify constituent time-domain features that accurately represent the underlying tissue target. The method, proposed by Pellegrino et al. [20], is utilized here which features K-means clustering with a modified approach to compute cluster centroids. To generate feature images from the non-radiative TD signals, two steps are involved feature learning and feature extraction. The intelligent clustering aims to identify \(K\) characteristic shapes in the signals, described as a set of \(K\) centroids, \(\mathcal{F}=\{f_{i}(t)\},i=1,...,K\). TD signals are treated as Cartesian vectors in space \(\mathbb{R}^{n}\), where \(n\) corresponds to the number of TD samples, and thus the shape of the signal is associated with the angle of the corresponding vector, and the distance between TD signals is quantified by the sine of the angle between them, resulting in a maximum distance for orthogonal signals and zero distance for scaled or inverted signals. Cluster centroids are computed as the principal component of the combined set of each cluster and its negative, ensuring that the learned centroids are resilient to noise. Following the K-means clustering, a set of feature vectors \(\mathcal{F}=\{\vec{f}_{i}\}\) is obtained, which can represent the signals as a weighted sum. These feature vectors are then arranged in the form of a matrix of features, \(F=\left[\vec{f}_{1}\left|\vec{f}_{2}\right|...\left|\vec{f}_{K}\right|\right]\). The amplitudes of the learned TD features (centroids) contained within each time domain are extracted by transforming from the time-domain to the feature-domain. This is performed by multiplying each TD signal with the pseudo-inverse of \(F\)[20, 23]. The result is an array of \(K\) feature images, \(M_{f}=\left[m_{f_{1}},m_{f_{2}},...,m_{f_{K}}\right]\). The appropriate value of \(K\) is determined according to the specific dataset as discussed in Section III. A minimum of 2 clusters is examined during the feature extraction process. However, it is important to impose an upper limit on \(K\), set to 6 in this study, to prevent the generation of redundant clusters and avoid the introduction of visually indistinguishable or uninformative features. Following the determination of the optimal value for \(K\), a set of \(K\) feature images is generated. These feature images alongside the conventional non-radiative and radiative image components form an array of images, which can serve as the inputs for the colorization model in different combinations. ### Multi-Channel GAN (MC-GAN) This study applies cycleGAN to convert label-free PARS images into virtually stained images that resemble their corresponding H&E histochemical stained samples. CycleGAN learns feature relation between an input image domain, \(A\) and a target image domain \(B\), and generates the generators \(G\): \(A\to B\) and \(F\): \(B\to A\) and the discriminators \(D_{A}\) and \(D_{B}\). The generators aim to generate realistic images that resemble the target domain while preserving the essential characteristics of the input domain. 
The discriminator, on the other hand, tries to differentiate between real images from the target domain and fake images produced by the generator. It provides feedback to the generator by assessing the fidelity of the generated images, enabling the generator to improve its output quality over time through an adversarial training process. In this work, we assume domain \(A\) is PARS imagery while domain \(B\) is H&E imagery. CycleGAN is able to operate in a scenario where paired examples are not available; nevertheless, the absence of paired samples poses challenges in mapping between the source and target domains, as the problem is under-constrained. To tackle this, CycleGAN incorporates an inverse mapping and introduces a cycle consistency loss [18]. This loss enforces that the translated images can be reliably reversed to their original form. However, it is worth noting that although paired examples are not required for training, the utilization of paired samples in our CycleGAN training does lead to visible performance improvements [17]. Typically, cycleGAN models are trained using single- (grayscale) or three-channel (RGB) images [9, 18, 24]. In previous works [10, 17, 19], these RGB channels were directly replaced with the non-radiative and radiative channels, which posed no issues as the number of channels did not exceed three. However, the focus of this research is to improve the efficacy of the virtual staining process using intelligently extracted features from non-radiative PARS signals, resulting in a scenario where the colorization model has more than three input channels. It is important to note that the use of three channels in conventional cycleGAN models is solely due to the nature of working with RGB images, and there is no inherent reason to restrict the input channels to three in principal-component or data-fusion methods. In fact, the optimal number of input channels is likely to differ from three. Hence, the proposed MC-GAN model enables the incorporation of additional features by expanding the number of input channels beyond three. The integration of extra channels in the MC-GAN model allows for a wider range of information to be utilized during training. This extension enhances the model's ability to capture and leverage diverse information, which can potentially lead to improved colorization performance. In cycleGANs, \(N\)-channel input generates \(N\)-channel output. However, our target H&E domain is an RGB image with three channels. Only a three-channel output is required, regardless of the number of channels in the source domain. To allow a three-channel output with an \(N\)-channel input, we duplicate the last channel (B) in the target domain to expand the target H&E domain dimensions to match the source PARS domain. When extracting the colorization results, we discard these duplicated channels. Except for the modifications of the multi-channel input and output, the architecture utilized in this study remains identical to the original cycleGAN.

### _Training Settings_

Two datasets are employed in this work: human skin and mouse brain data. To generate the training sets for the two datasets, overlapping patches of \(256\times 256\) pixels are extracted from the PARS and H&E images. For the human skin dataset, approximately 500 overlapping patches are extracted, while for the mouse brain dataset, around 2000 overlapping patches are extracted. The pairs are split in a ratio of 70% for training and 30% for validation. The learning rate is set to 0.0002.
The maximum number of epochs is set to 200 with an early stopping criterion to terminate the training when the generator loss stops improving. The trained model is then applied to the test images which are also subdivided into overlapping patches of \(256\times 256\) pixels. An overlap of \(\sim\) 50% is usually sufficient to avoid visible artifacts at the borders of adjacent patches in the final stitched image. The colorization algorithm is implemented in Python version 3.10.6 and model training is implemented using PyTorch version 2.0.0 with support of CUDA version 12. ## III Results and Discussion In previous PARS virtual staining embodiments [10, 17, 19], only the non-radiative and radiative (denoted as NR and R, respectively, in this section for notation simplicity) channels are used as inputs for the virtual staining model. Traditionally, NR images are created by integrating the post excitation modulations in the detection signal. This method omits valuable temporal information because the shape of the signal may contain information associated with specific biological structures [20, 21]. In this paper, we opt to broaden the range of input channels by incorporating features extracted from the NR TD signals. These features augment information at each pixel location, which can enhance the colorization model's ability to comprehend the statistical transformation between the input and the target domains. The multi-channel virtual staining workflow is shown in Fig. 3. The pipeline consists of two main parts: (1) feature learning, extraction, and selection, and (2) MC-GAN model training. Feature learning, of \(K\) features, takes place using a representative subset of the NR TD signals. Feature extraction is performed on all the TD signals to form \(K\) feature images. These \(K\) different feature images, along with the NR signal integral (as described in Section II.B) from each excitation wavelength (266 nm and 532 nm), and the R channel from 266 nm excitation only, are then fed into the feature selection stage. Feature selection is used to enhance the model's prediction power by eliminating redundant data, increasing contrast between the selected features, and reducing the training volumes and times. Finally, the images of the selected features are used as an input to the proposed MC-GAN model, and the true H&E serves as the ground truth. A study (the "\(K\)-study") was conducted to determine the optimal number of features \(K\) to extract from each section. In this \(K\)-study, feature extraction is performed using the K-means algorithm (presented in Section II.D) generating \(M_{f}^{K}\) for every \(K\in\{2,...,6\}\). Then, the MC-GAN is trained using the R channel and \(M_{f}^{K}\) for each \(K\). Since the \(K\) features are extracted from the NR channel TD signals, they are independent of the R contrast. Hence, the R channel is always used as an additional component in the virtual staining phase of the \(K\)-study for a fair comparison. The model performance is then assessed. Visual assessment, and Structural Similarity Index (SSIM) [25], computed between the colorized images and their corresponding ground truth, are used to determine the optimal value of \(K\). The best \(K\) is selected independently for the two datasets in hand. For the human skin dataset, the \(K\)-study revealed that the best results are obtained when \(K=3\), whereas for the mouse brain dataset, the \(K\)-study SSIM results indicate an optimal value for \(K\) of 2. 
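To make the feature learning and extraction steps concrete, the following is a minimal NumPy sketch of the shape-based clustering and pseudo-inverse projection described in Section II.D. The function names, the SVD-based centroid update, and the fixed iteration count are illustrative simplifications written from the description above and Ref. [20]; they are not the authors' released implementation.

```python
import numpy as np

def shape_distance(signals, centroids):
    """Distance between TD signals and centroids as the sine of the angle between them,
    so scaled or sign-inverted copies of the same shape have distance zero."""
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    cos = np.abs(s @ c.T)
    return np.sqrt(np.clip(1.0 - cos ** 2, 0.0, 1.0))

def learn_td_features(signals, K, n_iter=50, seed=0):
    """Shape-based K-means: each centroid is the principal direction of its cluster
    combined with its negative (here, the leading right singular vector of the cluster)."""
    signals = np.asarray(signals, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = signals[rng.choice(len(signals), K, replace=False)].copy()
    for _ in range(n_iter):
        labels = shape_distance(signals, centroids).argmin(axis=1)
        for i in range(K):
            members = signals[labels == i]
            if len(members):
                _, _, vt = np.linalg.svd(members, full_matrices=False)
                centroids[i] = vt[0]
    return centroids  # rows are the learned features f_1, ..., f_K

def extract_feature_images(signals, centroids, height, width):
    """Project every pixel's TD signal onto the learned features via the pseudo-inverse
    of F = [f_1 | f_2 | ... | f_K], and reshape the amplitudes into K feature images."""
    F = centroids.T                              # n x K matrix of features
    amplitudes = signals @ np.linalg.pinv(F).T   # one K-vector of amplitudes per pixel
    return amplitudes.T.reshape(len(centroids), height, width)

# K-study sketch: repeat for K = 2, ..., 6, train the MC-GAN on the R channel plus the
# K feature images, and keep the K whose virtual stains score highest in SSIM.
```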
After extracting the features, a combinatorial study (the "\(C\)-study") is conducted to identify the optimal subset of features. In the \(C\)-study, the analysis encompasses all the available image components. By utilizing the optimal value of \(K\) obtained from the \(K\)-study, a set of features denoted as \(M_{f}^{opt}\)is generated. Additionally, the conventional NR and R channels are combined with \(M_{f}^{opt}\) feature images forming an array of images \(A\), where \(A=\left[NR_{532},NR_{266},R_{266},M_{f}^{opt}\right]\). This array provides a comprehensive representation of the image data. It is possible that using all the elements in \(A\) to train a model could lead to redundancy and thus confuse the model. Consequently, the \(C\)-study conducts an exhaustive search across all the possible combinations of elements in array \(A\) to determine the optimum feature combination for creating a robust colorization model. The size of array \(A\) is determined by the sum of the number of features obtained from the \(K\)-study and the Figure 3: Multi-Channel Virtual Staining Workflow. First, feature learning, of \(K\) features, takes place using a subset (shown in red box) of the NR channel TD signals. Second, feature extraction is performed on all the TD signals of the data in hand forming \(K\) feature images. NR images of each excitation wavelength (266 nm and 532 nm in this case) and R images are extracted separately and passed along with the \(K\) feature images to the feature selection phase. The selected features are then used as the input data to the proposed MC-GAN model, and the true H&E is used as the model ground truth. three conventional channels. Therefore, the total number of elements in array \(A\) is given by \(N~{}=~{}K~{}+~{}3\). The number of possible combinations is \(2^{N}-1\). To evaluate the model performance for the \(C\)-study, pixel-wise evaluation metrics, SSIM, Peak Signal-to-Noise Ratio (PSNR) [26], and Root Mean Squared Error (RMSE), are calculated since paired input and ground truth images are available [27]. The three metrics are computed between the colorized images and the true H&E. Images of the two domains are blurred prior to computing the metrics to avoid the effect of registration errors. Based on the assessment outcomes, a feature selection process is carried out to determine the best-performing members from set \(A\), as presented in Section II.E. These are then used to train virtual staining models. The results of the two datasets are presented in the following subsections. _Human Skin Dataset_ The number of elements in \(A\) is determined based on the value of the best \(K\) that is selected during the \(K\)-study. For instance, using \(K=3\) results in an image set \(A=\left[NR_{532},NR_{266},R_{266},m_{f_{1}},m_{f_{2}},m_{f_{3}}\right]\) which contains three extracted feature images along with the conventional PARS contrasts. Figure 4 (a-f) displays the members of \(A\), along with their corresponding ground truth (Fig. 4 (h)). It is worth mentioning that the three extracted features do exhibit a certain level of correlation, as shown in Fig. 4 (g), however the colorization results do show that these features are less redundant and better segment the structures compared to the conventional channels. In the \(C\)-study, an exhaustive search is conducted on the set \(A\), as explained earlier in this section. The objective is to determine the best combination of features for colorization. 
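The \(C\)-study search itself reduces to an exhaustive loop over channel subsets. The sketch below assumes hypothetical helpers `train_mc_gan`, `assemble_test_stack`, and `true_he_test` standing in for model training, test-stack assembly, and the held-out ground truth, and the blur width is an assumed value; the metric calls follow the current scikit-image API.

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Array A for the human skin data (K = 3): conventional channels plus extracted features.
components = ["NR_532", "NR_266", "R_266", "m_f1", "m_f2", "m_f3"]

def blurred(img, sigma=2.0):
    """Blur both domains before scoring to suppress residual registration error."""
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def score(virtual_he, true_he):
    v, t = blurred(virtual_he), blurred(true_he)
    rng_ = float(t.max() - t.min())
    return {"SSIM": structural_similarity(t, v, channel_axis=-1, data_range=rng_),
            "PSNR": peak_signal_noise_ratio(t, v, data_range=rng_),
            "RMSE": float(np.sqrt(np.mean((t - v) ** 2)))}

results = {}
for r in range(1, len(components) + 1):
    for subset in itertools.combinations(components, r):      # 2^N - 1 = 63 combinations
        model = train_mc_gan(input_channels=subset)            # placeholder: MC-GAN training
        virtual_he = model.stain(assemble_test_stack(subset))  # placeholder: test inference
        results[subset] = score(virtual_he, true_he_test)      # held-out true H&E
best = max(results, key=lambda s: results[s]["SSIM"])
```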
Given that there are six (\(K+3\)) elements in \(A\) for the human skin data, there are a total of 63 possible combinations that can be used to train individual models. After training the 63 models, virtual staining is performed on unseen test data. This generates 63 colorized image sets using all the feature combinations from \(A\). The test images are acquired from a distinct tissue section compared to the training data. These colorized test images are then compared against true H&E images. Table 1 summarizes the quantitative assessment results. All of the employed metrics reached a consensus that there are at least 17 feature combinations which produce superior colorizations compared to the conventional PARS channels. Additionally, there are 13 combinations that outperform using all \(N\) elements in \(A\). This suggests that certain features may be redundant when combined and may potentially confuse the model learning process. Conversely, other features prove extremely valuable in colorization as they enhance the segmentation of tissue structures with distinct colors. Notably, the R channel is included in all the top 30 feature combinations, highlighting the critical role which the R contrast plays in achieving precise measurements of the \(C\)-study color. \begin{table} \begin{tabular}{|c|c|c c c|} \hline \multirow{2}{*}{Rank} & \multirow{2}{*}{Feature Combination} & \multicolumn{4}{c|}{Score (rank)} \\ \cline{3-5} & & SSIM & PSNR & RMSE \\ \hline \multirow{4}{*}{Best} & NR\({}_{532}\), R\({}_{266}\), \(m_{f_{1}}\), \(m_{f_{3}}\) & **0.89 (1)** & 22.92 (3) & 18.22 (3) \\ & NR\({}_{266}\), R\({}_{266}\), \(m_{f_{1}}\) & 0.88 (2) & **23.03 (1)** & **17.99 (1)** \\ & NR\({}_{532}\), R\({}_{266}\), \(m_{f_{2}}\) & 0.88 (3) & 22.99 (2) & 18.06 (2) \\ \(\vdots\) & \multirow{4}{*}{Moderate} & NR\({}_{532}\), NR\({}_{266}\), R\({}_{266}\) & \multirow{4}{*}{0.87 (14)} & \multirow{4}{*}{22.50 (15)} & \multirow{4}{*}{19.13 (15)} \\ & \(m_{f_{1}}\), \(m_{f_{2}}\), \(m_{f_{3}}\) & & & \\ \cline{1-1} & NR\({}_{532}\), NR\({}_{266}\), R\({}_{266}\) & \multirow{4}{*}{0.87 (19)} & \multirow{4}{*}{22.40 (18)} & \multirow{4}{*}{19.35 (18)} \\ \(\vdots\) & & & & \\ \cline{1-1} & & NR\({}_{266}\) & 0.80 (61) & 19.92 (62) & 25.74 (62) \\ \cline{1-1} & & \(m_{f_{2}}\) & 0.72 (63) & 17.83 (63) & 32.71 (63) \\ \hline \end{tabular} \end{table} Table 1: Summary of quantitative analysis of the \(C\)-study colorization results using the human skin dataset. Figure 4: Example of PARS channels. (a) NR channel at 532 nm. (b) NR channel at 266 nm. (c) R channel at 266 nm. (d)-(f) Feature images (\(m_{f_{1}}\), \(m_{f_{2}}\) and \(m_{f_{3}}\)) corresponding to features 1-3 in (g), respectively. (g) TDs of three features extracted from NR channel. (e) True H&E of the same field-of-view. colorization by offering distinctive and valuable biomolecule information. The colorization results are depicted in Figure 5. The least satisfactory outcomes are obtained when using only one or two features, simply too limited to capture the complexity of different structures and their corresponding stained colors. An example using only NR\({}_{266}\) is shown in Figure 5 (b), which represents one of the poorest three combinations. With this very limited input data, the trained model appears to mistakenly identify red blood cells as connective tissue and confuses the connective tissue with lipids, as highlighted in the yellow box. 
Conversely, using all the conventional channels (\(NR_{266},NR_{532}\) and \(R_{266}\)) in Figure 5 (c) demonstrates better performance, but still with shortcomings. For instance, some red blood cells appear more purple than they do in the ground truth H&E, as highlighted in green. In addition, the colorization of the connective tissues sometimes exhibits a mixture of purple and pink shades instead of a consistent pink tone, as shown in the blue highlighted results. These artifacts may be attributed to insufficient input information, noise, or redundancy within the input data, all of which can hinder effective model learning. In contrast, the feature combination which yielded the highest SSIM scores, and top three RMSE and PSNR, is shown in Figure 5 (d). In the highlighted sections, the colorization is the most accurate of the presented results. This is particularly prevalent in the red blood cells and the shading of the connective tissue. These visual comparisons between the PARS virtual H&E and the true H&E images are supported by the quantitative measurements depicted in the leftmost column of Figure 5. The NR TD signals encompass information about the material properties being analyzed. Extracting features from these signals can significantly improve the labeling of biomolecules, leading to enhanced contrast. Notably, feature extraction surpasses the conventional image reconstruction methods in segmenting tissue structures such as nuclei, as presented in Fig. 4 (b) and (e). The effectiveness of feature extraction in tissue labelling has been previously demonstrated in studies involving PARS data from a fresh murine brain tissue sample [21] and a human breast tissue slide [20]. Incorporating these enhanced contrasts as input channels to a virtual staining model proves beneficial for colorization performance, ultimately enhancing the accuracy and visual representation of the colorized images. Furthermore, the exhaustive search highlights that using an optimized set of features can be more advantageous than simply employing all available features. Targeted selection of features may lead to improved performance and reduced training time. This is likely because certain features may have a stronger ability to accurately label tissue structures, while others may be redundant and consequently cause confusion, negatively affecting the performance of the model.

Figure 5: A comparison of virtual staining results using different combinations of PARS feature images as inputs. (a) RGB image of raw PARS data where R: NR532, G: R266, B: NR266 (displayed for visualization), highlighting different parts of a human skin tissue sample. (b)-(d) Worst, moderate, best results, respectively, using the feature combination labeled on the left. (e) True H&E image of the same field-of-view.

### Mouse Brain Dataset

The mouse brain dataset shows a comparable performance pattern to the human skin dataset. The workflow is replicated, starting with the \(K\)-study analysis. In this case, with an optimal \(K\) value of 2, the number of channels in domain \(A\) (PARS raw data) is 5, which allows for 31 trainable models with different channel input combinations. Following a comparison of all 31 input combinations, the quantitative assessment shows that there are at least 10 feature combinations that outperform the utilization of conventional features alone. Additionally, there are 8 alternative options that demonstrate superior performance compared to using all the features in domain \(A\).
Figure 6 presents a comparative analysis of the worst, moderate, and best colorization results achieved with various feature combinations as inputs to the model. The outcomes presented here further support the findings from the human dataset, which indicate that there are superior features for virtual staining that outperform the conventional features. Moreover, there is an optimal subset of features that produce best results, as opposed to all features combined. Figure 6 (b) illustrates the outcomes obtained from a feature combination (\(R_{2\,66}\) and \(m_{f_{1}}\)) that exhibit the poorest performance among the evaluated combinations. Notably, with this input combination, the trained model erroneously classifies parts of connective tissue as cell nuclei. Upon observing Fig. 6 (b) and (c), it becomes apparent that connective tissue structures and tissue surrounding the tumor, which should ideally exhibit a distinct pink color (as demonstrated in the ground truth image, (e)), are more accurately colorized in (d) compared to (c). In (c), a significant portion of the pink color is substituted with shades of brown. The model can sometimes employ an averaging strategy for colors as a means to minimize loss [28], resulting in the prediction of colors like gray or brown when uncertain about the best color choice. In contrast, the highest level of accuracy among the presented results is observed in (d), which exhibits significant correspondence with the ground truth in terms of visual appearance and quantitative measurements. These observations provide additional evidence that the extracted features contribute to the learning process of the model. Furthermore, they emphasize the significance of selecting features that accurately label and represent the data, ultimately resulting in improved overall performance. The presented results highlight that superior feature extraction methods and feature combinations exist for improved colourization of the raw input data. Future efforts will explore other tissue types and staining varieties. **IV.Conclusion** In conclusion, this study explores the use of the time resolved non-radiative signals for improving virtual staining of PARS images in both human skin and mouse brain. Using the K-means method [20, 21] to extract features from the non-radiative TD signals, additional information about imaged targets is captured. The proposed MC-GAN extends the conventional colorization model to accept more than three channels, allowing for the utilization of these additional features. The experimental results demonstrate that certain feature combinations outperform the conventional PARS channels as they exhibit improved labeling of tissue structures. Figure 6: Another comparison of virtual staining results of mouse brain tissue. (a) RGB image of raw PARS data where R: NR\({}_{532}\), G: R\({}_{266}\), B: NR\({}_{266}\) (displayed for visualization). (b)-(d) Shows worst, moderate, and best results, respectively, using the feature combination labeled on the left. (e) True H&E image of the same field-of-view. Several experiments are conducted to determine the optimum number of K-means features (\(K\) ) as well as the optimum feature combination for training virtual staining models. Three different metrics are employed to evaluate the model's performance for feature selection. The limitations of using only one or two features are evident, as they fail to accurately represent the complexity of different structures and their colors. 
Moreover, employing all the available features can lead to confusion within the model due to the potential redundancy among them. Therefore, it was crucial to conduct a comprehensive search to identify the most effective feature combinations, which not only reduced training time by utilizing fewer features but also alleviated model confusion. With the optimal feature combination, the colorization results exhibit a high degree of accuracy, as observed in the results. These findings highlight both high visual and quantitative agreement between the H&E-_like_ PARS and the true H&E images among the two datasets, emphasizing the potential of TD signals in enhancing the accuracy of virtual staining techniques. ## Acknowledgements The authors thank Dr. Marie Abi Daoud at the Alberta Precision Laboratories in Calgary, Canada for providing the human skin tissue samples and Dr. Deepak Dinakaran and Dr. Kevin Camphausen from the radiation oncology branch at the National Cancer Institute, NIH, Bethesda, MD, USA for providing the mouse brain samples. Additionally, the authors would like to acknowledge Hager Gaouda for their valuable assistance in staining the tissue samples used in this study. The authors gratefully acknowledge the financial support provided by the following funding sources throughout the duration of this project: Natural Sciences and Engineering Research Council of Canada (DGECR-2019-00143, RGPIN201906134), Canada Foundation for Innovation (JELF #38000), Mitacs Accelerate (IT13594), University of Waterloo Startup funds, Centre for Bioengineering and Biotechnology (CBB Seed fund), illumiSonics Inc (SRA #083181), New frontiers in research fund - exploration (NFRFE-2019-01012), and The Canadian Institutes of Health Research (CIHR PJT 185984). ## Author Contribution Statement M.B developed and implemented the multi-channel virtual staining framework, conducted experiments, prepared the figures, and wrote the main manuscript. J.E.D.T and B.R.E implemented PARS imaging system, helped with scanning PARS samples, and helped write the manuscript. J.A.Y contributed to the implementation of the cycleGAN model to support multi-channel inputs. P.F. assisted in planning the experiments and offered consultation in the writing of the manuscript. P.H.R contributed as the principal investigator, taking charge of project direction, organization, and manuscript writing. #### Competing Interests Authors James Twel, Benjamin Ecclestone, and Parsin Haji Reza all have financial interests in IllumiSonics which has provided funding to the PhotoMedicine Labs. Authors Marian Boktor and Paul Fieguth do not have any competing interests. ## Data Availability The data that support the findings of this manuscript are available from the corresponding author, P.H.R., upon reasonable request.
2306.09189
High-Resolution Convolutional Neural Networks on Homomorphically Encrypted Data via Sharding Ciphertexts
Recently, Deep Convolutional Neural Networks (DCNNs) including the ResNet-20 architecture have been privately evaluated on encrypted, low-resolution data with the Residue-Number-System Cheon-Kim-Kim-Song (RNS-CKKS) homomorphic encryption scheme. We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts. Additionally, we simplify and improve the efficiency of the recently introduced multiplexed image format, demonstrating that homomorphic evaluation can work with standard, row-major matrix packing and results in encrypted inference time speedups by $4.6-6.5\times$. We also show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy. These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset, achieving $80.2\%$ top-1 accuracy. We also achieve an accuracy of homomorphically evaluated CNNs on the CIFAR-10 dataset of $98.3\%$.
Vivian Maloney, Richard F. Obrecht, Vikram Saraph, Prathibha Rama, Kate Tallaksen
2023-06-15T15:16:16Z
http://arxiv.org/abs/2306.09189v2
High-Resolution Convolutional Neural Networks on Homomorphically Encrypted Data via Sharding Ciphertexts ###### Abstract Recently, Deep Convolutional Neural Networks (DCNNs) including the ResNet-20 architecture have been privately evaluated on encrypted, low-resolution data with the Residue-Number-System Cheon-Kim-Kim-Song (RNS-CKKS) homomorphic encryption scheme. We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts. Additionally, we simplify and improve the efficiency of the recently introduced multiplexed image format, demonstrating that homomorphic evaluation can work with standard, row-major matrix packing and results in encrypted inference time speedups by \(4.6-6.5\times\). We also show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy. These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset for the first time, achieving \(80.2\%\) top-1 accuracy. We also achieve the highest reported accuracy of homomorphically evaluated CNNs on the CIFAR-10 dataset of \(98.3\%\). ## 1 Introduction Deep learning has emerged as a powerful tool for solving image processing tasks due to its ability to automatically learn relevant features from raw data. Convolutional Neural Networks (CNNs), which are a type of deep learning model specifically designed for image processing, have achieved state-of-the-art performance on a variety of image processing tasks such as image classification [13], object detection [17], and segmentation [20]. Fully homomorphic encryption (FHE) [9; 19] is a technique enabling computation directly on encrypted data, and in particular, enabling Privacy Preserving Machine Learning (PPML). FHE has potential societal impact in applications where user and data privacy are critical, such as in cloud computing, healthcare analytics, and defense applications. However, adoption of FHE has been limited due to the speed of existing FHE neural network inference algorithms, and limitations of FHE itself. Previous work uses narrow or shallow DCNNs on low-resolution data, often using nonstandard activation functions, since FHE can only evaluate polynomials. Furthermore, it is challenging to ensure that polynomial approximations of activation functions are suitably accurate. Key contributions of this work are summarized as follows: * We design and implement efficient homomorphic convolution and pooling algorithms, which have been parallelized and handle large inputs and channels via sharding techniques. * We apply these algorithms to construct three families of ResNet architectures, achieving the highest homomophically evaluated accuracy on CIFAR-10 and ImageNet-1k while re ducing the inference latency relative to the previous state-of-the-art. We also do not observe any degradation of encrypted model accuracy relative to its unencrypted counterpart. * We propose a training technique to reduce the input range to our activation functions by penalizing the kurtosis of the distributions of BatchNorm outputs, allowing efficient homomorphic polynomial approximation of the GELU activation function. ## 2 Background Homomorphic encryptionRNS-CKKS [6; 7] is an FHE scheme that supports arithmetic over encrypted vectors of fixed-point numbers. 
Ciphertexts in this scheme are elements in the ring \(R_{Q}^{2}\), where \(R_{Q}=\mathbb{Z}_{Q}[x]/(x^{2N}+1)\) and \(Q\) is a large integer, and \(2N\) is called the _ring dimension_. Each such ciphertext has \(N\)_slots_, each of which stores a single real number, so it is useful to conceive of a ciphertext as a vector. Ciphertext vectors support vectorized addition and multiplication operations, as well as cyclic rotations. We pack images into RNS-CKKS ciphertexts. Each ciphertext has a _level_, or maximum number of multiplications that can be applied before decryption error becomes too high; each multiplication reduces the level by one. The ciphertext level is restored through _bootstrapping_, though this is a time-consuming operation to be used sparingly. Threat ModelThe threat model assumed is similar to previous PPMLs [5; 15]. We encrypt the input image but not the model weights. A client homomorphically encrypts data it wishes to send, which is then sent to a server for processing. The server performs inference on the encrypted data directly, sending back the encrypted inference result to the client. Since it is assumed that only the client holds the secret key, only they can decrypt the result, which guarantees privacy from the server. Because the server does not see the decrypted inference result, the Li-Macciato attack [16] is not applicable and we do not need to take noise flooding into account in our parameter selection. ## 3 Related Work Early work on encrypted machine learning evaluated narrow and shallow CNNs with nonstandard activation functions on low-resolution data [5; 10]. Recent papers have begun evaluating larger CNNs with standard design features on encrypted data. Prior work on PPML most similar to ours are Multiplexed Parallel Convolutions [15] and TileTensors [1]. Multiplexed Parallel Convolutions homomorphically evaluates deep but narrow CNNs with standard activation functions on low-resolution data. TileTensors homomorphically evaluates shallow CNNs with nonstandard activation functions on high-resolution data. In this work, we homomorphically evaluate wide and deep CNNs with standard activation functions on high-resolution data. TileTensors uses concepts similar to our sharding approach to perform inference on \(224\times 224\) images using a modified AlexNet. They rely on shallow CNNs and do not perform the bootstrapping necessary to incorporate standard activation functions, instead relying on the same nonstandard activation function used in CryptoNets [10] and LoLa Nets [5], which is unsuited for DCNNs. We improve on Multiplexed Parallel Convolutions, hereby defined as the multiplexed ResNet family, by supporting high-resolution images and wide channels that do not fit into a single ciphertext, as well as simplified packing. We also introduce a novel training regularization technique, enabling more efficient homomorphic evaluation of non-linear activations. Our implementation performs encrypted inference on a multiplexed ResNet-20 architecture \(4.6\times\) faster than Ref. [15]. We homomorphically evaluate wide ResNet architectures not supported by the multiplexed algorithms, and achieve significantly higher accuracy than multiplexed architectures on standard datasets. ## 4 Homomorphic Neural Network Operators Algorithms have been carefully designed to minimize the number of encrypted multiplication and rotation operations to minimize latency. An _image_ consists of many _channels_. 
All dimensions are assumed to be powers of two, and each channel is assumed to be square in shape. The approach is adaptable to dimensions not powers of two with appropriate rescaling or zero padding. Given an image with \(c\) channels of size \(m\times m\), we homomorphically encrypt and represent it with RNS-CKKS vectors. To encrypt an image into a ciphertext vector of size \(m^{2}c\), each channel \(M^{i}\) is represented in row-major order, and they are concatenated to obtain a single plaintext vector. Sharding and Encrypting an ImageIn RNS-CKKS, storage capacity of a single ciphertext is determined by the ring dimension of the scheme, and is typically in the range \(2^{14}\) to \(2^{16}\). If a \(c\times m\times m\) tensor does not fit into a single ciphertext, channels are spread across _multiple_ ciphertexts, such that each ciphertext stores a subset of channels. Here, each ciphertext vector is called a _shard_, and the maximum amount of data storable in a shard is called the _shard size_. The performance of the scheme degrades with increasing ring dimension, so increasing the ring dimension to avoid sharding would negatively impact the efficiency of encrypted inference. We distinguish the two cases of _image shards_ and _channel shards_. For _image shards_, a shard is large enough to hold at least one channel (\(m^{2}\leq s\)), but multiple shards are needed to store all channels (\(m^{2}c>s\)). See Figure 0(a) for an example of image shards. For _channel shards_ each channel must be split up across multiple shards (\(m^{2}>s\)), so that each shard contains a set of consecutive rows from a single channel. See Figure 0(b). Duplicating and Permuting ChannelsIf an image does not fill a shard, its channels are _duplicated_. When \(s>m^{2}c\), we define a _duplication factor_ given by \(d=s/m^{2}c\), and place \(d\) copies of each channel when concatenating them together. \(d\) is tracked with the encrypted image as metadata. Our implementation of average pooling can _permute_ input channels. If one tracks the channels' order with a permutation defining the correct order, subsequent convolution operations can also be computed correctly. Therefore, we attach a channel permutation as metadata to an encrypted image. ### Convolution We describe how to homomorphically convolve a single matrix with a single kernel, using same padding and a stride of \(1\); this does not change the channel's dimensions. Convolution is typically thought of as sliding a kernel over as matrix. However, one may also think of convolution as fixing the kernel, and sliding the matrix, which is a more useful visual in what follows. We formalize this observation and use it to compute convolutions. Denote \(\mathcal{S}_{k,\ell}\) on matrix \(M\) as a function that shifts rows up by \(k\) and columns left by \(\ell\). \(\mathcal{S}_{k,\ell}\) adds zeros when elements are shifted off the matrix. Then: \[M*K=\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{\kappa/2}K_{k,\ell} \cdot\mathcal{S}_{k,\ell}(M). \tag{1}\] See Figure 1. \(\mathcal{S}_{k,\ell}\) is implemented homomorphically: shifting a row-major matrix by one column is done by rotating the ciphertext vector by \(1\), while shifting by a row is done by rotating by \(m\). Wrap-around elements are zeroed out by multiplying the ciphertext vector with an appropriate binary mask. This allows us to homomorphically compute \(\mathcal{S}_{k,\ell}(M)\) for any shifts \(k\) and \(\ell\). 
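The shift operator can be mimicked on a plaintext stand-in for the packed ciphertext, with `np.roll` playing the role of a ciphertext rotation and an elementwise mask multiplication playing the role of the plaintext-ciphertext multiply. The sketch below, including the self-check against an ordinary zero-padded correlation, illustrates Equation 1 only; it is not the OpenFHE-based implementation, and the function names and test sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def hom_shift(vec, k, l, m):
    """Plaintext stand-in for S_{k,l}: one rotation by k*m + l slots, then a
    multiplication with a binary mask that zeroes the wrapped-around entries."""
    i, j = np.divmod(np.arange(m * m), m)
    mask = ((i + k >= 0) & (i + k < m) & (j + l >= 0) & (j + l < m)).astype(vec.dtype)
    return np.roll(vec, -(k * m + l)) * mask

def hom_conv_same(vec, K, m):
    """Equation 1: accumulate K_{k,l} * S_{k,l}(M) over all kernel offsets."""
    half = K.shape[0] // 2
    out = np.zeros_like(vec)
    for k in range(-half, half + 1):
        for l in range(-half, half + 1):
            out = out + K[k + half, l + half] * hom_shift(vec, k, l, m)
    return out

# Sanity check against a zero-padded correlation on the unpacked matrix.
m = 8
M = np.random.rand(m, m)
K = np.random.rand(3, 3)
assert np.allclose(hom_conv_same(M.ravel(), K, m).reshape(m, m),
                   correlate(M, K, mode="constant", cval=0.0))
```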
To multiply \(\mathcal{S}_{k,\ell}(M)\) by the scalar \(K_{k,\ell}\), we create a vector of size \(m^{2}\) and multiply \(\mathcal{S}_{k,\ell}(M)\) elementwise with this vector. In practice, the multiplications for shift masking and those for kernel element multiplication are combined non-homomorphically before being applied homomorphically. With a Single ShardRecall that to convolve a \(c\)-channel image with a single filter, \(c\) matrix convolutions are individually computed, and the results are summed. An image is typically convolved with multiple filters to produce multiple channels. Convolutions are computed in parallel all at once. Figure 1: Illustrations of image sharding and channel sharding. Given an image \(M\), denote \(M^{f}_{ij}\) as the \((i,j)\)-th element in the \(f\)-th channel of \(M\). Filters \(K\) ordinarily have dimensions \(c_{i}\times c_{o}\times m\times m\), so that \(K^{fg}_{ij}\) is \((i,j)\)-th element in the kernel convolved with the \(f\)-th input channel used to compute the \(g\)-th output channel. We begin with a \(1\times 1\) kernel size, in which case \(K^{fg}\) is the single-element kernel applied to the \(f\)-th input channel, to compute the \(g\)-th output channel. We further assume that \(M\) fits in exactly one shard, and that \(c_{i}=c_{o}=c\), so that \(M*K\) also occupies one shard. Then the \(g\)-th channel of \(M*K\) is given by Equation 2: \[(M*K)^{g}=\sum_{r=0}^{c-1}K^{r+g,g}\cdot M^{r+g} \tag{2}\] \[\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot M^{r+g} \tag{3}\] where index arithmetic above is modulo \(c\). We compute all \(c\) output channels simultaneously. Given \(0\leq r<c\), the \(r\)-th _partial convolution_ is defined in Equation 3. The full convolution is obtained by summing over partial convolutions: \[M*K=\sum_{r=0}^{c-1}\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot M^{r+g}=\sum_{r=0} ^{c-1}\left(\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot\bigparallel_{g=0}^{c-1}M^ {r+g}\right). \tag{4}\] See Figure 1 for a simple illustration of summing partial convolutions. Each summand corresponds to a single rotation of the ciphertext \(M\) by \(r\cdot m\cdot m\) positions. When working with larger kernels, the prior approaches combine to compute the \(g\)-th output channel: \[(M*K)^{g}=\sum_{r=0}^{c-1}\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{ \kappa/2}K^{r+g,g}_{k,\ell}\cdot\mathcal{S}_{k,\ell}(M^{r+g}). \tag{5}\] Rotations \(\mathcal{S}_{k,\ell}(M^{r+g})\) are computed once and cached. As with \(1\times 1\) kernels, we use partial convolutions to compute all \(c\) channels at once. Rather than directly implement strided convolution as in Ref. [15], we instead compose an unstrided convolution with downsampling described in Section 4.2. This preserves the row-major order format and avoids multiplexed packing, and increases efficiency, as the multiplexed convolution algorithm of Ref. [15] has a multiplicative depth of 2, while we only use a single multiplicative level. Figure 2: (a) Partial convolution computation for a \(4\)-channel image convolved with a \(1\times 1\) kernel. (b) A single convolution computed by shifting the matrix. (c) Shifting rows from channel shards into adjacent ones. With Image ShardsLet \(M\) be an image of dimension \(c_{i}\times m\times m\), split across \(t\) shards, denoted as \([M]_{0},\dots,[M]_{t-1}\), implying a shard size \(s=\frac{m^{2}c_{i}}{t}\). Suppose we want to convolve \(M\) with filters \(K\) with dimensions \(c_{i}\times c_{o}\times m\times m\). 
Then the \(v\)-th output shard, \([M*K]_{v}\), is computed as: \[[M*K]_{v}=\sum_{u=0}^{t-1}[M]_{u}*K^{\iota(u),\iota(v)}, \tag{6}\] where \(\iota(u)\) is the index interval \(\iota(u)=[z\cdot u:z\cdot(u+1)]\), and \(z=s/m^{2}\), or the number of channels per shard. Intuitively, each single convolution in the summand above is computed using the approach in the previous section 4.1, slicing \(K\) accordingly, and summing up the results. With a shard size of \(s\), \(M*K\) is packed into \(c_{0}m^{2}/s\) shards, and \(v\) ranges over this. Single Shard with Duplication and PermutationConvolution must work with a shard with \(d\)-duplicated channels. Filters \(K\) can be duplicated accordingly, but we instead index into \(d\) times when computing \(M*K\). Channels can also be permuted by pooling (see 4.2). In this case, the image passed from the previous layer is also assumed to return a permutation \(\tau\) defining the correct channel order. To compute a convolution using this permutation, any time we were to index into the filter \(K\) at input channel \(i\) (so \(K^{i}\)), we instead index into \(K\) at \(\tau(i)\) (so \(K^{\tau(i)}\)). With Channel ShardsConvolving a channel-sharded image results in a channel-sharded image. Output channels are computed independently from one another, so we initially focus on convolving a shard of a single channel with a single kernel. Let \(M^{f}\) be the \(f\)-th input channel of image \(M\), which we convolve with a single kernel \(K\). Let \([M^{f}]_{u}\) be the \(u\)-th shard. We cache all _cyclic_ rotations \(\mathcal{S}_{k,\ell}([M^{f}]_{u})\), for \(k,\ell\) ranging over the indices of \(K\). \([M^{f}*K]_{v}\) is computed from the cached rotations of the input shards. Shifting channels requires shifting all associated shards simultaneously. Shifting columns is accomplished by shifting each shard independently. When shifting rows, one needs to shift rows of one shard into an adjacent shard. Each row shift is constructed from two cached rotations (with the exception of first and last shards). See Figure 1(a) showing how rows are shifted between shards. Each output channel is computed by summing over row and column shifts, and each summand is itself a sum of two kernel-masked shards. That is: \[[M^{f}*K]_{v}=\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{\kappa/2} \mathfrak{m}_{k,\ell}(K_{k,\ell})\cdot\mathcal{S}_{k,\ell}([M^{f}]_{v})+ \overline{\mathfrak{m}_{k,\ell}}(K_{k,\ell})\cdot\mathcal{S}_{k,\ell}([M^{f}] _{v+\operatorname{sign}k}) \tag{7}\] where \(\mathfrak{m}_{k,\ell}(x)\) is the vector given by shard-size-many elements of all \(x\), multiplied by the binary mask used in the shift operator \(\mathcal{S}_{k,\ell}\), and \(\overline{\mathfrak{m}_{k,\ell}}(x)\) is its complement. Then, to compute one shard \([M*K]_{v}\) of a single channel, we simply sum the shards \([M^{f}*K]_{v}\) over the input channels \(f\). Each such shard is computed independently done in parallel, concluding channel-sharded convolution. ### Average Pooling We implement an average pooling operation with a \(2\times 2\) window; this increases the channel capacity of each shard by a factor of four. Our implementation preserves the format described previously, avoiding multiplexed packing used in Ref. [15], which does not rearrange pixels after downsampling. 
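Before turning to the pooling steps, the partial-convolution rotations of Equation 4 can be illustrated with the same kind of plaintext stand-in used above; the function name and the small test sizes are illustrative only.

```python
import numpy as np

def conv1x1_single_shard(shard, K, c, m):
    """Equation 4: compute all c output channels of a 1x1 convolution at once.
    `shard` is a plaintext stand-in for one ciphertext holding c row-major m x m
    channels back to back; K[f, g] maps input channel f to output channel g."""
    s = m * m
    out = np.zeros_like(shard)
    for r in range(c):                      # one ciphertext rotation per partial convolution
        rotated = np.roll(shard, -r * s)    # ||_g M^{r+g}
        weights = np.concatenate([np.full(s, K[(r + g) % c, g]) for g in range(c)])
        out = out + weights * rotated       # a single plaintext-ciphertext multiply and add
    return out

# Check against a direct per-channel computation.
c, m = 4, 4
M = np.random.rand(c, m, m)
K = np.random.rand(c, c)
packed = M.reshape(-1)                      # channels concatenated in row-major order
out = conv1x1_single_shard(packed, K, c, m).reshape(c, m, m)
assert np.allclose(out, np.einsum("fg,fij->gij", K, M))
```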
_With Image Shards._ There are up to three steps involved with pooling: _downsample_, which computes the average pool but leaves the original number of shards intact; _consolidate_, which reduces the number of shards; and _duplicate_, which duplicates channels if there is a single shard remaining. In the _downsampling_ step, we convolve each channel with a \(2\times 2\) kernel of \(1\)s (as we would with homomorphic convolutions). This replaces each \(2\times 2\) window in each channel with the sum of the elements in the window. Next, we want to select only one of four elements in the new \(2\times 2\) windows; we choose the top-left element. The following is how we operate on individual channels \(M\), but generalizes to applying the operations to all channels within each shard simultaneously. We _horizontally reduce_ the elements in channels of each shard, which is done with masking, rotating, and summing, as in Equation 8: \[M^{\prime}=\sum_{i=0}^{(m-1)/2}(M\cdot\mathfrak{m}_{i})\ll i\qquad\text{(8)}\qquad M^{\prime\prime}=\sum_{j=0}^{(m-1)/2}(M^{\prime}\cdot\mathfrak{m}_{j})\ll 3j\cdot m/2 \tag{9}\] where \(\mathfrak{m}_{i}\) is the binary mask that selects elements in the \(2i\)-th column of each channel \(M\), and \(\ll\) (\(\gg\)) denotes ciphertext rotation to the left (right) by the indicated number of slots. Then, we _vertically reduce_ each \(M^{\prime}\), as in Equation 9, where \(\mathfrak{m}_{j}\) is the binary mask that selects the left half of the \(2j\)-th row in \(M^{\prime}\). See Figure 3. After downsampling, each \(m\times m\) channel of the resulting shards contains only \(m/2\times m/2\) non-zero elements, all packed on the left-hand side. If we started with four or more shards, then we _consolidate_ the remaining non-zero elements into a quarter as many shards. This is done by rotating the shards from the previous step, and summing each group of four consecutive shards. \[S=S_{0}+(S_{1}\gg m^{2})+(S_{2}\gg 2m^{2})+(S_{3}\gg 3m^{2}). \tag{10}\] Starting with two image shards, we only have two summands in the above, and with one shard, there is no consolidation step. Consolidating shards results in channels being out-of-order. See Figure 2(c). If we downsampled from two shards or fewer, then the resulting non-zero elements in the (single) consolidated shard would not fill up the entire shard, so we _duplicate_ the shard's channels: \[S^{\prime}=S+(S\gg m^{2}/4)+(S\gg 2m^{2}/4)+(S\gg 3m^{2}/4). \tag{11}\] With two shards, we duplicate by a factor of two, so the above would only have two summands. _With Channel Shards._ Channel shards are downsampled individually, and every set of four consecutive shards is consolidated into one. In the edge case where the input image has one channel with two shards, we need to duplicate the resulting single shard by a factor of two. Pooling a channel-sharded image never results in an output with permuted channels.

### Other Layers

_Batch Normalization._ At inference time, batch normalization is an affine transformation, which is expressible as additions and multiplications, and so can be implemented homomorphically. These are folded into kernel element multiplication and bias addition in the previous convolution, respectively. _Linear._ Evaluation of a linear layer is a matrix multiplication of the previous layer's output with the weights of the linear layer. Each element of a matrix multiplication is computed as a dot product.
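As a concrete check of the downsampling step (Equations 8 and 9), the following plaintext sketch is added here for illustration; it assumes the masks select the even (\(2i\)-th) columns and the left halves of the even (\(2j\)-th) rows, with `np.roll` standing in for ciphertext rotation.

```python
import numpy as np

m = 8
M = np.random.randn(m, m)

# Reference: 2x2 sum-pool (the 1/4 normalization is folded in elsewhere).
pooled = M[0::2, 0::2] + M[1::2, 0::2] + M[0::2, 1::2] + M[1::2, 1::2]

# Step 1: convolve with a 2x2 kernel of ones; slot (i, j) then holds the
# sum of the window anchored at (i, j).  Boundary rows/columns are unused.
C = np.zeros((m, m))
C[:m - 1, :m - 1] = M[:-1, :-1] + M[1:, :-1] + M[:-1, 1:] + M[1:, 1:]
v = C.reshape(-1)

# Step 2: horizontal reduce (Equation 8) -- keep even columns, shift left by i.
h = np.zeros(m * m)
for i in range(m // 2):
    mask = np.zeros((m, m)); mask[:, 2 * i] = 1
    h += np.roll(v * mask.reshape(-1), -i)

# Step 3: vertical reduce (Equation 9) -- keep the left half of even rows and
# shift by 3*j*m/2, so results pack into the first m^2/4 slots of the channel.
d = np.zeros(m * m)
for j in range(m // 2):
    mask = np.zeros((m, m)); mask[2 * j, :m // 2] = 1
    d += np.roll(h * mask.reshape(-1), -(3 * j * m // 2))

assert np.allclose(d[:m * m // 4].reshape(m // 2, m // 2), pooled)
```

Returning to the linear layer: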
The dot product of one vector with another is computed by first taking their elementwise product, then summing the elements of the resulting vector. Elements of vector \(v\) are summed by rotating over its slots, and adding the rotated vector to the original one. The result is a vector whose elements are all \(\Sigma_{i}v_{i}\), and is done in logarithmically many rotations. Figure 3: Steps involved in a pooling operation. Duplication is not depicted. We get a single activation in the linear layer's output, and repeat for each activation in the output of the linear layer. Activations are then masked and summed into a single ciphertext. ResNets often pool each \(m\times m\) input channel to a single pixel, and apply a linear layer at the end. In general, the pool could use a window size larger than \(2\times 2\), which we have not implemented directly. We fuse pool and linear into a _pool-linear_. The linear layer's weights are duplicated as though it were operating on channels of size \(m\times m\), and we divide by a normalization factor of \(m^{2}\). _Gaussian Error Linear Unit (GELU)._ Non-linear activation functions are computed in RNS-CKKS through polynomial approximation. The polynomial degree, and hence the latency, increases when the approximation must be accurate over a wide range. We introduce novel terms to the loss function during training to encourage hidden layer outputs to match the mean, variance, and kurtosis statistical moments of a Gaussian distribution, constraining the range over which the activation needs to be accurately computed. This allows more efficient low-degree polynomial approximation while minimally impacting model accuracy. We use a GELU activation function since it is more amenable to polynomial approximations for fast homomorphic evaluation. We homomorphically compute a 59-degree polynomial approximation of GELU in a numerically stable way with a shallow arithmetic circuit by expanding the polynomial in a Chebyshev basis. Details on polynomial approximation of GELU and kurtosis regularization can be found in the Appendix.

## 5 Empirical Results

We use OpenFHE's implementation [3] of FHE with RNS-CKKS to implement the neural network operators described in Section 4 in C++, which are then thinly wrapped with Python bindings to build neural network architectures. Weights are loaded using PyTorch's API, though the approach is independent of the deep learning framework. OpenMP is used to leverage parallelism from multicore CPUs. As our main focus is on fast encrypted inference of trained models rather than the unencrypted training process, we defer most of the details on the training configuration to the Appendix. Experiments for ResNet-9 and multiplexed ResNets were run on a machine with a hyperthreaded AMD Ryzen Threadripper 3970X 32-core processor, 128 GB of memory, and an Ubuntu 22.04.2 operating system. Experiments for the encrypted ResNet-50 were run on a server with an AMD EPYC 7742 64-core processor, 800 GB of memory, and RHEL 7.9.

### Datasets

We perform image classification on CIFAR-10, CIFAR-100, and ImageNet, using various ResNets to evaluate the performance of our homomorphic neural network operators. CIFAR-10 and -100 contain \(32\times 32\) color images in 10 and 100 classes, respectively [12]. ImageNet-1k is a much larger-scale dataset containing over 1.2 million high-resolution images with 1000 different classes [21], and is typically resized to \(224\times 224\) during inference, though this does not match our assumption that dimensions are powers of two.
We evaluate two different models on ImageNet-1k resized to resolutions of both \(128\times 128\) and \(256\times 256\).

### Architectures

We modify DCNN architectures to decrease encrypted inference latency without adversely affecting model accuracy. We use \(2\times 2\) average pooling with stride \((2,2)\), and the GELU activation function. We train models with kurtosis regularization as described in the previous section, and more extensively in the Appendix. Footnote 1: If using kurtosis-regularized GELU is not an option, such as when evaluating pre-existing models, our algorithms are compatible with any approach for computing ReLU over a wider range, such as higher-degree polynomial approximation or the approach in Ref. [14]. We homomorphically evaluate three classes of ResNets on CIFAR-10 and -100. We first evaluate the narrow, deep multiplexed ResNet family used in the previous state-of-the-art for homomorphic DCNNs, Ref. [15], as well as a wide ResNet-9 architecture taken from DAWNBench [8], and finally a fine-tuned version of the wide and deep ImageNet-1k ResNet-50 v1.5 [11]. The wide ResNet-9 and -50 achieve substantially higher accuracy than the multiplexed family, achieving a best accuracy of 94.7% and 98.3% on CIFAR-10, respectively, surpassing the 92.95% reported in Ref. [15] for a multiplexed ResNet-110 and the 92.8% we achieved for a multiplexed ResNet-56. Our ImageNet-1k architecture is a modified ResNet-50 v1.5 [11] with GELU and average pooling. This is a wide architecture, using between 64 and 2048 channels. On ImageNet-1k, we train and evaluate ResNet-50 on images resized to \(128\times 128\) and \(256\times 256\). The \(256\times 256\) model requires both channel shards and image shards, while the \(128\times 128\) model only requires image shards. As such, this illustrates a trade-off between model accuracy and inference time for image resolution. The resolution during training was set according to the FixRes [22] optimization, where the training resolution is \(3/4\) of the evaluation resolution to account for data augmentation.

### Encrypted Inference Discussion

For the encrypted ResNet-50, we used an RNS-CKKS ring dimension of \(2^{16}\) and a shard size of \(2^{15}\) with 59-bit scaling factors and a multiplicative depth of 34. When evaluating the multiplexed ResNets and ResNet-9, we used a lower shard size of \(2^{14}\). This lower shard size trades slower initial layers for faster later layers and bootstrapping operations, and improved the encrypted latency for these narrower architectures. These parameters suffice for a standard 128-bit security level [2]. The distributions prior to GELU are analyzed in order to determine a safe bound for our polynomial approximations; see the Appendix for details. For each model, runtime experiments are collected for 25 inferences; for each run, the runtimes for each algorithm are summed, and then the average is displayed in Tables 2 and 3, where the quoted error is the standard deviation in the total runtime. ResNet-9 and -50 models, which allow the channel dimension to substantially grow, spend less relative time bootstrapping when compared to the multiplexed ResNet family. During inference on ImageNet-1k, ResNet-50 at 128 resolution uses a maximum of 32 shards, and at 256 resolution uses a maximum of 128 shards. On CIFAR-10, ResNet-50 uses a maximum of 16 shards. Due to channel size, inference on 256 resolution requires the use of channel shards, and has a \(2.9\times\) slower latency.
However, note that ResNet-50 on 256 resolution has a \(6.1\%\) higher accuracy, so in this case, using higher resolution images produces a better classifier. The _logit residuals_, which are the differences between decrypted and unencrypted logits, generally form tight Gaussian distributions centered at zero. By using GELU and a small input range, we decreased the noise from bootstrapping and the polynomial approximation. This is reflected in the increased precision of the logit residual distributions, which have standard deviations at the \(10^{-4}-10^{-2}\) level when fit to a Gaussian; see Table 1 in the Appendix for more details. We ran 1000 inferences with ResNet-20 on CIFAR-10, and all encrypted predictions match their respective unencrypted predictions; this is an improvement over Ref. [15], where the encrypted classification accuracy is \(0.1-0.5\)% lower than the unencrypted accuracy. Furthermore, the differences in the top-2 logits between the encrypted and unencrypted ResNet-20 are examined, yielding Gaussian standard deviations at the \(10^{-4}\) level.

\begin{table} \begin{tabular}{l l r r} \hline \hline Dataset & Model & Average Accuracy (\%) & Best Accuracy (\%) \\ \hline CIFAR-10 & ResNet-9 & \(94.5\pm 0.1\) & 94.7 \\ & ResNet-50 & \(98.3\) & 98.3 \\ & ResNet-20* & \(90.6\pm 0.3\) & 91.0 \\ & ResNet-32* & \(92.2\pm 0.2\) & 92.5 \\ & ResNet-44* & \(92.2\pm 0.1\) & 92.3 \\ & ResNet-56* & \(92.8\pm 0.2\) & 93.0 \\ & ResNet-110* & \(92.7\pm 0.2\) & 92.8 \\ \hline CIFAR-100 & ResNet-9 & \(74.9\pm 0.2\) & 75.3 \\ & ResNet-32* & \(66.6\pm 0.4\) & 67.0 \\ \hline ImageNet-1k & ResNet-50 @ 128 & \(74.1\) & 74.1 \\ & ResNet-50 @ 256 & \(80.2\) & 80.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Model accuracy is averaged over five runs for all architectures except ResNet-50, and the quoted error is the standard deviation. The (*) represents our implementation of the multiplexed architectures found in Ref. [15]. Due to long training times, ResNet-50s are only trained once.

Thus, using kurtosis and GELU allows us to perform faster and more reliable encrypted inference. As further discussed in the Appendix, we determined that the logit error is mainly due to bootstrapping noise. By applying MetaBTS [4] to reduce bootstrapping noise we further increased logit precision by a factor of \(20\times\) at the expense of a \(1.7\times\) increase in latency.

## 6 Conclusion and Future Work

We have successfully constructed three families of ResNet architectures that may be evaluated homomorphically: 1) the multiplexed family of architectures [15], 2) the ResNet-9 bag-of-tricks architectures [8], and 3) the popular ResNet-50 architecture [11]. Models have been homomorphically evaluated on a variety of standard datasets, including CIFAR-10, CIFAR-100, and ImageNet-1k. We proposed a training-time technique to regularize the range of inputs to the GELU activation function by penalizing the fourth-order statistical moment of the outputs of the BatchNorm distributions; this technique allows us to efficiently approximate the GELU function with polynomials under homomorphic constraints. When runtimes are compared to the previously reported runtimes of the multiplexed family, we observe a speedup over the previous state-of-the-art of approximately \(4.6-6.5\times\) without any classification accuracy degradation. We also report the highest homomorphically encrypted accuracy on CIFAR-10 and ImageNet-1k of \(98.3\%\) and \(80.2\%\), respectively.
Future work includes extending our models to more advanced tasks, such as encrypted object detection with the YOLO [18] family of architectures and sensitive document analysis with encrypted transformers [23]. Parallelization in this work was achieved using multicore CPUs, but vectorized addition and multiplication operations on ciphertext vectors could be ported to GPUs (or other hardware accelerators) to further accelerate computation and minimize latency.
2302.03635
Multipolar Hardy inequalities and mutual interaction of the poles
In this paper we state the weighted Hardy inequality \begin{equation*} c\int_{{\mathbb R}^N}\sum_{i=1}^n \frac{\varphi^2 }{|x-a_i|^2}\, \mu(x)dx\le \int_{{\mathbb R}^N} |\nabla\varphi|^2 \, \mu(x)dx +k \int_{\mathbb{R}^N}\varphi^2 \, \mu(x)dx \end{equation*} for any $\varphi$ in a weighted Sobolev space, with $c\in]0,c_o[$ where $c_o=c_o(N,\mu)$ is the optimal constant, $a_1,\dots,a_n\in \mathbb{R}^N$, and $k$ is a constant depending on $\mu$. We show the relation between $c$ and the closeness to the single pole. To this aim we analyze in detail the difficulties to be overcome to get the inequality.
Anna Canale
2023-02-07T17:43:24Z
http://arxiv.org/abs/2302.03635v1
# Multipolar Hardy inequalities and mutual interaction of the poles ###### Abstract. In this paper we state the weighted Hardy inequality \[c\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,\mu(x) dx\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx+k\int_{\mathbb{R}^{N}} \varphi^{2}\,\mu(x)dx\] for any \(\varphi\) in a weighted Sobolev space, with \(c\in]0,c_{o}[\) where \(c_{o}=c_{o}(N,\mu)\) is the optimal constant, \(a_{1},\dots,a_{n}\in\mathbb{R}^{N}\), and \(k\) is a constant depending on \(\mu\). We show the relation between \(c\) and the closeness to the single pole. To this aim we analyze in detail the difficulties to be overcome to get the inequality. _Key words and phrases_. Weight functions, Multipolar Hardy inequalities, Kolmogorov operators, Singular potentials. The author is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The paper deals with operators \(L\) of Kolmogorov type, defined on smooth functions, where \(\mu>0\) is a probability density on \(\mathbb{R}^{N}\), perturbed by inverse square potentials of multipolar type, and with the related evolution problems \[(P)\quad\left\{\begin{array}{l}\partial_{t}u(x,t)=Lu(x,t)+V(x)u(x,t),\quad x\in \mathbb{R}^{N},t>0,\\ u(\cdot,0)=u_{0}\geq 0\in L^{2}(\mathbb{R}^{N},\mu(x)dx).\end{array}\right.\] In the case of a single pole and of the Lebesgue measure there is a vast literature on this topic. For the classical Hardy inequality we refer, for example, to [17, 18, 19, 15, 20, 21]. We focus our attention on multipolar Hardy inequalities. When \(L\) is the Schrödinger operator with multipolar inverse square potentials, some reference results can be found in the literature. In particular, for the operator \[\mathcal{L}=-\Delta-\sum_{i=1}^{n}\frac{c_{i}}{|x-a_{i}|^{2}},\] \(n\geq 2\), \(c_{i}\in\mathbb{R}\), for any \(i\in\{1,\ldots,n\}\), V. Felli, E. M. Marchini and S. Terracini in [16] proved that the associated quadratic form \[Q(\varphi):=\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,dx-\sum_{i=1}^{n}c_{i} \int_{\mathbb{R}^{N}}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,dx\] is positive if \(\sum_{i=1}^{n}c_{i}^{+}<\frac{(N-2)^{2}}{4}\), \(c_{i}^{+}=\max\{c_{i},0\}\); conversely, if \(\sum_{i=1}^{n}c_{i}^{+}>\frac{(N-2)^{2}}{4}\) there exists a configuration of poles such that \(Q\) is not positive. Later R. Bosi, J. Dolbeault and M. J. Esteban in [1] proved that for any \(c\in\left(0,\frac{(N-2)^{2}}{4}\right]\) there exists a positive constant \(K\) such that the multipolar Hardy inequality \[c\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,dx\leq \int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,dx+K\int_{\mathbb{R}^{N}}\varphi^{2} \,dx\] holds for any \(\varphi\in H^{1}(\mathbb{R}^{N})\). C. Cazacu and E. Zuazua in [14], improving a result stated in [1], obtained the inequality \[\frac{(N-2)^{2}}{n^{2}}\sum_{\begin{subarray}{c}i,j=1\\ i<j\end{subarray}}^{n}\int_{\mathbb{R}^{N}}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\varphi^{2}\,dx\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,dx,\] for any \(\varphi\in H^{1}(\mathbb{R}^{N})\), with optimal constant \(\frac{(N-2)^{2}}{n^{2}}\) (see also [13] for estimates in bounded domains).
For Ornstein-Uhlenbeck type operators \[Lu=\Delta u-\sum_{i=1}^{n}A(x-a_{i})\cdot\nabla u,\] perturbed by multipolar inverse square potentials \[V(x)=\sum_{i=1}^{n}\frac{c}{|x-a_{i}|^{2}},\quad c>0,\quad a_{1},\ldots,a_{n}\in\mathbb{R}^{N},\] weighted multipolar Hardy inequalities with optimal constant and related existence and nonexistence of solutions to the problem (P) were stated in [10] following Cabre-Martel's approach in [2], with \(A\) a positive definite real Hermitian \(N\times N\) matrix, \(a_{i}\in\mathbb{R}^{N}\), \(i\in\{1,\ldots,n\}\). In such a case, the invariant measure for these operators is the Gaussian measure \(\mu_{A}(x)dx=\kappa e^{-\frac{1}{2}\sum_{i=1}^{n}\langle A(x-a_{i}),x-a_{i}\rangle}dx\), with a normalization constant \(\kappa\). The technique used to get the inequality applies to Gaussian functions and allows us to obtain the result in a simple way. A more delicate issue is to prove the optimality of the constant. In [12] these results have been extended to Kolmogorov operators with a more general drift term, which forces us to use different methods. The result stated in [14] has been extended to the weighted multipolar case in [6]. In this paper we improve a result in [12]. In particular we show that \[c\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,\mu(x) dx\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx+k\int_{\mathbb{R}^{N}} \varphi^{2}\,\mu(x)dx \tag{1.2}\] holds for any \(\varphi\in H^{1}_{\mu}\), with \(c\in]0,c_{o}[\) where \(c_{o}=c_{o}(N,\mu)\) is the optimal constant, showing the relation between \(c\) and the closeness to the single pole and improving the constant \(k\) in the estimate. The proof initially uses the vector field method (see [22]) extended to the weighted case. Then we overcome the difficulties related to the mutual interaction between the poles, emphasizing this relation. The class of weight functions satisfies conditions of quite general type, in particular integrability conditions needed to get a density result which allows us to state inequality (1.2) for any function in the weighted Sobolev space. Weights of this type were considered in [11, 3, 4, 5] in the case of a single pole. Up to now, the optimal constant on the left-hand side in (1.2) can only be achieved using the IMS truncation method [23, 24] (see [1] in the case of the Lebesgue measure and [12] in the weighted case). As a counterpart, the estimate is not very good when the constant \(c\) is close to the constant \(\frac{c_{o}(N,\mu)}{n}\), as observed in [1] in the unweighted case. The paper is organized as follows. In Section 2 we consider the weight functions with an example. In Section 3 we show a preliminary result, introducing suitable estimates useful to state the main result in Section 4.

## 2. Weight functions

Let \(\mu\geq 0\) be a weight function on \(\mathbb{R}^{N}\). We define the weighted Sobolev space \(H^{1}_{\mu}=H^{1}(\mathbb{R}^{N},\mu(x)dx)\) as the space of functions in \(L^{2}_{\mu}:=L^{2}(\mathbb{R}^{N},\mu(x)dx)\) whose weak derivatives belong to \(L^{2}_{\mu}\). In the proof of weighted estimates we make use of the vector field method introduced in [22] in the case of a single pole and extended to the multipolar case in [12].
To this aim we define the vector-valued function \[F(x)=\sum_{i=1}^{n}\beta\,\frac{x-a_{i}}{|x-a_{i}|^{2}}\mu(x),\qquad\beta>0.\] The class of weight functions \(\mu\) that we consider fulfills the conditions: \(H_{1})\) \(i)\) \(\sqrt{\mu}\in H^{1}_{loc}(\mathbb{R}^{N})\); \(ii)\) \(\mu^{-1}\in L^{1}_{loc}(\mathbb{R}^{N})\); \(H_{2})\) there exist constants \(C_{\mu},K_{\mu}\in\mathbb{R}\), \(K_{\mu}>2-N\), such that it holds \[-\beta\sum_{i=1}^{n}\frac{(x-a_{i})}{|x-a_{i}|^{2}}\cdot\frac{\nabla\mu}{\mu}\leq C_{\mu}+K_{\mu}\sum_{i=1}^{n}\frac{\beta}{|x-a_{i}|^{2}}.\] Under the hypotheses \(i)\) and \(ii)\) in \(H_{1})\) the space \(C^{\infty}_{c}(\mathbb{R}^{N})\) is dense in \(H^{1}_{\mu}\) (see e.g. [25]). So we can regard \(H^{1}_{\mu}\) as the completion of \(C^{\infty}_{c}(\mathbb{R}^{N})\) with respect to the Sobolev norm \[\|\cdot\|_{H^{1}_{\mu}}^{2}:=\|\cdot\|_{L^{2}_{\mu}}^{2}+\|\nabla\cdot\|_{L^{2}_{\mu}}^{2}.\] The density result allows us to get the weighted inequalities for any function in \(H^{1}_{\mu}\). As a consequence of the assumptions on \(\mu\), we get \(F_{j}\), \(\frac{\partial F_{j}}{\partial x_{j}}\in L^{1}_{loc}(\mathbb{R}^{N})\), where \(F_{j}(x)=\beta\sum_{i=1}^{n}\frac{(x-a_{i})_{j}}{|x-a_{i}|^{2}}\mu(x)\). This allows us to integrate by parts in the proof of Theorem 4.1 in Section 4. An example of weight function satisfying \(H_{2})\) is \[\mu(x)=\prod_{j=1}^{n}\mu_{j}(x)=e^{-\delta\sum_{j=1}^{n}|x-a_{j}|^{2}},\qquad\delta\geq 0.\] Let us see it in detail without worrying about the best estimates. We get \[\frac{\nabla\mu}{\mu}=\sum_{j=1}^{n}\frac{\nabla\mu_{j}}{\mu_{j}}=-2\delta\sum_{j=1}^{n}(x-a_{j}).\] So, bearing in mind the left-hand side in \(H_{2})\), \[-\beta\sum_{i=1}^{n}\frac{(x-a_{i})}{|x-a_{i}|^{2}}\cdot\frac{\nabla\mu}{\mu}=2\beta\delta\sum_{i,j=1}^{n}\frac{(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}}.\] We estimate the scalar product. In \(B(a_{k},r_{0})\), for any \(k\in\{1,\ldots,n\}\), we get \[2\beta\delta\sum_{i,j=1}^{n}\frac{(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}}=2\beta\delta\frac{(x-a_{k})\cdot(x-a_{k})}{|x-a_{k}|^{2}}\] \[+2\beta\delta\sum_{\begin{subarray}{c}i\neq k\\ j=i\end{subarray}}^{n}\frac{(x-a_{i})\cdot(x-a_{i})}{|x-a_{i}|^{2}}+2\beta\delta\sum_{j\neq k}\frac{(x-a_{k})\cdot(x-a_{j})}{|x-a_{k}|^{2}}\] \[+2\beta\delta\sum_{\begin{subarray}{c}i\neq k\\ j\neq i\end{subarray}}^{n}\frac{(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}}=J_{1}+J_{2}+J_{3}+J_{4}.\] So \[J_{1}+J_{2}=2\beta\delta n.\] Since \[(x-a_{k})\cdot(x-a_{j})=\frac{1}{2}\left(|x-a_{k}|^{2}+|x-a_{j}|^{2}-|a_{k}-a_{j}|^{2}\right),\] \(J_{3}\) and \(J_{4}\) can be estimated as follows. \[J_{3}=\beta\delta\sum_{j\neq k}\left(1+\frac{|x-a_{j}|^{2}-|a_{k}-a_{j}|^{2}}{|x-a_{k}|^{2}}\right)\leq\beta\delta\sum_{j\neq k}\left[1+\frac{(r_{0}+|a_{k}-a_{j}|)^{2}-|a_{k}-a_{j}|^{2}}{|x-a_{k}|^{2}}\right],\] and \[J_{4}=\beta\delta\sum_{\begin{subarray}{c}i\neq k\\ j\neq i\end{subarray}}^{n}\left(1+\frac{|x-a_{j}|^{2}-|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}}\right)\leq\beta\delta\sum_{\begin{subarray}{c}i\neq k\\ j\neq i\end{subarray}}^{n}\left[1+\frac{(r_{0}+|a_{k}-a_{j}|)^{2}-|a_{i}-a_{j}|^{2}}{(|a_{k}-a_{i}|-r_{0})^{2}}\right].\] Then for \(C_{\mu}\) large enough and \(K_{\mu,r_{0}}=\delta\sum_{j\neq k}(r_{0}^{2}+2r_{0}|a_{k}-a_{j}|)\) in \(B(a_{k},r_{0})\) the condition \(H_{2})\) holds.
For \(x\in\mathbb{R}^{N}\setminus\bigcup_{k=1}^{n}B(a_{k},r_{0})\) we obtain \[\frac{(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}}\leq\frac{|x-a_{j}|}{|x-a_{i}|}\leq\text{const}.\] In fact, if \(|x|>2\max_{i}|a_{i}|\), \[\frac{|x|}{2}\leq|x|-|a_{i}|\leq|x-a_{i}|\leq|x|+|a_{i}|\leq\frac{3}{2}|x|\] for any \(i\), so for \(|x|\) large enough we get \(|x-a_{i}|\sim|x|\). Instead, if \(|x|\leq R=2\max_{i}|a_{i}|\), \[r_{0}\leq|x-a_{i}|\leq|x|+|a_{i}|\leq\frac{3}{2}R\] for any \(i\). For other examples see [12].

## 3. A preliminary estimate

The next result was stated in [12] (see also [6]). We give a reformulated version that is suited to our purposes. The estimate represents a preliminary weighted Hardy inequality. **Theorem 3.1**.: _Let \(N\geq 3\) and \(n\geq 2\). Under hypotheses \(H_{1})\) and \(H_{2})\) we get_ \[\begin{split}\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}&\frac{\beta(N+K_{\mu}-2)-n\beta^{2}}{|x-a_{i}|^{2}}\varphi^{2}\,d\mu\\ &+\frac{\beta^{2}}{2}\int_{\mathbb{R}^{N}}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\varphi^{2}\,d\mu\\ &\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,d\mu+C_{\mu}\int_{\mathbb{R}^{N}}\varphi^{2}\,d\mu\end{split} \tag{3.1}\] _for any \(\varphi\in H_{\mu}^{1}\). As a consequence the following inequality holds_ \[\begin{split} c_{N,n,\mu}\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}&\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,d\mu\\ &+\frac{c_{N,n,\mu}}{2n}\int_{\mathbb{R}^{N}}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\varphi^{2}\,d\mu\\ &\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,d\mu+C_{\mu}\int_{\mathbb{R}^{N}}\varphi^{2}\,d\mu,\end{split} \tag{3.2}\] _where \(c_{N,n,\mu}=\frac{(N+K_{\mu}-2)^{2}}{4n}\) is the maximum value of the first constant on the left-hand side in (3.1), attained for \(\beta=\frac{N+K_{\mu}-2}{2n}\)._ The proof of Theorem 3.1 in [12] is based on the vector field method extended to the multipolar case. In [1] an estimate similar to (3.2) was obtained in a different way when \(\mu=1\). We observe that inequality (3.2) is an improved inequality with respect to the first example of a multipolar inequality with weight \[\frac{(N+K_{\mu}-2)^{2}}{4n}\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,d\mu\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,d\mu+C_{\mu}\int_{\mathbb{R}^{N}}\varphi^{2}\,d\mu,\] which is the natural generalization of the weighted Hardy inequality (see [11]) \[\frac{(N+K_{\mu}-2)^{2}}{4}\int_{\mathbb{R}^{N}}\frac{\varphi^{2}}{|x|^{2}}\,d\mu\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,d\mu+C_{\mu}\int_{\mathbb{R}^{N}}\varphi^{2}\,d\mu. \tag{3.3}\] Now we focus our attention on the second term on the left-hand side in (3.1).
For simplicity we put \[W(x):=\frac{1}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}. \tag{3.4}\] In \(B(a_{i},r_{0})\), taking into account that \[W=\frac{1}{|x-a_{i}|^{2}}\sum_{j\neq i}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{j}|^{2}}+\sum_{\begin{subarray}{c}k,j\neq i\\ j>k\end{subarray}}^{n}\frac{|a_{k}-a_{j}|^{2}}{|x-a_{k}|^{2}|x-a_{j}|^{2}},\] we have the following estimates for \(W\) from above and from below: \[W\leq \frac{1}{|x-a_{i}|^{2}}\sum_{j\neq i}\frac{|a_{i}-a_{j}|^{2}}{(|a_{i}-a_{j}|-|x-a_{i}|)^{2}}\] \[+\sum_{\begin{subarray}{c}k,j\neq i\\ j>k\end{subarray}}^{n}\frac{|a_{k}-a_{j}|^{2}}{(|a_{i}-a_{k}|-|x-a_{i}|)^{2}(|a_{i}-a_{j}|-|x-a_{i}|)^{2}}\] \[\leq\frac{n-1}{|x-a_{i}|^{2}}\frac{d^{2}}{(d-r_{0})^{2}}+\sum_{\begin{subarray}{c}k,j\neq i\\ j>k\end{subarray}}^{n}\frac{|a_{k}-a_{j}|^{2}}{(|a_{i}-a_{k}|-r_{0})^{2}(|a_{i}-a_{j}|-r_{0})^{2}}\] \[\leq\frac{n-1}{|x-a_{i}|^{2}}\frac{d^{2}}{(d-r_{0})^{2}}+c_{1}\] and \[\begin{split} W\geq&\frac{1}{|x-a_{i}|^{2}}\sum_{j\neq i}\frac{|a_{i}-a_{j}|^{2}}{(|a_{i}-a_{j}|+|x-a_{i}|)^{2}}\\ &+\sum_{\begin{subarray}{c}k,j\neq i\\ j>k\end{subarray}}^{n}\frac{|a_{k}-a_{j}|^{2}}{(|a_{i}-a_{k}|+|x-a_{i}|)^{2}(|a_{i}-a_{j}|+|x-a_{i}|)^{2}}\\ &\geq\frac{n-1}{|x-a_{i}|^{2}}\frac{d^{2}}{(d+r_{0})^{2}}+\sum_{\begin{subarray}{c}k,j\neq i\\ j>k\end{subarray}}^{n}\frac{|a_{k}-a_{j}|^{2}}{(|a_{i}-a_{k}|+r_{0})^{2}(|a_{i}-a_{j}|+r_{0})^{2}}\\ &\geq\frac{n-1}{|x-a_{i}|^{2}}\frac{d^{2}}{(d+r_{0})^{2}}+c_{2}. \end{split} \tag{3.5}\] When \(x\) tends to \(a_{i}\) we get \[W\sim\frac{n-1}{|x-a_{i}|^{2}} \tag{3.6}\] and then, bearing in mind inequality (3.1), we have the asymptotic behaviour \[\begin{split}\sum_{i=1}^{n}&\frac{\beta(N+K_{\mu}-2)-n\beta^{2}}{|x-a_{i}|^{2}}+\frac{\beta^{2}}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\\ &\sim\left[\beta(N+K_{\mu}-2)-n\beta^{2}+\beta^{2}(n-1)\right]\frac{1}{|x-a_{i}|^{2}}\\ &=\left[\beta(N+K_{\mu}-2)-\beta^{2}\right]\frac{1}{|x-a_{i}|^{2}}.\end{split} \tag{3.7}\] The maximum value of the constant on the right-hand side in (3.7) is the best constant in the weighted Hardy inequality with a single pole (see (3.3)).

## 4. Weighted multipolar Hardy inequality

The behaviour of the function \(W\) in (3.6) when \(x\) tends to the pole \(a_{i}\) leads us to study the relation between the constant on the left-hand side in weighted Hardy inequalities and the closeness to the single pole. The next result emphasizes this relation and improves a similar inequality stated in [12] in a different way. **Theorem 4.1**.: _Assume that the conditions \(H_{1})\) and \(H_{2})\) hold. Then for any \(\varphi\in H_{\mu}^{1}\) we get_ \[c\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\varphi^{2}}{|x-a_{i}|^{2}}\,\mu(x)dx\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx+k\int_{\mathbb{R}^{N}}\varphi^{2}\,\mu(x)dx \tag{4.1}\] _with \(c\in\,]0,c_{o}(N+K_{\mu})[\), where \(c_{o}(N+K_{\mu})=\left(\frac{N+K_{\mu}-2}{2}\right)^{2}\) is the optimal constant, and \(k=k(n,d,\mu)\), \(d:=\min_{\begin{subarray}{c}1\leq i,j\leq n\\ i\neq j\end{subarray}}|a_{i}-a_{j}|/2\)._ Proof. By density, it is enough to prove (4.1) for any \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{N})\). The optimality of the constant \(c_{o}(N+K_{\mu})\) was stated in [12]. We will prove the inequality (4.1).
We start from the integral \[\int_{\mathbb{R}^{N}}\varphi^{2}\text{div}F\,dx=\beta\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\left[\frac{N-2}{|x-a_{i}|^{2}}\mu(x)+\frac{(x-a_{i})}{|x-a_{i}|^{2}}\cdot\nabla\mu\right]\varphi^{2}dx \tag{4.2}\] and integrate by parts, obtaining, through Hölder's and Young's inequalities, the following estimate \[\int_{\mathbb{R}^{N}} \varphi^{2}\text{div}F\,dx=-2\int_{\mathbb{R}^{N}}\varphi F\cdot\nabla\varphi\,dx\] \[\leq 2\left(\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx\right)^{\frac{1}{2}}\left[\int_{\mathbb{R}^{N}}\left|\sum_{i=1}^{n}\frac{\beta\left(x-a_{i}\right)}{|x-a_{i}|^{2}}\right|^{2}\,\varphi^{2}\,\mu(x)dx\right]^{\frac{1}{2}}\] \[\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx+\int_{\mathbb{R}^{N}}\left|\sum_{i=1}^{n}\frac{\beta\left(x-a_{i}\right)}{|x-a_{i}|^{2}}\right|^{2}\,\varphi^{2}\,\mu(x)dx. \tag{4.3}\] So from (4.2), using the estimate (4.3), we get \[\begin{split}\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\beta(N-2)}{|x-a_{i}|^{2}}\varphi^{2}\mu(x)dx\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\,\mu(x)dx\\ +\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\beta^{2}}{|x-a_{i}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ +\int_{\mathbb{R}^{N}}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\,(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ -\beta\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{(x-a_{i})}{|x-a_{i}|^{2}}\cdot\nabla\mu\,\varphi^{2}dx.\end{split} \tag{4.4}\] Let \(\varepsilon>0\) be small enough and \(\delta>0\) such that \(\varepsilon+\delta<\frac{d}{2}\). The next step is to estimate the integral of the mixed term that comes out of the square of the sum in (4.4) by writing \[\begin{split}\int_{\mathbb{R}^{N}}&\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\,(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ &=\int_{\bigcup_{k=1}^{n}B(a_{k},\varepsilon)}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\,(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ &+\int_{\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)\backslash\bigcup_{k=1}^{n}B(a_{k},\varepsilon)}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\,(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ &+\int_{\mathbb{R}^{N}\backslash\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\,(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ &:=I_{1}+I_{2}+I_{3}.\end{split} \tag{4.5}\] Subsequently we will rewrite the mixed term in the following way.
\[\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n} \frac{(x-a_{i})\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}=\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|x|^{2}-x\cdot a_{i}-x\cdot a_{j}+a_{i}\cdot a_{j}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\] \[=\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{\frac{|x-a_{i}|^{2}}{2}+\frac{|x-a_{j}|^{2}}{2}-\frac{|a_{i}-a_{j}|^{2}}{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\] \[=\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{1}{2}\left(\frac{1}{|x-a_{i}|^{2}}+\frac{1}{|x-a_{j}|^{2}}-\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\right) \tag{4.6}\] \[=(n-1)\sum_{i=1}^{n}\frac{1}{|x-a_{i}|^{2}}-\frac{1}{2}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}\frac{|a_{i}-a_{j}|^{2}}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\] \[=(n-1)\sum_{i=1}^{n}\frac{1}{|x-a_{i}|^{2}}-W.\] To estimate the integral \(I_{1}\) in (4.5) we use the estimate (3.5) in Section 3 for \(W\) in a ball centered at \(a_{k}\) and the identity (4.6). We obtain \[I_{1} \leq\beta^{2}\sum_{k=1}^{n}\int_{B(a_{k},\varepsilon)}\left[\sum_{i=1}^{n}\frac{n-1}{|x-a_{i}|^{2}}-\frac{n-1}{|x-a_{k}|^{2}}\frac{d^{2}}{(d+\varepsilon)^{2}}\right.\] \[-\sum_{\begin{subarray}{c}i,j\neq k\\ j>i\end{subarray}}\frac{|a_{i}-a_{j}|^{2}}{(|a_{k}-a_{i}|+\varepsilon)^{2}(|a_{k}-a_{j}|+\varepsilon)^{2}}\right]\varphi^{2}\,\mu(x)dx\] \[=\beta^{2}\sum_{k=1}^{n}\int_{B(a_{k},\varepsilon)}\left\{\frac{n-1}{|x-a_{k}|^{2}}\left[1-\frac{d^{2}}{(d+\varepsilon)^{2}}\right]+\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{n}\frac{n-1}{|x-a_{i}|^{2}}\right.\] \[-\left.\sum_{\begin{subarray}{c}i,j\neq k\\ j>i\end{subarray}}\frac{|a_{i}-a_{j}|^{2}}{(|a_{k}-a_{i}|+\varepsilon)^{2}(|a_{k}-a_{j}|+\varepsilon)^{2}}\right\}\varphi^{2}\,\mu(x)dx.\] To complete the estimate of \(I_{1}\) we observe that in \(B(a_{k},\varepsilon)\), for \(i\neq k\), we have \[|x-a_{i}|\geq|a_{k}-a_{i}|-|x-a_{k}|\geq|a_{k}-a_{i}|-\varepsilon\] so we get \[\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{n}\frac{n-1}{|x-a_{i}|^{2}}\leq\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{n}\frac{n-1}{(|a_{k}-a_{i}|-\varepsilon)^{2}}.\] Then \[I_{1}\leq\beta^{2}\sum_{k=1}^{n}\int_{B(a_{k},\varepsilon)}\left\{\frac{n-1}{|x-a_{k}|^{2}}\left[1-\frac{d^{2}}{(d+\varepsilon)^{2}}\right]+c_{3}\right\}\,\varphi^{2}\,\mu(x)dx,\] where \[c_{3}=\sum_{\begin{subarray}{c}i=1\\ i\neq k\end{subarray}}^{n}\frac{n-1}{(|a_{k}-a_{i}|-\varepsilon)^{2}}-\sum_{\begin{subarray}{c}i,j\neq k\\ j>i\end{subarray}}\frac{|a_{i}-a_{j}|^{2}}{(|a_{k}-a_{i}|+\varepsilon)^{2}(|a_{k}-a_{j}|+\varepsilon)^{2}}.\] For the second integral \(I_{2}\) we observe that in \(B(a_{k},\varepsilon+\delta)\setminus B(a_{k},\varepsilon)\), for \(j\neq k\), \(|x-a_{k}|>\varepsilon\) and \[|x-a_{j}|\geq|a_{k}-a_{j}|-|x-a_{k}|\geq|a_{k}-a_{j}|-(\varepsilon+\delta).\] Therefore \[I_{2}\leq \int_{\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)\setminus\bigcup_{k=1}^{n}B(a_{k},\varepsilon)}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}}{|x-a_{i}||x-a_{j}|}\,\varphi^{2}\,\mu(x)dx\] \[\leq\frac{n\beta^{2}}{\varepsilon}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{n}\frac{1}{|a_{k}-a_{j}|-(\varepsilon+\delta)}\int_{\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)\setminus\bigcup_{k=1}^{n}B(a_{k},\varepsilon)}\varphi^{2}\,\mu(x)dx.\] The remaining integral \(I_{3}\) can be estimated as follows.
\[I_{3}\leq \int_{\mathbb{R}^{N}\setminus\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)}\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}}{|x-a_{i}||x-a_{j}|}\,\varphi^{2}\,\mu(x)dx\] \[\leq\frac{n(n-1)\beta^{2}}{(\varepsilon+\delta)^{2}}\int_{\mathbb{R}^{N}\setminus\bigcup_{k=1}^{n}B(a_{k},\varepsilon+\delta)}\varphi^{2}\,\mu(x)dx.\] Starting from (4.5) and using the estimates obtained for \(I_{1}\), \(I_{2}\) and \(I_{3}\), we get, for \(\varepsilon\) small enough, \[\begin{split}\int_{\mathbb{R}^{N}}&\sum_{\begin{subarray}{c}i,j=1\\ j\neq i\end{subarray}}^{n}\frac{\beta^{2}\left(x-a_{i}\right)\cdot(x-a_{j})}{|x-a_{i}|^{2}|x-a_{j}|^{2}}\,\varphi^{2}\,\mu(x)dx\\ &\leq\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}\frac{\beta^{2}(n-1)c_{\varepsilon}}{|x-a_{i}|^{2}}\,\varphi^{2}\,\mu(x)dx+c_{4}\int_{\mathbb{R}^{N}}\,\varphi^{2}\,\mu(x)dx,\end{split} \tag{4.7}\] where \[c_{\varepsilon}=1-\frac{d^{2}}{(d+\varepsilon)^{2}}\qquad\text{and}\qquad c_{4}=\frac{n\beta^{2}}{\varepsilon}\sum_{\begin{subarray}{c}j=1\\ j\neq k\end{subarray}}^{n}\frac{1}{|a_{k}-a_{j}|-(\varepsilon+\delta)}.\] Going back to (4.4), by (4.7) and by the hypothesis \(H_{2})\), we deduce that \[\begin{split}\int_{\mathbb{R}^{N}}\sum_{i=1}^{n}&\frac{\beta(N+K_{\mu}-2)-\beta^{2}\left[1+(n-1)c_{\varepsilon}\right]}{|x-a_{i}|^{2}}\varphi^{2}\,\mu(x)dx\\ &\leq\int_{\mathbb{R}^{N}}|\nabla\varphi|^{2}\mu(x)dx+(c_{4}+C_{\mu})\int_{\mathbb{R}^{N}}\varphi^{2}\,\mu(x)dx.\end{split}\] The maximum of the function \(\beta\mapsto c=(N+K_{\mu}-2)\beta-\beta^{2}\left[1+(n-1)c_{\varepsilon}\right]\), for fixed \(\varepsilon\), is \(c_{max}(N+K_{\mu})=\frac{(N+K_{\mu}-2)^{2}}{4[1+(n-1)c_{\varepsilon}]}\), attained at \(\beta_{max}=\frac{N+K_{\mu}-2}{2[1+(n-1)c_{\varepsilon}]}\). We conclude with some remarks. If \(\varepsilon\) tends to zero, that is, if we get close enough to the single pole, the constant \(c=\frac{(N+K_{\mu}-2)^{2}}{4[1+(n-1)c_{\varepsilon}]}\) tends to the optimal constant \(c_{o}(N+K_{\mu})\). The constant \(k=c_{4}+C_{\mu}\), with \(c_{4}\) evaluated at \(\beta=\beta_{max}\), is better than the analogous constant in [12]. In the case of the Gaussian measure the constant \(K_{\mu}\) tends to zero as the radius \(\varepsilon\) of the sphere centered at a single pole tends to zero (cf. the example in Section 2). Finally, we observe that, as a consequence of Theorem 4.1, we deduce the estimate \[\|V^{\frac{1}{2}}\varphi\|_{L^{2}_{\mu}(\mathbb{R}^{N})}\leq c\|\varphi\|_{H^{1}_{\mu}(\mathbb{R}^{N})},\] with \(V=\sum_{i=1}^{n}\frac{1}{|x-a_{i}|^{2}}\) and \(c\) a constant independent of \(V\) and \(\varphi\). For \(L^{p}\) estimates and embedding results of this type with some applications to elliptic equations see, for example, [7, 8, 9].
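As a worked check of the remark on closeness to the single pole (an editorial illustration, not part of the original argument): since \[c_{\varepsilon}=1-\frac{d^{2}}{(d+\varepsilon)^{2}}=\frac{2d\varepsilon+\varepsilon^{2}}{(d+\varepsilon)^{2}}\longrightarrow 0\quad\text{as }\varepsilon\to 0,\] it follows that \[c_{max}(N+K_{\mu})=\frac{(N+K_{\mu}-2)^{2}}{4\left[1+(n-1)c_{\varepsilon}\right]}\longrightarrow\frac{(N+K_{\mu}-2)^{2}}{4}=c_{o}(N+K_{\mu}).\]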
2310.13637
Application Performance Benchmarks for Quantum Computers
Current technological advancements of quantum computers highlight the need for application-driven, practical and well-defined methods of benchmarking their performance. As the two-qubit gate error rates of existing NISQ devices are still around one percent and the number of qubits is limited to a few or several dozen, we naturally need to propose rather small algorithm instances taken from key promising application areas, such as quantum chemistry, combinatorial optimisation or machine learning. While many techniques for assessing the performance of logical components, such as gate fidelity and qubit coherence, exist, it is still challenging to extrapolate those values onto the performance of different quantum algorithms and subroutines. This work aims to introduce a series of initial quantum application benchmarks together with a methodology of execution for measuring the performance and fidelity of the results. The proposed suite refers to several variational algorithms widely used on available NISQ devices but also includes examples of quantum circuits designed for a fault-tolerant quantum computer.
Krzysztof Kurowski, Piotr Rydlichowski, Konrad Wojciechowski, Tomasz Pecyna, Mateusz Slysz
2023-10-20T16:37:03Z
http://arxiv.org/abs/2310.13637v2
# Application Performance Benchmarks for Quantum Computers ###### Abstract Current technological advancements of quantum computers highlight the need for application-driven, practical and well-defined methods of benchmarking their performance. As the two-qubit gate error rates of existing NISQ devices are around 0.1%-1% and the number of qubits is still limited to a few or several dozen, we naturally need to propose rather small algorithm instances taken from key promising application areas, such as quantum chemistry, combinatorial optimisation or machine learning. While many techniques for assessing the performance of logical components, such as gate fidelity and qubit coherence, exist, it is often challenging to extrapolate those values onto the performance of different quantum algorithms and subroutines. This work aims to introduce a series of initial quantum application benchmarks together with a methodology of execution for measuring the performance and fidelity of the results. The proposed suite refers to several variational algorithms, widely used on current NISQ devices, but also includes examples of quantum circuits designed for a fault-tolerant quantum computer.

## 1 Introduction

Based on the recent developments in the physical implementations of quantum computers, still in the NISQ era, we can observe the growing interest of different scientific communities and industries in designing accurate benchmarking routines. With each new quantum device or its upgraded version, it is certainly beneficial to access standardized performance evaluation methods, which allow comparing the overall performance and tracking improvements across different quantum computer architectures. However, designing such benchmarks is challenging, especially given the vast dissimilarities in technologies employed by specific quantum platforms and the current machines' susceptibility to different noise sources. This challenging problem has been addressed by some recent works, such as [1, 2, 3], while others have identified guidelines for creating valid benchmarks [4]. Nevertheless, this paper presents a quantum benchmarking suite focused on application-oriented circuits. The main objective is to collect and share quantum circuits with different properties and structures to measure quantum hardware's performance in various commonly used application scenarios.

### Common metrics

Among the various metrics used to evaluate the quality and performance of a quantum device, two metrics of particular interest are Quantum Volume and CLOPS (Circuit Layer Operations per Second). These metrics hold significance in the context of quantum computing, and in the following discussion we examine their importance and implications for assessing the capabilities and efficiency of quantum devices.

#### 1.1.1 Quantum Volume

Quantum Volume (QV) [5] is one of the most widely-used metrics for estimating quantum computer capabilities. It captures the maximum size of a square quantum circuit that can be executed on a quantum computer with an output probability sufficiently similar to the output of the same circuit simulated classically. Quantum Volume \(V_{Q}\) can be expressed with the following formula: \[\log_{2}V_{Q}=\arg\max_{m}\min(m,d(m))\] where \(m\) is the width of the Haar-random circuit and \(d(m)\) is the depth.
The execution of a QV circuit is successful when the heavy-output probability \(h_{U}\) is greater than \(2/3\), where heavy outputs \(H_{U}\) have probabilities above the median value of the ideal distribution. This ideal distribution is generated by classically simulating the circuit with exponential overhead. Even though quantum algorithms typically do not consist of random circuits, the methodology of the QV benchmark assumes that such circuits can represent generic state preparation routines. Moreover, their structure, which utilizes two-qubit unitary gates, resembles circuits commonly used in NISQ algorithms, such as variational methods and quantum adiabatic optimization [6]. These properties highlight QV's usefulness as a single-number metric for benchmarking near-term quantum computers.

#### 1.1.2 CLOPS

While QV is a holistic measure of a quantum computer's performance, encompassing factors like capacity and quality of the final results, it is also imperative to account for the speed at which these results are achieved. This is where CLOPS was introduced as the dedicated metric for quantifying the quantum computer's processing speed and computational efficiency [7]. CLOPS relies on the notion of layers, representing the basic blocks of parallel gate operations that are required to complete the quantum circuit. These layers are based on the QV layers, which also make up the circuit depth. The formula for CLOPS is as follows: \[CLOPS=\frac{M\times K\times S\times D}{\text{time\_taken}}, \tag{1}\] where \(M\) denotes the number of circuit templates, \(K\) is the number of parameter updates, \(S\) is the number of shots, and \(D\) is the number of QV layers. Although the metric might appear straightforward, it captures various potential performance problems that a circuit may encounter, particularly in scenarios involving parametrized circuits and complex algorithms demanding efficient execution, making it a noteworthy metric for comparing quantum devices.

## 2 Assumptions

Adhering to the previously outlined metrics, we define circuit depth as the number of parallelly executed gates, called layers, that must be sequentially applied to compute a circuit. Following best practices presented in [1] we take the Rx, Ry, Rz, CNOT set as a basis gate set. We also assume that the CNOT gates can be applied between any pair of qubits. We opted to assess quantum computers primarily based on specific application-oriented use cases. We will employ the fidelity measure to minimize the risk of benchmarking algorithms rather than the quantum computer's true capabilities. This measure gauges the quantum computer's ability to execute the described circuits by comparing the resulting measurement distribution \(P_{\text{output}}\) to the ideal one \(P_{\text{ideal}}\) obtained from an exact state vector simulation.
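The two metrics above condense into a few lines of code; the sketch below is an illustration added here (not a reference implementation), assuming the ideal distribution and the measured counts are available as Python dictionaries keyed by bitstring.

```python
import numpy as np

def heavy_output_probability(ideal_probs, counts):
    """h_U: fraction of measured shots landing on heavy outputs, i.e. on
    bitstrings whose ideal probability exceeds the median of the
    classically simulated distribution."""
    median = np.median(list(ideal_probs.values()))
    heavy = {b for b, p in ideal_probs.items() if p > median}
    shots = sum(counts.values())
    return sum(c for b, c in counts.items() if b in heavy) / shots

def clops(M, K, S, D, time_taken_s):
    """Circuit Layer Operations per Second, Equation 1."""
    return (M * K * S * D) / time_taken_s
```

A QV circuit execution is then deemed successful when the heavy-output probability exceeds \(2/3\). We now turn to the fidelity measure used for the application benchmarks.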
For the purpose of selected benchmarks, the average fidelity is defined as: \[F(P_{\text{ideal}},P_{\text{output}})=\max\left\{F_{\text{raw}}(P_{\text{ideal}},P_{\text{output}}),0\right\} \tag{2}\] where \(F_{\text{raw}}\) is defined to punish distributions which are close to uniform: \[F_{\text{raw}}(P_{\text{ideal}},P_{\text{output}})=\frac{F_{s}(P_{\text{ideal}},P_{\text{output}})-F_{s}(P_{\text{ideal}},P_{\text{uni}})}{1-F_{s}(P_{\text{ideal}},P_{\text{uni}})} \tag{3}\] and \(F_{s}\), defined below, is the standard measure of classical fidelity of two probability distributions (related to the Hellinger distance): \[F_{\text{s}}(P_{\text{ideal}},P_{\text{output}})=\left(\sum_{x}\sqrt{P_{\text{output}}(x)P_{\text{ideal}}(x)}\right)^{2} \tag{4}\] where \(P_{\text{output}}(x)\) and \(P_{\text{ideal}}(x)\) are the respective probabilities of observing bit string \(x\). Using the classical fidelity \(F_{s}\) alone has a serious drawback: it can yield a non-zero value for random (uniform) distributions. On the other hand, we can see that normalizing makes \(F(P_{\text{ideal}},P_{\text{uni}})=0\), which is useful for assessing errors in quantum hardware, especially as circuits become larger or more complex. In such cases of significant decoherence, the output distribution approaches the uniform distribution and therefore should be punished. At the same time, it is worth noting that, because of the way \(F_{\text{raw}}\) is formulated, benchmarks for which \(F_{s}(P_{\rm ideal},P_{\rm uni})\approx 1\) should be avoided. This is not an issue with the methodology, as such benchmarks are not useful for assessing the quantum properties of any system. It has also been observed that the normalized fidelities of different benchmarks with similar circuit shapes show higher correlation than the standard fidelities. Therefore, using normalized fidelity is more practical and informative for evaluating quantum computing results.

## 3 Execution

We identify a set of quantum algorithms for the application benchmarks, which exemplify the common approaches to performing quantum computations in different application areas. These include the currently most commonly used near-term variational algorithms and routines in fault-tolerant quantum computing. Since the benchmarking suite includes hybrid and purely quantum algorithms, we must agree on a base methodology applied in each case. While this methodology is designed to be as general as possible, there are still cases where such an approach cannot capture the whole nature of the challenge posed to the quantum machine. It will be elaborated on in subsequent sections, where applicable. To evaluate a given quantum system's ability to execute an algorithm, we choose a single quantum circuit, which by design is meant to represent the most typical single execution, either hybrid or purely quantum. It is especially noteworthy in the case of variational algorithms, where we do not intend to perform a full run, optimizing the parameters, but rather fix the parameters in place and estimate the fidelity on a single non-parameterized circuit. This is done carefully to avoid cases where the ideal distribution is close to uniform. The logical quantum circuits for each application benchmark are compiled into OpenQASM 2.0/3.0 assuming all-to-all connectivity and Rx, Ry, Rz, CNOT as the basis gate set. For execution on real quantum backend devices, these circuits can be freely recompiled and optimized, as long as they remain logically equivalent to the ones delivered within the described suite.
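Equations 2-4 translate directly into code; the sketch below is added here as an illustration, with distributions represented as dictionaries mapping bitstrings to probabilities.

```python
import numpy as np

def classical_fidelity(p_ideal, p_output):
    """F_s (Equation 4): squared Bhattacharyya coefficient of two distributions."""
    keys = set(p_ideal) | set(p_output)
    return sum(np.sqrt(p_ideal.get(x, 0.0) * p_output.get(x, 0.0)) for x in keys) ** 2

def normalized_fidelity(p_ideal, p_output, n_qubits):
    """F (Equations 2-3): rescaled so that a uniform output distribution scores 0."""
    f_s = classical_fidelity(p_ideal, p_output)
    f_uni = sum(np.sqrt(p / 2 ** n_qubits) for p in p_ideal.values()) ** 2
    f_raw = (f_s - f_uni) / (1.0 - f_uni)
    return max(f_raw, 0.0)
```

Returning to how the benchmark circuits are prepared and executed: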
For this assumption to be valid, we choose circuits that represent unitary matrices sufficiently different from the identity. Circuit approximation techniques are allowed as a way of finding more efficient compilations, but in the end the same fidelity measures apply. While error mitigation is not meant to be a part of these benchmarks, error suppression techniques like dynamic decoupling can be used. Each circuit representing a specific application benchmark is meant to be measured at least 1000 times, so the appropriate measurement average for estimating the specified metrics has to be taken. No error mitigation is to be applied. For each of the following benchmarks, we assume success when the fidelity surpasses a specified threshold, which can be different in different cases. These criteria are set to ensure a minimum level of complexity and capability in the quantum computations being performed. By selecting these thresholds, we aim to evaluate and compare the performance of quantum systems that can handle moderately sized circuits and exhibit a reasonable level of fidelity in their results. These benchmarks provide a standardized measure to assess the progress and advancements in quantum computing technology. The following main metrics based on best practices discussed in [1] have been identified:

* **Execution Time:** time spent on the quantum simulator or hardware backend running the circuit;
* **Circuit Depth:** depth of the circuit after transpiling it to the basis gate set defined as Rx, Ry, Rz, CNOT;
* **Fidelity:** a measure of how well the simulator or hardware runs a particular benchmark.

### Entanglement in GHZ state

Entanglement is a key property differentiating quantum systems from purely classical ones. It is known that quantum systems containing a sufficiently low amount of entanglement can be simulated efficiently on a classical computer. Because of that, the ability of a quantum computer to generate genuine multipartite entanglement is essential for it to outperform its classical counterparts. To this end, similarly to [8], we propose producing Greenberger-Horne-Zeilinger (GHZ) states as a benchmark of the quantum system's ability to entangle multiple qubits. These states have the convenient property that they can exhibit genuine multipartite entanglement, while at the same time their fidelity can be efficiently estimated. While in other benchmarks we used a measure of fidelity based on the Hellinger distance, in this case it is not sufficient to detect genuine entanglement. Therefore, we employ instead a standard approach used in other such experiments performed on various quantum devices [9, 10]. The N-qubit GHZ state is defined as: \[|\text{GHZ}_{N}\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes N}+|1\rangle^{\otimes N}\right), \tag{5}\] where \(N\) is the number of qubits. The fidelity F of this state can be expressed with: \[F=\frac{1}{2}(P+C) \tag{6}\] where the _population_ \(P=\rho_{00...0,00...0}+\rho_{11...1,11...1}\) is measured as the sum of occurrences of outcomes \(|00\dots 0\rangle\) and \(|11\dots 1\rangle\), while the _coherence_ \(C=|\rho_{00...0,11...1}|+|\rho_{11...1,00...0}|\) can be estimated either through Multiple Quantum Coherences (MQC) or parity oscillations [9]. Since the goal of this test is to examine the quantum system's ability to create entangled states, the specific techniques can be tailored to a specific device to achieve the highest possible fidelities.
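As an illustration of the GHZ benchmark just described, the following sketch (added here; it uses Qiskit, but any framework producing the circuit of Figure 1 works) prepares the state and estimates the population term \(P\) from measurement counts; the coherence term requires the additional MQC or parity-oscillation circuits of [9].

```python
from qiskit import QuantumCircuit

def ghz_circuit(n: int) -> QuantumCircuit:
    """H on qubit 0 followed by a CNOT chain prepares |GHZ_N> (cf. Figure 1)."""
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)
    qc.measure(range(n), range(n))
    return qc

def population(counts: dict, n: int) -> float:
    """P: estimated frequency of the all-zeros and all-ones outcomes."""
    shots = sum(counts.values())
    return (counts.get('0' * n, 0) + counts.get('1' * n, 0)) / shots
```

The GHZ fidelity is then obtained from Equation 6 as \(F=\frac{1}{2}(P+C)\) once the coherence \(C\) has been estimated.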
### Toffoli gate

The \(n\)-qubit Toffoli gate is a multi-qubit quantum gate operation that acts on \(n\) qubits, with \(n-1\) control qubits and \(1\) target qubit. The action of the Toffoli gate is simply that the target qubit is inverted if all control qubits are in the state \(1\); otherwise an identity operation is performed. The \(n\)-qubit Toffoli gate is an essential multi-qubit operation that can be used in important applications such as Grover's search algorithm [11], the Quantum Fourier Transformation [12], Shor's number-factoring algorithm [13] and quantum error correction [14]. The decomposition into basic \(1\)-qubit and \(2\)-qubit gates scales quadratically (\(n^{2}\)) in the number of required \(2\)-qubit gates [15]. The quadratic scaling puts hard fidelity requirements on the system. Although there are techniques, like auxiliary-qubit-assisted implementations [16] that lead to a linear (\(n\)) overhead of \(2\)-qubit gates, or a native implementation in a trapped-ion system [17], here we only focus on the performance of an \(n\)-qubit Toffoli gate, independent of its implementation. _Problem instance:_ A rigorous characterization of an \(n\)-qubit Toffoli gate can be done using quantum process tomography, which is highly inefficient (\(12^{n}\) measurements required) and cannot be implemented on current devices. We therefore propose to measure only the truth-table (similarly to [17]), which is also not efficient (\(2^{n}\) measurements required), but doable for NISQ devices. The average success probability \(F\) (averaged over all possible input states) of measuring the correct output state should be \(F>0.5\). Figure 1: An example of a \(7\)-qubit GHZ state circuit. The problem instance should implement a 6-qubit Toffoli gate with circuit approximation and a 5-qubit Toffoli gate without circuit approximation.

### Grover's Algorithm

Grover's algorithm [11] remains one of the most well-known quantum algorithms. It solves the problem of unstructured search using quadratically fewer calls to the oracle. In classical computation, \(O(N)\) evaluations of a black box function are required, while the quantum method needs only \(O(\sqrt{N})\). It has also been found that this algorithm is asymptotically optimal. _Problem instance:_ The circuit for Grover's algorithm consists generally of two key parts: the quantum oracle \(U_{\omega}\), which marks the solution states, and the diffusion operator, which manipulates the qubits to increase the amplitude of the marked states. In the proposed benchmark we employ a simple 3-qubit circuit for finding bitstrings marked by the oracle. The novelty introduced by this circuit, compared to the other circuits described in this document, is the use of three-qubit gates, namely the CCZ gate, which can be decomposed into a Toffoli gate and two Hadamard gates. This supplements the proposed benchmark suite with the possibility of testing performance and compilation effectiveness for quantum circuits where such gates are essential.

### Quantum Fourier Transform

The Quantum Fourier Transform (QFT) [18] is a fundamental quantum algorithm used for performing a Fourier transformation on a quantum state. It is a key component in many quantum algorithms, particularly in quantum algorithms for prime factorization, quantum phase estimation, and quantum simulation. The QFT maps an input state \(|x\rangle\) to its Fourier-transformed state \(|y\rangle\), where \(y=F(x)\) and \(F\) denotes the Quantum Fourier Transform operator.
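Returning to the Toffoli truth-table test above, the average success probability can be accumulated as in the sketch below (an illustration added here; it assumes the controls occupy the leading bits and the target the last bit of each bitstring).

```python
def truth_table_success(counts_per_input):
    """Average probability of measuring the correct n-qubit Toffoli output,
    given measurement counts for each of the 2^n computational-basis inputs."""
    total = 0.0
    for inp, counts in counts_per_input.items():
        controls, target = inp[:-1], inp[-1]
        # Toffoli flips the target iff every control bit is 1.
        expected = inp if '0' in controls else controls + str(1 - int(target))
        shots = sum(counts.values())
        total += counts.get(expected, 0) / shots
    return total / len(counts_per_input)
```

The benchmark passes when this average exceeds 0.5. We now return to the QFT.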
The QFT algorithm operates on a register of n qubits, and the input state is typically encoded in the amplitudes of the computational basis states. The QFT applies a sequence of controlled rotations and Hadamard gates to transform the input state into its Fourier-transformed state. The QFT can be represented as a unitary matrix, which depends on the number of qubits in the register. By applying the QFT to an input state, one can extract information about the frequency components of the state. This is particularly useful in applications such as signal processing, data compression, and solving certain mathematical problems efficiently [12]. The performance of the QFT is influenced by factors such as the number of qubits, gate errors, and coherence times of the quantum system. Achieving high fidelity and minimizing errors are crucial for obtaining accurate results from the QFT. _Problem instance:_ In the QFT benchmark, we specifically focus on running the inverse QFT. The inverse QFT is applied to a quantum state that is initially prepared in a Fourier basis state. However, rather than utilizing the QFT to create this state, we employ a series of one-qubit gates, such as Hadamard gates and Z rotations, to encode a specific integer value \(x\) in the Fourier basis. This approach allows us to evaluate the performance of the inverse QFT circuit in accurately decoding the encoded integer value. ### VQE for quantum chemistry calculations The Variational Quantum Eigensolver (VQE) is a quantum algorithm designed to solve problems in many domains, including but not limited to quantum chemistry, quantum simulations or optimization. It is an example of a hybrid quantum-classical algorithm that combines quantum computing with classical optimization techniques [19]. In many VQE applications, the primary goal is to find the ground state energy and corresponding wave function of a given molecular system. This is a crucial task in quantum chemistry as it provides insights into the electronic structure and properties of molecules, which are vital for various applications such as drug discovery, materials science, and catalysis. One of the most significant advantages of VQE is its potential to leverage near-term quantum devices, even with limited qubit counts and connectivity, and high error rates. VQE-based approaches are part of an active area of research and development, with ongoing efforts focused on improving their scalability, robustness against noise and errors, and enhancing their applications in various domains. As quantum hardware continues to advance, VQE holds promise of being able to solve complex quantum mechanical problems on the NISQ era quantum devices. In the proposed application benchmark, the quantum device is tasked with executing circuits typically used for estimating the energy of a molecule. The results are scored based on the average fidelity metric calculated from the measurements. This allows evaluating the performance of VQE on the quantum machine, while setting aside the difficulties related to measurement scaling in the energy estimation process, which is less favourable than the fault-tolerant Quantum Phase Estimation (QPE) algorithm. 
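To make the hybrid quantum-classical loop concrete, the following is a minimal sketch of a VQE iteration, assuming Qiskit and SciPy are available. The three-qubit Hamiltonian coefficients and the simple hardware-efficient ansatz below are illustrative placeholders, not the LiH operator and UCCSD ansatz used in the actual benchmark, which are introduced next.

```python
import numpy as np
from scipy.optimize import minimize
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp, Statevector

# Illustrative 3-qubit Hamiltonian written as a sum of Pauli strings (placeholder coefficients).
hamiltonian = SparsePauliOp(["ZZI", "IZZ", "XXX"], coeffs=[-1.0, -1.0, 0.5])


def ansatz(theta: np.ndarray) -> QuantumCircuit:
    """A simple hardware-efficient ansatz standing in for UCCSD."""
    qc = QuantumCircuit(3)
    for q in range(3):
        qc.ry(theta[q], q)
    qc.cx(0, 1)
    qc.cx(1, 2)
    for q in range(3):
        qc.ry(theta[3 + q], q)
    return qc


def energy(theta: np.ndarray) -> float:
    """Quantum part: prepare |phi(theta)> and evaluate <H>; done here with an exact statevector."""
    state = Statevector(ansatz(theta))
    return float(np.real(state.expectation_value(hamiltonian)))


# Classical part: an off-the-shelf optimizer refines the circuit parameters.
result = minimize(energy, x0=np.zeros(6), method="COBYLA")
print("Estimated ground-state energy:", result.fun)
```

In the benchmark itself, only the circuit execution and the fidelity scoring of the measured distributions are performed on the quantum device, as described above.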
The electronic Hamiltonian of a molecular system, before it is used in VQE, is most commonly written in the second-quantized form, using the fermionic creation and annihilation operators: \[H_{el}=\sum_{p,q}h_{pq}a_{p}^{\dagger}a_{q}+\frac{1}{2}\sum_{p,q,r,s}h_{pqrs}a _{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s} \tag{7}\] The first term in this equation corresponds to single-electron excitations, and the second term corresponds to two-electron excitations. The coefficients \(h_{pq}\) and \(h_{pqrs}\) are one- and two-electron integrals. This Hamiltonian is then mapped to qubits using well-known transformations, resulting in an operator written as a sum of products of Pauli matrices denoted by \(\sigma\): \[H=\sum_{j}\alpha_{j}P_{j}=\sum_{j}\alpha_{j}\prod_{i}\sigma_{i}^{j} \tag{8}\]

_Problem instance:_ The Unitary Coupled Cluster with Singles and Doubles (UCCSD) ansatz is defined as: \[|\phi(\theta)\rangle=e^{T-T^{\dagger}}|\phi_{HF}\rangle \tag{9}\] where \(T(\theta)\) is the cluster operator and \(|\phi_{HF}\rangle\) is the reference state, chosen here to be the Hartree-Fock state. The cluster operator \(T(\theta)\) has the following definition: \[T(\vec{\theta})=T_{1}(\vec{\theta_{1}})+T_{2}(\vec{\theta_{2}}), \tag{10}\] where \[T_{1}(\vec{\theta_{1}})=\sum_{i,j}\theta_{ij}a_{i}^{\dagger}a_{j} \tag{11}\] \[T_{2}(\vec{\theta_{2}})=\sum_{i,j,k,l}\theta_{ijkl}a_{i}^{\dagger}a_{j}^{ \dagger}a_{k}a_{l} \tag{12}\] Although this type of ansatz has unfavourable scaling of the required number of gates with an increasing number of electrons and spin orbitals, it is often used for small-scale demonstrations of VQE and also serves as a basis for more efficient approaches, e.g. AdaptVQE, which allows the circuit depth to be decreased significantly. We evaluate the fidelity of the quantum states prepared on a quantum computer with a selected ansatz for the LiH molecule in the minimal STO-3G basis set. For additional flexibility, active space reduction is also used so that this benchmark can be executed on 3 qubits.

### QAOA for combinatorial problems optimization

The Quantum Approximate Optimization Algorithm (QAOA) is a quantum algorithm designed to solve combinatorial optimization problems. It is based on the framework of variational quantum algorithms and combines classical optimization techniques with quantum operations to find approximate solutions to optimization problems. The algorithm begins with an initial state that is prepared as a superposition of computational basis states. This state is typically denoted as \(|+\rangle^{\otimes n}\), where \(n\) is the number of qubits. QAOA consists of a sequence of \(p\) layers, where each layer consists of two types of quantum Hamiltonians: the problem Hamiltonian \(H_{P}\) and the mixing Hamiltonian \(H_{M}\). The problem Hamiltonian represents the objective function of the optimization problem, while the mixing Hamiltonian helps explore different solutions. At each layer, the problem Hamiltonian is applied to the state, followed by the mixing Hamiltonian. The evolution of the state is controlled by parameters \(\gamma\) and \(\beta\), which are optimized to minimize the objective function. The state is measured after \(p\) layers, and the measurement outcomes are used to approximate the optimal solution. The output of the QAOA algorithm is an approximate solution to the optimization problem. The quality of the solution depends on the number of layers and the parameters \(\gamma\) and \(\beta\).
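The following is a minimal sketch of how the parameters \(\gamma\) and \(\beta\) enter a \(p\)-layer QAOA circuit. It assumes Qiskit and uses a toy Ising-type cost Hamiltonian on three qubits purely for illustration, not the job shop Hamiltonian of the benchmark instance described below.

```python
from qiskit import QuantumCircuit


def qaoa_layer(gamma: float, beta: float, edges, n: int) -> QuantumCircuit:
    """One QAOA layer: cost unitary exp(-i*gamma*H_P), then mixer exp(-i*beta*H_M)."""
    qc = QuantumCircuit(n)
    # Cost Hamiltonian H_P = sum of Z_i Z_j terms (toy Ising cost), applied as RZZ rotations.
    for (i, j) in edges:
        qc.rzz(2 * gamma, i, j)
    # Mixing Hamiltonian H_M = sum of X_i, applied as RX rotations.
    for q in range(n):
        qc.rx(2 * beta, q)
    return qc


def qaoa_circuit(gammas, betas, edges, n: int) -> QuantumCircuit:
    """Full p-layer QAOA circuit starting from the uniform superposition |+>^n."""
    qc = QuantumCircuit(n)
    qc.h(range(n))  # initial state |+>^n
    for gamma, beta in zip(gammas, betas):
        qc.compose(qaoa_layer(gamma, beta, edges, n), inplace=True)
    qc.measure_all()
    return qc


# Toy example: p = 1 QAOA on a 3-node triangle graph (not the job shop instance below).
circ = qaoa_circuit([0.8], [0.4], edges=[(0, 1), (1, 2), (0, 2)], n=3)
print("Circuit depth:", circ.depth())
```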
By increasing the number of layers and using classical optimization techniques to refine the parameters, QAOA can provide increasingly better approximations to the optimal solution. The job shop scheduling problem is a classic combinatorial optimization problem that involves determining the optimal sequence of operations for a set of jobs to be processed on a set of machines. Each job consists of multiple operations that require specific processing times on different machines. The objective is to minimize the overall makespan, which is the total time required to complete all jobs. Job shop scheduling problems are known for their complexity due to the presence of constraints such as machine availability, precedence relationships between operations, and resource limitations. Efficiently solving job shop scheduling problems has practical applications in various industries, such as manufacturing, logistics, and project management.

_Problem instance:_ The problem that will be considered as a benchmark is the following:

* Job 1:
  * Operation 1, Time: 1 unit, Machine 1
  * Operation 2, Time: 2 units, Machine 2
* Job 2:
  * Operation 1, Time: 1 unit, Machine 1
* Job 3:
  * Operation 1, Time: 2 units, Machine 2

This instance has a minimum completion time (makespan) equal to 3. Setting the maximum feasible time to 3, the instance requires 7 qubits to encode in the time-indexed representation described in [20]. The depth of the QAOA circuit with \(p=1\) that solves this instance is equal to 24.

### QSVM for image classification

Quantum Support Vector Machine (QSVM) is an algorithm that utilizes a quantum kernel for a Support Vector Machine (SVM) machine learning model. SVM is an algorithm used for data classification and regression that finds an optimal hyperplane between different classes in a dataset. However, if the data is not linearly separable, a kernel function that maps the dataset to a higher-dimensional feature space is needed for the SVM algorithm to perform correctly. The quantum part of the algorithm finds a feature map using a quantum computer, which can later be used to create a kernel matrix. A quantum feature map can, in theory, be used to extract complicated patterns from data that would not be accessible using only classical transformations for the kernel function. The quantum circuit used in this algorithm can be divided into several components. It consists of 2 symmetric parts - each corresponding to one of the elements of the pair of variables for which the value of the kernel function is calculated. Each part consists of a block that encodes data using X and Z rotation gates, a block that generates quantum entanglement, and a block of parameterized Y rotation gates with trainable parameters. As the encoding of variables can be done densely, by putting 2 numerical variables per qubit, the number of qubits for a given instance size is equal to the number of variables divided by 2. A crucial parameter for a quantum computer is the number of qubits, which determines how large a dataset can be fitted onto the device. The depth of the quantum circuits used in this algorithm is similar across instance sizes and is around 10; the entanglement strategy can influence it a little, increasing it to at most around 20. The full form of the circuit consists of a parameterized block followed by its inverse; after assigning the parameters based on the input data, the resulting circuit can be represented by a unitary matrix close to the identity.
To avoid this, we specifically use only the first part of the circuit, to test the quantum machine's ability to create such entangled states.

_Problem instance:_ In the proposed application benchmark, the QSVM algorithm is applied to a well-known benchmark dataset for image classification - MNIST [21]. It is a collection of grayscale images of handwritten digits, with a resolution of \(28\times 28\) pixels. For the purpose of fitting the data onto the quantum computer, the images are downscaled to a smaller resolution. A binary classification problem is solved for the smallest image size \((4\times 4)\), for which the achieved accuracy should exceed 70%. In order to fit 16 variables, the minimum number of qubits is 8.

## 4 Conclusion

This paper emphasizes the need for standardized application performance benchmarks for quantum computers. It acknowledges the challenges of assessing quantum computer performance due to variations in technology and susceptibility to noise. The document introduces a series of quantum application benchmarks and a methodology for measuring performance and fidelity. The proposed benchmarks encompass a variety of quantum algorithms, from variational algorithms used on near-term quantum devices to circuits designed for fault-tolerant quantum computers. Each benchmark's problem instance is defined, showcasing the diversity of quantum applications. In conclusion, these standardized benchmarks are essential for evaluating and comparing quantum computing technologies, tracking their progress, and enabling applications in various domains. They contribute to the development of a benchmarking framework that ensures the reliability and effectiveness of quantum computing solutions. Finally, as one of the targets of the EuroHPC JU is to develop and support a highly competitive and innovative quantum computing ecosystem, broadly distributed in Europe and capable of autonomously producing quantum computing technologies and architectures and integrating them with leading HPC computing systems, the proposed application performance benchmarking suite, together with the open-source code, is also published to evaluate upcoming quantum system installations, in particular a trapped-ion quantum system hosted at the Poznan Supercomputing and Networking Center (PSNC) in Poland, within the EuroQCS-Poland consortium.
2308.09425
Sampling Random Cycle-Rooted Spanning Forests on Infinite Graphs
On a finite graph, there is a natural family of Boltzmann probability measures on cycle-rooted spanning forests, parametrized by weights on cycles. For a certain subclass of those weights, we construct Gibbs measures in infinite volume, as limits of probability measures on cycle-rooted spanning forests of increasing sequences of finite graphs. Those probability measures extend the family of already known random spanning forests and can be sampled by a random walks algorithm which generalizes Wilson's algorithm. We show that, unlike for uniform spanning forests, almost surely, all connected components are finite and two-points correlations decrease exponentially fast with the distance.
Héloïse Constantin
2023-08-18T09:49:59Z
http://arxiv.org/abs/2308.09425v1
# Sampling random cycle-rooted spanning forests on infinite graphs ###### Abstract. On a finite graph, there is a natural family of Boltzmann probability measures on cycle-rooted spanning forests, parametrized by weights on cycles. For a certain subclass of those weights, we construct Gibbs measures in infinite volume, as limits of probability measures on cycle-rooted spanning forests of increasing sequences of finite graphs. Those probability measures extend the family of already known random spanning forests and can be sampled by a random walks algorithm which generalizes Wilson's algorithm. We show that, unlike for uniform spanning forests, almost surely, all connected components are finite and two-points correlations decrease exponentially fast with the distance. ###### Contents * 1 Measures on Cycle-Rooted Spanning Forests on finite graphs * 1.1 Cycle-rooted spanning forests * 1.2 Wilson's algorithm * 2 \(p\)-Loop erased random walks and rooting times * 2.1 Hitting times, rooting time * 2.2 The rooting time is almost surely finite. * 2.3 \(p\)-loop-erased random walk with a boundary condition * 3 Measures on CRSF in infinite volume and thermodynamic limits * 3.1 Topological facts and boundary conditions * 3.2 Sampling algorithm for a fixed ordering on an infinite graph * 3.3 Thermodynamic limits of the Wilson measures * 4 Study of the configurations sampled under the Wilson measure * 4.1 Every connected component is finite for the Wilson measure * 4.2 Exponential decay of correlations for the Wilson measure * 5 Study of the configurations sampled under an infinite volume measure \(\mu_{w}\). * 5.1 Algorithm conditional on cycles with weights larger than 1. * 5.2 All connected components with a cycle are finite Conclusion and open questions ## Introduction We call _cycle-rooted spanning forest_, on a finite connected graph, every subgraph which contains all vertices and all of whose connected components contain a unique cycle. Such a configuration of edges, endowed with a choice of orientation of cycles, can be seen as a discrete vector field on the graph: edges which are not in the cycle are oriented towards the cycle. Every vertex is associated with an edge starting from it. Given a connected finite graph \(G=(V,E)\), all of whose oriented cycles \((\gamma)_{\gamma\in\mathcal{C}(G)}\) are endowed with positive weights \((w(\gamma))\), we say that two oriented cycles are equivalent if they are equal after removal of their orientations. We define a probability measure on cycle-rooted spanning forests, induced by these cycle-weights as follows. Every spanning forest has a probability proportional to the product of weights of its cycles, counting both orientations: \[\mu_{w}(F)=\frac{\prod_{[\gamma]\in\mathcal{C}(F)}(w(\gamma)+w(\gamma^{-1}))}{ \mathcal{Z}_{w}}, \tag{1}\] where \(\gamma,\gamma^{-1}\) are both oriented cycles of the equivalence class \([\gamma]\). The normalizing constant \(Z_{w}\) is called the _partition function_ of the model and is defined as follows: \[\mathcal{Z}_{w}=\sum_{F}\prod_{[\gamma]\in\mathcal{C}(F)}(w(\gamma)+w(\gamma^ {-1})).\] Recall that the usual model of uniform spanning tree is defined on finite graphs and its limit for infinite graphs is studied in [1, 1]. In these papers, the existence of a measure for infinite graphs is shown. This measure is sampled by an extension in infinite volume of Wilson's algorithm [11] which does not depend on the ordering of vertices; see [1] for a textbook treatment of this topic. 
These authors study the properties of the configurations under the measure in infinite volume, such as the number of connected components. A model of random rooted spanning forests, all of whose connected components are rooted trees is studied in [1]. In this model, probability measures are associated with weights on vertices. If only one vertex has a weight, the probability measure associated with this weight has support in rooted trees, whose unique root is this vertex, and is equal to the uniform spanning tree measure after forgetting this root. In the model of rooted spanning forests, the random configurations are still sampled by an algorithm of loop-erased random walks. The model of rooted spanning forests is a particular case of the model of cycle-rooted spanning forests, which is studied in this article. Indeed, weights on vertices can be interpreted as weights on small self-loops over vertices and the roots of the random configuration can be seen as the unique cycles of their connected component. From this point of view, the model of cycle-rooted spanning forests is a generalization of the rooted spanning forests and of the uniform spanning tree. The measure on cycle-rooted spanning forests on finite graphs associated with a weight function on cycles for which every cycle has a weight smaller than \(1\) is studied in [16]. This measure is sampled by a loop-erased random walks algorithm inspired from the Propp-Wilson algorithm for the generation of a random spanning tree. This algorithm does not depend on the ordering of vertices. A similar algorithm is also introduced in [1] to generate a random spanning web of square lattice annuli, using a "cycle-popping" inspired from the Propp-Wilson algorithm. In this article, we study properties of probability measures on cycle-rooted spanning forests depending on a weight function on oriented cycles with values smaller than \(1\). We show that under an assumption on weights (Assumption 2.0.1), the weak limit of measures on spanning forests on growing finite graphs, with cycle-weights smaller than \(1\), is well-defined and sampled by an algorithm of loop-erased random walks (Theorem 3.3.1) which does not depend on the ordering of vertices (Theorem 3.3.2). We show furthermore that under this measure with a stronger assumption on weights (Assumption 4.0.1), all connected components are almost surely finite (Theorem 4.1.4) and the decay of edge-edge correlations is exponential (Theorem 4.2.2). Those properties show that under this assumption on weights, the measure which is constructed in infinite volume corresponds to a different "phase", in the sense of statistical mechanics, than the uniform spanning forests measure studied in [1, 10, 11]. We also study probability measures on cycle-rooted spanning forests of finite graphs determined by a weight function \(w\) which can take values larger than \(1\). Those measures are no longer sampled by a loop-erased random walk algorithm. Nevertheless, we show that conditional on cycles of weights larger than \(1\), the measure is determined by a modified weight function \(w_{-}\) (Definition 5.1.1) which takes values smaller than \(1\) (Theorem 5.1.2). Assuming the existence of an infinite volume measure \(\mu_{w}\), we show that when Assumption 4.0.1 is satisfied by this modified weight function \(w_{-}\), every connected component with a cycle is finite (Theorem 5.2.8). 
Combined with Proposition 3.1.1, which says that almost surely every finite connected component has a cycle, this result implies that, almost surely, every connected component is either a finite cycle-rooted tree or an infinite tree. The paper is organized as follows. In Section 1, we define the probability measures on cycle-rooted spanning forests of finite graphs we are concerned with and give properties of these measures. In Section 2, we define the \(p\)-loop erased random walks which generalize the loop-erased random walks and which are used in the third section to define a measure in infinite volume sampled by a random walk algorithm. In Section 3, we study the weak limit of probability measures on spanning forests on growing finite graphs, which gives the existence of a probability measure in infinite volume. In Section 4, we study the long-range behavior of the configurations under this probability measure, such as the non-existence of an infinite connected component and the exponential rate of decay of edge-to-edge correlations. In Section 5, we study properties of infinite configurations sampled by infinite volume probability measures which are determined by unbounded weights on cycles, provided the limit exists. ## 1. Measures on Cycle-Rooted Spanning Forests on finite graphs ### Cycle-rooted spanning forests Let \(G=(V,E)\) be a finite connected graph with vertex set \(V\) and edge set \(E\). For every subgraph \(F\) of \(G\), let \(\mathcal{C}(F)\) be the set of unoriented simple cycles of \(F\) and \(E(F)\) be the set of edges of \(F\). If \([\gamma]\in\mathcal{C}(F)\) is a cycle of \(F\), denote by \(\gamma\) and \(\gamma^{-1}\) the two oriented cycles obtained from \([\gamma]\). Let \(\mathcal{C}_{\rightarrow}(F)\) be the set of oriented cycles of \(F\). We say that a subgraph of \(G\) is a _cycle-rooted spanning forest_ (CRSF) if it contains all the vertices and if each of its connected components contains a unique cycle. Let \(\mathcal{U}(G)\) be the set of CRSFs of \(G\). Let \(w:\mathcal{C}_{\rightarrow}(G)\rightarrow\mathbb{R}_{+}\) be a non-zero function with non-negative values, defined on oriented cycles of \(G\). There is a natural probability measure on \(\mathcal{U}(G)\) associated with \(w\), which is denoted by \(\mu_{w}\). It is defined for every CRSF \(F\in\mathcal{U}(G)\) by: \[\mu_{w}(F)=\frac{\prod_{[\gamma]\in\mathcal{C}(F)}(w(\gamma)+w(\gamma^{-1}))} {Z_{w}}, \tag{2}\] where \(\gamma,\gamma^{-1}\) are both oriented cycles of the equivalence class \([\gamma]\), and \(Z_{w}\) is called the partition function of the model: \[Z_{w}=\sum_{F\in\mathcal{U}(G)}\prod_{[\gamma]\in\mathcal{C}(F)}(w(\gamma)+w (\gamma^{-1})).\] We say that \(F\) is an _oriented cycle-rooted spanning forest_ (OCRSF) if it is a CRSF and every cycle of \(F\) is given an orientation, that is to say if every connected component contains a unique oriented cycle. Let \(\mathcal{U}_{\rightarrow}(G)\) be the set of OCRSFs of \(G\). Every edge of an OCRSF is oriented towards the cycle of its connected component. The partition function of the model can also be written as a sum of weights over OCRSFs as follows: \[Z_{w}=\sum_{F\in\mathcal{U}_{\rightarrow}(G)}\prod_{\gamma\in\mathcal{C}_{ \rightarrow}(F)}w(\gamma),\] and it induces a natural probability measure on OCRSFs. Let \(W\subset V\) be a subset of vertices of \(G\). 
We say that \(F\) is a wired cycle-rooted spanning forest or essential cycle-rooted spanning forest (ECRSF) with respect to \(W\) if every connected component of \(F\) is either a unicycle disjoint from \(W\) or an unrooted tree which contains a unique vertex of \(W\), called a boundary-rooted tree. Let \(\mathcal{U}_{W}(G)\) the set of ECRSF with respect to \(W\). **Definition 1.1.1** (Wired boundary conditions).: _We define a measure on \(\mathcal{U}_{W}(G)\) called the wired measure on ECRSF of \(G\) with boundary \(W\) whose configurations have weight proportional to the product of weights of cycles._ \[\mu_{w}^{W}(F)=\frac{\prod_{[\gamma]\in\mathcal{C}(F)}(w(\gamma)+w(\gamma^{-1} ))}{Z_{w}^{W}} \tag{3}\] _where \(\gamma,\gamma^{-1}\) are both oriented cycles of the equivalence class \([\gamma]\)._ Notice that the measure defined in (2) corresponds to the case \(W=\emptyset\). ### Wilson's algorithm When the weight function \(w\) is identically equal to \(0\), this measure \(\mu_{w}^{W}\) has support on ECRSF all of whose connected component are boundary-rooted trees. In particular, when \(W=\{r\}\) is a single vertex and the weight function \(w\) is identically equal to \(0\), this measure has support on spanning trees rooted at \(r\). Since spanning trees on \(G\) are in 1-to-1 correspondence with spanning trees of \(G\) rooted at \(r\), this measure is independent of the choice of vertex \(r\) and gives to every spanning tree the same weight. Therefore, this measure is the uniform spanning tree measure defined for every tree \(T\) by: \[\mu(T)=\frac{1}{Z_{tree}} \tag{4}\] where \(Z_{tree}\) is the partition function, that is the number of spanning trees of the graph \(G\). This measure \(\mu\) can be sampled by the Wilson algorithm ([20]). Assume that for every oriented cycle \(\gamma\in\mathcal{C}_{\rightarrow}(G)\), \[w(\gamma)\in[0,1]\] We will write \(p\) instead of \(w\) in the following when this assumption is satisfied. According to [10], the measure \(\mu_{p}\) can be sampled by an algorithm of loop-erased random walk where we keep an oriented cycle \(\gamma\), with probability \(p(\gamma)\). More precisely, let \(x_{1},\ldots,x_{n}\) be an ordering of the vertex set \(V\) of \(G\) and let \(\mathsf{F}_{0}=\emptyset\). At each step \(i\), let \((X_{n}^{(x_{i})})_{n\geq 0}\) be a simple random walk on the graph \(G\) starting from \(x_{i}\). Every time the random walk makes a loop, the oriented cycle \(\gamma\) is kept with probability \(p(\gamma)\) or erased with probability \(1-p(\gamma)\). The random walk \((X_{n}^{(x_{i})})_{n\geq 0}\) is stopped when it reaches the set of already explored vertices denoted by \(V(\mathsf{F}_{i-1})\) or when a cycle is kept. In the end of the \(i^{\text{th}}\) step, let \(\mathsf{F}_{i}=\mathsf{F}_{i-1}\cup L(X_{n}^{(x_{i})})\) where \(L(X_{n}^{(x_{i})})\) is obtained from \((X_{n}^{(x_{i})})_{n\geq 0}\) after removing all the loops except the last one if a loop is kept at the end of the \(i^{\text{th}}\) step. At the end, \(V(\mathsf{F}_{n})=V(G_{n})\). Notice that the algorithm always finishes if and only if there exists at least a loop \(\gamma\) in \(G\) such that \(p(\gamma)>0\). The measure \(\mu_{p}^{W}\) defined in Definition 1.1.1 can also be sampled by an algorithm. We follow the same algorithm but every time the random walk meets \(W\), the walk stops and a new random walk starts from the next vertex in the ordering. At the beginning of the algorithm, we set \(\mathsf{F}_{0}=W\) instead of \(\mathsf{F}_{0}=\emptyset\). 
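To illustrate the sampling procedure just described, here is a minimal Python sketch of the cycle-keeping Wilson algorithm on a finite graph. The function and variable names (sample_crsf, adj, p, W) are illustrative; the graph is given as an adjacency-list dictionary, the weight function \(p\) on oriented cycles and the boundary set \(W\) are supplied by the caller, and vertices are explored in the fixed ordering of the dictionary.

```python
import random


def sample_crsf(adj, p, W=frozenset(), seed=None):
    """Sample a cycle-rooted spanning forest of a finite graph.

    adj -- dict mapping each vertex to the list of its neighbours
    p   -- function mapping an oriented cycle (tuple of vertices) to a keep-probability in [0, 1]
    W   -- wired boundary: walks stop when they hit W or previously explored vertices
    Returns the set of oriented edges of the sampled configuration.
    """
    rng = random.Random(seed)
    explored = set(W)
    forest = set()
    for start in adj:                      # vertices processed in a fixed ordering
        if start in explored:
            continue
        path = [start]
        while True:
            v = path[-1]
            if v in explored:              # the walk reached W or an already built component
                break
            u = rng.choice(adj[v])         # one step of the simple random walk
            if u in path:                  # a loop gamma is closed
                i = path.index(u)
                gamma = tuple(path[i:] + [u])
                if rng.random() <= p(gamma):
                    path.append(u)         # keep the loop: it becomes the cycle of the component
                    break
                del path[i + 1:]           # erase the loop and continue the walk from u
            else:
                path.append(u)
        forest.update(zip(path[:-1], path[1:]))
        explored.update(path)
    return forest
```

Taking \(W=\{r\}\) a single vertex and \(p\equiv 0\) recovers the classical Wilson algorithm sampling a spanning tree rooted at \(r\), as noted above.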
The algorithm always finishes if and only if there exists at least a loop \(\gamma\) in \(G\backslash W\) such that \(p(\gamma)>0\) or \(W\neq\emptyset\). Let us emphasize that when \(W=\{r\}\) is a single vertex and the weight function \(w\) is equal to \(0\), then the sampling algorithm described just above is the classical Wilson algorithm which samples a uniform spanning tree on \(G\) rooted at \(r\). ## 2. \(p\)-Loop erased random walks and rooting times In the following, we will consider a countably infinite connected graph \(G=(V,E)\), with finite degrees, exhausted by an increasing sequence \((G_{n})_{n\geq 1}\) of connected induced subgraphs of \(G\), with respective vertex set \(V_{n}\). We denote by \(\partial G_{n}\) the subset of \(V_{n}\) of vertices which are connected by an edge to the complement of \(G_{n}\) in \(G\). For every \(v\in V\), we denote by \(\mathbb{P}_{v}\) the law of a simple random walk on \(G\) starting from \(v\). We consider a weight function \(w=p\in[0,1]\) and we make the following assumption on the exhaustion \((G_{n})\) of the graph \(G\) and the weight function \(p\). **Assumption 2.0.1**.: _There exists \(\alpha>0\) and \(\beta>0\), such that for every \(n\in\mathbb{N}^{*}\), for every random walk \((X_{n})_{n\geq 0}\) on \(G\), starting from a vertex \(v\) of \(\partial G_{n}\), there exists a loop \(\gamma_{v}\) in \(G_{n+1}\backslash(G_{n}\cup\partial G_{n+1})\) which satisfies \(p(\gamma_{v})\geq\alpha\) and \(\mathbb{P}_{v}((X_{1},\ldots,X_{|\gamma|})=\gamma)>\beta\)._ ### Hitting times, rooting time In the following we denote by \(v_{0}\) a vertex of \(G_{1}\). **Definition 2.1.1**.: _If \(C\) is a subset of the vertex set \(V\), we define for a random walk \((X_{n})\) the hitting time of \(C\), that is to say_ \[T_{C}:=\min\{k\geq 0|X_{k}\in C\}.\] _Notice that in this definition, \(T_{C}\) can be equal to 0 if the random walk starts from a vertex of \(C\)._ **Definition 2.1.2**.: _Let \((X_{n})\) be a simple random walk starting from \(v_{0}\). Let \((T_{n})\) be the sequence of random hitting times of \(\partial G_{n}\) for the random walk \((X_{n})\), that is to say_ \[T_{n}:=T_{\partial G_{n}}=\min\{k\geq 0|X_{k}\in\partial G_{n}\}.\] **Lemma 2.1.3**.: _The hitting-time \(T_{n}\) is finite almost-surely for every \(n\in\mathbb{N}^{*}\). Furthermore,_ \[\lim_{k\to\infty}\mathbb{P}_{v_{0}}(T_{n}\geq k)=\mathbb{P}_{v_{0}}(T_{n}= \infty)=0.\] Proof.: Let \(n\in\mathbb{N}^{*}\). Almost surely, \(T_{n}\) is finite because almost surely if \(k\geq 1\), there exists a time such that the random walk makes \(k\) consecutive steps in the same direction. Therefore, the random walk exits every finite ball in finite time almost surely. Since the events \((T_{n}\geq k)\) are decreasing in \(k\) (for a fixed \(n\)) with respect to inclusion, the monotone convergence theorem implies \[\lim_{k\to\infty}\mathbb{P}_{v_{0}}(T_{n}\geq k)=\mathbb{P}_{v_{0}}(\bigcap_{k \geq 1}\{T_{n}\geq k\})=\mathbb{P}_{v_{0}}(T_{n}=\infty),\] which concludes the proof. Let \((X_{n})\) be a simple random walk on \(G\) starting from \(v_{0}\) and let \((Y_{n})\) be a sequence of independent random variables of uniform law on \([0,1]\), which are independent of the random variables \(X_{n}\). We want to define a \(p\)-loop erased random walk such that, if at time \(n\), the random walk \((X_{n})\) closes a loop \(\gamma_{n}\), the loop is kept if \(Y_{n}\leq p(\gamma_{n})\) and erased else. 
Given \((X_{n},Y_{n})\), we construct a sequence of random walks \(((Z_{n}^{k})_{n})_{k}\) as follows. We define recursively \((Z_{n}^{k})_{n\geq 1}\) for \(k\in\mathbb{N}^{*}\). Let \((Z_{n}^{1})=(X_{n})\) and given \((Z_{n}^{k})_{n}\), let us consider the first time \(n_{k}\) such that \(Z_{n}^{k}\) closes a loop that is to say \[n_{k}=\min\{j>n_{k-1}\in\mathbb{N}^{*}|Z_{j}^{k}\in\{Z_{0}^{k},\ldots,Z_{j-1}^{ k}\}\}.\] Then, let \(n_{k}^{\prime}\) be the time of the beginning of the loop, that is to say \[n_{k}^{\prime}=\min\{j\in\mathbb{N},Z_{j}^{k}=Z_{n_{k}}^{k}\}.\] Therefore, the loop which is closed at time \(n_{k}\) is the loop \(\gamma_{n_{k}}:=(Z^{k}_{n^{\prime}_{k}},\ldots,Z^{k}_{n_{k}})\) Finally, if \(Y_{n_{k}}\geq p(\gamma_{n_{k}})\), then define for every \(n\in\mathbb{N}\), \[Z^{k+1}_{n}=\begin{cases}Z^{k}_{n_{k}}&\text{if }n^{\prime}_{k}\leq n\leq n_{k} \\ Z^{k}_{n}&\text{else.}\end{cases}\] \((Z^{k+1}_{n})\) is obtained from \((Z^{k}_{n})\) erasing the loop \(\gamma_{n_{k}}\). Otherwise, if \(Y_{n_{k}}\leq p(\gamma_{n_{k}})\), for every \(m\geq k+1\) and \(n\in\mathbb{N}\), let \[Z^{m}_{n}=\begin{cases}Z^{k}_{n}&\text{if }n\leq n_{k}\\ Z^{k}_{n_{k}}&\text{else}\end{cases}\] \((Z^{m}_{n})\) is obtained from \((Z^{k}_{n})\) stopping the random walk at time \(n_{k}\). **Definition 2.1.4**.: _If \((X_{n})\) is a simple random walk on \(G\) starting from \(v_{0}\) and \((Y_{n})\) is a sequence of independent random variables of uniform law on \([0,1]\), which are independent of the \(X_{n}\), we say that \((n_{k})_{n\geq 1}\) is the sequence of random times where the random walk \((X_{n})\) closes a loop \(\gamma_{n_{k}}\). Let \(T_{r}\) be called the random rooting time for \((X_{n},Y_{n})\) that is to say the first time where a loop is kept:_ \[T_{r}:=\min\{n_{k}|Y_{n_{k}}\leq p(\gamma_{n_{k}})\},\] _where \(\min\emptyset=+\infty\). If \(k\) is such that \(T_{r}=n_{k}\), then let \((Z_{n})_{n\leq T_{r}}=(Z^{k}_{n})_{n\leq T_{r}}\) be called the \(p\)-loop erased random walk obtained from \((X_{n},Y_{n})\)._ Let us emphasize that if \(T_{r}\) is finite, then there exists a \(k\) such that \(T_{r}=n_{k}\) and then the \(p\)-loop erased random walk \((Z_{n})_{n\leq T_{r}}\) is well defined and is obtained from \((X_{n})_{n\leq T_{r}}\), erasing every loop excepted the last one. Here, the loop-erased random walk is indexed on the same time set than the random walk \((X_{n})\). Nevertheless, \(Z_{n}\) does not depend only on \((X_{k})_{k\leq n}\). ### The rooting time is almost surely finite We will show in this subsection that the rooting time \(T_{r}\) is a stopping time and that almost surely, it is finite. **Definition 2.2.1**.: _Let \((\mathcal{F}_{n})_{n}\) be the filtration adapted to the process \(((X_{n},Y_{n}))_{n}\) that is defined by_ \[\mathcal{F}_{n}=\sigma(X_{0},\ldots,X_{n},Y_{0},\ldots,Y_{n}),\] _which is the smallest sigma-field which makes the \((X_{i})_{0\leq i\leq n},(Y_{i})_{1\leq i\leq n}\) measurable._ **Lemma 2.2.2**.: _For every \(m\in\mathbb{N}^{*}\), the hitting time \(T_{m}\) is a stopping time with respect to the filtration \((\mathcal{F}_{n})_{n}\). The rooting time \(T_{r}\) is also a stopping time with respect to the filtration \((\mathcal{F}_{n})_{n}\). 
Moreover, for every \(n\in\mathbb{N}\), if we consider the \(\sigma\)-field adapted to the stopping time \(T_{n}\), defined by_ \[\mathcal{F}_{T_{n}}=\{A\in\mathcal{F}:\forall k\geq 0,\{T_{n}\leq k\}\cap A \in\mathcal{F}_{k}\},\] _then, the event \(\{T_{n}<T_{r}\}\) is in \(\mathcal{F}_{T_{n}}\)._ Proof.: Let \(n\in\mathbb{N}\). The events \(\{T_{n}\geq k\}=\{X_{1},\ldots,X_{k}\in G_{n}\backslash\partial G_{n}\}\) are measurable with respect to \(\mathcal{F}_{k}\) and therefore \(T_{n}\) is a stopping time. From the construction of the rooting time, the event \(\{T_{r}\leq k\}\) only depends on the steps of the random walk before time \(k\), that is \(((X_{n},Y_{n}))_{n\leq k}\) and therefore the event \(\{T_{r}\geq k\}\) is in \(\mathcal{F}_{k}\). The event \(\{T_{n}<T_{r}\}\) is in \(\mathcal{F}_{T_{n}}\) because if \(k\geq 0\), \[\{T_{n}\leq k\}\cap\{T_{n}<T_{r}\}=\bigcup_{1\leq i\leq k}\,(\{T_{n}=i\}\cap\{ T_{r}>i\})\in\mathcal{F}_{k}.\] which concludes the proof. Let us emphasize that \(T_{r}\) is a stopping time for the filtration \((\mathcal{F}_{n})\) even if \((Z_{n})\) is not adapted to the filtration \((\mathcal{F}_{n})\). Indeed, \(Z_{n}\) depends on \((X_{k})\) for \(k\geq n\). Lemma 2.2.2 is a useful tool to show that the rooting time is almost surely finite for a simple random walk starting from \(v_{0}\). **Lemma 2.2.3**.: _Under Assumption 2.0.1, the rooting time \(T_{r}\) for a simple random walk \((X_{n})\) starting from \(v_{0}\) and \((Y_{n})\) as defined in Definition 2.1.4 is finite almost surely and the sequence \((\mathbb{P}_{v_{0}}(T_{r}>T_{n}))_{n}\) decays exponentially fast to 0 with \(n\). More precisely, there exists \(\delta\in]0,1[\) such that_ \[\mathbb{P}_{v_{0}}(T_{n}<T_{r})\leq\delta^{n}.\] Proof.: Let \(n\in\mathbb{N}^{*}\) be fixed. The process \(((X_{k},Y_{k}))_{k\geq 0}\) satisfies the strong Markov property. Therefore, conditional on the event \(\{T_{n}<\infty\}\) which is almost sure by Lemma 2.1.3, for every \(k\geq 0\), the pair of random variables \((X_{T_{n}+k},Y_{T_{n}+k})\) is independent of \(\mathcal{F}_{T_{n}}\) given \((X_{T_{n}},Y_{T_{n}})\). From Assumption 2.0.1, there exists a loop \(\gamma_{X_{T_{n}}}\) which lies inside \(G_{n+1}\backslash(G_{n}\cup\partial G_{n+1})\) with weight larger than \(\alpha\) and such that the probability that a random walk \((X_{T_{n}+k})_{k}\) makes this loop \(\gamma_{X_{T_{n}}}\) is greater than \(\beta\). Let us denote by \(A_{X_{T_{n}}}\) the event that the random walk \((X_{T_{n}+k})_{k}\) makes this loop \(\gamma_{X_{T_{n}}}\), and let us denote by \(B_{X_{T_{n}}}\) the event \(\{Y_{T_{n}+|\gamma_{X_{T_{n}}}|}\leq p(\gamma_{X_{T_{n}}})\}\). Conditional on \(X_{T_{n}}\), the events \(A_{X_{T_{n}}}\) and \(B_{X_{T_{n}}}\) are independent and have probabilities \(\mathbb{P}_{X_{T_{n}}}(\gamma_{X_{T_{n}}})\geq\beta\) and \(p(\gamma_{X_{T_{n}}})\geq\alpha\). The event \(A_{X_{T_{n}}}\cap\{Y_{T_{n}+|\gamma_{X_{T_{n}}}|}\leq p(\gamma_{X_{T_{n}}})\}\) for the random walk \((X_{T_{n}+k},Y_{T_{n}+k})_{k\geq 0}\) starting from \((X_{T_{n}},Y_{T_{n}})\in\partial G_{n}\) has a probability greater than \(\alpha\beta\). Conditional on \((X_{T_{n}},Y_{T_{n}})\), it is independent of \(\mathcal{F}_{T_{n}}\), therefore it is independent of the event \(\{T_{n}<T_{r}\}\). Let us show that conditional on \(T_{n}<T_{r}\), if the event \(A_{X_{T_{n}}}\cap\{Y_{T_{n}+|\gamma_{X_{T_{n}}}|}\leq p(\gamma_{X_{T_{n}}})\}\) is satisfied, then the event \(\{T_{n+1}>T_{r}\}\) is satisfied. 
The idea is that on this event, the random walk keeps a loop before reaching \(\partial G_{n+1}\) and therefore \(T_{n+1}>T_{r}\). Let \(i\) be the largest integer such that \(n_{i}\leq T_{n}\). Then, by construction of the \(p\)-looperased random walk, assuming \(T_{r}>T_{n}\geq n_{i}\), \((Z_{k}^{i+1})\) coincides with \((X_{k})\) after time \(n_{i}\) and therefore after time \(T_{n}\). For \(k\leq T_{n}\), \(Z_{k}^{i+1}\in\{X_{0},\ldots,X_{T_{n}}\}\) by construction and therefore \(Z_{k}^{i+1}\in G_{n}\). If the event \(A_{X_{T_{n}}}\cap\{Y_{T_{n}+|\gamma_{X_{T_{n}}}|}\leq p(\gamma_{X_{T_{n}}})\}\) is satisfied, then, for \(k\) between \(T_{n}+1\) and \(T_{n}+|\gamma_{X_{T_{n}}}|\), we have \(Z_{k}^{i+1}\in G_{n+1}\backslash(G_{n}\cup\partial G_{n+1})\), and therefore, for such \(k\), \[Z_{k}^{i+1}\notin(Z_{0}^{i+1},\ldots,Z_{T_{n}}^{i+1}).\] Since we have \(n_{i+1}\geq T_{n}\) by assumption on \(i\), we have necessarily \(n_{i+1}=T_{n}+|\gamma_{X_{T_{n}}}|\). Since the event \(\{Y_{T_{n}+|\gamma_{X_{T_{n}}}|}\leq p(\gamma_{X_{T_{n}}})\}\) is satisfied by assumption and \(T_{r}>n_{i}\), we have \(T_{r}=n_{i+1}=T_{n}+|\gamma_{X_{T_{n}}}|\), and since \(A_{X_{T_{n}}}\) is satisfied, for \(T_{n}+1\leq k\leq T_{n}+|\gamma_{X_{T_{n}}}|\), we have \(X_{k}\in G_{n+1}\backslash(G_{n}\cup\partial G_{n+1})\) and therefore \(T_{n+1}>T_{n}+|\gamma_{X_{T_{n}}}|=T_{r}\). Therefore, denoting by \(\delta:=1-\alpha\beta<1\), \[\mathbb{P}_{v_{0}}(T_{n+1}<T_{r}\ |\ T_{n}<T_{r}) \leq 1-\mathbb{P}_{(X_{T_{n}},Y_{T_{n}})}(A_{X_{T_{n}}}\cap\{Y_{T_{n }+|\gamma_{X_{T_{n}}}|}\leq\alpha\}\ |\ T_{n}<T_{r})\] \[=1-\mathbb{P}_{(X_{T_{n}},Y_{T_{n}})}(A_{X_{T_{n}}}\cap\{Y_{T_{n}+| \gamma_{X_{T_{n}}}|}\leq\alpha\})\] \[=1-\mathbb{P}_{X_{T_{n}}}(A_{X_{T_{n}}})\mathbb{P}(Y_{T_{n}+| \gamma_{X_{T_{n}}}|}\leq\alpha)\] \[\leq 1-\alpha\beta=\delta.\] This inequality holds for every \(n\in\mathbb{N}^{*}\) and \(\delta\) does not depend on \(n\). Then, writing \[\mathbb{P}_{v_{0}}(T_{n+1}<T_{r})=\mathbb{P}_{v_{0}}(T_{n+1}<T_{r}\ |\ T_{n}<T_{r}) \mathbb{P}_{v_{0}}(T_{n}<T_{r}),\] we obtain by induction on \(n\) the exponential decay of the following probability : \[\mathbb{P}_{v_{0}}(T_{n}<T_{r})\leq\delta^{n}.\] Let \(\varepsilon>0\). For fixed \(n\) large enough, \(\delta^{n}\leq\varepsilon\). For \(k\) large enough, we have \(\mathbb{P}_{v_{0}}(T_{n}\geq k)\leq\varepsilon\). Then for \(k\) large enough, we have \[\mathbb{P}_{v_{0}}(T_{r}\geq k) =\mathbb{P}_{v_{0}}(\{T_{r}\geq k\}\cap\{T_{n}\geq k\})+\mathbb{P} _{v_{0}}(\{T_{r}\geq k\}\cap\{T_{n}\leq k-1\})\] \[\leq\mathbb{P}_{v_{0}}(T_{n}\geq k)+\mathbb{P}_{v_{0}}(T_{r}>T_{n} )\leq 2\varepsilon\] Therefore we have shown that for every \(\varepsilon>0\), for \(k\) large enough, \(\mathbb{P}_{v_{0}}(T_{r}\geq k)<2\varepsilon\), which means that \[\lim_{k\to\infty}\mathbb{P}_{v_{0}}(T_{r}\geq k)=0.\] Therefore, from the monotone convergence theorem, \[\mathbb{P}_{v_{0}}(T_{r}=\infty)=\mathbb{P}_{v_{0}}(\bigcap_{k\geq 1}\{T_{r} \geq k\})=\lim_{k\to\infty}\mathbb{P}_{v_{0}}(T_{r}\geq k)=0\] This concludes the proof. The proof of Lemma 2.2.3 can be adapted to show that the rooting time is almost surely finite for a random walk starting from another vertex of \(G\), even if this vertex is not in \(G_{1}\), as follows. **Lemma 2.2.4**.: _Let \((X_{n}^{(x)})_{n}\) be a random walk starting from \(x\in G\) and and let \((Y_{n})_{n}\) be the process defined in Definition 2.1.4. 
Under Assumption 2.0.1, the rooting time \(T_{r}\) for \((X_{n}^{(x)})_{n}\) is finite almost surely and \(\mathbb{P}_{x}(T_{r}>T_{n})\) decays exponentially fast to 0 with \(n\)._ Proof.: Notice that \(x\) is not anymore in \(G_{1}\) and therefore the bound \(\mathbb{P}_{x}(T_{n}\leq T_{r})\leq\delta^{n}\) does not hold. Nevertheless, if \(m_{x}\) is such that \(x\in G_{m_{x}}\), then for \(n\geq m_{x}\), the proof of Lemma 2.2.3 shows that for the loop-erased random walk starting from \(x\), \[\mathbb{P}_{x}(T_{n+1}<T_{r}|T_{n}<T_{r})\leq\delta\] and therefore for \(n\geq m_{x}\), \[\mathbb{P}_{x}(T_{n}<T_{r})\leq\delta^{n-m_{x}}\] Therefore \(\mathbb{P}_{x}(T_{n}\leq T_{r})\) tends to 0 exponentially fast with \(n\) and an argument similar to the one given in the proof of Lemma 2.2.3 gives that \(T_{r}\) is finite almost surely. Lemma 2.2.4 shows that if we start a simple random walk on \(G\) from a vertex \(v\), almost surely \(T_{r}\) is finite. It implies that almost surely the sequence \(((Z_{n}^{k})_{n\geq 0})_{k\geq 0}\) is constant eventually and its limit \((Z_{n})_{n\geq 0}\) is well defined with \((Z_{n})_{n\geq 0}\) constant for \(n\geq T_{r}\). In the following, we define \(p\)-loop-erased random walks with wired boundary conditions in order to adapt the usual Wilson algorithm to sample cycle-rooted spanning forests of infinite graphs. ### \(p\)-loop-erased random walk with a boundary condition Let us briefly recall our current notations. We still assume that \((X_{n})\) is a simple random walk on \(G\) starting from any vertex \(v\), \((Y_{n})\) is a sequence of independent random variables of uniform law in \([0,1]\), which are independent of the \(X_{n}\) and \(W\subset V\) is a deterministic set of vertices. We define in this subsection a \(p\)-loop erased random walk obtained from \((X_{n},Y_{n})_{n\geq 0}\) with the boundary condition \(W\). **Definition 2.3.1**.: _Let \(T_{W}\) be the hitting time of \(W\), and let \(T_{r}\) be the rooting time of \((X_{n},Y_{n})_{n\geq 0}\). Let \(T_{f}=\min(T_{r},T_{W})\) be called the ending time of \((X_{n},Y_{n})_{n\geq 0}\) with boundary condition \(W\)._ Given \((X_{n},Y_{n})_{n\leq T_{W}}\), we construct a \(p\)-loop erased random walk \((Z_{n}^{W})_{n}\) with boundary conditions \(W\) as follows. We define recursively \(n_{k}^{W}\) and \((Z_{n}^{k,W})_{n\geq 0}\) for \(k\in\mathbb{N}^{*}\). Let \((Z_{n}^{1,W})_{n\leq T_{W}}=(X_{n})_{n\leq T_{W}}\) and \(n_{0}^{W}=0\). Then, we define recursively a sequence \(((Z_{n}^{i,W})_{n\leq T_{W}})_{i\geq 1}\) and a sequence \((n_{i})_{i\leq k}\) as follows. Let \(n_{i}^{W}:(X_{0},\ldots,X_{k})\mapsto\min\{n_{i-1}^{W}<j\leq T_{W}|Z_{j}^{i,W} \in\{Z_{0}^{i,W},\ldots,Z_{j-1}^{i,W}\}\}\) be the \(i\)-th loop-closing time before reaching \(W\), where \(\min\emptyset=\infty\). If \(n_{i}^{W}(X_{0},\ldots,X_{k})=\infty\), let \(Z^{i+1,W}=Z^{i,W}\). Else, let \(n_{i}^{\prime W}\) be the first time \(j<n_{i}^{W}\) such that \(Z_{n_{i}^{W}}^{i,W}=Z_{n_{i}^{\prime W}}^{i,W}\) and if \(Y_{n_{i}^{W}}\leq p(\gamma_{n_{i}^{W}})\), let for every \(m\geq i+1\), \[Z_{n}^{m,W}=\begin{cases}Z_{n}^{i,W}&\text{if $n\leq n_{i}^{W}$},\\ Z_{n_{i}^{W}}^{i,W}&\text{else},\end{cases}\] and otherwise, let for \(n\leq T_{W}\) \[Z_{n}^{i+1,W}=\begin{cases}Z_{n_{i}^{W}}^{i,W}&\text{if $n_{i}^{\prime W}\leq n \leq n_{i}^{W}$},\\ Z_{n}^{i,W}&\text{else}.\end{cases}\] Notice that \(Z_{n}^{i+1,W}\) is obtained from \(Z_{n}^{i,W}\) by erasing the first loop which ends before \(T_{W}\). 
While \(i\) is small enough such that \(n_{i}\leq T_{W}\), we have \(n_{i}^{W}=n_{i}\) and \(Z^{i+1,W}=Z^{i+1}\). **Proposition 2.3.2**.: _Almost surely, \(T_{f}\) is finite and \(((Z_{n}^{i,W})_{n\leq T_{f}})_{i\geq 0}\) is constant eventually. We define the \(p\)-loop erased random walk (\(p\)-LERW) with boundary conditions \(W\) as_ \[(Z_{n}^{W})_{n\leq T_{f}}=\lim_{i\to\infty}(Z_{n}^{i,W})_{n\leq T_{f}}.\] * _If_ \(T_{f}=T_{W}\)_,_ \((Z_{n}^{W})_{n\leq T_{f}}=(Z_{n}^{if})_{n\leq T_{W}}\) _where_ \(i_{f}=\min\{i|n_{i}>T_{W}\}\)_._ * _If_ \(T_{f}=T_{r}\)_,_ \((Z_{n}^{W})_{n\leq T_{f}}=(Z_{n})_{n\leq T_{f}}\) _where_ \((Z_{n})\) _is the_ \(p\)_-loop erased random walk without any boundary condition._ Proof.: Recall from Lemma 2.2.4 that \(T_{r}\) is finite almost surely. Since \(T_{f}\leq T_{r}\), the ending time \(T_{f}\) is almost surely finite. Assume that \(T_{r}<\infty\). Recall that the sequence \((n_{i})\) is strictly increasing. If \(T_{W}<T_{r}\), then \(T_{W}\) is finite and there exists \(i\) such that \(n_{i}>T_{W}\) and then \(n_{i}^{W}=\infty\) and \(Z^{m,W}=Z^{i,W}\) for \(m\geq i\). Let \(i_{f}=\min\{i|n_{i}>T_{W}\}\). Then \(n_{i_{f}-1}\leq T_{W}\). Then, \((Z_{n}^{i_{f},W})_{n\leq T_{W}}=(Z_{n}^{if})_{n\leq T_{W}}\). Then, for \(m\geq i_{f}\), \(n_{m}>T_{W}\) and then \(n_{m}^{W}=\infty\). Then, for every \(m\geq i_{f}\), \[(Z_{n}^{m,W})_{n\leq T_{f}}=(Z_{n}^{if,W})_{n\leq T_{f}}=(Z_{n}^{if_{f}})_{n \leq T_{f}}.\] Else, there exists \(i\) such that \(T_{r}=n_{i}\leq T_{W}\). Then, \(n_{i}^{W}=n_{i}\) and \(Y_{n_{i}^{W}}\leq p(\gamma_{n_{i}^{W}})\) and for \(m\geq i\), \((Z_{n}^{m,W})_{n\leq T_{r}}=(Z_{n}^{i,W})_{n\leq T_{r}}=(Z_{n}^{i})_{n\leq T_{r}}\). Since \(n_{i}=T_{r}\), then we have \((Z_{n}^{i})_{n\leq T_{r}}=(Z_{n})_{n\leq T_{r}}\) and therefore, \((Z_{n}^{W})_{n\leq T_{f}}=(Z_{n}^{W})_{n\leq T_{f}}\) where \((Z_{n})\) is the \(p\)-loop erased random walk without any boundary condition. Notice that a \(p\)-loop-erased random walk with boundary condition \(W\) is obtained from \((X_{n},Y_{n})_{n\leq\min(T_{r},T_{W})}\) erasing every loop except the last one if \(T_{r}<T_{W}\). Let us emphasize that when \(T_{r}>T_{W}\), the \(p\)-loop erased random walk with boundary conditions \((Z_{n}^{W})_{n\leq T_{W}}\) is not equal to the \(p\)-loop erased random walk \((Z_{n})_{n\leq T_{W}}\) stopped at \(T_{W}\). We will define, in the next section, sequences of probability measures on CRSF on a growing exhaustion \((G_{n})\) of a countably infinite connected graph \(G\), with boundary conditions. We will see that under some hypotheses, those sequences of probability measures on CRSF of finite graphs \(G_{n}\) converge to thermodynamic limits which are probability measures on CRSF of the infinite graph \(G\). We will also define probability measures on CRSF of infinite graphs from \(p\)-loop-erased random walks and compare those probability measures with limits of sequences of probability measures on finite graphs. ## 3. Measures on CRSF in infinite volume and thermodynamic limits In the following, let \(G=(V,E)\) be a countably infinite connected graph, with finite degrees, and let \((G_{n})\) be an exhaustion of \(G\), in the sense of an increasing sequence of finite subgraphs of \(G\) whose union is \(G\). Let \(v_{0}\) be a vertex of \(G_{1}\). ### Topological facts and boundary conditions Every subgraph of \(G\) can be seen as an element of \(\{0,1\}^{E}\). Let us recall some topological facts about the space \(\{0,1\}^{E}\). 
Since \(\{0,1\}\) is compact, \(\Omega=\{0,1\}^{E}\) is compact for the product topology and this topology is compatible with the following metric \[d(w,w^{\prime})=\sum_{e\in E}2^{-\|e-\|}1_{\{w_{e}\notin w_{e}^{\prime}\}},\] where \(\|e_{-}\|\) is the length of the shortest path between \(v_{0}\) and an extremity of the edge \(e\). Therefore \(\Omega\) is a compact metric space. A function \(f:\Omega\to\mathbb{R}\) is continuous for the product topology if for every \(\varepsilon>0\), there exists a finite subset \(\Lambda\subset E\), such that \[\sup_{w,w^{\prime}\in\Omega:w_{|\Lambda}=w_{|\Lambda}^{\prime}}|f(w)-f(w^{ \prime})|\leq\varepsilon.\] A function \(f:\Omega\to\mathbb{R}\) is called local if there exists a finite set \(\Lambda\subset E\) such that \(f(w)\) is entirely determined by \(w_{|\Lambda}\). We say that an event \(A\subset\Omega\) has finite support if the function \(1_{A}\) is a local function. The set of local functions is dense in the set of continuous functions \((\mathcal{C}(\Omega),||.||_{\infty})\) which is a Banach-space. We consider \(\mathcal{C}\) the smallest \(\sigma\)-field which makes the cylinders \(\mathcal{C}_{\Lambda,\eta}=\{w\in\Omega,w_{\Lambda}=\eta\}\) measurable for every finite subset \(\Lambda\subset E\) and every finite configuration \(\eta\in\{0,1\}^{\Lambda}\). We say that a sequence of probability measures \((\mu_{n})\) converges to the measure \(\mu\) on \((\Omega,\mathcal{C})\) if and only if \[\lim_{n\to\infty}\int_{\Omega}fd\mu_{n}=\int_{\Omega}fd\mu,\] for every local function \(f\). Since the set of local functions is dense in the set of continuous functions, this topology on the set of measures on \((\Omega,\mathcal{C})\) is the weak convergence. In this section, we are interested in the sequences of measures \((\mu_{n})_{n\geq 1}\) on CRSF on growing subgraphs \(G_{n}\). If such a sequence of measures converges weakly towards an infinite volume measure, we have the following result on the limit measure. **Proposition 3.1.1**.: _Assume that a sequence \((\mu_{n})_{n\geq 1}\) of measures on CRSF on growing subgraphs \(G_{n}\) of \(G\) converges weakly towards a measure \(\mu\), and let \(F\) be distributed according to \(\mu\). Then \(\mu\)-almost surely, every finite connected component of \(F\) has exactly one cycle and its cycle has non-trivial weight._ Proof.: Let \(x\in G\) and let \(T\) be a finite connected subgraph of \(G\) which contains \(x\) and which satisfies one of the following properties: * T has strictly more than one cycle; * T has a cycle of trivial weight; * T has no cycle. Let \(cc(x)\) be the connected component of \(x\) in \(F\). Notice that the event \(\{cc(x)=T\}\) is an event with finite support since its support is included in the set of edges which have at least one extremity in \(T\). Let \(m\) be large enough such that \(T\subset G_{m-1}\). Then, the event \[\{cc(x)=T\}=\{cc(x)\cap G_{m}=T\}\] has support in \(G_{m}\). For every \(n\geq m\), \(\mu_{n}\) is supported on CRSF whose cycles have non trivial weight on \(G_{n}\). Therefore, \[\mu_{n}(cc(x)\cap G_{m}=T)=\mu_{n}(cc(x)_{F_{n}}=T)=0.\] We have the convergence \(\mu_{n}\to\mu\) on configurations with finite support. Then \[\mu_{n}(cc(x)\cap G_{m}=T)\to\mu(cc(x)\cap G_{m}=T).\] Finally, we obtain \(\mu(cc(x)=T)=0\). Since \(G\) is countable, almost surely, every finite connected component of \(F\) has exactly one cycle and its cycle has non-trivial weight. 
We can consider sequences of measures \((\mu_{n})_{n\geq 1}\) on CRSF on growing subgraphs \(G_{n}\) of \(G\) with boundary conditions, such as free and wired boundary conditions. **Definition 3.1.2** (Free boundary conditions).: _We define the free measure on CRSF of \(G_{n}\) as the measure on CRSF of \(G_{n}\) whose configurations have weight proportional to the product of weights of cycles. This measure is denoted by \(\mu_{n}^{F}\)._ **Definition 3.1.3** (Wired boundary conditions).: _We define the wired measure on CRSF on \(G_{n}\) as the measure on CRSF of graph \(G_{n}\) whose configurations are either trees connected to \(\partial G_{n}\) or unicycles and have weight proportional to the product of weights of cycles. This measure is denoted by \(\mu_{n}^{W}\)._ ### Sampling algorithm for a fixed ordering on an infinite graph In the following, we still consider a countably infinite connected graph \(G\), an exhaustion \((G_{n})\) and a weight function \(w=p\in[0,1]\) on cycles which satisfies Assumption 2.0.1. We now construct a probability measure on CRSF of \(G\) for weights \(p\) which is sampled by an algorithm determined by a fixed ordering of the vertex set \(V\). Let \(\varphi\) be an ordering of the vertex set \(V\) of \(G\), in the sense of a bijection \(\varphi:\mathbb{N}\to V\). Let \((v_{i})_{i\geq 1}\) be the sequence of vertices of \(G\) with ordering \(\varphi\), that is \((v_{i})_{i}=(\varphi(i))_{i}\). We will construct a measure on CRSF of \(G\) by means of a family of \(p\)-loop-erased random walks with boundary conditions which are defined recursively. This family will be obtained deterministically from a family of independent simple random walks following results of Section 2. We still denote by \(\mathcal{C}\) the smallest \(\sigma\)-field which makes the cylinders \(C_{\Lambda,\eta}\) measurable. **Definition 3.2.1**.: _Let \(((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}\) be independent random variables such that for all \(x\in G\), \((X_{n}^{(x)})_{n}\) is a simple random walk on \(G\) starting from \(x\) and \((Y_{n}^{(x)})_{n}\) is a sequence of independent random variables with uniform law on \([0,1]\). For a fixed \(x\), consider the sequence \(\left((X_{n}^{(x)},Y_{n}^{(x)})\right)_{n\geq 1}\) and denote by \(T_{r}^{x}\) the rooting time of the \(p\)-LERW that is to say the first time \(n\) such that \((X_{n}^{(x)})_{n}\) closes a loop \(\gamma_{n}\) such that the inequality \(p(\gamma_{n})\geq Y_{n}^{(x)}\) holds._ **Definition 3.2.2**.: _For a fixed random data \(((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}\) as above, we construct the subgraphs \((\mathsf{F}_{i})\) recursively. Let \(\mathsf{F}_{0}=\emptyset\). Let \(i\in\mathbb{N}^{*}\) and assume that \(\mathsf{F}_{i-1}\) is constructed. Denote by \(T_{f}^{\underline{v}_{i}}\) the ending time of \(((X_{n}^{(v_{i})})_{n\geq 1},(Y_{n}^{(v_{i})})_{n\geq 1})\) with boundary condition \(V(\mathsf{F}_{i-1})\), that is_ \[T_{f}^{\underline{v}_{i}}=\min(T_{r}^{v_{i}},T_{V(\mathsf{F}_{i-1})}).\] _Let \(\mathsf{F}_{i}=\mathsf{F}_{i-1}\cup L(v_{i})\) where \(L(v_{i})\) is the \(p\)-LERW with boundary condition \(V(\mathsf{F}_{i-1})\) obtained from \(\left((X_{n}^{(v_{i})},Y_{n}^{(v_{i})})\right)_{n\geq 1}\) until \(T_{f}^{\underline{v}_{i}}\)._ Each step \(i\) of the algorithm finishes either if the random walk reaches a connected component created during a previous step or if the random walk is rooted to a loop. 
Notice that \(T_{f}^{\underline{v}_{i}}\) is the time where the \(i^{th}\)-step of the algorithm with ordering \(\varphi\) finishes. Recall that under Assumption 2.0.1 on \(p\), Proposition 2.3.2 implies that \(T_{f}^{\underline{v}_{i}}\) is finite almost surely. **Lemma 3.2.3**.: _There exists a measure \(\mu_{\varphi}\) on \((\mathcal{U}(G),\mathcal{C})\) which is sampled by the previous algorithm with ordering \(\varphi\). The measure on finite cylinders corresponds to finite random configurations which are sampled in a finite time._ Proof.: Sample a sequence \(((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}\) such as in Definition 3.2.1. From Definition 3.2.2, we obtain a CRSF of \(G\) by setting \(\mathsf{F}=\cup_{i\geq 1}\mathsf{F}_{i}\). The configuration \(\mathsf{F}\) is well defined since it is a deterministic function of \[((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}.\] Let \(\mu_{\varphi}\) be the law of \(\mathsf{F}\) associated with a random choice of \(((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}\), that is to say the push-forward by the algorithm of the measure which gives \[((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}.\] Then, \(\mu_{\varphi}\) is a probability measure on \((\mathcal{U}(G),\mathcal{C})\). Let \(B\) be a finite subset of size \(n\) of \(E\), with edges \(e_{1},\ldots,e_{n}\) and let \(\varepsilon_{1},\ldots,\varepsilon_{n}\in\{0,1\}^{n}\). Let \(K\) be the finite set of vertices containing all the extremities of edges of \(B\) and vertices which are preceding those vertices for the order \(\varphi\). Let us consider the previous algorithm for vertices \(v_{1},\ldots,v_{|K|}\) for the ordering \(\varphi\). Almost surely, the algorithm to construct \(\mathsf{F}_{|K|}\) finishes in a finite time. The constructed graph \(\mathsf{F}_{|K|}\) is a random subgraph of \(G\) which is spanning for \(K\) and therefore it is spanning for \(B\). Then \(\mu_{\varphi}(C_{\varepsilon_{1},\ldots,\varepsilon_{n}})\) is the probability that the random configuration \(F_{|B}\) obtained from the previous construction satisfies \[\mathsf{F}_{|B}\in C_{\varepsilon_{1},\ldots,\varepsilon_{n}}.\] The measure \(\mu_{\varphi}\) restricted to \((2^{B},\mathcal{C})\) is the law of the random configuration \(\mathsf{F}_{|B}\), which is sampled in a finite time. In the following, we will show that the measure in infinite volume constructed from an enumeration of the vertex set \(V\) does not depend on the choice of the enumeration. The proof of this statement will rely on a comparison between the measure \(\mu_{\varphi}\) for an ordering \(\varphi\) and a measure on CRSF on a large finite subgraph of \(G\). We will see that, under some hypotheses, the thermodynamic limit coincides with the measure sampled by the previous algorithm and does not depend on the ordering of the infinite vertex set. ### Thermodynamic limits of the Wilson measures Assume that Assumption 2.0.1 on the existence of a lower bound \(\alpha>0\) on the weight of a family of loops still holds. Let \(n\in\mathbb{N}\) and \(\varphi_{n}\) be an ordering of the vertex set \(V_{n}\) of the graph \(G_{n}\). The measures \(\mu_{n}^{F}\) and \(\mu_{n}^{W}\) are sampled by the algorithm described in the first section. Let us consider the sequence of measures \((\mu_{n})\) on CRSF of \((G_{n})_{n\geq 1}\) which are defined by the previous algorithm but if the walk meets \(\partial G_{n}\), the walk is stopped. 
According to [10], the measure does not depend on the ordering of the vertices of \(G_{n}\backslash\partial G_{n}\). **Theorem 3.3.1**.: _Let \(\varphi\) be an ordering of \(G\) in the sense of a bijection \(\varphi:\mathbb{N}\to G\). Let \((G_{n})\) be an increasing exhaustion of \(G\), and let \((\mu_{n}^{F}),(\mu_{n}^{W})\) be the corresponding sequences of probability measures on CRSF of \(G_{n}\) with free and wired boundary conditions, respectively. The sequences of probability measures \((\mu_{n}^{F})\) and \((\mu_{n}^{W})\) converges weakly to the measure \(\mu_{\varphi}\)._ Proof.: We consider an event \(B\in 2^{E}\) which depends only on finitely many edges, and we consider \(K_{0}\) the set of vertices incident to the edges on which \(B\) depends. Let \(K\) be the union of \(K_{0}\) and the set of vertices that precede some vertex in \(K_{0}\) in the ordering \(\varphi\) of the vertices. Let \(n\) be large enough such that \(K\subset G_{n}\). Let us construct a coupling \((\mathsf{F},\tilde{\mathsf{F}}_{n}^{F},\tilde{\mathsf{F}}_{n}^{W})\) of random configurations, obtained from the same random data \(((X_{n}^{(x)})_{n\geq 1})_{x\in G},((Y_{n}^{(x)})_{n\geq 1})_{x\in G}\), such that the law of \(\mathsf{F}\) is \(\mu_{\varphi}\), the law of \(\tilde{\mathsf{F}}_{n}^{F}\) is \(\mu_{n}^{F}\) and the law of \(\tilde{\mathsf{F}}_{n}^{W}\) is \(\mu_{n}^{W}\) and such that the three configurations coincide with high probability on \(B\). We denote \(\tilde{\varphi}_{n}\) the ordering induced by the ordering \(\varphi\) on \(G_{n}\). We follow the algorithm for every vertex of \(K\) following the ordering \(\varphi_{|K}\). If at one step of the algorithm, the random walk \((X_{n}^{(x)})\) starting from a vertex \(x\in G_{n}\) reaches \(\partial G_{n}\), the configuration \(\mathsf{F}\) is obtained following the random walk in the infinite graph \(G\) until the end of the step and \(\tilde{\mathsf{F}}_{n}^{F}\), \(\tilde{\mathsf{F}}_{n}^{W}\) are obtained following the random walk with boundary conditions. More precisely, \(\tilde{\mathsf{F}}_{n}^{W}\) is obtained from \(p\)-loop erased random walks with boundary conditions \(\partial G_{n}\) and \(\tilde{\mathsf{F}}_{n}^{F}\) is obtained following the random walk on the graph \(G_{n}\), until the end of the step, that is the ending time of the process \((X_{n}^{(x)},Y_{n}^{(x)})_{n}\). Once every vertex of \(K\) has been explored, we complete the configuration \(\mathsf{F}\) following the ordering \(\varphi\) and we complete the configurations \(\tilde{\mathsf{F}}_{n}^{F}\), \(\tilde{\mathsf{F}}_{n}^{W}\) on \(G_{n}\) following the ordering \(\tilde{\varphi}_{n}\), with boundary conditions. Let us denote by \(E(K)\) the set of edges whose vertices are in \(K\). The configurations \(\mathsf{F}\), \(\tilde{\mathsf{F}}_{n}^{F}\) and \(\tilde{\mathsf{F}}_{n}^{W}\) obtained from the algorithm are respectively subgraphs of \(G\) and \(G_{n}\). The three configurations are spanning subgraphs of \(G_{n}\) and in particular spanning subgraphs of \((K,E(K))\). \(\mathsf{F}_{E(K)}\) is the restriction to \((K,E(K))\) of a random configuration following the law \(\mu_{\varphi}\). Since \(G_{n}\) is finite, \((\tilde{\mathsf{F}}_{n})_{E(K)}^{W,F}\) is the restriction to \((K,E(K))\) of a random configuration following the law \(\mu_{n}^{W,F}\) and this law does not depend on \(\tilde{\varphi}_{n}\) (see [1]). 
Since \(B\) depends only on edges whose endpoints are in \(K_{0}\), we know that \[|\mu_{\varphi}(B)-\mu_{n}^{F,W}(B)|\leq\mathbb{P}(\mathsf{F}_{E(K)}\neq( \tilde{\mathsf{F}}_{n})_{E(K)})\] Following the previous algorithm, while each step starting from a vertex of \(K\) finishes before the random walk reaches \(\partial G_{n}\), both configurations \(\mathsf{F}_{E(K)},(\tilde{\mathsf{F}}_{n})_{E(K)}\) which are obtained are equal. Therefore, from the union bound, \[\mathbb{P}(\mathsf{F}_{E(K)}\neq(\tilde{\mathsf{F}}_{n})_{E(K)}) \leq\mathbb{P}\left(\bigcup_{i\in[|K|]}\{T_{n}^{v_{i}}\leq T_{f}^ {\frac{v}{f}}\}\right)\leq\sum_{i\in[|K|]}\mathbb{P}(T_{n}^{v_{i}}\leq T_{f}^{ \frac{v}{f}})\] \[\leq\sum_{i\in[|K|]}\mathbb{P}(T_{n}^{v_{i}}\leq T_{r}^{v_{i}}) \leq|K|\max_{i\in[|K|]}\mathbb{P}(T_{n}^{v_{i}}\leq T_{r}^{v_{i}})\] From Lemma 2.2.4, we obtain when \(n\to\infty\), \[|\mu_{\varphi}(B)-\mu_{n}^{W,F}(B)|\leq|K|\max_{i\in[|K|]}\mathbb{P}(T_{n}^{v _{i}}\leq T_{r}^{v_{i}})\to 0\] which implies the weak convergence of \((\mu_{n}^{W,F})\) towards \(\mu_{\varphi}\). Recall that for every \(n\), the measure \(\mu_{n}^{W}\) is sampled by an algorithm and does not depend on the ordering of vertices of \(G_{n}\) chosen in the algorithm. Combined with Theorem 3.3.1, this independence implies the following result. **Theorem 3.3.2**.: _Let \(\varphi\) be an ordering of the vertices, that is a bijection \(\varphi:\mathbb{N}\to V\). Let \(p\) be a weight function satisfying Assumption 2.0.1. Let \(\mu_{\varphi}\) be the measure on the cycle-rooted spanning forests of \(G\) associated with the algorithm of loop-erased random walk with weights \(p(\gamma)\). The measure \(\mu_{\varphi}\) does not depend on \(\varphi\)._ Proof.: Let \(\varphi\), \(\tau\) be two orderings, with \((v_{i})=(\varphi(i))\), \((w_{i})=(\tau(i))\) let \(K_{1}\) and \(K_{2}\) be respectively the sets of vertices that precede some vertex in \(K_{0}\) in the ordering \(\varphi\) (resp. \(\tau\)) and let \(n\) large enough such that \(K_{1}\cup K_{2}\subset G_{n}\). Then, from Theorem 3.3.1, \[|\mu_{\varphi}(B)-\mu_{\tau}(B)|\leq|K_{1}|\max_{i\in[|K_{1}|]}\mathbb{P}_{v_{i} }(T_{n}\leq T_{r})+|K_{2}|\max_{i\in[|K_{2}|]}\mathbb{P}_{w_{i}}(T_{n}\leq T_{ r})\to 0\] This shows that both distributions in infinite volume coincide on cylinders and therefore the measure in infinite volume does not depend on the ordering of the vertices. For a weight function \(p\) satisfying Assumption 2.0.1, we will denote by \(\mu_{p}\) the corresponding probability measure on CRSFs of G, which does not depend on the ordering of the vertices. ## 4. Study of the configurations sampled under the Wilson measure In this section, we will study the asymptotic behavior of configurations and the rate of decay of correlations with the distance for the measure \(\mu_{p}\) which is sampled by the previous algorithm of \(p\)-loop erased random walks in infinite volume and which will be called the Wilson measure, under the following assumption. 
**Assumption 4.0.1**.: _There exists \(\alpha>0,\beta>0,M,M^{\prime}>0,C>0,d\in\mathbb{N}\), a family of loops \(\Gamma\subset\mathcal{C}(G)\) and for every \(x\in G\), an increasing sequence \((B_{n}^{x})\) of subgraphs of \(G\), exhausting \(G\) and containing \(x\) such that :_ * _For every_ \(\gamma\in\Gamma\)_,_ \(\alpha\leq w(\gamma)\leq 1\)_._ * _For every_ \(v\in\partial B_{n}^{x}\)_, there exists a loop_ \(\gamma_{v}\in\Gamma\cap(B_{n+1}^{x}\backslash(B_{n}\cup\partial B_{n+1}^{x}))\) _such that the probability for a random walk starting from_ \(v\) _of making this loop_ \(\gamma_{v}\) _is greater than_ \(\beta\)_._ * _For every_ \(x\)_, for every_ \(n\in\mathbb{N}\)_,_ \(M^{\prime}n\leq d(x,\partial B_{n}^{x})\leq Mn\)_, and_ \(|\partial B_{n}^{x}|\leq Cn^{d}\)_._ Assumption 4.0.1 implies Assumption 2.0.1 and is satisfied in particular if the graph and the weight function \(w\) on cycles are invariant under translations and if Assumption 2.0.1 is satisfied for an exhaustion \((G_{n})_{n}\) such that \(d(0,\partial G_{n})\sim Mn\) when \(n\) tends to infinity, where \(M>0\). ### Every connected component is finite for the Wilson measure **Definition 4.1.1**.: _For a vertex \(x\) and a subset \(A\subset G\), we denote by \(\{x\leftrightarrow A\}\) the event \(\{A\cap C_{x}\neq\emptyset\}\) where \(C_{x}\) is the connected component of \(x\) in the random configuration sampled under \(\mu\). In particular, \(\{x\leftrightarrow y\}\) means that \(x\) and \(y\) are in the same connected component._ We will denote by \(T_{m,x}^{x}\) the hitting-time of \(\partial B_{m}^{x}\) for the random walk \((X_{n}^{(x)})\). Recall that \(T_{m}^{x}\) is the hitting-time of \(\partial G_{m}\) for the random walk \((X_{n}^{(x)})\). **Lemma 4.1.2**.: _Under Assumption 4.0.1, there exists \(\delta>0\) such that the following inequality holds for every \(m\), for every \(x\in G\)_ \[\mathbb{P}_{x}(\{T_{r}^{x}\geq T_{m,x}^{x}\})\leq\delta^{m}.\] Proof.: Let \(x\in G\). Under Assumption 4.0.1, Assumption 2.0.1 is satisfied for the vertex \(x\), and therefore, if we denote by \(T_{m,x}^{x}\) the hitting time of \(\partial B_{m}^{x}\) for a \(p\)-loop erased random walk starting from \(x\), and \(T_{r}^{x}\) its rooting time, Lemma 2.2.3 gives the existence of a \(0<\delta<1\) such that the following inequality holds for every \(m\), \[\mathbb{P}_{x}(\{T_{r}^{x}\geq T_{m,x}^{x}\})\leq\delta^{m},\] where \(\delta=1-\alpha\beta\) for parameters \(\alpha,\beta\) of 4.0.1. In particular, \(\delta\) does not depend on \(x\), which concludes the proof. **Lemma 4.1.3**.: _Let \(\delta>0\) as in Lemma 4.1.2 and \(M>0\) as in Assumption 4.0.1. Let \(x,y\) be two vertices of \(G\) and denote by \(d(x,y)\) the distance between \(x\) and \(y\), that is to say the length of the shortest path from \(x\) to \(y\). Then, if \(n\leq\frac{d(x,y)}{2M}\),_ \[\mu_{p}(x\leftrightarrow y)\leq 2\delta^{n}.\] Proof.: According to Theorem 3.3.2, the measure \(\mu_{p}\) does not depend on the ordering of the vertices. We may choose an ordering \(\varphi\) in which \(x\) and \(y\) are the first two vertices. Since \(n\leq\frac{d(x,y)}{2M}\), we have \(d(x,\partial B_{n}^{x})\leq Mn\leq\frac{d(x,y)}{2}\) and \(d(y,\partial B_{n}^{y})\leq Mn\leq\frac{d(x,y)}{2}\). 
If \(x\) and \(y\) are in the same connected component in a configuration obtained from this algorithm, we know that either for the \(p\)-loop erased random walk starting from \(x\) or for the one starting from \(y\), we have \(\{T_{r}^{x}\geq T_{f\varphi}^{x}\geq T_{n,x}^{x}\}\) or \(\{T_{r}^{y}\geq T_{f\varphi}^{y}\geq T_{n,x}^{y}\}\). Indeed, if \(T_{f\varphi}^{x}\leq T_{n,x}^{x}\) and \(T_{f\varphi}^{y}\leq T_{n,y}^{x}\), then the \(p\)-loop erased random walk starting from \(x\) and from \(y\) cannot intersect, and form two disjoint connected component in the configuration. Therefore, from the union bound, \[\mu_{p}(x\leftrightarrow y)\leq\mathbb{P}_{x}(\{T_{r}^{x}\geq T_{n,x}^{x}\}) +\mathbb{P}_{y}(\{T_{r}^{y}\geq T_{n,y}^{y}\})\leq 2\delta^{n}\] which concludes the proof. **Theorem 4.1.4**.: \(\mu_{p}\)_-almost surely, for every vertex \(x\in V\) of \(G\), the connected component of \(x\) is finite._ Proof.: Let \(x\in G\). For every \(n\in\mathbb{N}\), for every \(y\in\partial B_{n}^{x}\), \(d(x,y)\geq M^{\prime}n\). Let \(n^{\prime}=\lfloor\frac{M^{\prime}n}{2M}\rfloor\). Then \(n^{\prime}\leq\frac{d(x,y)}{2M}\) and therefore, from Lemma 4.1.3, \[\mu_{p}(x\leftrightarrow y)\leq 2\delta^{n^{\prime}}.\] Then, from the union bound, the following upper bound on the probability that the connected component of x contains vertices of the boundary of \(B_{n}^{x}\) holds for every \(n\in\mathbb{N}\), with \(\delta^{\prime}=\delta^{\frac{M^{\prime}}{2M}}\): \[\mathbb{P}(x\leftrightarrow\partial B_{n}^{x})\leq\sum_{y\in\partial B_{n}^{x }}\mathbb{P}(x\sim y)\leq 2|\partial B_{n}^{x}|\delta^{\lceil\frac{M^{\prime}n}{2M} \rfloor}\leq 2Cn^{d}\delta^{\prime n}.\] Then, from the monotone convergence theorem, we have \[\mathbb{P}(x\leftrightarrow\infty)=\mathbb{P}(\cap_{n}\{x\leftrightarrow \partial B_{n}^{x}\})=\lim_{n\to\infty}\mathbb{P}(x\leftrightarrow\partial B_ {n}^{x})=0.\] Since \(G\) is countable, we know that \(\mu\)-almost surely, for every \(x\in G\), the connected component of \(x\) is finite. ### Exponential decay of correlations for the Wilson measure We still assume that weights are in \([0,1]\) and satisfy Assumption 4.0.1. Let \(m\in\mathbb{N}\) and let \(e_{1}=(x_{1},y_{1})\) and \(e_{2}=(x_{2},y_{2})\) be such that \(d(\{x_{1},y_{1}\},\{x_{2},y_{2}\})\geq m\). If F is a CRSF following the law \(\mu_{p}\) it can be sampled from the algorithm described in Section 3.2 and from Theorem 3.3.2, it does not depend on the chosen ordering of vertices, therefore we may assume that the first four vertices of the ordering \(\varphi\) are \(x_{1},y_{1},x_{2},y_{2}\). Let us consider in the following, four independent couples of sequences of random variables \((X_{n}^{x_{1}},Y_{n}^{x_{1}})\), \((X_{n}^{y_{1}},Y_{n}^{y_{1}})\), \((X_{n}^{x_{2}},Y_{n}^{x_{2}})\), \((X_{n}^{y_{2}},Y_{n}^{y_{2}})\), as defined in Section 4.1. For \(i\in\{1,2\}\), let us denote by \(A_{i}\) the event that both \(p\)-loop erased random walks obtained from \((X_{n}^{x_{i}},Y_{n}^{x_{i}})_{n}\), \((X_{n}^{y_{i}},Y_{n}^{y_{i}})_{n}\) starting from \(x_{i},y_{i}\) are rooted before leaving the subgraphs \(B_{m/2}^{x_{i}},B_{m/2}^{y_{i}}\), that is to say that \[A_{i}=\{T_{r}^{x_{i}}<T_{x_{i},m/2}^{x_{i}}\}\cap\{T_{r}^{y_{i}}<T_{y_{i},m/2}^ {y_{i}}\}\] **Lemma 4.2.1**.: _Conditional on \(A_{1}\cap A_{2}\), the events \(\{e_{1}\in\textsf{F}\}\) and \(\{e_{2}\in\textsf{F}\}\) are independent._ Proof.: Let \(\textsf{F}_{4}\) be the subgraph obtained after the first four runs of the algorithm. 
Notice that, once \(\textsf{F}_{4}\) has been sampled, during every subsequent run of the algorithm, the \(p\)-loop erased random walk stops if it reaches \(x_{1},y_{1},x_{2},y_{2}\) because \(\textsf{F}_{4}\) contains \(x_{1},y_{1},x_{2},y_{2}\). Therefore, for F the configuration obtained following the algorithm in infinite volume, we have the following equality of events for \(i\in[1,2]\), \[\{e_{i}\in\textsf{F}\}=\{e_{i}\in\textsf{F}_{4}\}.\] Let \((Z_{n}^{x_{1}})_{n\leq T_{r}^{x_{1}}}\) be the \(p\)-loop erased random walk obtained from \((X_{n}^{x_{1}},Y_{n}^{x_{1}})\) and let \[W_{1}=V((Z_{n}^{x_{1}})_{n\leq T_{r}^{x_{1}}})\] be the set of vertices explored by this \(p\)-loop erased random walk. Let \((Z_{n}^{y_{1}})_{n\leq\min(T_{r}^{y_{1}},T_{W_{1}}^{y_{1}})}\) be the \(p\)-loop erased random walk starting from \(y_{1}\) with boundary condition \(W_{1}\). Let \(\mathfrak{F}_{1}\) be the subgraph given by \((Z_{n}^{x_{1}})_{n\leq T_{r}^{x_{1}}},(Z_{n}^{y_{1}})_{n\leq\min(T_{r}^{y_{1}}, T_{W_{1}}^{y_{1}})}\). Let \((Z_{n}^{x_{2}})_{n\leq T_{r}^{x_{2}}}\) be the \(p\)-loop erased random walk obtained from \((X_{n}^{x_{2}},Y_{n}^{x_{2}})\) and let \[W_{2}=V((Z_{n}^{x_{2}})_{n\leq T_{r}^{x_{2}}})\] be the set of vertices explored by this \(p\)-loop erased random walk. Let \((Z_{n}^{y_{2}})_{n\leq\min(T_{r}^{y_{2}},T_{W_{2}}^{y_{2}})}\) be the \(p\)-loop erased random walk starting from \(y_{1}\) with boundary condition \(W_{2}\). Let \(\mathfrak{F}_{2}\) be the subgraph given by \((Z_{n}^{x_{2}})_{n\leq T_{r}^{x_{2}}},(Z_{n}^{y_{2}})_{n\leq\min(T_{r}^{y_{2}},T_{W_{2}}^{y_{2}})}\). Let us emphasize that the \(p\)-loop erased random walk corresponding to the third and the fourth runs of the algorithm has boundary conditions \(V(\mathfrak{F}_{1})\), corresponding to the configuration created during the first two runs of the algorithm. Therefore, in general, \(\mathfrak{F}_{1}\) and \(\mathfrak{F}_{2}\) are not disjoint and their union is not the component created after four runs of the algorithm. If \(A_{1}\) is satisfied, \(\mathfrak{F}_{1}\) is contained in \(B_{m/2}^{x_{1}}\cup B_{m/2}^{y_{1}}\) and if \(A_{2}\) is satisfied, \[\begin{cases}T_{r}^{x_{2}}<T_{x_{2},m/2}^{x_{2}}<T_{V(\mathfrak{F}_{1})}\\ T_{r}^{y_{2}}<T_{y_{2},m/2}^{y_{2}}<T_{V(\mathfrak{F}_{1})}\end{cases}\] and therefore, the third and the fourth runs finish before the \(p\)-loop erased random walks reach \(V(\mathfrak{F}_{1})\), that is to say that the \(p\)-loop erased random walks with boundary condition \(V(\mathfrak{F}_{1})\) coincides with the \(p\)-loop erased random walks without this boundary condition (see Proposition 2.3.2). Therefore, if \(A_{1}\) and \(A_{2}\) are satisfied, \(\mathfrak{F}_{1}\) and \(\mathfrak{F}_{2}\) are disjoint connected components and their union is exactly the component created after four runs of the algorithm. In particular, if \(A_{1}\cap A_{2}\) is satisfied, for \(i\in[1,2]\), \(\{e_{i}\in\mathsf{F}\}\) is satisfied if and only if \(\{e_{i}\in\mathfrak{F}_{i}\}\) is satisfied. We show that conditional on \(A_{1}\cap A_{2}\), the random configurations \(\mathfrak{F}_{1}\) and \(\mathfrak{F}_{2}\) are independent. Recall that \((Z_{n}^{x_{1}}),(Z_{n}^{y_{1}})\) and \((Z_{n}^{x_{2}}),(Z_{n}^{y_{2}})\) are independent and \(\mathfrak{F}_{1},A_{1}\) only depends on \((Z_{n}^{x_{1}}),(Z_{n}^{y_{1}})\) and \(\mathfrak{F}_{2},A_{2}\) only depends on \((Z_{n}^{x_{2}}),(Z_{n}^{y_{2}})\). 
Therefore, if \(F_{1},F_{2}\) are some fixed configurations, \[\mathbb{P}(\mathfrak{F}_{1}=F_{1},A_{1},\mathfrak{F}_{2}=F_{2},A_{2})=\mathbb{ P}(\mathfrak{F}_{1}=F_{1},A_{1})\mathbb{P}(\mathfrak{F}_{2}=F_{2},A_{2}).\] Therefore, using independence of \(\{\mathfrak{F}_{i}=F_{i}\}\cap A_{i}\) and \(A_{j}\) for \(i\neq j\), we obtain \[\mathbb{P}(\mathfrak{F}_{1}=F_{1},\mathfrak{F}_{2}=F_{2}|A_{1},A_{ 2}) =\frac{\mathbb{P}(\mathfrak{F}_{1}=F_{1},A_{1})\mathbb{P}(\mathfrak{F}_{2}=F_ {2},A_{2})}{\mathbb{P}(A_{1}\cap A_{2})}\] \[=\mathbb{P}(\mathfrak{F}_{1}=F_{1}|A_{1})\mathbb{P}(\mathfrak{F}_{ 2}=F_{2}|A_{2})\] \[=\mathbb{P}(\mathfrak{F}_{1}=F_{1}|A_{1}\cap A_{2})\mathbb{P}( \mathfrak{F}_{2}=F_{2}|A_{1}\cap A_{2}).\] Therefore, conditional on \(A_{1}\cap A_{2}\), the random variables \(\mathfrak{F}_{1}\) and \(\mathfrak{F}_{2}\) are still independent. Therefore, conditional on \(A_{1}\cap A_{2}\), \(\{e_{1}\in\mathsf{F}\}=\{e_{1}\in\mathfrak{F}_{1}\}\) and \(\{e_{2}\in\mathsf{F}\}=\{e_{2}\in\mathfrak{F}_{2}\}\) are independent. As a consequence, we obtain the following decay of correlations. **Theorem 4.2.2**.: _There exists a parameter \(0<\iota<1\) such that for every \(m\) large enough,_ \[\mu_{p}(e_{2}\in\mathsf{F})\mu_{p}(e_{1}\in\mathsf{F})-\iota^{m}\leq\mu_{p}(\{e_ {2}\in\mathsf{F}\}\cap\{e_{1}\in\mathsf{F}\})\leq\mu_{p}(e_{2}\in\mathsf{F}) \mu_{p}(e_{1}\in\mathsf{F})+\iota^{m}\] Proof.: Let us compute \(\mu_{p}(\{e_{1},e_{2}\in\mathsf{F}\})\) using the following decomposition : \[\{e_{1},e_{2}\in\mathsf{F}\}=(\{e_{1},e_{2}\in\mathsf{F}\}\cap A_{1}\cap A_{2}) \cup\left(\{e_{1},e_{2}\in\mathsf{F}\}\cap(A_{1}^{\complement}\cup A_{2}^{\complement}) \right).\] Since \(\{e_{1},e_{2}\in\mathsf{F}\}\cap(A_{1}^{\complement}\cup A_{2}^{\complement})\) is included in \(A_{1}^{\complement}\cup A_{2}^{\complement}\), it has probability less than the quantity \(\mu_{p}(A_{1}^{\complement}\cup A_{2}^{\complement})\). 
From Lemma 2.2.3, there exists some \(\delta<1\) such that from the union bound, we get \[\mu_{p}(A_{1}^{\complement}\cup A_{2}^{\complement})\leq 4\delta^{m/2}.\] For the other term, we use the independence of \(\{e_{1}\in\mathsf{F}\}\) and \(\{e_{2}\in\mathsf{F}\}\) conditional on \(A_{1}\cap A_{2}\) proved in Lemma 4.2.1, which implies \[\mu_{p}(\{e_{2}\in\mathsf{F}\}\cap\{e_{1}\in\mathsf{F}\}\cap A_{ 1}\cap A_{2}) =\mu_{p}(\{e_{1}\in\mathsf{F}\}\cap\{e_{2}\in\mathsf{F}\}|A_{1} \cap A_{2})\mu_{p}(A_{1}\cap A_{2})\] \[=\frac{\mu_{p}(\{e_{1}\in\mathsf{F}\}\cap A_{1}\cap A_{2})\mu_{p} (\{e_{2}\in\mathsf{F}\}\cap A_{1}\cap A_{2})}{\mu_{p}(A_{1}\cap A_{2})}\] \[\leq\frac{\mu_{p}(\{e_{1}\in\mathsf{F}\})\mu_{p}(\{e_{2}\in \mathsf{F}\})}{\mu_{p}(A_{1}\cap A_{2})}.\] Using again the lower bound on \(\mu_{p}(A_{1}\cap A_{2})\) which comes from Lemma 2.2.3, we have \[\mu_{p}(A_{1}\cap A_{2})\geq 1-4\delta^{m/2}.\] Let \(\eta<1\) be such that for \(m\) large enough, \[4\delta^{m/2}<\eta^{m}\] Therefore, we have the following upper bound on \(\frac{1}{\mu_{p}(A_{1}\cap A_{2})}\), \[\frac{1}{\mu_{p}(A_{1}\cap A_{2})}\leq\frac{1}{1-\eta^{m}}=\sum_{k\geq 0} \eta^{mk}=1+\sum_{k\geq 1}\eta^{mk}\leq 1+\sum_{k\geq m}\eta^{k}\leq 1+\frac{ \eta^{m}}{1-\eta}.\] Therefore we get \[\mu_{p}(\{e_{2}\in\mathsf{F}\}\cap\{e_{1}\in\mathsf{F}\}\cap A_{1}\cap A_{2}) \leq\mu_{p}(e_{1}\in\mathsf{F})\mu_{p}(e_{2}\in\mathsf{F})(1+\frac{\eta^{m}}{ 1-\eta}).\] For the other inequality, notice that \[\frac{\mu_{p}(\{e_{1}\in\mathsf{F}\}\cap A_{1}\cap A_{2})\mu_{p}(\{e_{2}\in \mathsf{F}\}\cap A_{1}\cap A_{2})}{\mu_{p}(A_{1}\cap A_{2})}\] \[\geq(\mu_{p}(\{e_{1}\in\mathsf{F}\}\mu_{p}(\{e_{2}\in\mathsf{F}\})-2\mu_{p}(A _{1}^{\complement}\cup A_{2}^{\complement})\] \[\geq(\mu_{p}(\{e_{1}\in\mathsf{F}\}\mu_{p}(\{e_{2}\in\mathsf{F}\})-2\eta^{m}.\] Therefore, \[\mu_{p}(\{e_{2}\in\mathsf{F}\}\cap\{e_{1}\in\mathsf{F}\}\cap A_{1}\cap A_{2}) \geq(\mu_{p}(\{e_{1}\in\mathsf{F}\}\mu_{p}(\{e_{2}\in\mathsf{F}\})-2\eta^{m},\] and \[\mu_{p}(\{e_{2}\in\mathsf{F}\}\cap\{e_{1}\in\mathsf{F}\})\geq(\mu_{p}(\{e_{1} \in\mathsf{F}\}\mu_{p}(\{e_{2}\in\mathsf{F}\})-2\eta^{m}.\] Considering \(\iota<1\) such that for \(m\) large enough, \(2\eta^{m}<\iota^{m}\) and \(\eta^{m}\frac{1}{1-\eta}+4\delta^{m/2}<\iota^{m}\) concludes the proof. ## 5. Study of the configurations sampled under an infinite volume measure \(\mu_{w}\). In this section, we consider a non-negative weight function \(w\) on oriented cycles of \(G\), which can take values larger than \(1\). We assume that the sequence of measures \((\mu_{n})\) on cycle-rooted spanning forests of \(G_{n}\) associated with the weight function \(w\) converges weakly towards an infinite volume measure \(\mu\) and that this measure does not depend on the free or wired boundary conditions. ### Algorithm conditional on cycles with weights larger than 1 In this subsection, we assume that \(G\) is a finite connected graph. Let \(W\subset G\) be a set of vertices of \(G\) (which can be empty). Let \(\mathcal{C}_{+}(G\backslash W)\) be the set of cycles of \(G\) of weight strictly larger than 1, so-called positive cycles and \(\mathcal{C}_{-}(G\backslash W)\) the set of cycles of \(G\) of weight less than 1, so-called negative cycles. 
**Definition 5.1.1**.: _The weight function \(w_{-}\) on cycles of the graph \(G\) associated with the weight function \(w\) is defined by the following restrictions_ \[\begin{cases}w_{-\mathcal{C}_{-}(G\backslash W)}=w_{\mathcal{C}_{-}(G \backslash W)},\\ w_{-\mathcal{C}_{+}(G\backslash W)}=0.\end{cases}\] The following result gives a way to sample a cycle-rooted spanning forest conditional on its positive cycles. **Theorem 5.1.2**.: _Let \(C\) be a subset of \(\mathcal{C}_{+}(G\backslash W)\) and let \(A\) be the set of vertices which are extremities of edges in \(C\). Let \(\mathsf{F}\) a ECSRF with respect to \(W\) sampled according \(\mu^{W}\). Conditional on \(\mathcal{C}_{+}(\mathsf{F})=C\), \(\mathsf{F}\backslash C\) has the same law than a ECRSF with respect to \(A\cup W\) with weight function \(w_{-}\)._ Proof.: Let \(F_{0}\in\mathcal{U}_{W}(G)\). \[\mu^{W}(\mathsf{F}=F_{0}|\mathcal{C}_{+}(F)=C)=\frac{\mu(\mathsf{F}=F_{0}\cap \mathcal{C}_{+}(\mathsf{F})=C)}{\mu(C_{+}(\mathsf{F})=C)}.\] Notice that this quantity is zero if \(C_{+}(F_{0})\neq C\). Then, if \(C_{+}(F_{0})=C\) and \(F_{0}\in\mathcal{U}_{W}(G)\), every connected component of \(F_{0}\) either contains a unique cycle in \(C_{-}(G)\) or a unique cycle in \(C\) or is connected to a unique point in \(W\). Therefore, every connected component of \(F_{0}\backslash C\) either contains a unique cycle in \(C_{-}(G)\) or or is connected to a unique point in \(W\cup A\) which means that \(F_{0}\backslash C\in\mathcal{U}_{W\cup A}(G)\). Then, the measure \(\mu^{W}(.|C_{+}(F)=C)\) has support in \[\mathcal{U}_{W}^{C}(G)=\{F\in\mathcal{U}_{W}(G)|C_{+}(F)=C\}=\{F\in\mathcal{ U}_{W}(G)|F=C\cup F_{-},F_{-}\in\mathcal{U}_{W\cup A}(G)\},\] and if \(F_{0}\in\mathcal{U}_{W}^{C}(G)\), \[\mu^{W}(\mathsf{F}=F_{0}|\mathcal{C}_{+}(F)=C) =\frac{\prod_{\gamma\in C}w(\gamma)\prod_{\gamma\in\mathcal{C}_{ -}(F_{0})}w(\gamma)}{\sum_{F\in\mathcal{U}_{W}(G)|\mathcal{C}_{+}(F)=C}\prod_ {\gamma\in C}w(\gamma)\prod_{\gamma\in\mathcal{C}_{-}(F)}w(\gamma)}\] \[=\frac{\prod_{\gamma\in\mathcal{C}_{-}(F_{0})}w(\gamma)}{\sum_{F \in\mathcal{U}_{W}^{C}(G)}\prod_{\gamma\in\mathcal{C}_{-}(F)}w(\gamma)}.\] Writing every \(F\in\mathcal{U}_{W}^{C}(G)\) on a unique way as \(C\cup F_{-}\) with \(F_{-}\in\mathcal{U}_{W\cup A}(G)\), \[\mu^{W}(\mathsf{F}=F_{0}|\mathcal{C}_{+}(\mathsf{F})=C) =\frac{\prod_{\gamma\in\mathcal{C}_{-}(F_{0})}w(\gamma)}{\sum_{F \subset\mathcal{U}_{W\cup A}(G)}\prod_{\gamma\in\mathcal{C}_{-}(F_{-})}w( \gamma)}\] \[=\mu^{W\cup A}_{w_{\mathcal{C}_{-}(G\backslash W)}}(F_{0-})=\mu^ {W\cup A}_{w_{\mathcal{C}_{-}(G\backslash W)}}(F_{0}\backslash C).\] Finally, \[\mu^{W}(\mathsf{F}\backslash C=.|\mathcal{C}_{+}(\mathsf{F})=C)=\mu^{W\cup A }_{w_{\mathcal{C}_{-}(G\backslash W)}}(.),\] which concludes the proof. Since \(w_{\mathcal{C}_{-}(G\backslash W)}\) takes values in \([0,1]\) by definition of \(C_{-}(G\backslash W)\), the measure \(\mu^{W\cup A}_{w_{\mathcal{C}_{-}(G\backslash W)}}\) can be sampled by the wired Wilson algorithm with boundary conditions \(A\cup W\). Therefore, under the measure \(\mu^{W}\), conditional on \(\mathcal{C}_{+}(F)\), a ECRSF with respect to \(W\) has the same law as a a ECRSF with respect to \(W\) and extremities of edges in \(\mathcal{C}_{+}(F)\) and can be sampled from a loop-erased random walk algorithm. ### All connected components with a cycle are finite We assume that Assumption 4.0.1 holds for the weight function \(w_{-}\) as defined in Definition 5.1.1 and for a family of cycles \(\Gamma\subset\mathcal{C}_{-}(G)\). 
In particular, Assumption 2.0.1 also holds for \(w_{-}\), but the weight function \(w\) can take values larger than 1. We will show in this subsection that under this assumption, every connected component with a cycle is finite. We will use the following lemma on the exponential decay of the tail distribution of ending times, which is a corollary of Section 2. **Lemma 5.2.1**.: _Let \(m\in\mathbb{N}\), \(n\geq m\). Let \(C\subset G_{n}\) be a subgraph of \(G_{n}\). Let \(\mathbb{P}_{x}\) be the law of a \(w_{-}\)-loop erased random walk \((X_{n})\) starting from x and \(T_{r}\) be the rooting time of \((X_{n})\). Let \(T_{C}\) and \(T_{m,x}\) be the hitting times of \(C\) and \(\partial B_{m}^{x}\). Under Assumption 2.0.1 on \(w_{-}\), the following inequality holds_ \[\mathbb{P}_{x}(\min(T_{C},T_{r})\geq T_{m,x})\leq\delta^{m}.\] Proof.: Applying Lemma 2.2.3 to the random walk \((X_{n})\) with weight function \(w_{-}\) which satisfies Assumption 2.0.1, we get \[\mathbb{P}_{x}(T_{r}\geq T_{m,x})\leq\delta^{m}.\] But since \(\min(T_{C},T_{r})\leq T_{r}\), we have \[\mathbb{P}_{x}(\min(T_{C},T_{r})\geq T_{m}^{x})\leq\mathbb{P}_{x}(T_{r}\geq T_ {m,x})\leq\delta^{m},\] which concludes the proof. Let \(x\in V\) be a fixed vertex of \(G\). We introduce some events with compact support which depend on \(x\) and prove an upper bound on the probability of those events. **Definition 5.2.2**.: _For \(m\in\mathbb{N}\), we denote by \(A_{m}=\{x\leftrightarrow\partial B_{2m}^{x}\}\) the event that \(x\) and \(\partial B_{2m}^{x}\) are connected in \(F\), that is to say that there exists a path between \(x\) and \(\partial B_{2m}^{x}\) in \(B_{2m}^{x}\). If \(n\in\mathbb{N}\), we denote by \(\Gamma_{n}^{-}=\{x\leftrightarrow C_{-}(F_{G_{n}})\}\) the event that x is connected to a closed cycle in \(G_{n}\) of weight less than 1._ **Lemma 5.2.3**.: _Let \(m\in\mathbb{N}\). For \(n\geq m\) large enough, if \(F_{n}\) is distributed according to the free measure \(\mu_{n}\) on \(G_{n}\),_ \[\mu_{n}(A_{m}\cap\Gamma_{n}^{-})\leq(|\partial B_{m}^{x}|+1)\delta^{m}.\] Proof.: Let \(n\) be a large enough integer such that for every \(y\in\partial B_{2m}^{x}\), we have \(B_{m}^{y}\subset G_{n}\). Let \(C\subset\mathcal{C}_{+}(G_{n})\). From Theorem 5.1.2, conditional on \(\mathcal{C}_{+}(F_{n})=C\), \(F_{n}\) is given by an algorithm of \(w_{-}\)-loop erased random walks with boundary conditions on \(C\). The proof relies on the same ideas as that in the proof of Lemma 4.1.3 and Theorem 4.1.4. The event \(A_{m}\cap\Gamma_{n}^{-}\) is satisfied if there exists \(y\in\{x\}\cup\partial B_{2m}^{x}\) such that the \(w_{-}\)-loop erased random walk starting from \(y\) has left \(B_{m}^{y}\) before being rooted to a cycle in \(\mathcal{C}_{-}(G_{n})\) and before touching \(C\). From Lemma 5.2.1, for every \(y\in\{x\}\cup\partial B_{2m}^{x}\), \[\mathbb{P}_{y}(\min(T_{C},T_{r})\geq T_{m,y})\leq\delta^{m}.\] Then, the union bound concludes the proof. **Lemma 5.2.4**.: _The previous lemma implies that_ \[\mu(\{x\leftrightarrow C_{-}(F)\}\cap\{|cc(x)|=\infty\})=0.\] Proof.: Let \(\varepsilon>0\). Let \(m\in\mathbb{N}\) fixed, large enough such that \((|\partial B_{m}^{x}|+1)\delta^{m}<\varepsilon\). We consider the notations from Definition 5.2.2. 
Since \((\Gamma_{n}^{-})\) is increasing, if we let \(\Gamma^{-}:=\{x\leftrightarrow\mathcal{C}_{-}(F)\}=\cup\Gamma_{n}^{-}\), then \[\mu(A_{m}\cap\Gamma^{-})=\mu(A_{m}\cap\cup\Gamma_{n}^{-})=\mu(\cup_{n}(A_{m} \cap\Gamma_{n}^{-}))=\lim_{n}\mu(A_{m}\cap\Gamma_{n}^{-}).\] Let us consider \(n_{0}\) large enough such that the inequality from Lemma 5.2.3 holds. Since for \(n\geq n_{0},\ \ \mu_{n}(A_{m}\cap\Gamma_{n_{0}}^{-})\leq\mu_{n}(A_{m}\cap \Gamma_{n}^{-})\leq\varepsilon\), We obtain \[\mu(A_{m}\cap\Gamma_{n_{0}}^{-})=\lim_{n}\mu_{n}(A_{m}\cap\Gamma_{n_{0}}^{-}) \leq\varepsilon.\] It holds for every \(n_{0}\) large enough and therefore, \(\mu(A_{m}\cap\Gamma^{-})\leq\varepsilon\). Since this inequality holds for \(m\) large enough, we have when \(m\to\infty\), \(\mu(A_{m}\cap\Gamma^{-})\to 0\). Therefore, since \((A_{m})\) is decreasing and \(\cap_{m}A_{m}=\{|cc(x)|=\infty\}\), the monotone convergence theorem concludes the proof. **Definition 5.2.5**.: _For \(l\in\mathbb{N}\), let \(\Gamma_{l}^{+}=\{x\leftrightarrow_{B_{l}^{x}}C_{+}(F_{B_{l}^{x}})\}\) be the event that x is connected inside \(B_{l}^{x}\) to a cycle with weight larger than 1 which is inside \(B_{l}^{x}\), that is to say the event that in \(F_{B_{l}^{x}}\), the connected component of x contains a cycle with weight larger than 1. For \(m\in\mathbb{N}\), let \(A_{m}:=\{x\leftrightarrow\partial B_{m}^{x}\}\) be the event that x is connected to the boundary of \(B_{m}^{x}\)._ **Lemma 5.2.6**.: _Let \(m_{0}\in\mathbb{N}\). For \(m\) large enough, there exists \(n_{0}\) such that if \(n\geq n_{0}\) and if \(F_{n}\) is distributed according to \(\mu_{n}\), then,_ \[\mu_{n}(A_{m}\cap\Gamma_{l}^{+})\leq|\partial B_{m_{0}}^{x}|\delta^{m_{0}}.\] Proof.: Let \(m_{0},m\in\mathbb{N}\). Assume that \(m\) is large enough such that for every \(y\in\partial B_{m}^{x}\), the equality \(B_{m_{0}}^{y}\cap B_{l}^{x}=\emptyset\) holds. Let \(n\) be large enough such that for every \(y\in\partial B_{m}^{x}\), the inclusion \(B_{m_{0}}^{y}\subset G_{n}\) holds. Let \(C\in\mathcal{C}_{+}(G_{n})\). Conditional on \(\mathcal{C}_{+}(F_{n})=C\), \(F_{n}\) is given by an algorithm of \(w_{-}\)-loop erased random walks with wired conditions on \(C\). The event \(\Gamma_{l}^{+}\cap A_{m}\) is satisfied if the \(w_{-}\)-loop erased random walk starting from x hits a cycle in \(C_{+}(F_{B_{l}^{x}})\) before leaving \(B_{l}^{x}\) and before being rooted to another cycle and if one of the \(w_{-}\)-loop erased random walks starting from points of \(\ \partial B_{m}^{x}\) reaches \(B_{l}^{x}\) before being rooted to a cycle or hitting \(C\). Therefore, from Lemma 5.2.1 and from the union bound, \[\mu_{n}(A_{m}\cap\Gamma_{l}^{+})\leq|\partial B_{m_{0}}^{x}|\delta^{m_{0}}\] which concludes the proof. **Lemma 5.2.7**.: _For every vertex \(x\in V\) of \(G\), we have_ \[\mu(\{x\leftrightarrow C_{+}(F)\}\cap\{|cc(x)|=\infty\})=0.\] Proof.: Let \(x\in V\) be a fixed vertex of \(G\) and let \[A:=\{|cc(x)|=\infty\}=\bigcap_{m}A_{m}\] be the event that the connected component of 0 is infinite, where \(A_{m}\) was defined in Definition 5.2.5. Let \(l\in\mathbb{N}\). Let \(\varepsilon>0\). Let \(m_{0}\) large enough such that \(|\partial B_{m_{0}}^{x}|\delta^{m_{0}}\leq\varepsilon\). Let \(m\) be large enough such that for every \(y\in\partial B_{m}^{x}\), \(B_{m_{0}}^{y}\cap B_{l}^{x}=\emptyset\). Let \(n\) be large enough such that for every \(y\in\partial B_{m}^{x}\), \(B_{m_{0}}^{y}\subset G_{n}\). Let \(F_{n}\) distributed according \(\mu_{n}\). 
Then, from Lemma 5.2, \[\mu_{n}(A_{m}\cap\Gamma_{l}^{+})\leq|\partial B_{m_{0}}^{x}|\delta^{m_{0}} \leq\varepsilon.\] Since this inequality holds for every \(n\) large enough and \(A_{m}\cap\Gamma_{l}^{+}\) depends on finitely many edges, \[\mu(A_{m}\cap\Gamma_{l}^{+})=\lim_{n}\mu_{n}(A_{m}\cap\Gamma_{l}^{+})\leq\varepsilon.\] Since this inequality holds for every \(\varepsilon\) for \(m\) large enough, we obtain when \(m\to\infty\), \[\mu(A_{m}\cap\Gamma_{l}^{+})\to 0.\] Since the sequence of events \((A_{m})_{m}\) is decreasing, \[\mu(A\cap\Gamma_{l}^{+})=\mu\left(\bigcap_{m}A_{m}\cap\Gamma_{l}^{+}\right)= \lim_{m}\mu(A_{m}\cap\Gamma_{l}^{+})=0.\] Since the sequence of events \((\Gamma_{l}^{+})_{l}\) is increasing and \[\Gamma^{+}=\{x\leftrightarrow C_{+}(F)\}=\bigcup_{l}\Gamma_{l}^{+},\] we have \[\mu(A\cap\Gamma^{+})=\mu\left(A\cap\left(\bigcup_{l}\Gamma_{l}^{+}\right) \right)=\mu\left(\bigcup_{l}(A\cap\Gamma_{l}^{+})\right)=\lim_{l}\mu(A\cap \Gamma_{l}^{+})=0.\] which is precisely what we wanted to prove. Let us emphasize that Lemma 5.2.7 and Lemma 5.2.4 show that for every vertex \(x\in V\) of \(G\), almost surely, if \(x\) is connected to a cycle in the random configuration \(F\), the connected component of \(x\) is finite. Therefore, since \(G\) is countable, we immediately deduce the following theorem. **Theorem 5.2.8**.: _Under a measure \(\mu_{w}\) such that \(w_{-}\) satisfies Assumption 4.0.1, every connected component with a cycle is finite._ From Proposition 3.1.1, we know that every finite connected component has a cycle then, if a connected component does not have a cycle, it is necessarily an infinite tree. Therefore, almost surely every connected component is either a finite cycle-rooted tree or an infinite tree. ## Conclusion and open questions When a positive weight function on oriented cycles takes values in \([0,1]\) and satisfies an assumption of minoration of weights, it gives rise to a unique infinite volume measure on cycle-rooted spanning forests, which is sampled by an algorithm of loop-erased random walks and which is the thermodynamic limit of finite volume measures, with respect to free or wired boundary conditions. Under this measure, almost surely, all connected components are finite and the edge-to-edge correlations decay is exponential. By contrast, when the weight function is constant equal to \(0\), the model is the uniform spanning tree and the thermodynamic limit in infinite volume of finite volume measures is the free or wired uniform spanning forests measure, depending on boundary conditions. On a large class of graphs (amenable graphs for instance), the infinite volume measure does not depend on the boundary conditions and is sampled by the Wilson algorithm of loop-erased random walks (see [1, 1, 1]). Under this measure, almost surely, every connected component is an infinite tree and the edge-to-edge correlations have long range (for instance, they decay polynomially for \(\mathbb{Z}^{d}\)). Considering these two cases as instances of a same model, we thus observe two qualitatively distinct phases, depending on the weight function on cycles. For determinantal measures on cycle-rooted spanning forests (see [11]) associated to a unitary connection, sequences of measures on finite growing subgraphs also converge towards infinite volume measures (see [11, 12, 13, 14]). When the connection satisfies some assumptions, the infinite volume measure does not depend on the boundary conditions (see [14, 15]). 
These determinantal measures are associated with a weight function on cycles which can take values larger than \(1\), as in Section 5. Under some assumptions on the connection, the lower-bound assumption on cycle weights (Assumption 4.0.1) is satisfied and therefore, by Theorem 5.2.8, almost surely all connected components are either finite cycle-rooted trees or infinite trees. We also observe two phases (polynomial versus exponential decay of edge-to-edge correlations) depending on the unitary connection (see [1]). A relevant open question is whether, under the lower-bound assumption on the weight function, infinite trees occur with positive probability under the infinite volume measure, in particular when the measure is determinantal and associated with a weight function provided by a unitary connection.

## Acknowledgments

We thank Adrien Kassel for suggesting this topic to us and for guidance throughout its study. We also thank Titus Lupu, Beatrice de Tiliere, Cedric Boutillier and Kilian Raschel for helpful conversations. Financial support was partly provided by ANR grant number ANR-18-CE40-0033.
2304.09928
Personalized State Anxiety Detection: An Empirical Study with Linguistic Biomarkers and A Machine Learning Pipeline
Individuals high in social anxiety symptoms often exhibit elevated state anxiety in social situations. Research has shown it is possible to detect state anxiety by leveraging digital biomarkers and machine learning techniques. However, most existing work trains models on an entire group of participants, failing to capture individual differences in their psychological and behavioral responses to social contexts. To address this concern, in Study 1, we collected linguistic data from N=35 high socially anxious participants in a variety of social contexts, finding that digital linguistic biomarkers significantly differ between evaluative vs. non-evaluative social contexts and between individuals having different trait psychological symptoms, suggesting the likely importance of personalized approaches to detect state anxiety. In Study 2, we used the same data and results from Study 1 to model a multilayer personalized machine learning pipeline to detect state anxiety that considers contextual and individual differences. This personalized model outperformed the baseline F1-score by 28.0%. Results suggest that state anxiety can be more accurately detected with personalized machine learning approaches, and that linguistic biomarkers hold promise for identifying periods of state anxiety in an unobtrusive way.
Zhiyuan Wang, Mingyue Tang, Maria A. Larrazabal, Emma R. Toner, Mark Rucker, Congyu Wu, Bethany A. Teachman, Mehdi Boukhechba, Laura E. Barnes
2023-04-19T19:06:42Z
http://arxiv.org/abs/2304.09928v1
Personalized State Anxiety Detection: An Empirical Study with Linguistic Biomarkers and A Machine Learning Pipeline ###### Abstract Individuals high in social anxiety symptoms often exhibit elevated state anxiety in social situations. Research has shown it is possible to detect state anxiety by leveraging digital biomarkers and machine learning techniques. However, most existing work trains models on an entire group of participants, failing to capture _individual differences_ in their psychological and behavioral responses to _social contexts_. To address this concern, in Study 1, we collected linguistic data from N=35 high socially anxious participants in a variety of social contexts, finding that digital linguistic biomarkers significantly differ between evaluative vs. non-evalative social contexts and between individuals having different trait psychological symptoms, suggesting the likely importance of personalized approaches to detect state anxiety. In Study 2, we used the same data and results from Study 1 to model a multilayer personalized machine learning pipeline to detect state anxiety that considers contextual and individual differences. This personalized model outperformed the baseline's F1-score by 28.0%. Results suggest that state anxiety can be more accurately detected with personalized machine learning approaches, and that linguistic biomarkers hold promise for identifying periods of state anxiety in an unobtrusive way. ## I Introduction Social anxiety disorder (SAD) is highly prevalent, impacting 13% of adults in the United States at some point in their lifetime [1]. Individuals with SAD fear and often avoid social interactions, or endure them with significant anxiety [2]. However, roughly 80% of individuals delay or avoid treatment [3, 4]. Digital interventions delivered to individuals via mobile technology (e.g., smartphones) in daily life can increase treatment access among individuals with SAD. "Just-in-time" adaptive interventions (JITAIs; [5]) are a good candidate to increase access to care. By providing individuals with treatment components when and where they need them, JITAIs have the potential to help socially anxious individuals navigate social situations more effectively in real time. Crucially, to deploy JITAIs, we need to detect when _socially anxious_ people are in need of an intervention. Machine learning (ML) offers a computational solution to detect state anxiety status from digital biomarkers (i.e., passively sensed bio-behavioral indicators) [6]. In particular, using ML with linguistic biomarkers is a promising option for understanding social anxiety as this data can be passively collected without active user input and contains a wealth of psychological information [7]. Despite this potential, a limitation of much ML-based computational work with psychological data is its reliance on nomothetic approaches. Specifically, it has generally used one-size-fits-all approaches (e.g., modeling and reasoning about entire observed populations indiscriminately) [8], even though idiographic (i.e., person-specific) outcomes are typically of most interest to clinicians (e.g., a specific individual's risk of developing an anxiety disorder) [9]. At the same time, most personalized ML approaches require extensive data points from individuals, which is often impractical to collect (e.g., due to participant burden, low engagement, time and resource limitations) [10]. 
There are thus many challenges for ML models to efficiently and effectively detect state anxiety (e.g., model overfitting, inaccuracy, and bias) [11]. This study aims to develop a _personalized ML pipeline_ that accounts for contextual and individual information to detect state anxiety. To this end, we propose a two-stage study. Study 1, an _empirical study_, examines the differences in linguistic biomarkers associated with state anxiety across social contexts and individual subgroups. In controlled dyadic Zoom conversations, we observe that socially anxious college students' (N=35 final sample) linguistic patterns significantly differ across experimentally manipulated contexts (one designed to be explicitly socially evaluative and one not explicitly socially evaluative) and clustered psychological-symptom severity subgroups. Based on these findings, in Study 2, we develop and test a _personalized ML pipeline_ to detect state anxiety that accounts for contextual and individual differences using multi-layer fine-tuning training approaches. This pipeline hierarchically trains the model at the population, contextual, and individual levels by progressively adding new neural network layers and narrowing down the data samples grouped by contexts and individuals. In summary, this paper makes the following contributions: * We report evidence of significant linguistic differences between situational contexts and individual subgroups among socially anxious individuals, which suggests idiographic distinctions and motivated our ML models. * We propose a multilayer personalized ML pipeline to detect state anxiety, which is able to hierarchically fine tune a population-based model to capture contextual (i.e., social threat) and subgroup-based (i.e., psychological symptom severity) domain knowledge. ## II Empirical Study ### _Study Design_ All study procedures were approved by the Institutional Review Board (IRB) of a large U.S. university and conducted under the supervision of a licensed clinical psychologist and researcher with expertise in anxiety disorders. We recruited 45 undergraduate participants with a Social Interaction Anxiety Scale (SIAS) score of 34 or above, indicating greater trait social anxiety (scale ranges from 0 to 80). Ten of the 45 participants did not complete the social experiences to be analyzed in this paper, leaving 35 participants eligible. The 35 participants had a mean age of 19.46 (SD = 2.09), and the majority were female (74.3%) and White (82.8%). In the broader parent study, participants completed a series of social and non-social tasks involving different group sizes (i.e., alone, pairs, and groups of 4-6) and levels of experimenter-manipulated social threat. All study procedures were conducted virtually via Zoom. For the purposes of the present investigation, we focus on the two conversation tasks that participants completed in pairs (because the dyadic exchange allowed for clear evaluation of linguistic features). One conversation was explicitly evaluative and the other was _not_ explicitly evaluative based on experimenter instructions (termed evaluative and non-evaluative, accordingly, for ease of reference). Specifically, we manipulated the level of social-evaluative threat present in the conversations: Prior to one conversation, participants were told that their partner would rate their social performance following the conversation (i.e., _evaluative_); prior to the other conversation, participants were told that they would _not_ be rated (i.e., _non-evaluative_). 
During the two conversations, two participants discussed a randomly assigned topic for four minutes. The order of the two conversations was randomized. ### _Data Collection_ Participants reported on their baseline and concurrent state anxiety (Def 1) during each of the social experiences via two brief surveys to report state anxiety levels (Def 2). **Definition 1**: **Baseline and Concurrent State Anxiety**_: Baseline state anxiety refers to the state anxiety level that participants report prior to learning about the upcoming social experience they will complete. Concurrent state anxiety refers to participants' most intense state anxiety _during_ the social experience, as reported by each participant immediately after the experience ended. In this study we assessed participants' state anxiety via a Qualtrics survey by asking them to rate how anxious they felt on a five-point Likert scale from Very Calm (1) to Very Anxious (5)._ **Definition 2**: **State Anxiety Status**_: In this study, we aim to detect _state anxiety_ status (i.e., status when the participants were in _high and/or elevated_ state anxiety), as both represent times when individuals may benefit from a JITAI. Specifically, _high_ state anxiety refers to periods when a participant reports either feeling anxious (4) or very anxious (5) during the concurrent stage. _Elevated_ state anxiety refers to periods when the participants' concurrent anxiety is higher than their baseline anxiety, regardless of the particular score reported. See Figure 1, state anxiety oscillated during the study with notable elevation in the evaluative contexts (the right half)._ Participants' _psychological symptoms_ (in this case, self-reported trait anxiety and depression symptoms, and emotion regulation) were measured at the end of the study by four scales. The scales included the depression subscale from the Depression, Anxiety and Stress Scale (DASS; [12]) [13], the Social Interaction Anxiety Scale (SIAS; [14]), Brief Fear of Negative Evaluation scale (BFNE; [15]), and Difficulties in Emotion Regulation Scale - Short Form (DERS-SF; [16]). The audio of the participants was recorded through Zoom. Otter.ai then transcribed the conversation text, identifying the start and end time points and the speaker for each sentence. The identified time points were used to segment and aggregate each participant's recordings sentence by sentence. Altogether, 55 samples of audio recordings and corresponding self-reported state anxiety were collected; 20 participants completed both the non-evaluative and evaluative sections (40 samples); due to the time limitation of the data collection, 7 participants only completed the non-evaluative experience (7 samples), and 8 participants only completed the evaluative part (8 samples). ### _Linguistic Biomarkers_ We extracted linguistic biomarkers from speech audio and \begin{table} \begin{tabular}{l l l} \hline \hline Domain & Feature & Description \\ \hline \multirow{4}{*}{Acoustic} & Pitch & Mean and delta of voice frequency \\ & Energy & Mean and delta of voice intensity \\ & Zero-crossing & Mean and delta of the number of times the \\ & rate & voice signal crosses the zero-axis \\ & Spectral center & Mean and delta of the voice spectrum center \\ \hline \multirow{4}{*}{Syntactic} & Avg. word count & Average number of words per sentence \\ & Long sentence & Percentage of sentences with \(>\) 15 words \\ & Sentence count & Total number of sentences spoken \\ \hline \multirow{4}{*}{Lexical} & Pos. 
emotion & Number of words indicating positive sentiment \\ & Neg. emotion & Number of words indicating negative sentiment \\ & \(I\)-statements & Number of first-person pronouns \\ & Your-statements & Number of second-person pronouns \\ & Negations & Number of negation words or phrases \\ & Stop words & Number of function words \\ \hline \hline \end{tabular} \end{table} TABLE I: Linguistic feature list with descriptions. Fig. 1: Self-reported baseline v.s. concurrent state anxiety. Each point reflects an anxiety score reported by a participant. transcripts in each sample, as a large body of literature has linked anxiety with linguistic biomarkers [17, 7]. We extracted _phonetic_, _syntactic_, and _lexical_ domain features for each sample (Table I). To effectively model and measure participants' linguistic behaviors, we only extracted one-dimensional features verified by the existing literature. ### _Results_ #### Iv-D1 **Situational Context Matters** First, we explored the differences in linguistic biomarkers between _non-evaluative v.s. evaluative_ situational contexts with paired analysis on the 20 participants who completed both non-evaluative and evaluative experiences. After normalizing each feature to a scale of 0-1 for comparison on a common scale, using paired analysis, we compared the percentage changes of linguistic biomarkers in the evaluative contexts compared to the non-evaluative contexts (see Figure 2). In the evaluative context, notably decreased linguistic biomarker scores included negations (-27.0%, p=0.306), pitch delta (-22.5%, p=0.018), negative emotion words (-18.4%, p=0.066), I-statements (-14.6%, p=0.396), zero-crossing rate (-11.7%, p=0.100), sentence count (-11.2%, p=0.087), zero-crossing rate delta (-9.6%, 0.050), and pitch mean (-8.7%, p=0.096). Increased biomarker scores between evaluative and non-evaluative contexts included words per sentence (+34.5%, p=0.015) and energy delta (+9.5%, p=0.171). The Wilcoxon signed rank test was used to test the p-values with p\(<\).05 considered a reliable effect. #### Iv-D2 **Individual Differences Matter** We then performed cluster analysis to aggregate participants according to their trait psychological scales by clustering the participants by their DASS, SIAS, BFNE, and DERS scales using the K-means algorithm. Individuals with similar trait symptom severity were gathered (i.e., similar score patterns on social anxiety and depression severity, and emotion regulation tendencies; henceforth called _symptom severity_ for ease of reference 1). To find optimal parameter \(K\) of K-means, we calculated the silhouette scores of K values from 2 to 5. The optimal \(K\) found is 2 (silhouette score=0.394), while the silhouette scores are 0.343, 0.373, and 0.364 when \(K\) are 3, 4, and 5. Selecting \(K\) = 2 to operate cluster analysis, as profiled in Table II, the two clustered individual cohorts reflected two symptom severity groups. Group 1 (named high symptom severity) generally had high symptoms of social anxiety and depression (DASS and SIAS), fear of negative evaluation (BFNE) and difficulty with emotion regulation (DERS), whereas group 2 (named low symptom severity) had lower scores on those measures. Footnote 1: Though we recognize DERS reflects a transdiagnostic vulnerability for emotional disorders, rather than a measure of symptom severity per se. As shown in Figure 3, the two subgroups behaved differently in some aspects in linguistic. 
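The two analyses above, the paired comparison of biomarkers between the non-evaluative and evaluative conversations and the clustering of participants by their trait scales, follow standard recipes. The following minimal Python sketch (an illustration under stated assumptions, not the study's actual code) assumes pandas data frames with one row per participant and one column per biomarker or scale, already matched across contexts; all names are placeholders.

```python
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import MinMaxScaler

def paired_context_comparison(non_eval: pd.DataFrame, evaluative: pd.DataFrame):
    """Percentage change and Wilcoxon signed-rank p-value per linguistic biomarker.
    Both frames: one row per participant (paired), one column per biomarker,
    features already scaled to [0, 1]."""
    results = {}
    for col in non_eval.columns:
        a = non_eval[col].to_numpy()
        b = evaluative[col].to_numpy()
        pct_change = 100.0 * (b.mean() - a.mean()) / a.mean()
        stat, p_value = wilcoxon(b, a)        # paired, two-sided by default
        results[col] = {"pct_change": pct_change, "p_value": p_value}
    return pd.DataFrame(results).T

def cluster_symptom_groups(scales: pd.DataFrame, k_range=range(2, 6), seed=0):
    """K-means over trait scales (e.g., DASS, SIAS, BFNE, DERS); K chosen by silhouette."""
    X = MinMaxScaler().fit_transform(scales.to_numpy())
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (best silhouette score, optimal K, cluster labels)
```

The signed-rank test is a non-parametric choice for the paired differences, matching the statistical test reported above.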
Specifically, the high symptom group generally had _lower_ energy mean and delta, zero-crossing rate mean, energy entropy mean, pitch mean, as well as _higher_ energy entropy delta, words per sentence, long sentence rate, and stop word rate in both contexts. Typically, in the evaluative context, the high symptom group had more _words per sentence_. Also, in the evaluative context, individuals with high symptom levels tend to use more negative emotional words, perhaps indicating a higher degree of distress, compared to those with low symptom levels. Moreover, high symptom individuals used more first-person pronouns than the low symptom group in the evaluative context. These patterns suggest that, in addition to individual and subgroup differences in behavior across contexts, certain behavioral patterns are more likely to be associated with evaluative social contexts than others. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Measures**} & \multicolumn{2}{c}{**High Sx (N=13)**} & \multicolumn{2}{c}{**Low Sx (N=17)**} \\ \cline{2-5} & mean & std & mean & std \\ \hline **DASS** & 69.16 & 10.31 & 51.13 & 7.67 \\ **SIAS** & 33.44 & 4.51 & 23.93 & 8.00 \\ **BFNE** & 58.12 & 6.27 & 41.93 & 5.75 \\ **DERS** & 15.36 & 5.05 & 12.27 & 5.02 \\ \hline \hline \end{tabular} \end{table} TABLE II: Profile of the 4 psychological scales among the two symptom groups. Abbreviation: Symptom=sx. Fig. 3: Distributions of 0-1 normalized linguistic features of clustered symptom severity subgroups in non-evaluative and evaluative contexts, respectively. Fig. 2: Percentage loss (in blue) vs. gain (in red) for linguistic biomarkers between non-evaluative vs evaluative context. ## III Personalized Learning Pipeline Sections II-D1 and II-D2 indicated there are behavioral differences between the two contexts and the two cohorts, suggesting that a personalized ML pipeline should ideally be tailored to the variability in social contexts and across persons/cohorts. We devised such a ML pipeline based on Multilayer Perceptron (MLP) model [18], which we term the Personalized State Anxiety Detector (**PSAD**). ### _Problem Formulation_ Given the raw input data \(D_{p}=\{d_{1},d_{2},\cdots,d_{n}\}\), where \(n\) indicates the number of linguistic views of \(D_{p}\), of participant \(p\), the task is to predict binary state anxiety status \(S_{p}\) (Definition 2). To achieve this, we proposed a framework that includes: 1) a set of biomarker extractors \(\{f_{1},f_{2},\cdots,f_{n}\}\) to map different views of features into spaces \(F_{i}^{p}\in\mathbb{R}^{m_{i}}\), where \(m_{i}\) is the dimension of features in view \(i\); 2) a multiview fusion method that fused \(F_{1}^{p},F_{2}^{p},\cdots,F_{n}^{p}\) into a hidden vector \(H_{p}\); 3) applies a personalized classification model \(C_{p}\) to \(H_{p}\) that can be narrowed down by participant \(p\), leveraging the contextual and individual information \(SC_{p}\) to predict \(S_{p}\). ### _Personalized ML Pipeline Design_ As shown in Figure 4, PSAD includes two components: #### Iii-B1 **Multiview Featurization and Fusion** The pipeline first extracts multi-view biomarkers \(FS\) from the raw input data stream \(D\) while using domain knowledge. Then it fuses \(FS\) from different aspects/views with different domain-specific meanings (e.g, in our case, acoustic, syntactic, and lexical features) into one hidden vector \(H\). 
Specifically, to model the contributions of different views' inputs, we assign learnable coefficients \(\alpha_{i}\) to each view \(i\) and refine the coefficient weights during the training process; formally, \[H=\sum_{i\in N}\alpha_{i}\cdot g_{i}(FS_{i},\theta_{gi}) \tag{1}\] where H denotes the fused vector of biomarkers, N is the number of different views, \(g\) considered a set of neural networks that maps \(FS\) from \(\mathbb{R}^{m_{i}}\) to \(\mathbb{R}^{K}\), \(K\) is the output dimension of \(H\), and the \(\theta_{gi}\) are the model parameters. #### Iii-B2 **Multilayer Personalized Training** consists of a pre-trained global module and a local grouping-based fine-tuned module, leveraging of both population-based and personalized (i.e., accounting for situational contexts and individual differences) information from the data collection. Inspired by the idea of transfer learning [19], the training process incorporates one _global pre-training_ set of layers for learning from the population domain and two _fine-tuning_ layers for situational context and individual cohort domains, respectively. Firstly, in the global pre-training step. We train a global model \(M_{G}\) (i.e., \(M_{G}(\theta_{G},H)\)), where \(\theta_{G}\) is the parameter set of the model and \(H\) is all the samples, with output \(\hat{S}\) based on the ground-truth state anxiety status \(S\) using a binary cross entropy objective loss function \(BCE(\hat{S},S)\). After optimizing \(BCE(\hat{S},S)\), the model parameter set \(\theta_{G}\) is frozen as a globally _pre-trained_ model for further training. Then, in the fine-tuning step, based on pre-trained \(M_{G}\) with frozen parameter set \(\theta_{G}\), we subsequently attach new layers to \(M_{G}\) (without its output layer) to adapt the model to fine-tuned \(M_{L}\) according to the specific clustered samples \(H_{p}\) by situational contexts and then by individual differences, where \(p\in P\), \(P\) denotes the set of participants in a specific group. The resulting prediction is \(\hat{S_{p}}=M_{L}(\theta_{L},H_{p})\). ### _Model Evaluation_ We evaluated PSAD with the following research questions. * **RQ1**: Does PSAD outperform generic (non-personalized) ML models? * **RQ2**: How does the "multiview featurization and fusion" component optimize the model performance? * **RQ3**: How does the "multilayer personalized training" component optimize the model performance? We used the data collected in Study 1 to evaluate the ML pipeline. For the _context-aware layer_, the observations were divided by the situational contexts (i.e., non-evalative and evaluative); for the _group-centered layer_, the sample was divided by high/low symptom severity subgroups. We compared our PSAD with 5 ML methods, including K Nearest Neighbour (KNN), Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Multilayer Perceptron (MLP), Random Forest (RF) Classifier [20]. Evaluation metrics include accuracy, precision, and F1-Score. We performed a cross-validation grid search to determine the best hyper-parameters (e.g., learning rate, and training epoch) for every baseline (i.e., each comparison method) and PSAD in a leave-one-sample-out cross-validation (LOOCV) manner. 
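The two PSAD components can be sketched in a few dozen lines of PyTorch. The code below is an illustration under simplifying assumptions rather than the authors' implementation: every layer is instantiated up front and a `stage` flag controls which fine-tuning layers are active (instead of literally attaching new layers to the frozen pre-trained model), the hidden width is a placeholder, the view dimensions follow the feature counts of Table I, and the output head is re-trained at each stage.

```python
import torch
import torch.nn as nn

class MultiviewFusion(nn.Module):
    """Eq. (1): H = sum_i alpha_i * g_i(FS_i), with learnable view coefficients alpha_i."""
    def __init__(self, view_dims, hidden_dim=32):
        super().__init__()
        self.g = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in view_dims])
        self.alpha = nn.Parameter(torch.ones(len(view_dims)))

    def forward(self, views):               # views: list of (batch, d_i) tensors
        return sum(a * g(v) for a, g, v in zip(self.alpha, self.g, views))

class PSAD(nn.Module):
    """Global layers plus context-aware and group-centered fine-tuning layers."""
    def __init__(self, view_dims, hidden_dim=32):
        super().__init__()
        self.fusion = MultiviewFusion(view_dims, hidden_dim)
        self.global_layers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.context_layer = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.group_layer = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, 1)    # logit for binary state-anxiety status
        self.stage = 0                           # 0: global, 1: +context, 2: +group

    def forward(self, views):
        h = self.global_layers(self.fusion(views))
        if self.stage >= 1:
            h = self.context_layer(h)
        if self.stage >= 2:
            h = self.group_layer(h)
        return self.head(h)

def train_stage(model, loader, params, epochs=50, lr=1e-3):
    """Optimize only `params` for one stage; all other parameters stay frozen."""
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for views, y in loader:              # y: (batch, 1) float labels in {0, 1}
            opt.zero_grad()
            loss_fn(model(views), y).backward()
            opt.step()

# Hierarchical training sketch: population -> context subset -> symptom subgroup.
# model = PSAD(view_dims=[8, 3, 6])          # acoustic, syntactic, lexical (Table I)
# train_stage(model, all_samples, list(model.fusion.parameters())
#             + list(model.global_layers.parameters()) + list(model.head.parameters()))
# for p in list(model.fusion.parameters()) + list(model.global_layers.parameters()):
#     p.requires_grad_(False)
# model.stage = 1
# train_stage(model, context_samples, list(model.context_layer.parameters())
#             + list(model.head.parameters()))
# model.stage = 2
# train_stage(model, group_samples, list(model.group_layer.parameters())
#             + list(model.head.parameters()))
```

In this sketch, training proceeds hierarchically: first on all samples (global layers), then on the samples of a situational context (context layer), and finally on a symptom-severity subgroup within that context (group layer), freezing the previously trained parameters at each stage.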
To make it a fair evaluation, 1) for the model with neural networks (i.e., PSAD and MLP), we set the number of layers to 4 (in PSAD, 2 for global training, 1 for context-aware fine-tuning, 1 for group-centered fine-tuning; in MLP, 4 layers in total) to guarantee that they have the same learning capability; 2) situational (non-evalative or evaluative) and individual (high or low symptom severity) information was embedded into the feature space of the baseline models. Fig. 4: Proposed multi-layer personalized ML pipeline. #### Iii-C1 RQ1 As shown in table III, PSAD shows improvements on every metric. Specifically, PSAD scored 74.55% in accuracy, 74.64% in precision, and 74.49% in F1-score, which improved at least 28.0% compared to baseline models. This indicates that the PSAD method can better detect the state anxiety status of highly socially anxious people. We also compared PSAD to separately trained models for 4 subgroups to further evaluate its effectiveness. We found that PSAD performed significantly better in evaluative contexts and had a higher overall performance than the separate models. As shown in Table IV, our multilayer design showed a 10.85% improvement in performance due to its use of a hierarchical multilayer design. #### Iii-C2 RQ2 We conducted a feature ablation study to understand the effect of each linguistic view on model performance. We removed each view (acoustic, syntactic, and lexical) from the feature space and re-trained the model. The results, shown in Table V, indicate that the model performed best when equipped with all the features. Additionally, the learnable coefficient \(\alpha\) for each modality shows the importance of each view of data: \(\alpha_{acoustic}\) = 0.5071, \(\alpha_{syntactic}\) = 0.5801, \(\alpha_{lexical}\) = 0.8772. Both the ablation study and the coefficient \(\alpha\) suggest that the lexical perspective is the most indicative of state anxiety, followed by the syntactic. #### Iii-C3 RQ3 To investigate the effect of the multilayer fine-tuning step, we conducted a second ablation study to understand how the context-aware and group-centered layers contribute to model performance. We removed one or two sublayers at a time and re-trained the model. The results, shown in Table VI, indicate that both the context-aware and group-centered considerations contribute to the final detection performance. However, the context-aware layer is more critical than the group-centered layer as the F1 score dropped to 63.56% when it was removed, compared to 66.81% when the group-centered layer was removed. ## IV Discussion ### _Summary of Findings_ Study 1 indicated that linguistic patterns vary across social contexts and individual symptom profiles. Specifically, we observed that participants had lower pitch and longer sentences on average in evaluative contexts, which is aligned with prior human behavior research [21]. Moreover, we found that individuals with higher anxiety symptom severity tended to exhibit specific linguistic characteristics, such as lower energy, zero-crossing rate, and pitch, longer sentences, and more stop words, particularly in an evaluative context. These patterns suggest that some groups' specific behavioral patterns are more likely to be associated with some social contexts than others, pointing to a person by context interaction (e.g., individuals with higher symptom severity may struggle to regulate emotions in contexts involving greater social threat [17]). 
Study 2 advanced the ML model to be more personalized and to learn more efficiently. Specifically, with PSAD, ML models can learn more personalized knowledge from limited human datasets in a hierarchical learning fashion, especially when highly socially anxious people are in an evaluative social threat context (RQ1). Both context-aware and group-centered information are crucial for the model's performance in digital state anxiety detection. In particular, the ablation study showed that the context-aware layer plays a more vital role in this detection (RQ3). Notably, acoustic, syntactic and lexical features all contribute to effective state anxiety detection. Results show that lexical features are the most indicative of state anxiety, which can inform future digital mental health practice (RQ2).

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Views} & \multicolumn{3}{c}{Metrics (\%)} \\ \cline{2-4} & Accuracy & Precision & F1 Score \\ \hline w/o Acoustic & 62.35 & 61.82 & 61.56 \\ w/o Syntactic & 52.80 & 52.73 & 52.70 \\ w/o Lexical & 49.15 & 49.09 & 49.06 \\ \hline PSAD & **74.55** & **74.64** & **74.49** \\ \hline \hline \end{tabular} \end{table} TABLE V: PSAD without (w/o) every single view.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Subgroups} & \multicolumn{3}{c}{PSAD} & \multicolumn{3}{c}{Separated} \\ \cline{2-7} & Acc & Prec & F1 & Acc & Prec & F1 \\ \hline High, Non-Eval & 71.43 & 71.43 & 71.43 & 100.00 & 100.00 & 100.00 \\ Low, Non-Eval & 52.91 & 52.38 & 52.16 & 61.90 & 61.90 & 61.90 \\ High, Eval & 100.00 & 100.00 & 100.00 & 61.73 & 61.11 & 61.23 \\ Low, Eval & 69.10 & 69.09 & 69.07 & 45.56 & 44.44 & 44.44 \\ \hline Overall & **74.55** & **74.64** & **74.49** & 63.64 & 63.64 & 63.64 \\ \hline \hline \end{tabular} \end{table} TABLE IV: PSAD vs separately trained models.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Metrics (\%)} \\ \cline{2-4} & Accuracy & Precision & F1 Score \\ \hline KNN & 50.90 & 50.94 & 50.91 \\ SVM & 41.81 & 41.79 & 41.81 \\ XGBoost & 58.18 & 58.36 & 58.09 \\ MLP & 58.18 & 58.22 & 58.18 \\ RF & 56.76 & 56.86 & 56.72 \\ \hline PSAD & **74.55** & **74.64** & **74.49** \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison of model performance between the proposed PSAD and the baselines (i.e., comparison methods).

### _Implications for Future Practice_

The present findings have multiple implications for state anxiety detection. First, linguistic characteristics are indicative of individuals' state anxiety. This, in turn, suggests that researchers working to build JITAIs may benefit from collecting linguistic information to identify opportune moments for intervention. Of course, determining how to do this in ways that are acceptable to the user and their contacts in terms of privacy, confidentiality, intrusiveness and other ethical issues is paramount. Second, personalizing prediction models by
using information about the individual's emotional health and symptoms, and their current social context helps us better detect the phenomenon of interest. Thus, future JITAI work would likely benefit from incorporating both individual differences and contextual features into prediction models. Third, our findings indicate that there is heterogeneity in terms of how linguistic features relate to state anxiety. That is, not only did linguistic features vary across evaluative and non-evalative contexts and across symptom severity groups, but accounting for this variability in personalized models aided our prediction of state anxiety. This is in line with prior work suggesting that emotional states may not have a distinct 'fingerprint' and may instead relate to physiological data differently based on features of the individual in a particular situation [22]. At the same time, our findings suggest that clustering individuals based on psychological constructs (achieving a middle-ground between nomothetic and idiographic approaches) allows us to obtain valuable information about state anxiety beyond individuals. Future work aimed at detecting potential JITAI targets would benefit from considering cluster-based semi-idiographic methods to optimize prediction. ### _Limitations and Future Directions_ Our study has several limitations, which point to opportunities for future research. Our analyses were limited to the two social contexts and the two subgroups among a small sample of university students. As such, our findings may not generalize to other individual differences, contextual features, and populations. Future work should examine these questions across different contexts (e.g., in-person vs. virtual interactions) and individual differences (e.g., gender identity, age). Also, as noted, even though linguistic data may help researchers improve state anxiety detection, there are many privacy concerns which could impact participants' willingness to use the technology. This is a concern researchers should keep in mind and they should ensure they are using secure methods, discussing concerns with participants, etc. ## V Conclusion This paper tested the ability of personalized (vs. one-size-fits-all) ML approaches to detect state anxiety from linguistic biomarkers. Then, we proposed a personalized ML pipeline which progressively trains the model according to different domain knowledge (i.e., contextual and subgroup). We believe our personalized method may have considerable clinical utility relative to nomothetic ML approaches, and provide novel insights into how to optimize detection of key mental health outcomes.
2301.05714
The Catchment Area of Groomed Jets at NNLL
Groomed jet observables have a dynamical catchment area which plays a key role in determining the leading nonperturbative power corrections and the impact of the underlying event. Based on field-theoretic arguments, certain moments of the groomed jet radius $R_g$ capture the entirety of the kinematic and grooming parameter dependence of these effects. These moments can be computed perturbatively in the soft drop operator expansion region where these corrections are small, yet significant enough to be relevant for precision physics. A precise determination of these moments is thus crucial to faithfully isolate the universal contributions of hadronization and the underlying event. Building on a previously developed effective field theory framework for the doubly differential soft drop groomed jet mass and groomed jet radius measurement, we present here a calculation of these moments at next-to-next-to-leading-logarithmic (NNLL) accuracy including matching into the plain jet mass region. We compare our predictions for these moments against parton-shower Monte Carlo simulations and find good agreement. These results have applications for precision physics with soft drop jet mass such as determination of the strong coupling constant and the top quark mass and for improving hadronization models.
Aditya Pathak
2023-01-13T19:00:01Z
http://arxiv.org/abs/2301.05714v2
# The Catchment Area of Groomed Jets at NNLL

###### Abstract

Groomed jet observables have a dynamical catchment area which plays a key role in determining the leading nonperturbative power corrections and the impact of the underlying event. Based on field-theoretic arguments, certain moments of the groomed jet radius \(R_{g}\) capture the entirety of the kinematic and grooming parameter dependence of these effects. These moments can be computed perturbatively in the soft drop operator expansion region where these corrections are small, yet significant enough to be relevant for precision physics. A precise determination of these moments is thus crucial to faithfully isolate the universal contributions of hadronization and the underlying event. Building on a previously developed effective field theory framework for the doubly differential soft drop groomed jet mass and groomed jet radius measurement, we present here a calculation of these moments at next-to-next-to-leading-logarithmic (NNLL) accuracy including matching into the plain jet mass region. We compare our predictions for these moments against parton-shower Monte Carlo simulations and find good agreement. These results have applications for precision physics with soft drop jet mass such as determination of the strong coupling constant and the top quark mass and for improving hadronization models.

QCD, Colliders, Precision Physics

**Contents**

* 1 Introduction
* 2 Soft drop double differential factorization
  * 2.1 Measurement and kinematics
    * 2.1.1 Hard kinematics
    * 2.1.2 Soft drop kinematics
  * 2.2 Effective theory regions
  * 2.3 Effective theory modes
  * 2.4 One-loop results of factorization functions
    * 2.4.1 Matrix elements and measurement functions
    * 2.4.2 Hard-collinear radiation outside the jet
    * 2.4.3 Inclusive jet mass measurement on collinear radiation
    * 2.4.4 Hard-collinear radiation within groomed jet radius
    * 2.4.5 Collinear-soft radiation within groomed jet radius
    * 2.4.6 Wide-angle soft radiation failing soft drop
    * 2.4.7 Collinear-soft radiation at intermediate groomed jet radius
    * 2.4.8 Widest angle collinear soft radiation passing soft drop
    * 2.4.9 Wide-angle soft radiation in plain jet mass region
    * 2.4.10 Fixed-order cross section
  * 2.5 Factorization and resummation
    * 2.5.1 Max-\(R_{g}\) in plain jet mass resummation region
    * 2.5.2 Max-\(R_{g}\) in soft drop resummation region
    * 2.5.3 Min-\(R_{g}\) regime
    * 2.5.4 Intermediate-\(R_{g}\) regime
  * 2.6 Matched cross section
    * 2.6.1 Matching in the max-\(R_{g}\) regime
    * 2.6.2 Matched resummed cross section
    * 2.6.3 Perturbative uncertainty
    * 2.6.4 Effect of two-loop pieces for NNLL resummation
    * 2.6.5 Non-singular corrections
    * 2.6.6 \(R_{g}\)-weighted jet mass cross section
* 3 Boundary soft drop cross section
* 4 Numerical results and comparison with simulations
  * 4.1 Results for \(C_{1}\)
  * 4.2 Results for \(C_{2}\)
* 5 Conclusion
* A Anomalous dimensions
* B Computing the resummation kernels
* C Profile functions
  * C.1 Plain jet mass profiles
  * C.2 Soft drop profiles
* D Weight functions

## 1 Introduction

The nonperturbative effects of quantum chromodynamics (QCD) encompass a wide array of rich phenomena ranging from low energy nuclear physics, the spectrum of hadrons, the structure of energetic protons described by parton densities, the physics of hadronization, the formation and evolution of the quark gluon plasma, and much more, each contributing to the richness of the QCD phase diagram. These nonperturbative effects can be probed in different experimental setups and studied via an appropriate effective field theory of QCD. Among these, the physics of hadronization is one such complex phenomenon that has so far remained largely elusive to first principle calculations. Studies of jets and jet substructure in high energy collisions have offered us invaluable insights about hadronization. On the other hand, jets are also unique tools that allow us to probe fundamental aspects of QCD and to search for physics beyond the Standard Model [1; 2]. Hadronization impacts measurements of jet substructure observables relative to a reference parton level computation in a way that is partly unique to the observable and partly universal [3; 4; 5; 6; 7; 8; 9]. Because the effects of hadronization cannot be predicted from first principle calculations, the effort has been to seek ways to eliminate or minimize hadronization corrections, such that perturbative calculations can be reliably compared to collider data. Typically, the dominant effects of hadronization arise in the soft wide angle physics, which is challenging to bring under theoretical control even in perturbation theory. Moreover, at the Large Hadron Collider (LHC), the underlying event and pile-up also make a contribution of similar nature.
With the introduction of grooming algorithms [10; 11; 12; 13; 14; 15; 16; 17] it has been possible to make first principle predictions for jet substructure observables at the LHC due to their effectiveness in removing soft wide angle contamination in the complex LHC environment and thus suppressing hadronization, underlying event and pile-up effects. These grooming techniques were initially designed as a taggers to suppress effects of QCD in searches of multiprong decays originating from electroweak and potentially new physics, but over the recent years they have come to be seen as tools with strong potential for applications in precision physics.1 Footnote 1: Somewhat recently, there has also been a lot of exciting progress in developing a completely complementary approach of employing energy-energy correlators (EEC) for precision collider physics [18; 19; 20; 21; 22; 23]. Among various variants, the soft drop (SD) groomer [14] (including the modified Mass Drop Tagger [13; 15] as a special case), has been extensively studied. The algorithm of jet grooming is intimately tied to sequential recombination jet algorithms that pair-wise cluster particles/subjets into subjets, to eventually form a jet. The criteria for soft drop is given by a sequential test between pairs of subjets \(i\) and \(j\) found by de-clustering a Cambridge-Achen clustered tree (based solely on angular separation) of particles in a jet: \[\frac{\min(p_{T_{i}},p_{T_{j}})}{p_{T_{i}}+p_{T_{j}}}>z_{\rm cut}\Big{(}\frac {\Delta R_{ij}}{R_{0}}\Big{)}^{\beta}\,. \tag{1}\] If the pair collectively fails this criteria then the softer of the two (say \(i\) with \(p_{T_{i}}<p_{T_{j}}\)) is removed and the next pair obtained by de-clustering the harder subjet (\(j\)) is tested for this condition. The pair that eventually satisfies this condition terminates the groomer, and the remaining particles in either of the subjets in the pair constitute the groomed jet. The parameters \(z_{\rm cut}\) and \(\beta\) control the strength of the groomer and are typically chosen to be \(z_{\rm cut}\sim 0.1\) and \(\beta\sim 1\). The careful formulation of the soft drop condition in Eq. (1) ensures that IRC-safe observables measured on jets remain calculable in perturbation theory even after grooming. This has resulted in an impressive list of phenomenological applications including measurement of the QCD splitting functions [24; 25], study of fragmentation structure [26; 27], isolation of the soft-sensitive dynamics [28; 29; 30], quantification of medium modifications [31; 32; 33; 34; 35]. A classic observable studied extensively at the LHC is the jet mass. For instance, both ungroomed and groomed jet mass measurements have been measured by ALICE [36], ATLAS [37; 38; 39; 40; 41], and CMS [42; 43; 44; 45] collaborations. The groomed jet angularities (a generalization of the jet mass) have also recently been measured by the ALICE collaboration [46]. Soft drop jet mass has been explored for applications such as precision top quark mass [47; 48; 49] and strong coupling constant [50; 51] measurements. State-of-the-art perturbative calculations for soft drop jet mass have reached high accuracy of next-to-next-to-next-to-leading logarithmic accuracy (N\({}^{3}\)LL) matched to next-to-next-to-leading order (NNLO) predictions for groomed jets in the \(e^{+}e^{-}\to q\bar{q}\) process [52], and next-to-next-to-leading-logarithmic (NNLL) accuracy [51] for jets at the LHC. 
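For readers unfamiliar with the groomer, a minimal sketch of the declustering loop implied by Eq. (1) is given below. It assumes the Cambridge/Aachen primary declustering sequence of the jet is already available as a list of branchings, and it is only illustrative: an actual analysis would use the FastJet contrib implementation of soft drop rather than this toy loop, and the numerical inputs are made up.

```python
# Minimal sketch of the soft drop grooming loop of Eq. (1). `branchings` is assumed to
# be the Cambridge/Aachen primary declustering sequence of the jet, i.e. a list of
# (pt_soft, pt_hard, delta_R) tuples obtained by repeatedly de-clustering the harder
# branch. This is a toy illustration, not the implementation used in this paper.

def soft_drop(branchings, z_cut=0.1, beta=1.0, R0=0.8):
    """Return (z_g, R_g) of the branching that stops the groomer, or None if the
    jet is groomed down to a single prong."""
    for pt_soft, pt_hard, delta_R in branchings:
        z = pt_soft / (pt_soft + pt_hard)
        if z > z_cut * (delta_R / R0) ** beta:   # soft drop condition, Eq. (1)
            return z, delta_R                    # groomer stops: this pair is kept
        # condition failed: the softer prong is dropped, continue down the harder branch
    return None


# toy example: three successive branchings of a pT ~ 500 GeV jet
print(soft_drop([(5.0, 495.0, 0.7), (20.0, 475.0, 0.3), (60.0, 415.0, 0.05)]))
```

The pair that first satisfies the condition defines both the momentum fraction \(z_g\) and the angular separation that plays the role of the groomed jet radius \(R_g\) discussed below.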
To make these calculations useful for precision phenomenology, it is crucial to describe nonperturbative power corrections in the jet mass spectrum. In the region where soft drop is effective in removing soft-wide angle radiation and where perturbation theory is dominant, the nonperturbative power corrections can be as large as \(\sim 10\%\). As was shown in a recent analysis in Ref. [51], these power corrections thus start to become relevant already at NNLL accuracy. For instance, we showed in Ref. [51] that the unconstrained effects of hadronization respectively contribute about 3% and 8% irreducible uncertainty for quark and gluon jets groomed with \(z_{\rm cut}=0.1\) and \(\beta=1\). Understanding these effects is also crucial for top quark mass determination using soft drop jet mass -- in Ref. [47], leveraging on the resilience of soft drop against underlying event, soft drop jet mass was proposed as a candidate for precision top mass measurement, and followed up by a MC top mass calibration in Pythia8+Powheg by the ATLAS collaboration [49]. By directly comparing the theory prediction against the unfolded LHC data, a determination of the top quark mass in a short distance scheme can be envisaged. However, the peak of the distribution where the dominant sensitivity to the top mass lies, receives significant hadronization corrections which are important to account for in order to formulate a consistent hadron level prediction. One of the most common method to account for non-perturbative (NP) corrections is to use hadronization models, such as those provided with event generators such as Pythia[53], Sherpa[54], and Herwig[55], see for example Ref. [56]. These hadronization models are extremely useful tools in guiding our intuition and allow us to study nonperturbative effects on any arbitrarily complicated observable. However, these models are designed to be compatible with the respective parton showers that are less precise than the aforementioned analytical calculations at NNLL and beyond, and hence are not ideal for describing NP power corrections in high precision analytical calculations. Furthermore, it is difficult to understand the physics described by these models from a field theory perspective, and given the complicated nature of these models with numerous knobs to available turn, it is not possible to associate an intrinsic uncertainty with their predictions. For precision studies we need a first principles, field-theoretic paradigm for describing these nonperturbative power corrections [3; 4; 5; 6; 7; 9] that can be systematically improved and can be combined with perturbative calculations independent of their accuracy, beyond those achievable with parton showers. This has been made possible thanks to rigorous QCD factorization theorems [57; 58; 59; 60; 61], combined with the powerful machinery of soft-collinear effective field theory (SCET) [62; 63; 64; 65; 66]. These methods have enabled a field theoretically consistent study of NP power corrections. Although this approach cannot be applied generally to any arbitrary observable, by systematically analyzing a variety of jet substructure observables and event shapes, we have been able to draw universal and model-independent conclusions about the precise way in which NP corrections enter in these observables. 
By exploiting the universal properties of soft QCD, this approach allows us to identify the kinematic dependence of these NP corrections in a variety of jet observables, allowing them to be parameterized in terms of a few (a lot fewer than parameters in a hadronization model), universal \(\mathcal{O}(\Lambda_{\rm QCD})\) constants. For example, an analysis along these lines reveals that nonperturbative corrections to the jet mass and jet \(p_{T}\) take the following form [8; 9], \[\delta m_{J}^{2}=p_{T}\big{(}R\,\Omega_{\kappa}^{(1)}+R^{3}\Omega_{\kappa}^{( 3)}+\dots\big{)}\,,\qquad\delta p_{T}=\frac{1}{R}\Upsilon_{\kappa}^{(-1)}+R \Upsilon_{\kappa}^{(1)}+\dots\,. \tag{2}\] Here \(\Omega_{\kappa}^{(i)}\) and \(\Upsilon_{\kappa}^{(i)}\) are unknown \(\mathcal{O}(\Lambda_{\rm QCD})\) parameters which only depend on the flavor \(\kappa=q,g\) of the parton initiating the jet, and \(R\) is the jet radius. The specific scaling of these corrections with \(p_{T}\) and the dependence on the jet radius is a model-independent statement that can be derived on general arguments of factorization of the soft physics. Here the leading terms with the smallest power of \(R\) correspond to the leading hadronization effect in each of these cases. The subleading terms (including ones not shown in Eq. (2)) arise due to contributions from the initial state radiation (ISR) and the underlying event. Although not obtainable from a first-principle calculations, these nonperturbative parameters can be determined by making a comparison with experimental data.2 These field theory predictions also clearly demarcate the kinematic region of validity of the proposed NP corrections as well as allow for precisely estimating the associated theory uncertainty (for example in terms of the size of higher order power corrections) and the experimental uncertainty (related to the statistics and the fitting procedure). This approach was pursued in determination of the strong coupling constant from the LEP data in Refs. [67; 68; 69]. Footnote 2: Further assumptions about the hadronization physics in the dispersive approach [8] can be used to reduce them down to a single parameter (which also reveals that the leading hadronization correction to the jet \(p_{T}\), \(\Upsilon_{1}^{(-1)}<0\)). In this paper we focus on a factorization-based approach for analyzing the nonperturbative power corrections to groomed observables. Based on recent advancement in the understanding of the nonperturbative structure of the soft drop jet mass in Refs. [70; 71], the power corrections in this case can be similarly encoded into universal, \(\mathcal{O}(\Lambda_{\rm QCD})\) constants as in Eq. (2). These power corrections in the groomed case in fact constitute a very non-trivial extension and a combination of those in Eq. (2) for ungroomed jet mass shift and ungroomed jet \(p_{T}\). More specifically, it was shown in Ref. 
[70] that, in the region where soft drop is _active_ and where a perturbative description is valid, the leading corrections from hadronization take the following form: \[\frac{1}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}}{{\rm d}m_{J}^{2}}= \frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}}{{\rm d}m_{ J}^{2}}-Q\Omega_{1\kappa}^{\mathfrak{o}}\frac{{\rm d}}{{\rm d}m_{J}^{2}} \Big{(}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{ \mathfrak{o}}}{{\rm d}m_{J}^{2}}\Big{)}+\frac{\Upsilon_{1,0\kappa}^{\oplus}+ \beta\Upsilon_{1,1\kappa}^{\mathfrak{o}}}{Q}\frac{1}{\hat{\sigma}_{\kappa}} \frac{{\rm d}\hat{\sigma}_{\kappa}^{\otimes}}{{\rm d}m_{J}^{2}}+\cdots, \tag{3}\] where the left hand side denotes the hadron-level soft drop jet mass cross section initiated by a parton \(\kappa=\text{quark}/\text{gluon}\) and \(Q\) is the hard scale characterizing the jet. The first term on the right hand side with a hat, \({\rm d}\hat{\sigma}^{\kappa}\) is the perturbative parton level soft drop jet mass cross-section. The \(1/\hat{\sigma}_{\kappa}\) factor is included to consider normalized cross section. In the region of small jet masses that we are interested in, these functions can be unambiguously defined as jet mass distributions of jets with a specific quark or gluon flavor. See Ref. [51] for more details on how these objects can be systematically defined in the context of inclusive or exclusive jet measurements. The remaining pieces in Eq. (3) are the nonperturbative corrections parameterized in terms of the three \(\mathcal{O}(\Lambda_{\rm QCD})\) universal nonperturbative (NP) constants \(\Omega_{1\kappa}^{\mathfrak{o}}\), \(\Upsilon_{1,0\kappa}^{\bigotimes}\) and \(\Upsilon_{1,1\kappa}^{\bigotimes}\). They appear with certain jet mass dependent functions that we describe below. The form of NP power corrections in Eq. (3) holds for jet masses that satisfy the criteria \[\frac{Q\Lambda_{\rm QCD}}{m_{J}^{2}}\Big{(}\frac{m_{J}^{2}}{QQ_{\rm cut }}\Big{)}^{\frac{1}{2+\beta}}\ll 1\,, \text{(soft drop stopping emission is perturbative)}\,, \tag{4}\] \[m_{J}^{2}<QQ_{\rm cut}\,, \text{(soft drop is active)}\,.\] Here \(Q_{\rm cut}\sim Qz_{\rm cut}\) is the energy scale associated with soft drop and is precisely defined below. This region is referred to as the soft drop operator expansion (SDOE) region and is shown in Fig. 1. The first condition ensures that the subjet that stops the soft drop is typically perturbative, such that soft drop spectrum can be calculated in perturbation theory. This need not always be the case, but the cases where the stopping subjet is nonperturbative are power suppressed. The second condition constraints the softer of the stopping pair to be also collinear to the jet, such that the radiation at wider angles is groomed away. For larger jet masses, the groomer can stop at wider angles and the distribution consequently looks very similar to the ungroomed jet mass spectrum. Interestingly, it is only with the LHC kinematics with jets of transverse momenta \(\sim 500\) GeV that this OPE region opens up and becomes accessible.3 Footnote 3: Soft drop jet mass distributions (and soft drop angularities) have been measured in the legacy ALEPH data in Ref. [72]. However, at LEP energies the above conditions cannot be satisfied and almost the entire soft drop jet mass spectrum (including even the cusp region) is fully nonperturbative. The three parameters are grouped in two terms which have distinct physical meanings associated with them. 
The first term proportional to \(\Omega_{1\kappa}^{\mathfrak{g}}\) is the "shift correction" analogous to Eq. (2) that captures a shift to the groomed jet mass, coming from the NP particles4 that survive grooming. The second term proportional to \(\Upsilon^{\oplus}_{1,0\kappa}\) and \(\Upsilon^{\oplus}_{1,1\kappa}\) is referred to as the "boundary correction" that describes how the outcome of the soft drop test (with respect to a reference parton level configuration) is altered due to hadronization. The parameter \(\Upsilon^{\oplus}_{1,0\kappa}\) is exactly analogous to \(\Upsilon^{(-1)}_{\kappa}\) in Eq. (2) but now refers to the \(p_{T}\) of the dynamically determined collinear-soft (c-soft) subjet that is found at the last stage of the soft drop groomer. The term proportional to \(\beta\) is related to the change in direction of this c-soft subjet relative to the collinear core due to hadronization. Thus, an interesting implication of jet grooming is that both types of hadronization corrections appear in the same groomed observable.

Figure 1: Various regions of the soft drop jet mass spectrum that are differently affected by nonperturbative corrections. This work focuses on the middle, soft drop operator expansion region, where the hadronization corrections, while significant, can be described in a systematic expansion.

Footnote 4: We will often refer to hadrons produced at the last stage of hadronization (for example, subsequent to a parton shower in an event generator) as “nonperturbative particles”. In the language of SCET, these correspond to distinct modes in the effective theory that have significantly lower virtuality \(\sim\Lambda_{\text{QCD}}\) compared to other perturbative modes.

We pause to note that '...' in Eq. (3) refer to two types of subleading power corrections. The first kind are those that are suppressed by higher powers of \(\Lambda_{\text{QCD}}\). These power corrections grow as the groomed jet mass is reduced and become \(\mathcal{O}(1)\) for small jet masses beyond the SDOE region. The second type corresponds to those where the radiation pattern of the groomed jet is more complicated than the simplest possible "two-pronged configuration" of a collinear and c-soft subjet. The second type of correction is a next-to-leading logarithmic (NLL) effect, as at leading-logarithmic accuracy, strong ordering of angles between the perturbative soft radiation off the jet can be assumed, such that any further radiation beyond the c-soft subjet must lie at hierarchically small angles within the collinear jet, resulting in a dipole configuration that governs the hadronization corrections. Equivalently, Eq. (3) can be seen as an expansion in the number of identified and ordered soft subjets, analogous to the dressed gluon expansion employed in non-global logarithms (NGL) resummation [73] and more recently for resummation of jet mass close to the cusp [74]. With this interpretation, the leading terms in Eq. (3) represent the leading single-subjet piece. The dominance of the two-prong configuration was crucial in Ref. [70] for identifying universal NP constants in Eq. (3). Because these corrections necessarily involve kinematic properties of the c-soft subjet that cannot be unambiguously identified via the jet mass measurement, these corrections appear with certain jet mass dependent perturbative weights \(\mathrm{d}\hat{\sigma}^{\mathfrak{w},\oplus}_{\kappa}\) that capture this dynamical effect. Furthermore, in the SDOE region specified by Eq.
(4), these weights are perturbatively calculable and hence indicated with a hat. These weights are related to moments of the groomed jet radius, \(R_{g}\), the angular separation between the soft drop-stopping pair, at a given jet mass \(m_{J}^{2}\).5 They are given in terms of cross sections that are differential in these kinematic properties of the c-soft subjet in addition to the jet mass: Footnote 5: Unlike Ref. [75], we do not normalize the groomed jet radius \(R_{g}\) by the original jet radius \(R\). However, for sake of simplifying calculations we will consider a normalized variant \(r_{g}\) defined in Eq. (15) below. \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}\hat{\sigma}^{\mathfrak{w}}_{ \kappa}}{\mathrm{d}m_{J}^{2}}\equiv\int\mathrm{d}r_{g}\,r_{g}\frac{1}{\hat{ \sigma}_{\kappa}}\frac{\mathrm{d}^{2}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{ 2}\mathrm{d}r_{g}}\,, \tag{5}\] \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}\hat{\sigma}_{\kappa}^{\otimes}}{ \mathrm{d}m_{J}^{2}}\equiv\int\frac{\mathrm{d}r_{g}\mathrm{d}z_{g}\,\delta\big{(} z_{g}-z_{\mathrm{cut}}r_{g}^{\beta}\big{)}}{r_{g}}\frac{1}{\hat{\sigma}_{\kappa}} \frac{\mathrm{d}^{3}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}\mathrm{d}r_{g} \mathrm{d}z_{g}}\,. \tag{6}\] We have expressed here the groomed jet radius \(R_{g}\) in terms of \(r_{g}=R_{g}/R\), which we also generalize below in Eq. (15) to include jets in \(e^{+}e^{-}\) collisions. Here, \(z_{g}\) is the energy (or \(p_{T}\)) fraction of the c-soft subjet. These moments in fact appear in precisely the same fashion as the factor of jet radius \(R\) does in the leading NP power correction in the ungroomed jet mass and jet \(p_{T}\) in Eq. (2). For the shift correction, the groomed jet mass shift is given by \[m_{J,\mathrm{sd}}^{2}=\hat{m}_{J,\mathrm{sd}}^{2}+p_{T}R_{g}\Omega_{1\kappa}^{ \mathfrak{g}}\,. \tag{7}\] Here \(\hat{m}_{J,\,\mathrm{sd}}\) is the jet mass of a reference parton level configuration. Here we are implicitly considering inclusively identified jets, where, as we will discuss below, the hard scale \(Q=p_{T}R\). Normalizing the jet mass squared by \(Q^{2}\) we find that the shift \(\delta m_{J,\mathrm{sd}}^{2}=r_{g}\Omega_{1\kappa}^{\mathfrak{g}}/Q\). Thus, Eq. (5) indicates that \(r_{g}\) must be averaged over all possible values allowed for a given jet mass measurement \(m_{J}\). Lastly, because this shifts the value of the jet mass, the corresponding shift upon Taylor-expanding appears as a derivative as shown in Eq. (3). The term in Eq. (6) appearing with a \(1/r_{g}\) factor is analogous to \(1/R\) factor with the leading hadronization effect associated with the jet \(p_{T}\) in Eq. (2). Instead of a shift in the jet mass, this effect modifies the normalization of the cross section. It is analogous to how the normalization of the jet mass spectrum changes from parton-level to hadron-level due to migration of events across \(p_{T}\) bins. Just as this effect predominantly affects the events that are at the boundary of the \(p_{T}\)-bin, the analogous correction in Eq. (6) appears in groomed jet mass when there are c-soft subjets near the "boundary" of passing/failing the soft drop condition, i.e. when \(z_{g}\approx z_{\mathrm{cut}}r_{g}^{\beta}\), such that effects of hadronization can lead to different outcomes at parton and hadron levels. Hence, in Eq. (6) an additional \(\delta\)-function for soft drop boundary is included. 
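As a purely schematic illustration of how the weights in Eqs. (5) and (6) are \(r_{g}\)-moments at fixed jet mass, the short sketch below estimates the first moment from a histogrammed doubly differential distribution, for example one filled from Monte Carlo events. The binning and the stand-in array are assumptions made only for illustration; the determination pursued in this paper is the analytic resummed computation described in the following sections.

```python
# Schematic numpy sketch: the shift weight of Eq. (5) as an r_g-moment of a binned
# doubly differential distribution d^2sigma/(dm_J^2 dr_g). All inputs are illustrative
# stand-ins (e.g. a Monte Carlo histogram), not the analytic NNLL result of this work.
import numpy as np

rg_edges = np.linspace(0.0, 1.0, 51)                 # normalized groomed jet radius bins
rg_mid = 0.5 * (rg_edges[:-1] + rg_edges[1:])
drg = np.diff(rg_edges)

# stand-in for d^2sigma/(dm_J^2 dr_g) on a grid of 40 jet mass bins x 50 r_g bins
rng = np.random.default_rng(0)
d2sigma = rng.random((40, 50))

dsigma = (d2sigma * drg).sum(axis=1)                  # dsigma/dm_J^2
dsigma_w = (d2sigma * rg_mid * drg).sum(axis=1)       # r_g-weighted version, Eq. (5)

# mean r_g at fixed jet mass; weighting with rg_mid**n instead gives the higher
# moments that appear in the underlying-event corrections discussed below
mean_rg = np.divide(dsigma_w, dsigma, out=np.zeros_like(dsigma), where=dsigma > 0)
```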
The factor of \(1/r_{g}\) can be understood by noting that the parton level values \(\hat{z}_{g}\) and \(\hat{r}_{g}\) upon hadronization are modified as \[z_{g}=\hat{z}_{g}+\frac{1}{r_{g}}\frac{\Upsilon_{1,0\kappa}^{ \otimes}}{Q}\,,\qquad\qquad\qquad r_{g}=\hat{r}_{g}-\frac{\Upsilon_{1,1\kappa }^{\otimes}}{Q}\,. \tag{8}\] The two \(\mathcal{O}(\Lambda_{\mathrm{QCD}})\) constants \(\Upsilon_{1,0\kappa}^{\otimes}\) and \(\Upsilon_{1,1\kappa}^{\otimes}\) respectively encode the shift in the c-soft subjet \(p_{T}\) and the groomed jet radius. When combined together, along with the factor of \(\beta\) resulting from differentiation of \(r_{g}^{\beta}\), we arrive at the boundary correction in Eq. (3). Finally, as a technical remark, while Eq. (6) is more intuitive to understand when expressed in terms of \(z_{g}\), in practice, we will find it simpler to compute this correction by varying the soft drop condition itself in the doubly differential cross section in Eq. (5), allowing us to recycle the calculations for the shift correction. We thus see that Eq. (3) significantly constrains the form of the leading nonperturbative corrections, which can be parameterized for a given flavor of jet in terms of 3 constants. However, for Eq. (3) to be useful in precision physics with soft drop jet mass, we are required to accurately calculate the jet mass dependent weights in perturbation theory. The accuracy with which these weights can be determined in turn determines the extent to which the NP parameters can be extracted or constrained in an analysis with real world collider data. In Ref. [70], a straightforward calculation of these weights at LL accuracy in the coherent branching formalism was presented. The factorization in Eq. (3) was tested by performing a comparison between parton and hadron level jet mass spectra in the \(e^{+}e^{-}\to q\bar{q}\) process in the dijet region simulated in the MC event generators using the LL-accurate predictions of the weights. However, while a good agreement with the predictions of factorization was found, there was no clear procedure to ascertain the perturbative uncertainty in the LL calculations. To enable more precise predictions of these weights, it was in Ref. [71] where their computation was first recast as moments of a multi-differential soft drop cross section. By dissecting the kinematic phase space of \(r_{g}\) and the \(m_{J}\), Ref. [71] identified the relevant set of effective field theories that are required for a precise computation of the doubly differential cross section in the SDOE region.6 The boundary correction in Eq. (6) was computed by considering variations in the soft drop condition in the doubly differential \(\frac{\mathrm{d}^{2}\hat{\sigma}_{s}}{\mathrm{d}m_{J}^{2}\mathrm{d}r_{g}}\) cross section. The EFT formalism enabled a systematic improvement in the computation of these weights. In Ref. [71] these moments were computed with NLL resummation including \(\mathcal{O}(\alpha_{s})\) singular matrix elements to achieve NLL\({}^{\prime}\) accuracy in the SDOE region. At the same time, the framework also enabled a systematic estimate of the perturbative uncertainty associated with these weights that was previously lacking in the LL calculations in the coherent branching framework. Footnote 6: See also [29; 24; 76] for applications of doubly differential cross sections in the context of other groomed jet observables. In this work, with the goal of providing necessary perturbative input for Eq. 
(3) for precision phenomenology, we further improve the prediction of these jet mass dependent perturbative weights. A straightforward improvement comes from employing relatively recently calculated two-loop non-cusp anomalous dimensions of certain factorization functions associated with soft drop from Ref. [77] to extend the resummation of global logarithms in the doubly differential cross section to NNLL accuracy. More crucially, we extend the calculation by matching the doubly differential cross section in the SDOE region to the ungroomed region for \(m_{J}^{2}\gtrsim QQ_{\mathrm{cut}}\) which involves calculating new contributions previously not considered in Ref. [71]. There the power corrections of the form \(m_{J}^{2}/(QQ_{\mathrm{cut}})\) were systematically dropped, which become significant close to the soft drop cusp for \(m_{J}^{2}\lesssim QQ_{\mathrm{cut}}\). These power corrections also impact the location of the soft drop cusp which differs between NLL and NNLL estimates. By consistently matching the result in the SDOE region to larger jet masses, we are able to provide the necessary perturbative input for precision phenomenology consistent with the NNLL soft drop jet mass prediction (which is also similarly matched in the ungroomed region, and in fact is simply a component of the entire doubly differential cross section). Finally, as discussed in Ref. [71], the calculation of the doubly differential cross section involves consideration of three different EFTs depending on whether the groomed jet radius is close to the minimum or maximum kinematic bounds imposed by the jet mass measurement, or in an intermediate region within these bounds. The transition from one EFT to another depends not only on the choice of jet kinematic and grooming parameters, but also on the jet mass itself. This will continue to be the case in the new calculations performed in this paper where we extend the previous result to the plain jet mass resummation region. Next, while typical values of the grooming parameters lead to strong suppression of underlying event (UE) and ISR effects, there are situations where less aggressive grooming is desirable. For example, in the case of groomed boosted top quark jets [47], a strong 10%-level grooming invalidates a simple inclusive description of the top decay, and instead a light grooming of 1%-level is desired. Likewise, for exclusively studying soft radiation and quark gluon discrimination using the collinear-drop [28; 30], combinations of light and more aggressive soft drop are employed. In such scenarios the effect of underlying event on the jet mass spectrum is no longer negligible. We show in Fig. 2 the spectrum for \(z_{\rm cut}=0.1\) and \(\beta=2\). In the SDOE region the impact of UE is even somewhat larger than hadronization. While it is impossible to predict the effects of UE from first principles, we can nevertheless attempt to phenomenologically describe these effects by making certain reasonable assumptions. The UE distribution is to a good approximation uniform in rapidity and independent of the hard scattering, such that it makes a contribution proportional to the jet area [79]. Under these assumptions, in Ref.
[80] we show that effects of ISR and UE appear as corrections associated with higher powers of groomed jet radius: \[\frac{1}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}^{\rm had+UE}}{{\rm d}m_{J}^{2}} = \frac{1}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}}{{\rm d}m_{J}^{2}}-Q\Omega_{\rm UE}^{\bullet}\frac{{\rm d}}{{\rm d}m_{J}^{2}}\Big{(}\frac{C_{1\kappa}^{(4)}(m_{J}^{2})}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}}{{\rm d}m_{J}^{2}}\Big{)}\] \[+\frac{\Upsilon_{0{\rm UE}}^{\oplus}+\beta\Upsilon_{1{\rm UE}}^{\oplus}}{Q}\frac{C_{2\kappa}^{(2)}(m_{J}^{2})}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}}{{\rm d}m_{J}^{2}}+\cdots, \tag{9}\] where \({\rm d}\sigma_{\kappa}\) (without a hat) is the hadron level cross section and the left hand side, \({\rm d}\sigma_{\kappa}^{\rm had+UE}\), is the hadron+UE level cross section.

Figure 2: For certain values of soft drop parameters, such as \(\beta=2\), the underlying event effects can be significant.

The jet mass dependent coefficients \(C_{1,2\kappa}^{(n)}(m_{J}^{2})\) are parton level \(r_{g}\)-moments of the doubly differential soft drop and boundary soft drop cross sections: \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}}C^{(n)}_{1\kappa}(m_{J}^{2}) \equiv\int\mathrm{d}r_{g}\,r_{g}^{n}\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}^{2}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}\mathrm{d}r_{g}}\,, \tag{10}\] \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}}C^{(n)}_{2\kappa}(m_{J}^{2}) \equiv\int\mathrm{d}r_{g}\mathrm{d}z_{g}\;\delta(z_{g}-z_{\mathrm{cut}}r_{g}^{\beta})\,r_{g}^{n}\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}^{3}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}\mathrm{d}r_{g}\mathrm{d}z_{g}}\,.\] The appearance of \(n=4\) for the shift correction and \(n=2\) for the boundary correction is analogous to the jet radius scaling of the impact of ISR and underlying event on the jet mass and jet \(p_{T}\) distribution respectively [8]. As an extension of the formalism for hadronization corrections, we will also compute these higher moments of \(r_{g}\) for shift and boundary corrections. The results of this work are used in complementary applications of Eqs. (3) and (9) and discussed in companion papers: Firstly, as already mentioned above, in Ref. [51] the NNLL results are employed for estimating the impact of the nonperturbative corrections on \(\alpha_{s}\)-determination in a completely model-independent approach. Secondly, in Ref. [78] these results are used for a precise calibration of MC hadronization models in event generators, investigating their interplay with parton showers, and rigorously testing the universality predictions of Eq. (3). Analysis of Ref. [78] confirms that indeed with precise calculations of the perturbative weights with reliable uncertainty estimates, a determination of these parameters with LHC data is foreseeable. Finally, in Ref. [80] the calculations of the moments relevant for UE and ISR are used for an analogous calibration of the underlying event contribution in simulations. The organization of the paper is as follows: In Sec. 2 we discuss the effective field theories required for a complete prediction of the doubly differential groomed cross section and the framework for analytical resummation. The analogous calculation for the boundary correction is described in Sec. 3. In Sec.
4 we show a comparison of the NNLL resummed results and parton shower simulations for various moments of \(r_{g}\), for both shift and boundary cross sections. We conclude in Sec. 5. In the appendices we consolidate various formulae and technical details of the computation of \(r_{g}\)-moments. ## 2 Soft drop double differential factorization In this section we describe our computation of the cross section differential in the jet mass and cumulative in the groomed jet radius. We first describe in Sec. 2.1 the measurement and kinematics of inclusive jets and the various energy scales associated with the groomed jet mass measurement. In Sec. 2.2 we summarize the various effective field theory regions and the corresponding EFT modes in Sec. 2.3. Before describing in detail the factorization formulae associated with each of these regions, we jump straight into \(\mathcal{O}(\alpha_{s})\) computation of various factorization functions Sec. 2.4. In Sec. 2.5 we describe how the results for factoriztaion functions are combined in factorization formulae. The cross sections in various regions are eventually combined together in Sec. 2.6 where we also describe the procedure for obtaining perturbative uncertainty. In the next section we discuss the boundary soft drop cross section. ### Measurement and kinematics For concreteness, our starting point is the inclusive jet measurement in hadron colliders. However, we will shortly generalize our notation to also simultaneously describe exclusive jets in \(pp\) and inclusive jets in \(e^{+}e^{-}\) collisions. In the formal limit of jet radius \(R/2\ll 1\) the cross section for the process \(pp\to\mathrm{jet}+X\), where \(X\) includes any radiation we are inclusive over factorizes as [81; 82; 83] \[\frac{\mathrm{d}^{3}\Sigma(R_{g})}{\mathrm{d}p_{T}\mathrm{d}\eta_{J} \mathrm{d}m_{J}^{2}}=\sum_{abc}\int\frac{\mathrm{d}x_{a}\mathrm{d}x_{b} \mathrm{d}z}{x_{a}x_{b}z}f_{a}(x_{a},\mu)f_{b}(x_{b},\mu)H_{ab}^{c}\Big{(}x_{a },x_{b},\eta_{J},\frac{p_{T}}{z},\mu\Big{)}\mathcal{G}_{c}(z,m_{J}^{2},R_{g},p _{T},R,\mu)\,, \tag{1}\] Here \(f_{a,b}\) are parton distribution functions which when combined with inclusive hard function \(H_{ab}^{c}\) account for the hard process leading to production of parton \(c\). This can also be generalized to describe processes with an additional vector boson \(V\), such as \(pp\to\mathrm{jet}+V+X\). The subsequent branching of the parton \(c\) to form a jet is described via the inclusive jet function \(\mathcal{G}_{c}\). In addition to the jet \(p_{T}\) and pseudorapidity \(\eta_{J}\), \(\mathcal{G}_{c}\) depends on \(z\), the momentum fraction of original parton retained in the reconstructed jet as well as (groomed) jet mass. Finally \(\mathrm{d}\Sigma(r_{g})\) refers to the cross section that is differential in the jet kinematics and the jet mass with an additional upper bound of \(R_{g}\) as the groomed jet radius. In the small jet mass limit \(m_{J}^{2}\ll p_{T}^{2}R^{2}\), the radiation in the jet is constrained to be predominantly soft and collinear, such that the inclusive jet function factorizes as \[\mathcal{G}_{c}^{\mathrm{fact.}}\big{(}z,m_{J}^{2},R_{g},p_{T},R,\mu\big{)}= \sum_{i}\mathcal{H}_{c\to i}\big{(}z,p_{T}R,\mu\big{)}\mathcal{J}_{i}(m_{J}^ {2},R_{g},p_{T},\eta_{J},R,\mu)\bigg{[}1+\mathcal{O}\Big{(}\frac{m_{J}^{2}}{ p_{T}^{2}R^{2}}\Big{)}\bigg{]}\,. 
\tag{2}\] The factorization involves separation of hard collinear modes at scales \(p_{T}R\) described by \(\mathcal{H}_{c\to i}\), and soft and collinear modes in a multiscale function \(\mathcal{J}_{i}\). As discussed in Ref. [51], in this region, the normalized cross section can be expressed as \[\frac{1}{\sigma_{\mathrm{incl}}(p_{T},\eta_{J})}\frac{\mathrm{d} ^{3}\Sigma(R_{g})}{\mathrm{d}m_{J}^{2}\mathrm{d}p_{T}\mathrm{d}\eta_{J}} \tag{3}\] \[\qquad=x_{q}(p_{T}R,\eta_{J},\mu)\,\tilde{\mathcal{G}}_{q}(m_{J} ^{2},R_{g},p_{T}R,\mu)+x_{g}(p_{T}R,\eta_{J},\mu)\tilde{\mathcal{G}}_{g}(m_{J} ^{2},R_{g},p_{T}R,\mu)\,.\] where \(x_{q,g}\) are quark/gluon fractions that are theoretically well defined, but renormalization scale \(\mu\)-dependent objects. These fractions depend on the underlying hard process and being normalized quantities are only mildly dependent on the parton distribution functions. Our focus will be on computing the piece that captures the jet mass and groomed jet radius dependence \(\tilde{\mathcal{G}}_{\kappa}\), which is interpreted as the normalized cross section for parton \(\kappa\): \[\tilde{\mathcal{G}}_{\kappa}(m_{J}^{2},R_{g},p_{T}R,\mu)\equiv\frac{1}{ \sigma_{\kappa}^{\mathrm{incl}}}\frac{\mathrm{d}\Sigma_{\kappa}(R_{g})}{ \mathrm{d}m_{J}^{2}}=N_{\mathrm{incl}}^{\kappa}(p_{T}R,\mu)\mathcal{J}_{ \kappa}(m_{J}^{2},p_{T},\eta_{J},R,\mu)\,. \tag{4}\] Here the normalization factor \(N_{\rm incl}^{\kappa}\) arises from factoring out the piece of \(\mathcal{H}_{c\to i}\) that describes the Sudakov double-logarithmic part connected with the soft-collinear factorization of \(\mathcal{G}_{c}\) in Eq. (2). The product of \(N_{\rm incl}^{\kappa}\) and \(\mathcal{J}_{\kappa}\) lead to an RG-invariant combination that can be independently studied. When considering different measurements involving groomed jet mass and groomed jet radius, only the function \(\mathcal{J}_{\kappa}\) will be modified. We will state the all-orders factorization formulae below in terms of \(\mathcal{G}_{c}\) in Eq. (2), and employ Eq. (4) for numerical implementation. #### 2.1.1 Hard kinematics The above formulation in terms of the normalized inclusive cross section \(\tilde{\mathcal{G}}_{\kappa}\) and quark/gluon fractions can be extended to exclusive, fixed number of jets with a jet veto on additional radiation, or jets in \(e^{+}e^{-}\) collisions. Apart from different definitions of the quark gluon fractions, this involves straightforward substitutions of the hard scales and kinematic and grooming parameters. Thus, it will prove worthwhile to develop a unified notation to simultaneously treat all these cases. We first define the hard scale for the following scenarios: \[Q_{\rm incl}^{pp}\equiv p_{T}R\,,\qquad Q_{\rm excl}^{pp}\equiv 2p_{T}\cosh \eta_{J}\,,\qquad Q_{\rm incl}^{e^{+}e^{-}}\equiv 2E_{J}\tan\frac{R}{2}\,, \tag{5}\] where the subscript 'incl' ('excl') indicates that the jets are identified inclusively (exclusively) and the superscript for the incoming partons involved. This way, the function \(N_{\rm incl}^{\kappa}\) only depends on the scale \(Q\). We now define a dimensionless variable \(\xi\) as a substitute for the jet mass: \[\xi\equiv\frac{m_{J}^{2}}{Q^{2}}\,. \tag{6}\] We have suppressed the superscripts and subscripts on \(Q\), and will generally follow this convention, such that the above definition of \(\xi\) can be adjusted for the three situations in Eq. (5). 
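Since the hard-scale conventions above are used throughout, a small numerical sketch of Eqs. (5) and (6) may help fix conventions; the kinematic inputs below are purely illustrative.

```python
# Minimal numeric sketch of the hard-scale and jet mass conventions of Eqs. (5)-(6),
# for the three situations considered in the text. Numerical inputs are illustrative.
import math

def hard_scale(case, pt=None, eta_J=None, R=None, E_J=None):
    """Q for inclusive pp, exclusive pp, and inclusive e+e- jets, Eq. (5)."""
    if case == "pp_incl":
        return pt * R
    if case == "pp_excl":
        return 2.0 * pt * math.cosh(eta_J)
    if case == "ee_incl":
        return 2.0 * E_J * math.tan(R / 2.0)
    raise ValueError(case)

def xi(m_J, Q):
    """Dimensionless jet mass variable xi = m_J^2 / Q^2, Eq. (6)."""
    return (m_J / Q) ** 2

# example: an inclusive pp jet with pT = 600 GeV, R = 0.8 and m_J = 60 GeV
Q = hard_scale("pp_incl", pt=600.0, R=0.8)
print(Q, xi(60.0, Q))   # Q = 480 GeV, xi ~ 0.0156
```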
Next, to describe dynamics of the soft and collinear radiation we will work with light cone coordinates defined via the light-like vectors \[n^{\mu}\equiv\zeta^{-1}\big{(}1,\vec{n}\big{)}\,,\qquad\bar{n}^{ \mu}\equiv\zeta\big{(}1,-\vec{n}\big{)}\,, \tag{7}\] where the parameter \(\zeta\) encodes a large boost in the jet direction given by: \[\zeta_{\rm incl}^{pp}\equiv\frac{R}{2\cosh\eta_{J}}\,,\qquad\zeta _{\rm excl}^{pp}\equiv 1\,,\qquad\zeta_{\rm incl}^{e^{+}e^{-}}\equiv\tan \frac{R}{2}\,. \tag{8}\] Including the boost factors in the reference light-like vectors will allow us to work with hemisphere-like coordinates and eliminate several intermediate factors involving jet radius. In terms of these light-like vectors, we will follow the following light-cone decomposition of momentum \(q^{\mu}\): \[q^{\mu}=q^{+}\frac{\bar{n}^{\mu}}{2}+q^{-}\frac{n^{\mu}}{2}+q^{ \mu}_{\perp}\,,\qquad q^{+}\equiv n\cdot q\,,\qquad q^{-}\equiv\bar{n}\cdot q\,. \tag{9}\] #### 2.1.2 Soft drop kinematics Next we define some useful variables associated with soft drop. The criteria for soft drop differs between \(e^{+}e^{-}\) and \(pp\) cases and is given by \[\frac{\min(E_{i},E_{j})}{E_{i}+E_{j}}>z_{\rm cut}\bigg{(}\sqrt{2} \frac{\sin(\theta_{ij}/2)}{\sin(R_{0}^{e^{+}e^{-}}/2)}\bigg{)}^{\beta}\,, (e^{+}e^{-}\ {\rm case})\,, \tag{10}\] \[\frac{\min(p_{T_{i}},p_{T_{j}})}{p_{T_{i}}+p_{T_{j}}}>z_{\rm cut} \bigg{(}\frac{\Delta R_{ij}}{R_{0}^{pp}}\bigg{)}^{\beta}\,, (pp\ {\rm case})\,,\] Application of soft drop introduces an additional scale \(Q_{\rm cut}\) that plays a role in distinguishing the radiation that is groomed away from the radiation that is not, which is given by \[Q_{\rm cut,\,incl}^{pp} \equiv z_{\rm cut}Q_{\rm incl}^{pp}\Big{(}\frac{R}{R_{0}^{pp}} \Big{)}^{\beta}\,, Q_{\rm cut,\,excl}^{pp} \equiv z_{\rm cut}Q_{\rm excl}^{pp}\Big{(}\frac{2\cosh\eta_{J}}{R_{0}^{pp}} \Big{)}^{\beta}\,, \tag{11}\] \[Q_{\rm cut,\,incl}^{e^{+}e^{-}} \equiv z_{\rm cut}Q_{\rm incl}^{e^{+}e^{-}}\Bigg{(}\sqrt{2}\frac{ \tan\frac{R}{2}}{\sin\frac{R_{0}^{e^{+}e^{-}}}{2}}\Bigg{)}^{\beta}\,.\] As we probe jets with smaller masses, the emissions at wider angles become progressively softer and are groomed away. Using the scale \(Q_{\rm cut}\) we can identify the jet mass transition point below which soft drop becomes active. This is simply given by \[\xi_{0}\equiv\frac{Q_{\rm cut}}{Q}\,. \tag{12}\] This point roughly defines the location of the distinct soft drop cusp, and for the reasons that will become clear below, is referred to as the "soft-collinear transition point". Higher order corrections slightly modify this value. We will see below that at \({\cal O}(\alpha_{s})\), the transition point appears at the location \[\xi_{0}^{\prime}\equiv\frac{\xi_{0}}{\big{(}1+\zeta^{2}\big{)}^{ \frac{2+\beta}{2}}}\,. \tag{13}\] We will refer to \(\xi_{0}^{\prime}\) as the soft wide-angle transition point. \(\xi_{0}^{\prime}\) corresponds to the location of the cusp for the NNLL resummed groomed jet mass spectrum. For later use we also define \[Q_{\rm cut}^{\prime}\equiv Q\xi_{0}^{\prime}=\frac{Q_{\rm cut}} {\big{(}1+\zeta^{2}\big{)}^{\frac{2+\beta}{2}}}\,. \tag{14}\] Finally, we now specify our choice of normalization for the groomed jet radius. We refer to \(R_{g}\) as the groomed jet radius which can stand for rapidity invariant angular distance in \(pp\) collisions or physical angular distance in \(e^{+}e^{-}\) collisions. 
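Before turning to the normalized groomed jet radius, here is a minimal sketch, in the same illustrative spirit as above, of the soft drop scales of Eqs. (11)-(14) for inclusive \(pp\) jets; the numerical inputs are assumptions chosen only for demonstration.

```python
import math

def zeta_pp_incl(eta_J, R):
    """Boost parameter for inclusive pp jets, Eq. (8)."""
    return R / (2.0 * math.cosh(eta_J))

def Q_cut_pp_incl(pT, R, z_cut, beta, R0):
    """Soft drop scale for inclusive pp jets, Eq. (11)."""
    return z_cut * (pT * R) * (R / R0) ** beta

# Illustrative (assumed) inputs:
pT, eta_J, R, R0, z_cut, beta = 600.0, 1.2, 0.8, 0.8, 0.1, 1.0
Q     = pT * R                                        # Eq. (5), inclusive pp
zt    = zeta_pp_incl(eta_J, R)
Qcut  = Q_cut_pp_incl(pT, R, z_cut, beta, R0)
xi0   = Qcut / Q                                      # soft-collinear transition point, Eq. (12)
xi0p  = xi0 / (1.0 + zt**2) ** ((2.0 + beta) / 2.0)   # soft wide-angle transition point, Eq. (13)
Qcutp = Q * xi0p                                      # Eq. (14)
print(f"zeta = {zt:.3f}, Q_cut = {Qcut:.1f} GeV, xi_0 = {xi0:.3f}, "
      f"xi_0' = {xi0p:.3f}, Q_cut' = {Qcutp:.1f} GeV")
```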
For simplicity, we will work with a normalized version of \(R_{g}\) to simultaneously treat these two cases defined by \[r_{g}^{pp}\equiv\frac{R_{g}}{R}\,,\qquad r_{g}^{e^{+}e^{-}}\equiv \frac{\tan\frac{R_{g}}{2}}{\tan\frac{R}{2}}\,. \tag{15}\] Here the same definitions apply for both inclusive and exclusive cases in \(pp\). In this normalization, \(r_{g}\) can at most be 1, which corresponds to scenario where grooming does not eliminate any radiation from the jet. As detailed in Ref. [71], the measurement of jet mass \(m_{J}^{2}\), or equivalently \(\xi\), puts kinematic bounds on the groomed jet radius, such that \[\text{Simultaneous }\xi,\,r_{g}\text{ measurement:}\qquad r_{g}^{\text{min}}(\xi)\leq r_{g}\leq r_{g}^{\text{max}}(\xi)\,, \tag{16}\] where \[r_{g}^{\text{min}}(\xi)=\sqrt{\xi}\,,\qquad r_{g}^{\text{max}}(\xi)\approx \min\left\{\left(\frac{\xi}{\xi_{0}}\right)^{\frac{1}{2+\beta}},1\right\}. \tag{17}\] The \(r_{g}^{\text{min}}\) arises from the kinematic bound imposed by the jet mass measurement: for a given jet mass \(\xi\), it is impossible to squeeze the radiation into a jet of radius smaller than \(r_{g}^{\text{min}}(\xi)\). The maximum bound has two terms. For \(\xi<\xi_{0}\), we are in the region where grooming is active. Here the combination of the groomed jet mass measurement and requirement that the radiation pass grooming leads to the first of the two upper bounds shown. For \(\xi>\xi_{0}\), the radiation can be close to the jet boundary and still pass the groomer, and hence the upper bound \(r_{g}^{\text{max}}(\xi)\) saturates to 1. Here we have stated an approximate version of \(r_{g}^{\text{max}}\) that was derived in Ref. [71] in the limit \(\xi\ll\xi_{0}\). We provide the formula compatible with the NNLL transition point below in Eq. (17). We also see that for \(\xi<1\), \(r_{g}^{\text{min}}(\xi)<r_{g}^{\text{max}}(\xi)\). ### Effective theory regions The measurement of jet mass, groomed jet radius and application of soft drop introduce several energy scales which can become hierarchical in certain regions of the \(\xi\)-\(r_{g}\) phase space and consequently induce logarithmic singularities. We now list down the various regions that require a specialized effective field theory based treatment. We first consider the single differential groomed jet mass measurement and enumerate the various factorization regimes that are relevant. 1. Fixed order region: This corresponds to the region when \(\xi\lesssim 1\). Here the fixed order treatment of the (groomed) jet mass differential cross section suffices. 2. Plain jet mass resummation region: This is the region where \(\xi_{0}<\xi\ll 1\). Here \(\xi\ll 1\) implies that the jet mass \(m_{J}\) is hierarchically smaller than the hard scale \(Q\). This results in dominance of the soft-collinear modes leading to Sudakov double logarithms, \(\alpha_{s}^{n}\text{ln}^{k}\xi\), with \(k\leq 2n\). This region can be described by the standard soft-collinear Sudakov factorization for plain jet mass. The condition that \(\xi>\xi_{0}\) implies that effects of grooming alone can be treated in fixed order perturbation theory. 3. Soft drop resummation region: As we discussed above, the effects of grooming on the spectrum become visible for \(\xi<\xi_{0}\). When we move to yet lower jet masses for \(\xi\ll\xi_{0}\), additional logarithms related to soft drop involving the ratio \(\xi/\xi_{0}\) become large and require resummation. 
This results in an additional factorization of the soft radiation beyond the one mentioned above in plain jet mass resummation. Next, as discussed in Ref. [71], three different effective theory regimes related to simultaneous measurement of \(r_{g}\) and \(\xi\) arise:7 Footnote 7: In Ref. [71] these regimes were respectively referred to as large-\(R_{g}\), intermediate-\(R_{g}\) and small-\(R_{g}\) cases. We avoid using labels ‘small’ and ‘large’ to avoid any confusion related to numerical size of \(R_{g}\). 1. Max-\(R_{g}\) regime: In this region the groomed jet radius is close to the maximum possible value, such that \[\text{Max-}R_{g}\text{ regime:}\qquad r_{g}^{\text{min}}(\xi)\ll r_{g}\lesssim r _{g}^{\text{max}}(\xi)\,.\] (18) As we will see below, the factorization in this regime is identical to that of the single differential groomed jet mass, and the additional measurement of \(r_{g}\) can be treated as a fixed order correction. Note that unlike Ref. [71] we do not impose an additional constraint of \(r_{g}^{\text{max}}(\xi)\ll 1\) and also include the situation \(r_{g}^{\text{max}}(\xi)\lesssim 1\) which arises in the plain jet mass region. We also note that this regime is also relevant for the leading hadronization corrections. Here the collinear-soft modes that pass grooming lie at maximal possible angular separation from the collinear core, and hence the leading two-pronged configurations relevant for Eq. (3) arise specifically in this regime. 2. Intermediate-\(R_{g}\) regime: This regime is relevant when the groomed jet radius is hierarchically separated from both minimum and maximum kinematic bounds: \[\text{Intermediate-}R_{g}\text{ regime:}\qquad r_{g}^{\text{min}}(\xi)\ll r_{g} \ll r_{g}^{\text{max}}(\xi)\,.\] (19) Figure 3: Three regimes relevant for doubly differential groomed jet radius and groomed jet mass measurement for \(\beta=0,1,2\). The region to the left of the vertical line \(\xi=\xi_{0}\) is the soft drop resummation region, and to the right is the plain jet mass resummation region. The boundaries of the shaded region denote the kinematic bounds imposed on groomed jet radius by the jet mass measurement. Depending on the values of grooming parameters, a third intermediate-\(R_{g}\) regime can become relevant. Depending on the precise values of the hard scale and grooming parameters, this regime may or may not be relevant. However, because of the two hierarchies, describing this regime leads to the most factorized version of the doubly differential cross section. As a result, the intermediate regime cross section can be used as an efficient tool to subtract the singular pieces and define a stable prescription for incorporating fixed order power corrections in the entire double differential spectrum. 3. Min-\(R_{g}\) regime: This regime arises when the groomed jet radius is close to the lower bound: \[\text{Min-}R_{g}\text{ regime:}\qquad r_{g}^{\text{min}}(\xi)\lesssim r_{g} \ll r_{g}^{\text{max}}(\xi)\,.\] (20) This physically corresponds to a scenario where the jet is filled uniformly with hard collinear radiation, with a haze of soft radiation at the same angular scales. If we fix the \(r_{g}\) and vary the jet mass, this region in fact corresponds to the end-point of the jet mass spectrum. For this reason, the factorization in this regime has resemblance with the fixed-order region of the singly differential jet mass spectrum. This feature of this regime was utilized in Ref. 
[51] for incorporating jet mass related power corrections in the single differential jet mass spectrum. We show the various regimes discussed above in Fig. 3 for jets with LHC kinematics and \(\beta=0\) and \(\beta=1\). The vertical line at \(\xi=\xi_{0}\) separates the soft drop and plain jet mass resummation regions. The boundary of the colored region is the kinematic bound on \(r_{g}\) given in Eq. (17). We see that for \(\beta=0\), the intermediate-\(R_{g}\) regime is absent whereas this regime covers a substantial phase space for \(\beta=2\) in the soft drop resummation region. The precise formulae for demarcating these regions are discussed in App. D. ### Effective theory modes From above discussion we learn that there are two jet mass regions and three regimes corresponding to groomed jet radius measurements that require resummation. It is instructive to represent the measurements and the modes in the (primary) Lund plane of a soft/collinear emission off the jet-initiating fast parton as shown in Fig. 4. Here \(z\) is the momentum fraction of the emission and \(\theta\) is the angle it makes relative to the jet axis. The choice of axes in Fig. 4 implies that emissions to the right are increasingly collinear and those higher are increasingly softer. These schematic figures are extremely useful in identifying the relevant effective theory modes that appear at the intersections of various measurements and constraints imposed on the jet. We first describe how various measurements are displayed in Fig. 4. The black vertical line labeled \(\theta=R\) denotes the boundary of the jet and the gray region corresponds to radiation outside the jet. Emissions on the blue line with negative slope labeled \(p^{+}Q=m_{J}^{2}\) contribute (in combination with the fast massless jet-initiating parton) jet mass of \(m_{J}^{2}\). Thus, increasing the jet mass corresponds to moving this line downwards. The soft drop condition is given by the dashed black line with positive slope. The groomed jet radius measurement is given by the orange vertical line labeled \(\theta=R_{g}\). Emissions that are vetoed by soft drop are the ones that are encountered at angles larger than \(R_{g}\) and are shown in yellow shaded region. Hence, the columns from left to right correspond to max-\(R_{g}\), intermediate-\(R_{g}\) and min-\(R_{g}\) regimes respectively. Finally, a given jet mass and groomed jet radius measurement (along with jet radius and soft drop constraints) excludes any emissions that are harder shown in the hatched region. The cases in the top row where the jet mass lies in the soft drop resummation region were already considered in Ref. [71]. The cases in the bottom row are new and correspond to larger jet masses in the plain jet mass resummation region. In the bottom left plot we show the completely ungroomed case when the \(R_{g}=R\). However, this plot also includes the scenario where \(R_{g}\lesssim R\), and where the effects of soft drop do not require any further factorization. We notice that the relevant EFT modes appear at the intersections of two or more measurement or veto conditions. 1. _Collinear modes_: Modes on the \(x\)-axis represent collinear radiation with \(z\sim 1\) emitted by the fast parton at the core of the jet. In the scenarios there is a softer radiation at wider angles that stops the groomer, the jet mass measurement imposed on the collinear radiation at the center of the jet is essentially inclusive denoted by \(C\). 
On the other hand, (hard-)collinear modes denoted by \(N\) and \(\mathcal{C}\) respectively see the jet radius and groomed jet radius boundary. Figure 4: Effective theory modes appearing in various regimes for the double differential measurement. The solid yellow shaded region is where radiation is groomed away. The rectangular gray region corresponds to radiation falling outside the jet. The hatched region is the radiation vetoed by the combination of grooming, jet mass and groomed jet radius measurements. The top row corresponds to jet masses in the soft drop resummation region, whereas the bottom row describes scenarios in the plain jet mass resummation region. 2. _Wide-angle soft modes_: Modes at wider angles on the \(y\)-axis will naturally encounter the groomer first. For jet masses in the top row of Fig. 4, the radiation at the jet boundary must necessarily be groomed away, else we would have found a larger value of the jet mass. The physics associated with this radiation can be factorized in the soft drop resummation region when \(\xi\ll\xi_{0}\), and is described by the global soft mode \(S_{G}\). On the other hand, in the plain jet mass region, for \(r_{g}\lesssim 1\), the wide-angle radiation is energetic enough to pass soft drop and thus we do not encounter this mode, as shown in the bottom left case. Here, the wide-angle mode \(S_{\rm plain}\) is the same as in the plain jet mass resummation. These two modes only differ in their relative energy and the combination of measurements and vetoes they see. 3. _Collinear-soft modes_: Finally, we have modes that have simultaneously \(z\ll 1\) and \(\theta\ll 1\). These collinear-soft modes are distinguished from each other via the role they play in the entire measurement. The CS mode is the same as the one that appears in the single differential jet mass measurement in the soft drop resummation region. It has the largest possible angle and energy that saturate the soft drop condition. When \(r_{g}<r_{g}^{\rm max}(\xi)\), we denote the soft radiation that stops soft drop as \(\mathrm{CS}_{g}\). However, as we can see, the \(\mathrm{CS}_{g}\) mode does not carry sufficient energy to result in the jet mass shown. Thus we include another mode \(\mathrm{CS}_{m}\) which lies at similar angular scales but is more energetic, as required by the jet mass measurement. The momentum scalings of the various modes in the light cone coordinates of Eq. (9) are shown in Tab. 1.
\begin{table} \begin{tabular}{c|l|c} \hline \hline Mode & Description & Scaling \\ \hline \(N\) & Hard-collinear mode at jet boundary & \(Q\big{(}1,\,1,\,1\big{)}\) \\ \hline \(\mathcal{C}\) & Hard-collinear mode within \(R_{g}\) & \(Qr_{g}\big{(}r_{g},\,\frac{1}{r_{g}},\,1\big{)}\) \\ \hline \(C\) & Collinear mode for inclusive jet mass & \(Q\sqrt{\xi}\big{(}\sqrt{\xi},\,\frac{1}{\sqrt{\xi}},\,1\big{)}\) \\ \hline \(S_{\rm plain}\) & Wide-angle soft & \(Q\xi\big{(}1,\,1,\,1\big{)}\) \\ \hline \(S_{G}\) & Global soft & \(Q_{\rm cut}\big{(}1,\,1,\,1\big{)}\) \\ \hline \(\mathrm{CS}_{g}\) & Collinear-soft mode for groomed jet radius & \(Q_{\rm cut}r_{g}^{1+\beta}\big{(}r_{g},\,\frac{1}{r_{g}},\,1\big{)}\) \\ \hline \(\mathrm{CS}_{m}\) & Collinear-soft mode for jet mass & \(\frac{Q\xi}{r_{g}}\big{(}r_{g},\,\frac{1}{r_{g}},\,1\big{)}\) \\ \hline CS & Collinear-soft mode at maximum \(R_{g}\) & \(\frac{Q\xi}{r_{g}^{\rm max}(\xi)}\bigg{(}r_{g}^{\rm max}(\xi),\,\frac{1}{r_{g}^{\rm max}(\xi)},\,1\bigg{)}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Scalings of the various modes for the soft drop double differential cross section following the light cone decomposition in Eq. (9). Including jet radius related factors in the reference light cone vectors results in hemisphere-jet-like kinematics. The third argument also indicates the virtuality of these modes. ### One-loop results of factorization functions Before we state the factorization formulae for each of the cases discussed above, we will perform one-loop calculations of the corresponding soft and collinear factorization functions to gain familiarity with the details of the measurements associated with each of the modes in Fig. 4. #### 2.4.1 Matrix elements and measurement functions At \(\mathcal{O}(\alpha_{s})\), the jet-initiating parton \(i^{*}\) splits as \(i^{*}\to jk\). It will prove useful to parameterize the light cone coordinates in the decomposition in Eq. (9) in terms of dimensionless numbers: \[\text{Soft functions}:\qquad y\equiv\frac{q_{j}^{+}}{Q}\,,\qquad x\equiv\frac{q_{j}^{-}}{Q}\,, \tag{2.21}\] \[\text{Collinear functions}:\qquad y\equiv\frac{p_{i^{*}}^{2}}{Q^{2}}\,,\qquad x\equiv\frac{q_{j}^{-}}{Q}\,,\] where \(j\) is the softer of the two final state partons. Here \(p_{i^{*}}^{2}=s=(p_{j}+p_{k})^{2}\). All the soft functions associated with the wide-angle soft and collinear-soft modes involve the same eikonal matrix element with a single-particle phase space, whereas those associated with collinear modes involve the full splitting functions and a two-particle phase space.
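To make the parametric scalings of Tab. 1 concrete, the following sketch evaluates the light cone components \((p^{+},p^{-},p_{\perp})\) of each mode for one illustrative phase space point; the inputs, and the use of the approximate \(r_{g}^{\max}\) of Eq. (17), are assumptions for demonstration only.

```python
import math

def mode_scalings(Q, Qcut, xi, rg, beta, rg_max):
    """Parametric (p+, p-, p_perp) scalings of the EFT modes in Tab. 1."""
    sq = math.sqrt(xi)
    scale = lambda A, r: (A * r, A / r, A)      # shorthand for A * (r, 1/r, 1)
    return {
        "N":       (Q, Q, Q),
        "C_hc":    scale(Q * rg, rg),           # hard-collinear within R_g
        "C":       scale(Q * sq, sq),           # collinear for inclusive jet mass
        "S_plain": (Q * xi, Q * xi, Q * xi),
        "S_G":     (Qcut, Qcut, Qcut),
        "CS_g":    scale(Qcut * rg ** (1 + beta), rg),
        "CS_m":    scale(Q * xi / rg, rg),
        "CS":      scale(Q * xi / rg_max, rg_max),
    }

# Illustrative (assumed) point in the intermediate-R_g regime:
Q, z_cut, beta = 480.0, 0.1, 1.0
Qcut = z_cut * Q
xi, rg = 4e-3, 0.2
xi0 = Qcut / Q
rg_max = min((xi / xi0) ** (1.0 / (2.0 + beta)), 1.0)       # Eq. (17)
for name, (pp, pm, pperp) in mode_scalings(Q, Qcut, xi, rg, beta, rg_max).items():
    print(f"{name:8s} p+ ~ {pp:9.3f}  p- ~ {pm:9.2f}  p_perp ~ {pperp:9.3f} GeV")
```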
For a generic observable \(\mathcal{O}\) including any veto conditions \(\Theta\), the corresponding soft and collinear functions renormalized in \(\overline{\text{MS}}\)-scheme at \(\mathcal{O}(\alpha_{s})\) are then given by \[\mathcal{J}_{\kappa}^{[1]}\big{(}\mathcal{O},\delta_{\mathcal{O}},\mu\big{)} \equiv\frac{\alpha_{s}}{2\pi}\frac{e^{\epsilon\gamma_{E}}}{\Gamma(1 -\epsilon)}\Big{(}\frac{\mu^{2}}{Q^{2}}\Big{)}^{\epsilon}\int_{0}^{1}\frac{ \mathrm{d}x\,\mathrm{d}y}{y^{1+\epsilon}\big{[}x(1-x)\big{]}^{\epsilon}}\, \delta_{\mathcal{O}}\big{[}\hat{\mathcal{O}}(x,y),\hat{\Theta}(x,y)\big{]} \sum_{\kappa^{\prime}}\hat{P}_{\kappa^{\prime}\kappa}(x)\,,\] \[\mathcal{S}_{\kappa}^{[1]}\big{(}\mathcal{O},\delta_{\mathcal{O} },\mu\big{)} \equiv\frac{\alpha_{s}C_{\kappa}}{\pi}\frac{e^{\epsilon\gamma_{E}} }{\Gamma(1-\epsilon)}\Big{(}\frac{\mu^{2}}{Q^{2}}\Big{)}^{\epsilon}\int_{0}^{ \infty}\frac{\mathrm{d}x\,\mathrm{d}y}{(xy)^{1+\epsilon}}\,\delta_{\mathcal{O} }\big{[}\hat{\mathcal{O}}^{(s,cs)}(x,y),\hat{\Theta}^{(s,cs)}(x,y)\big{]}\,, \tag{2.22}\] where \(\delta_{\mathcal{O}}\) combines the measurement and veto conditions on the two parton system and also includes the virtual contribution. We have suppressed dependence on kinematic and grooming parameters for simplicity. The splitting functions are given by \[\hat{P}_{gq}(x) =C_{F}\bigg{[}\frac{1+(1-x)^{2}}{x}-\epsilon x\bigg{]}\,,\] \[\hat{P}_{gg}(x) =C_{A}\bigg{[}\frac{x}{1-x}+\frac{1-x}{x}+x(1-x)\bigg{]}\,,\] \[\hat{P}_{qg}(x) =n_{f}T_{F}\bigg{[}x^{2}+(1-x)^{2}-2\epsilon x(1-x)\bigg{]}\,. \tag{2.23}\] In the soft matrix elements, we additionally take soft or collinear-soft limit of the observable and the veto condition as indicated by the superscript \((s,cs)\): \[\text{Soft limit}:\qquad\qquad\qquad\qquad x\to 0\,,\qquad \qquad x\sim y\,, \tag{2.24}\] \[\text{Collinear-soft limit}:\qquad\qquad\qquad x\to 0\,,\qquad \qquad\frac{y}{x}\to 0\,.\] Thus, factorization functions associated with various modes differ only in the details of the measurement and veto conditions and some of the modes may only involve veto conditions. For the cases we are interested in, we will only consider the jet mass measurement, which is the same for both collinear and soft matrix elements, and is simply given by \[\hat{\xi}(x,y)=y\,. \tag{25}\] Next, we have the jet radius, groomed jet radius and soft drop constraints. The jet radius constraint corresponds to the \(k_{T}\) clustering condition: \[\hat{\Theta}_{k_{T}}(x,y)\equiv\Theta\big{(}x(1-x)-y\big{)}\,. \tag{26}\] Note that the jet radius does not explicitly appear in the above constraint as we have absorbed the factor \(\zeta\) defined in Eq. (8) in the definitions of the light cone coordinates. Similarly, it is easy to check the condition that the two partons are within the groomed jet radius is given by \[\hat{\Theta}_{r_{g}}(x,y,r_{g})\equiv\Theta\big{(}r_{g}^{2}x(1-x)-y\big{)}\,. \tag{27}\] Finally, the full soft drop condition in Eq. (10) in terms of the variables \(x\) and \(y\) is given by \[\hat{\Theta}_{\rm sd}(x,y,\xi_{0},\zeta)=\Theta\bigg{(}\frac{\min\{x_{1},x_{2 }\}}{x_{1}+x_{2}}-\xi_{0}\Big{(}\frac{y}{4x_{1}x_{2}}\Big{)}^{\frac{\beta}{2} }\bigg{)}\,, \tag{28}\] where \[x_{1}(x,y)=\frac{xy\zeta^{2}}{2}+\frac{1-x}{2}\,,\qquad x_{2}(x,y)=\frac{(1-x )y\zeta^{2}}{2}+\frac{x}{2}\,, \tag{29}\] and we have suppressed dependence on \(\beta\) for simplicity. 
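As a cross-check of these definitions, a small sketch implementing the clustering, groomed jet radius and soft drop constraints of Eqs. (26)-(29) as functions of \((x,y)\) is given below; the phase space point and grooming parameters at the end are arbitrary illustrative choices.

```python
def theta_kT(x, y):
    """Jet (kT clustering) constraint of Eq. (26): Theta(x(1-x) - y)."""
    return x * (1.0 - x) > y

def theta_rg(x, y, rg):
    """Groomed jet radius constraint of Eq. (27): Theta(rg^2 x(1-x) - y)."""
    return rg**2 * x * (1.0 - x) > y

def theta_sd(x, y, xi0, zeta, beta):
    """Full soft drop condition of Eqs. (28)-(29)."""
    x1 = x * y * zeta**2 / 2.0 + (1.0 - x) / 2.0
    x2 = (1.0 - x) * y * zeta**2 / 2.0 + x / 2.0
    return min(x1, x2) / (x1 + x2) > xi0 * (y / (4.0 * x1 * x2)) ** (beta / 2.0)

# Arbitrary illustrative phase space point and grooming parameters (assumed):
x, y, rg, xi0, zeta, beta = 0.3, 0.05, 0.4, 0.1, 0.2, 1.0
print(theta_kT(x, y), theta_rg(x, y, rg), theta_sd(x, y, xi0, zeta, beta))
```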
The corresponding soft and collinear-soft limit of these results read: soft particle clustered with the jet: \[\hat{\Theta}_{k_{T}}^{(s,cs)}(x,y) \equiv \Theta(x-y)\,,\] (30) soft emission at an angle \[<r_{g}\] : \[\hat{\Theta}_{r_{g}}^{(s,cs)}(x,y,r_{g}) \equiv \Theta(r_{g}^{2}x-y)\,,\] collinear-soft emission passes soft drop: \[\hat{\Theta}_{\rm sd}^{(cs)}(x,y,\xi_{0}) \equiv \Theta\big{(}x-y^{\frac{\beta}{2+\beta}}\xi_{0}^{\frac{2}{2+ \beta}}\big{)}\,.\] soft emission passes soft drop: \[\hat{\Theta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta) \equiv \Theta\big{(}x+y\zeta^{2}-y^{\frac{\beta}{2+\beta}}\xi_{0}^{\frac {2}{2+\beta}}\big{)}\,,\] Having compiled the various measurement and veto functions, we now describe how they are combined in the functions \(\delta_{\mathcal{O}}\) appearing in Eq. (22). #### 2.4.2 Hard-collinear radiation outside the jet We first state the normalization contribution of the hard-collinear near the boundary of the jet as calculated in Refs. [51; 84]: \[N_{\rm incl}^{q}(Q,\mu) =1+\frac{\alpha_{s}(\mu)C_{F}}{2\pi}\Bigg{\{}-\frac{1}{2}{\rm ln}^ {2}\frac{\mu^{2}}{Q^{2}}-\frac{3}{2}{\rm ln}\frac{\mu^{2}}{Q^{2}}-\frac{13}{2} +\frac{3\pi^{2}}{4}\Bigg{\}}\,, \tag{31}\] \[N_{\rm incl}^{g}(Q,\mu) =1+\frac{\alpha_{s}(\mu)}{2\pi}\Bigg{\{}-\frac{C_{A}}{2}{\rm ln}^ {2}\frac{\mu^{2}}{Q^{2}}-\frac{\beta_{0}}{2}{\rm ln}\frac{\mu^{2}}{Q^{2}}+C_{ A}\Big{(}-\frac{5}{12}+\frac{3\pi^{2}}{4}\Big{)}-\frac{23}{12}\beta_{0} \Bigg{\}}\,.\] #### 2.4.3 Inclusive jet mass measurement on collinear radiation We now consider the simplest case of inclusive ungroomed jet mass, which is relevant for the collinear mode \(C\) in the max- and intermediate-\(R_{g}\) cases. Here the measurement function is given by \[\delta_{\xi}^{C}(x,y)=\delta(\xi-y)-\delta(y)\,. \tag{32}\] such that using Eq. (22) the inclusive jet function at NLO is given by \[J_{\kappa}^{[1]}(m_{J}^{2},\mu)=\delta(m_{J}^{2})+\frac{1}{Q^{2}}\mathcal{J}_ {\kappa}^{[1]}\bigg{(}\xi=\frac{m_{J}^{2}}{Q^{2}},\delta_{\xi}^{C},\mu\bigg{)} +\mathcal{O}(\alpha_{s}^{2})\,. \tag{33}\] We have expressed the jet function in terms of its natural argument, and will do the same for all the factorization functions below. The results read [85; 86] \[J_{q}(m_{J}^{2},\mu) =\delta(m_{J}^{2})+\frac{\alpha_{s}(\mu)C_{F}}{\pi}\bigg{(} \mathcal{L}_{1}(m_{J}^{2},\mu^{2})-\frac{6}{8}\mathcal{L}_{0}(m_{J}^{2},\mu^{ 2})-\frac{\delta(m_{J}^{2})}{4}\Big{(}\pi^{2}-7\Big{)}\bigg{)}\,, \tag{34}\] \[J_{g}(m_{J}^{2},\mu) =\delta(m_{J}^{2})+\frac{\alpha_{s}(\mu)}{\pi}\bigg{(}\frac{C_{A} }{\pi}\mathcal{L}_{1}(m_{J}^{2},\mu^{2})-\frac{\beta_{0}}{4}\mathcal{L}_{0}(m _{J}^{2},\mu^{2})\] \[\qquad\qquad\qquad\qquad+\frac{\delta(m_{J}^{2})}{36}\Big{(}C_{A }\big{(}67-9\pi^{2}\big{)}-20n_{F}T_{R}\Big{)}\bigg{)}\,,\] where \(\mathcal{L}_{n}\) is the standard plus function: \[\mathcal{L}_{n}(\ell^{+},\mu)\equiv\frac{1}{\mu}\mathcal{L}_{n}\bigg{(}\frac {\ell^{+}}{\mu}\bigg{)}\,. 
\tag{35}\] For later use we also state the result for the Laplace transform defined by \[\tilde{J}_{\kappa}(x,\mu)\equiv\int_{0}^{\infty}{\rm d}m_{J}^{2}\,e^{-xm_{J}^ {2}}\,J_{\kappa}(m_{J}^{2},\mu)\,, \tag{36}\] such that \[\tilde{J}_{q}(x,\mu) =1+\frac{\alpha_{s}C_{F}}{\pi}\left[\frac{1}{2}\log^{2}\big{(}x \,\mu^{2}e^{\gamma_{E}}\big{)}+\frac{3}{4}\log\big{(}x\,\mu^{2}e^{\gamma_{E}} \big{)}+\frac{7}{4}-\frac{\pi^{2}}{6}\right]\,, \tag{37}\] \[\tilde{J}_{g}(x,\mu) =1+\frac{\alpha_{s}}{\pi}\left[\frac{C_{A}}{2}\log^{2}\big{(}x\, \mu^{2}e^{\gamma_{E}}\big{)}+\frac{\beta_{0}}{4}\log\big{(}x\mu^{2}e^{\gamma_{ E}}\big{)}+C_{A}\left(\frac{67}{36}-\frac{\pi^{2}}{6}\right)-n_{F}T_{R}\,\frac{5}{9} \right]\,.\] #### 2.4.4 Hard-collinear radiation within groomed jet radius Next, we turn to the hard-collinear mode \(\mathcal{C}\) which involves an additional constraint of groomed jet radius with jet mass measurement: \[\delta_{\xi}^{\mathcal{C}}(x,y,r_{g})\equiv\hat{\Theta}_{r_{g}}(x,y,r_{g}) \big{[}\delta(\xi-y)-\delta(\xi)\big{]}\,, \tag{38}\] where \(\hat{\Theta}_{r_{g}}\) was defined in Eq. (27). The corresponding collinear function is then given by \[\mathcal{C}^{\kappa}\big{(}\xi,r_{g},Q,\mu\big{)}\equiv\frac{1}{r_{g}^{2}} \mathcal{C}^{\kappa}\bigg{(}\frac{\xi}{r_{g}^{2}},Qr_{g},\mu\bigg{)}=\delta( \xi)+\mathcal{J}_{\kappa}^{[1]}\big{(}\xi,\delta_{\xi}^{\mathcal{C}},\mu\big{)} +\mathcal{O}(\alpha_{s}^{2})\,. \tag{39}\] with explicit expressions of the function with three arguments being [71, 51], \[\mathcal{C}^{q}\big{(}\xi,Q,\mu\big{)} =\delta(\xi)+\frac{\alpha_{s}(\mu)C_{F}}{\pi}\Bigg{\{}\delta(\xi) \bigg{(}\frac{1}{4}\text{ln}^{2}\frac{\mu^{2}}{Q^{2}}+\frac{3}{4}\text{ln} \frac{\mu^{2}}{Q^{2}}+\frac{7}{4}-\frac{5\pi^{2}}{24}\bigg{)} \tag{40}\] \[\qquad+\Theta\big{(}1-4\xi\big{)}\Bigg{[}-\mathcal{L}_{1}(\xi)+ \mathcal{L}_{0}(\xi)\bigg{(}2\text{ln}\Big{(}\frac{1+\sqrt{1-4\xi}}{2}\Big{)} -\frac{3}{4}\sqrt{1-4\xi}\bigg{)}\Bigg{]}\Bigg{\}}\,,\] \[\mathcal{C}^{g}\big{(}\xi,Q,\mu\big{)} =\delta(\xi)+\frac{\alpha_{s}(\mu)}{\pi}\Bigg{\{}\delta(\xi) \bigg{(}\frac{C_{A}}{4}\text{ln}^{2}\frac{\mu^{2}}{Q^{2}}+\frac{\beta_{0}}{4} \text{ln}\frac{\mu^{2}}{Q^{2}}+C_{A}\Big{(}\frac{67}{36}-\frac{5\pi^{2}}{24} \Big{)}-\frac{5}{9}n_{f}T_{F}\bigg{)}\] \[\qquad+\Theta\big{(}1-4\xi\big{)}\Bigg{[}-C_{A}\mathcal{L}_{1}( \xi)+\mathcal{L}_{0}(\xi)\bigg{(}2C_{A}\log\Big{(}\frac{1+\sqrt{1-4\xi}}{2} \Big{)}\] \[\qquad-\frac{\beta_{0}}{4}\sqrt{1-4\xi}+\frac{C_{A}-2n_{f}T_{F}}{ 6}\xi\sqrt{1-4\xi}\bigg{)}\Bigg{]}\Bigg{\}}\,.\] #### 2.4.5 Collinear-soft radiation within groomed jet radius We now consider soft functions and consider the simplest case of \(\text{CS}_{m}\) mode with jet mass measurement and groomed jet radius boundary. The measurement function is simply a soft limit of the previous case and is defined as: \[\delta_{\xi}^{\text{CS}_{m}}(x,y,r_{g})\equiv\hat{\Theta}_{r_{g}}^{(cs)}\big{(} x,y,r_{g}\big{)}\big{[}\delta(x-y)-\delta(\xi)\big{]}\,, \tag{41}\] with the corresponding function given by \[S_{c_{m}}^{\kappa}\bigg{(}\frac{\ell^{+}}{r_{g}},\mu\bigg{)}=\delta\Big{(} \frac{\ell^{+}}{r_{g}}\Big{)}+\frac{1}{Q}\mathcal{S}_{\kappa}^{[1]}\bigg{(} \xi=\frac{\ell^{+}}{Q},\delta_{\xi}^{\text{CS}_{m}},\mu\bigg{)}+\mathcal{O}( \alpha_{s}^{2})\,. 
\tag{42}\] which yields \[S_{c_{m}}^{\kappa}\bigg{(}\frac{\ell^{+}}{r_{g}},\mu\bigg{)}=\delta\bigg{(}\frac{\ell^{+}}{r_{g}}\bigg{)}+\frac{\alpha_{s}C_{\kappa}}{2\pi}\Bigg{[}-4\mathcal{L}_{1}\bigg{(}\frac{\ell^{+}}{r_{g}},\mu\bigg{)}+\frac{\pi^{2}}{12}\delta\bigg{(}\frac{\ell^{+}}{r_{g}}\bigg{)}\Bigg{]}\,. \tag{43}\] We next define the Laplace transform: \[\tilde{S}_{c_{m}}(u,\mu)\equiv\int_{0}^{\infty}\mathrm{d}\ell^{+}\,e^{-u\ell^{+}}\,S_{c_{m}}^{\kappa}(\ell^{+},\mu)\,, \tag{44}\] which yields \[\tilde{S}_{c_{m}}^{\kappa}\big{[}u,\mu\big{]}=1+\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{[}-\log^{2}\big{(}ue^{\gamma_{E}}\mu\big{)}-\frac{\pi^{2}}{8}\bigg{]}\,. \tag{45}\] #### 2.4.6 Wide-angle soft radiation failing soft drop The next case we consider is the global soft mode \(S_{G}\) that fails soft drop. Here we only have a veto condition and no measurement: \[\hat{\Theta}^{S_{G}}(x,y,\xi_{0},\zeta)\equiv\hat{\Theta}^{(s)}_{k_{T}}(x,y)\big{(}1-\hat{\Theta}^{(s)}_{\mathrm{sd}}(x,y,\xi_{0},\zeta)\big{)}\,, \tag{46}\] such that \[S_{G}^{\kappa}\big{(}Q_{\mathrm{cut}},\zeta,\beta,\mu\big{)}=1+\mathcal{S}_{\kappa}^{[1]}\big{(}\cdot,\hat{\Theta}^{S_{G}},\mu\big{)}+\mathcal{O}(\alpha_{s}^{2})\,. \tag{47}\] Here the empty slot in the first argument simply denotes that the function has no differential measurement applied on it and only contributes to the normalization. At one loop the result reads [83; 87] \[S_{G}^{\kappa}\big{(}Q_{\mathrm{cut}},\zeta,\beta,\mu\big{)}=1+\frac{\alpha_{s}(\mu)C_{\kappa}}{\pi}\bigg{[}\frac{1}{(1+\beta)}\log^{2}\Big{(}\frac{\mu}{Q_{\mathrm{cut}}}\Big{)}-\frac{\pi^{2}}{24}\Big{(}\frac{1}{1+\beta}\Big{)} \tag{48}\] \[-\frac{(2+\beta)}{4}\Big{(}2\mathrm{Li}_{2}\Big{[}\frac{\zeta^{2}}{1+\zeta^{2}}\Big{]}+\log^{2}\big{[}1+\zeta^{2}\big{]}\Big{)}\bigg{]}\,.\] Note that the above global soft function is really only valid for the inclusive jet measurement. For exclusive measurements, one additionally needs to include contributions from the beam region and other trigger jets. However, as detailed in Ref. [51], the above result is nevertheless useful in the case of exclusive jets with an appropriate treatment of the quark-gluon fractions. #### 2.4.7 Collinear-soft radiation at intermediate groomed jet radius Analogous to the above, we consider the case of \(\mathrm{CS}_{g}\) modes that pass soft drop and are within the required groomed jet radius, but do not contribute to the jet mass measurement: \[\hat{\Theta}^{\mathrm{CS}_{g}}(x,y,r_{g},\xi_{0})\equiv\hat{\Theta}^{(cs)}_{r_{g}}(x,y,r_{g})\hat{\Theta}^{(cs)}_{\mathrm{sd}}(x,y,\xi_{0})\,. \tag{49}\] The corresponding function is \[S_{c_{g}}^{\kappa}\big{(}Q_{\mathrm{cut}}r_{g}^{1+\beta},\beta\big{)}=1+\mathcal{S}_{\kappa}^{[1]}\big{(}\cdot,\hat{\Theta}^{\mathrm{CS}_{g}},\mu\big{)}+\mathcal{O}(\alpha_{s}^{2})\,. \tag{50}\] While formally in Eq. (49) we are required to take the collinear-soft limit of the soft drop constraint, we will also find it useful to employ the results of the intermediate-\(R_{g}\) regime for implementing fixed-order subtractions.
To this end, it is helpful to evaluate the above function in the soft-wide angle limit, for which we can directly recycle the computation of the previous result of global soft function (while being careful about minus signs): \[S_{c_{g}}^{\kappa}\big{(}Q_{\rm cut}r_{g}^{1+\beta},\zeta_{g}, \beta\big{)}=1-\frac{\alpha_{s}(\mu)C_{\kappa}}{\pi}\bigg{[}\frac{1}{(1+\beta) }\log^{2}\Big{(}\frac{\mu}{Q_{\rm cut}r_{g}^{1+\beta}}\Big{)}-\frac{\pi^{2}}{ 24}\Big{(}\frac{1}{1+\beta}\Big{)}\] \[\qquad\qquad\qquad\qquad\qquad-\frac{(2+\beta)}{4}\Big{(}2{\rm Li }_{2}\Big{[}\frac{\zeta_{g}^{2}}{1+\zeta_{g}^{2}}\Big{]}+\log^{2}\big{[}1+ \zeta_{g}^{2}\big{]}\Big{)}\bigg{]}\,. \tag{2.51}\] Here we have defined a variable \(\zeta_{g}\) analogous to \(\zeta\) defined above in Eq. (2.8) \[\zeta_{g,{\rm incl}}^{pp}\equiv\frac{R_{g}}{2\cosh\eta_{J}}\,, \qquad\zeta_{g,{\rm excl}}^{pp}\equiv 1\,,\qquad\zeta_{g,{\rm incl}}^{e^{+}e^{-} }\equiv\tan\frac{R_{g}}{2}\,. \tag{2.52}\] #### 2.4.8 Widest angle collinear soft radiation passing soft drop We now turn to the CS mode that saturates the kinematic constraints imposed by jet mass measurement and soft drop passing condition. Here we have \[\delta_{\xi}^{\rm CS}(x,y,r_{g},\xi_{0})\equiv\hat{\Theta}_{\rm sd }^{(cs)}(x,y,\xi_{0})\big{[}\hat{\Theta}_{r_{g}}^{(cs)}(x,y,r_{g})\delta(\xi-y )-\delta(\xi)\big{]} \tag{2.53}\] This condition can be straightforwardly obtained by demanding that modes that pass soft drop are also required to satisfy the groomed jet radius constraint. On the other hand, modes that fail soft drop and the virtual piece, however, do not see this constraint. The collinear-soft function is then given by \[S_{c}^{\kappa}\big{(}\tilde{k},r_{g},Q_{\rm cut},\beta,\mu\big{)}\equiv \delta(\tilde{k})+\frac{1}{QQ_{\rm cut}^{\frac{1}{1+\beta}}}\mathcal{S}_{k}^ {[1]}\bigg{(}\xi=\frac{\tilde{k}}{QQ_{\rm cut}^{\frac{1}{1+\beta}}},\delta_{ \xi}^{\rm CS},\mu\bigg{)}+\mathcal{O}(\alpha_{s}^{2})\,. \tag{2.54}\] Note that we have made use of a \(\frac{2+\beta}{1+\beta}\) dimensional variable which appears as the natural argument for this function. It will be helpful to split the measurement in Eq. (2.53) as \[\delta_{\xi}^{\rm CS}(x,y,r_{g},\xi_{0})=\hat{\Theta}_{\rm sd}^{(cs )}(x,y,\xi_{0})\big{[}\delta(\xi-y)-\delta(\xi)\big{]}+\Delta\delta_{\xi,\,r_ {g}}^{\rm CS}(x,y,r_{g},\xi_{0})\,, \tag{2.55}\] where \[\Delta\delta_{\xi,\,r_{g}}^{\rm CS}(x,y,r_{g},\xi_{0})\equiv- \big{(}1-\hat{\Theta}_{r_{g}}^{(cs)}(x,y,r_{g})\big{)}\hat{\Theta}_{\rm sd}^{ (cs)}(x,y,\xi_{0})\delta(\xi-y)\,, \tag{2.56}\] such that the first term simply results in the standard collinear-soft function for single differential jet mass, and the second piece in a finite fixed order correction: \[S_{c}^{\kappa}\big{(}\tilde{k},r_{g},Q_{\rm cut},\beta,\mu\big{)}=S_{c}^{ \kappa}\big{(}\tilde{k},\beta,\mu\big{)}+\Delta S_{r_{g}}^{\kappa}\big{(} \tilde{k},r_{g},Q_{\rm cut},\beta,\alpha_{s}(\mu)\big{)}\,, \tag{2.57}\] where [87] (see also Ref. [51]) \[S_{c}^{\kappa}\big{(}\tilde{k},\beta,\mu\big{)}=\delta(\tilde{k})+\frac{ \alpha_{s}C_{\kappa}}{\pi}\Bigg{[}\frac{-2(1+\beta)}{2+\beta}\,\mathcal{L}_{1} \left(\tilde{k},\mu^{\frac{2+\beta}{1+\beta}}\right)+\frac{\pi^{2}}{24}\frac{2 +\beta}{1+\beta}\delta(\tilde{k})\Bigg{]}\,. \tag{2.58}\] Here the dependence on \(Q_{\rm cut}\) drops out as it is a high scale from the perspective of low energy collinear-soft modes. 
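A minimal numerical sketch of the one-loop constants in Eqs. (48) and (51) is given below; it assumes a quark jet, a frozen value of \(\alpha_{s}\), inclusive \(pp\) kinematics with \(\zeta_g=\zeta r_g\) as in Eq. (52), and evaluates the dilogarithm by direct quadrature, all purely for illustration.

```python
import math
from scipy.integrate import quad

def li2(z):
    """Dilogarithm Li2(z) = -int_0^z dt ln(1-t)/t, by numerical quadrature (valid for z < 1)."""
    return -quad(lambda t: math.log(1.0 - t) / t, 0.0, z)[0]

def S_G_1loop(mu, Qcut, zeta, beta, alpha_s, C_kappa):
    """One-loop global-soft function of Eq. (48)."""
    bracket = (math.log(mu / Qcut) ** 2 / (1.0 + beta)
               - math.pi**2 / (24.0 * (1.0 + beta))
               - (2.0 + beta) / 4.0 * (2.0 * li2(zeta**2 / (1.0 + zeta**2))
                                       + math.log(1.0 + zeta**2) ** 2))
    return 1.0 + alpha_s * C_kappa / math.pi * bracket

def S_cg_1loop(mu, Qcut, rg, zeta_g, beta, alpha_s, C_kappa):
    """One-loop collinear-soft function of Eq. (51): same bracket as Eq. (48) with
    Q_cut -> Q_cut rg^(1+beta) and zeta -> zeta_g, entering with the opposite sign."""
    return 2.0 - S_G_1loop(mu, Qcut * rg ** (1.0 + beta), zeta_g, beta, alpha_s, C_kappa)

# Illustrative (assumed) inputs: quark jet, frozen alpha_s:
CF, alpha_s, beta = 4.0 / 3.0, 0.118, 1.0
Qcut, zeta, rg = 48.0, 0.22, 0.3
print(S_G_1loop(mu=Qcut, Qcut=Qcut, zeta=zeta, beta=beta, alpha_s=alpha_s, C_kappa=CF))
print(S_cg_1loop(mu=Qcut * rg**2, Qcut=Qcut, rg=rg, zeta_g=zeta * rg,   # zeta_g = zeta*rg, Eq. (52)
                 beta=beta, alpha_s=alpha_s, C_kappa=CF))
```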
The Laplace transform is defined by \[\tilde{S}_{c}^{\kappa}(s,\beta,\mu)\equiv\int{\rm d}\tilde{k}\;e^{-s\tilde{k}}S_{ c}^{\kappa}\big{(}\tilde{k},\beta,\mu\big{)}\,, \tag{59}\] such that \[\tilde{S}_{c}^{\kappa}(s)=1+\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{[}-\Big{(} \frac{1+\beta}{2+\beta}\Big{)}\log^{2}\big{(}se^{\gamma_{E}}\mu^{\frac{2+\beta }{1+\beta}}\big{)}-\frac{\pi^{2}}{24}\frac{\beta(3\beta+4)}{(1+\beta)(2+\beta) }\bigg{]}\,. \tag{60}\] Next we turn to the finite correction \(\Delta S_{r_{g}}^{\kappa}\) that describes fixed-order corrections due to \(r_{g}\) measurement and re-introduces dependence on \(Q_{\rm cut}\). This correction was evaluated in Ref. [71] in the \(r_{g}\ll 1\) limit. Since we are also interested in covering the region close to the soft drop cusp, where \(r_{g}\lesssim 1\), we will find it helpful for the purposes of matching to include the full jet radius dependence by employing the soft-wide angle limit of the soft drop constraint \(\hat{\Theta}_{\rm sd}^{(s)}\) in Eq. (53) and including jet radius constraint \(\hat{\Theta}_{k_{T}}^{(s)}\), such that we use in Eq. (55) \[\Delta\delta_{\xi,\,r_{g}}^{\rm CS,\,full}(x,y,r_{g},\xi_{0},\zeta)\equiv-\hat {\Theta}_{k_{T}}^{(s)}(x,y)\big{(}1-\hat{\Theta}_{r_{g}}^{(s)}(x,y,r_{g}) \big{)}\hat{\Theta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta)\delta(\xi-y)\,. \tag{61}\] which yields the correction piece \[Q_{\rm cut}^{\frac{1}{1+\beta}}\Delta S_{r_{g}}^{\kappa}\bigg{(}\ell^{+}Q_{ \rm cut}^{\frac{1}{1+\beta}},r_{g},Q_{\rm cut},\zeta,\beta,\alpha_{s}(\mu) \bigg{)}=\frac{1}{Q}\mathcal{S}_{\kappa}^{[1]}\bigg{(}\xi=\frac{\ell^{+}}{Q}, \Delta\delta_{\xi,\,r_{g}}^{\rm CS,\,full},\mu\bigg{)} \tag{62}\] \[=\frac{\alpha_{s}C_{\kappa}}{\pi}\frac{\Theta(\ell^{+}-Q_{\rm cut}^{\prime}v (r_{g}))}{\ell^{+}}\Bigg{[}\ln(r_{g}^{2})+\Theta(Q_{\rm cut}^{\prime}-\ell^{ +})\ln\!\bigg{(}\Big{(}\frac{Q_{\rm cut}}{\ell^{+}}\Big{)}^{\frac{2}{2+\beta} }-\zeta^{2}\bigg{)}\Bigg{]}\,,\] where we have defined \[v(r_{g})\equiv\left(\frac{1+\zeta^{2}}{\frac{1}{r_{g}^{2}}+\zeta^{2}}\right)^ {\frac{2+\beta}{2}}\,, \tag{63}\] and have expressed the result in terms of \(\ell^{+}Q_{\rm cut}^{\frac{1}{1+\beta}}\) due to explicit \(Q_{\rm cut}\) dependence from constraining \(r_{g}\). We had defined \(Q_{\rm cut}^{\prime}\) in Eq. (14). We can check that by setting \(\zeta\) to zero, we recover the result in the soft drop resummation region (\(\xi<\xi_{0}\)) calculated in Ref. [71]: \[Q_{\rm cut}^{\frac{1}{1+\beta}}\Delta S_{r_{g}}^{\kappa}\bigg{(}\ell^{+}Q_{ \rm cut}^{\frac{1}{1+\beta}}<Q_{\rm cut}^{\frac{2+\beta}{1+\beta}},r_{g},Q_{ \rm cut},\zeta=0,\beta,\alpha_{s}(\mu)\bigg{)} \tag{64}\] \[=\frac{-2}{2+\beta}\frac{\alpha_{s}C_{\kappa}}{\pi}\Theta\bigg{(}\frac{\ell^{ +}}{Q_{\rm cut}}-r_{g}^{2+\beta}\bigg{)}\frac{1}{\ell^{+}}\log\left(\frac{ \ell^{+}}{Q_{\rm cut}r_{g}^{2+\beta}}\right).\] We thus see that including soft-wide angle effects modify the transition point from \(Q_{\rm cut}\) to \(Q_{\rm cut}^{\prime}\). #### 2.4.9 Wide-angle soft radiation in plain jet mass region We now consider the final case where the wide-angle soft mode \(S_{\rm plain}\) is tested for soft drop, jet radius constraints with jet mass measurement. This is relevant for the max-\(R_{g}\) regime in the plain jet mass resummation region. 
The measurement function is given by \[\delta_{\xi}^{S_{\rm plain}}(x,y,r_{g},\xi_{0},\zeta)\equiv\hat{\Theta}^{(s)}_{ k_{T}}(x,y)\hat{\Theta}^{(s)}_{\rm sd}(x,y,\xi_{0},\zeta)\big{[}\hat{\Theta}^{(s)}_ {r_{g}}(x,y,r_{g})\delta(\xi-y)-\delta(\xi)\big{]} \tag{2.65}\] This expression is a simple extension of Eq. (2.53) where we have also included the jet radius constraint. We will find it helpful to split the measurement into chunks that we have already encountered: \[\delta_{\xi}^{S_{\rm plain}}(x,y,r_{g},\xi_{0},\zeta)=\delta_{\xi}^{\rm CS_{ m}}(x,y,1)+\Delta\delta_{\xi,\,r_{g}}^{\rm CS,\,full}(x,y,r_{g},\xi_{0},\zeta)+ \Delta\delta_{\xi,\,\rm sd}^{S_{\rm plain}}(x,y,\xi_{0},\zeta)\,. \tag{2.66}\] In the first term on the right hand side, setting \(r_{g}=1\) in Eq. (2.41) results in the same jet radius constraint, and hence is simply the familiar ungroomed soft function. The second piece accounts for the cumulative measurement of groomed jet radius using Eq. (2.61). Finally, the new piece accounts for effects of soft drop on wide-angle soft modes: \[\Delta\delta_{\xi,\,\rm sd}^{S_{\rm plain}}(x,y,\xi_{0},\zeta)=-\hat{\Theta}^ {(s)}_{k_{T}}(x,y)\big{(}1-\hat{\Theta}^{(s)}_{\rm sd}(x,y,\xi_{0},\zeta) \big{)}\big{[}\delta(\xi-y)-\delta(\xi)\big{]}\,. \tag{2.67}\] Hence, the soft function in the plain jet mass region is given by \[S_{\rm plain}^{\kappa}\big{(}\ell^{+},r_{g},Q_{\rm cut},\zeta, \beta,\mu\big{)} =\delta(\ell^{+})+\frac{1}{Q}\mathcal{S}_{\kappa}^{[1]}\bigg{(} \xi=\frac{\ell^{+}}{Q},\delta_{\xi}^{S_{\rm plain}},\mu\bigg{)}+\mathcal{O}( \alpha_{s}^{2}) \tag{2.68}\] \[=S_{c_{m}}^{\kappa}\big{(}\ell^{+},\mu\big{)}+Q_{\rm cut}^{ \frac{1}{1+\beta}}\Delta S_{r_{g}}^{\kappa}\big{(}\ell^{+}Q_{\rm cut}^{\frac {1}{1+\beta}},r_{g},Q_{\rm cut},\zeta,\beta,\alpha_{s}(\mu)\big{)}+\Delta S_{ \rm sd}^{\kappa}\big{(}\ell^{+},Q_{\rm cut},\beta,\zeta,\alpha_{s}(\mu) \big{)}\,,\] where we have written the result in terms of previous results in Eqs. (2.43) and (2.62) and a new piece given by \[\Delta S_{\rm sd}^{\kappa}\big{(}\ell^{+},Q_{\rm cut},\zeta, \beta,\alpha_{s}(\mu)\big{)}=\frac{1}{Q}\mathcal{S}_{\kappa}^{[1]}\bigg{(}\xi= \frac{\ell^{+}}{Q},\Delta\delta_{\xi,\,\rm sd}^{S_{\rm plain}},\mu\bigg{)} \tag{2.69}\] \[=-\frac{\alpha_{s}(\mu)C_{\kappa}}{\pi}\Bigg{[}\frac{\Theta(\ell^ {+})\Theta(Q_{\rm cut}^{\prime}-\ell^{+})}{\ell^{+}}\text{ln}\bigg{(}\Big{(} \frac{Q_{\rm cut}}{\ell^{+}}\Big{)}^{\frac{2}{2+\beta}}-\zeta^{2}\bigg{)} \Bigg{]}_{+}^{[Q_{\rm cut}^{\prime}]}\,\,.\] Here the plus-function with a non-standard boundary condition is defined as \[\big{[}\Theta(x)q(x)\big{]}_{+}^{[x_{0}]}\equiv\lim_{\epsilon\to 0}\Big{(} \Theta(x-\epsilon)q(x)-\delta(x-\epsilon)\int_{\epsilon}^{x_{0}}\mathrm{d}x^{ \prime}\,q(x^{\prime})\Big{)}\,. \tag{2.70}\] We also note that for functions satisfying \(q(x)=\lambda^{\alpha}g(\lambda^{-1}x)\) we have, \[\big{[}\Theta(x)q(x)\big{]}_{+}^{[x_{0}]}=\lambda^{\alpha}\big{[}\Theta( \lambda^{-1}x)g(\lambda^{-1}x)\big{]}_{+}^{[\lambda^{-1}x_{0}]}\,. \tag{2.71}\] This property proves useful in simplifying expressions involving integrals of such plus-functions. #### 2.4.10 Fixed-order cross section We now turn to the fixed order cross section. Here the measurement function is same as Eq. 
(2.65) without any expansions: \[\delta_{\xi}^{\rm FO}(x,y,r_{g},\xi_{0},\zeta)\equiv\hat{\Theta}_{k_{T}}(x,y) \hat{\Theta}_{\rm sd}(x,y,\xi_{0},\zeta)\big{[}\hat{\Theta}_{r_{g}}(x,y,r_{g}) \delta(\xi-y)-\delta(\xi)\big{]}\,, \tag{2.72}\] and the fixed order cross section is given by \[\mathcal{G}_{\kappa,\rm sd}^{\rm FO}\big{(}\xi,r_{g},\xi_{0},\zeta,\alpha_{s} (\mu)\big{)}=\mathcal{J}^{[1]}\big{(}\xi,\delta_{\xi}^{\rm FO},\mu\big{)}+ \mathcal{O}(\alpha_{s}^{2})\,. \tag{2.73}\] Because of the complicated form of the full soft drop condition in Eq. (2.28) we will evaluate this numerically. To this end, we define a subtraction term with the measurement function, \[\delta_{\xi}^{\rm FO(0)}(x,y,r_{g},\xi_{0},\zeta)\equiv\hat{\Theta}_{k_{T}}(x, y)\hat{\Theta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta)\big{[}\hat{\Theta}_{r_{g}}(x,y,r_{g} )\delta(\xi-y)-\delta(\xi)\big{]}\,. \tag{2.74}\] where we have now replaced full soft drop constraint by its soft limit in Eq. (2.30), and evaluate the following function numerically by implementing subtraction at the level of the integrand: \[\Delta\mathcal{G}_{\kappa,\rm sd}^{\rm FO[1]}\big{(}\xi,r_{g},\xi_{0},\zeta, \alpha_{s}(\mu)\big{)}=\mathcal{G}_{\kappa,\rm sd}^{\rm FO}\big{(}\xi,r_{g}, \xi_{0},\zeta,\alpha_{s}(\mu)\big{)}-\mathcal{G}_{\kappa,\rm sd}^{\rm FO(0)} \big{(}\xi,r_{g},\xi_{0},\zeta,\alpha_{s}(\mu)\big{)}\,. \tag{2.75}\] The soft matrix element corresponding to this measurement is given by \[\tilde{\mathcal{G}}_{\kappa,\,\rm sd}^{\rm FO(0)}(\xi,r_{g},\xi_{0},\zeta, \alpha_{s}(\mu))\equiv\mathcal{S}_{\kappa}^{[1]}\big{(}\xi,\delta_{\xi}^{\rm FO (0)},\mu\big{)}+\mathcal{O}(\alpha_{s}^{2})\,, \tag{2.76}\] such that we have \[\xi\tilde{\mathcal{G}}_{q,\,\rm sd}^{\rm FO(0)}(\xi,r_{g},\xi_{0 },\zeta,\alpha_{s}(\mu)) =\frac{\alpha_{s}(\mu)C_{F}}{\pi}\text{ln}\bigg{(}\frac{\frac{1} {2}(1+\sqrt{1+4\xi/r_{g}^{2}})}{\text{max}\big{\{}\frac{1}{2}(1-\sqrt{1+4\xi/r _{g}^{2}}),\,\xi_{0}^{\frac{2}{2+\beta}}y^{\frac{\beta}{2+\beta}}-y\zeta^{2} \big{\}}}\bigg{)}\,,\] \[\xi\tilde{\mathcal{G}}_{g,\,\rm sd}^{\rm FO(0)}(\xi,r_{g},\xi_{0 },\zeta,\alpha_{s}(\mu)) =\frac{\alpha_{s}(\mu)C_{A}}{\pi}\text{ln}\bigg{(}\frac{1/2}{ \text{max}\big{\{}\frac{1}{2}(1-\sqrt{1+4\xi/r_{g}^{2}}),\,\xi_{0}^{\frac{2}{2 +\beta}}y^{\frac{\beta}{2+\beta}}-y\zeta^{2}\big{\}}}\bigg{)}\,. \tag{2.77}\] Since we are interested in the differential cross section we can restrict to \(\xi>0\) by multiplying by \(\xi\) and avoid considering the zero-bin terms. ### Factorization and resummation Having discussed the mode structure, we now state the factorization formulae for the three regimes discussed here [71]. We recall the discussion in Sec. 2.1 where in Eq. (2.2) we showed how in the small jet mass region the inclusive jet function factorizes, which can be formulated in terms of an RG invariant jet mass distribution for a jet flavor \(\kappa\) in Eq. (2.4). We will see that the various cases we consider below for \(\xi\ll 1\) will differ only in the details of the multi-scale \(\mathcal{J}_{\kappa}\) function describing soft collinear dynamics. We will treat the non-global logarithms at NLL accuracy where they can be factorized. #### 2.5.1 Max-\(R_{g}\) in plain jet mass resummation region: In this region the soft drop condition and measurement of the groomed jet radius are accounted for via fixed order corrections, and hence the factorization and resummation proceeds precisely the same way as the ungroomed jet mass case. This is given by the bottom left scenario in Fig. 
4 involving the hard collinear modes \(N\), the (inclusive) collinear modes \(C\) and the wide angle soft modes \(S_{\rm plain}\). The factorized cross section is given by \[\mathcal{G}_{\kappa}^{\rm plain}(z,\xi,r_{g},\mu)=\sum_{\kappa^{\prime}} \mathcal{H}_{\kappa^{\prime}\to\kappa}(z,Q,\mu)\frac{\mathrm{d}}{\mathrm{d} \xi}\Bigg{[}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[\xi Q,Q]\big{)}\Sigma_{\rm plain }^{\kappa}(\xi,r_{g},Q,Q_{\rm cut},\zeta,\mu)\Bigg{]}\,, \tag{78}\] where we have suppressed dependence on kinematic and grooming parameters in \(\mathcal{G}_{\kappa}^{\rm plain}\). As shown in Eq. (2), this factorization involves the same hard collinear function \(\mathcal{H}_{\kappa^{\prime}\to\kappa}\). The jet mass measurement with cumulative \(r_{g}\) cut off is described by the derivative of the cumulative cross section: \[\Sigma_{\rm plain}^{\kappa}(\xi,r_{g},Q,Q_{\rm cut},\zeta,\mu)=\int_{0}^{ \infty}\mathrm{d}s\,\mathrm{d}\ell_{c}^{+}\,J_{\kappa}\big{(}s,\mu\big{)}\, \mathcal{S}_{\rm plain}^{\kappa}\big{(}\ell_{c}^{+},r_{g},Q_{\rm cut}, \zeta,\beta,\mu\big{)}\,\delta\bigg{(}\xi-\frac{s}{Q^{2}}-\frac{\ell_{c}^{+} }{Q}\bigg{)}\,. \tag{79}\] where \(\mathcal{S}_{\rm plain}^{\kappa}\) is the cumulative version of the wide-angle soft function in Eq. (68). \[\mathcal{S}_{\rm plain}^{\kappa}\big{(}\ell_{c}^{+},r_{g},Q_{\rm cut},\zeta, \beta,\mu\big{)}\equiv\int_{0}^{\ell_{c}^{+}}\mathrm{d}\ell^{+}\,S_{\rm plain }^{\kappa}\big{(}\ell^{+},r_{g},Q_{\rm cut},\zeta,\beta,\mu\big{)}\,, \tag{80}\] The \(\mathcal{S}_{\rm NGL}^{\kappa}\) accounts for non-global logarithms up to NLL accuracy, and the argument is defined as \[t[\mu_{0},\mu_{1}]\equiv\frac{1}{2\pi}\int_{\mu_{0}}^{\mu_{1}}\frac{d\mu^{ \prime}}{\mu^{\prime}}\,\alpha_{\rm s}(\mu^{\prime})\,. \tag{81}\] From Eq. (79) we see that the NGLs depend on the integral of running coupling between wide-angle soft and hard-collinear scales. Finally, we point out that factorizing the cross section in Eq. (79) amounts to dropping the following power corrections: \[\mathcal{G}_{\kappa}\big{(}z,\xi,r_{g},\alpha_{s}(\mu)\big{)}=\mathcal{G}_{ \kappa}^{\rm plain}(z,\xi,r_{g},\mu)\Big{(}1+\mathcal{O}(\xi_{0},\xi)\Big{)}\,, \tag{82}\] where the left hand side is the full QCD cross section which only depends on running coupling at a single scale \(\mu\). In the resummed version of the factorized cross section we will employ separate jet mass and groomed jet radius dependent profile scales for each of the factorization functions. In addition to the \(\mathcal{O}(\xi)\) jet mass power corrections mentioned above, we have also dropped the finite-\(z_{\rm cut}\) terms of \(\mathcal{O}(\xi_{0})\). We now describe the resummation using the renormalization group evolution of the functions appearing in factorization formula above. We will consider the normalized cross section \(\tilde{\mathcal{G}}_{\kappa}^{\rm plain}\) defined in Eq. (4) after stripping off the DGLAP evolution. We will find it helpful to isolate the fixed-order corrections in \(S_{\rm plain}^{\kappa}\) in Eq. (68) and decompose \(\tilde{\mathcal{G}}_{\kappa}^{\rm plain}\) as \[\tilde{\mathcal{G}}_{\kappa}^{\rm plain}(\xi,r_{g},Q,Q_{\rm cut},\zeta,\mu_{ \rm plain})=\tilde{\mathcal{G}}_{\kappa,\,{\rm no\,sd}}^{\rm plain}(\xi,Q, \mu_{\rm plain})+\Delta\tilde{\mathcal{G}}_{\kappa}^{\rm plain}(\xi,r_{g},Q,Q _{\rm cut},\zeta,\mu_{\rm plain})\,. 
\tag{83}\] Instead of a single scale \(\mu\), we employ here a set of scales \(\mu_{\rm plain}\) for plain jet mass resummation that minimize the logs in each factorization function: \[\mu_{\rm plain}\equiv\{\mu_{N},\mu_{J}(\xi),\mu_{s}(\xi)\}\,. \tag{84}\] Precise implementation of these scales was discussed extensively in Refs. [71; 51]. We summarize the formulae for these scales and their variations in App. C. The first of these is the resummed ungroomed cross section given by \[\tilde{\mathcal{G}}_{\kappa,{\rm no\,sd}}^{\rm plain}(\xi,Q,\mu_{\rm plain}) =N_{\rm incl}^{\kappa}(Q,\mu_{N})e^{K_{N}}\Big{(}\frac{\mu_{N}}{Q} \Big{)}^{\omega_{N}}\mathcal{J}_{\kappa,{\rm no\,sd}}^{\rm plain}(\xi,Q,\mu_ {\rm plain})\,, \tag{85}\] where \(K_{N}\) and \(\omega_{N}\) are resummation kernels associated with the hard collinear function \(N_{\rm incl}^{\kappa}\). We follow a shorthand throughout this paper \[K_{\mathcal{F}}\equiv j_{\mathcal{F}}K\big{(}\Gamma_{\mathcal{F}}[\alpha_{s}],\mu,\mu_{\mathcal{F}}\big{)}+\eta\big{(}\gamma_{\mathcal{F}}[\alpha_{s}],\mu,\mu_{\mathcal{F}}\big{)}\,,\qquad\omega_{\mathcal{F}}\equiv\eta\big{(} \Gamma_{\mathcal{F}}[\alpha_{s}],\mu,\mu_{\mathcal{F}}\big{)}\,, \tag{86}\] where \(K_{\mathcal{F}}\) and \(\omega_{\mathcal{F}}\) are resummation kernels associated with any factorization function \(\mathcal{F}\), \(j_{\mathcal{F}}\) is the dimension of the argument of the function such as \(m_{\mathcal{J}}^{2}(j_{\mathcal{F}}=2)\), \(\ell^{+}(j_{\mathcal{F}}=1)\), \(\tilde{k}(j_{\mathcal{F}}=\frac{2+\beta}{1+\beta})\), \(\mu_{\mathcal{F}}\) is the choice of scale used to minimize the logs and \(\mu\) is the final scale up to which the function is RG evolved (which we will leave unspecified). The functions \(K(\Gamma,\mu,\mu_{\mathcal{F}})\) and \(\eta(\Gamma,\mu,\mu_{\mathcal{F}})\) are responsible for implementing single and double logarithmic resummation associated with cusp \(\Gamma_{\mathcal{F}}\) and non-cusp \(\gamma_{\mathcal{F}}\) anomalous dimensions of the functions. The formulae for these kernels were described in detail in App. A of Ref. [71]. In App. A we state the anomalous dimensions required for NNLL resummation. The function \(\mathcal{J}_{\kappa,{\rm no\,sd}}^{\rm plain}\) accounts for the remaining soft and collinear pieces: \[\mathcal{J}_{\kappa,{\rm no\,sd}}^{\rm plain}(\xi,Q,\mu_{\rm plain})=\frac{ \mathrm{d}}{\mathrm{d}\xi}\Big{(}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[Q\xi,Q ]\big{)}\,\Sigma_{{\rm no\,sd}}^{\kappa}(\xi,Q,\mu_{\rm plain})\Big{)}\,. \tag{87}\] where \(\Sigma_{{\rm no\,sd}}^{\kappa}\) is defined analogously to \(\Sigma_{\rm plain}\) in Eq. (79): \[\Sigma_{{\rm no\,sd}}^{\kappa}\big{(}\xi,Q,\mu\big{)}=\int\mathrm{d}s\int \mathrm{d}\ell_{c}^{+}\,J_{\kappa}\big{(}s,\mu\big{)}\,\mathcal{S}_{c_{m}}^{ \kappa}(\ell_{c}^{+},\mu)\,\delta\bigg{(}\xi-\frac{s}{Q^{2}}-\frac{\ell^{+}}{Q} \bigg{)}\,, \tag{88}\] and similar to Eq. (80), \(\mathcal{S}_{c_{m}}^{\kappa}\) is the cumulative version of the \(S_{c_{m}}\) soft function defined in Eq. (42). Making the RG evolution in Eq. 
(87) explicit, we have \[\mathcal{J}_{\kappa,{\rm no\,sd}}^{\rm plain}(\xi,Q,\mu_{\rm plain})=\Bigg{(}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[Q\xi,Q]\big{)}\mathcal{J}_{\kappa}^{\rm plain}[\partial_{\Omega};\xi,Q,\mu_{\rm plain}]\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)} \tag{89}\] \[\quad+\Big{(}\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{ln}\xi}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[Q\xi,Q]\big{)}\Big{)}\mathcal{J}_{\kappa}^{\rm plain}[\partial_{\Omega};\xi,Q,\mu_{\rm plain}]\frac{e^{\gamma_{E}\Omega}}{\Gamma(1-\Omega)}\Bigg{)}\Bigg{|}_{\Omega=\tilde{\omega}_{cs_{m}}(\mu_{s},\mu_{J})}\,,\] where \(\Omega\) is evaluated at \[\tilde{\omega}_{cs_{m}}(\mu_{s},\mu_{J})\equiv\omega_{cs_{m}}(\mu,\mu_{s})+\omega_{J}(\mu,\mu_{J})\,, \tag{90}\] and the function of the derivative operator is given by \[\mathcal{J}_{\kappa}^{\rm plain}[\partial_{\Omega};\xi,Q,\mu_{\rm plain}]\equiv\frac{1}{\xi}e^{K_{J}+K_{s}}\frac{\big{(}Q\mu_{s}\big{)}^{\omega_{s}}\big{(}\mu_{J}^{2}\big{)}^{\omega_{J}}}{(\xi Q^{2})^{\Omega}}\] \[\times\,\tilde{J}_{\kappa}\Big{[}\partial_{\Omega}+\log\Big{(}\frac{\mu_{J}^{2}}{Q^{2}\xi}\Big{)},\,\alpha_{s}(\mu_{J})\Big{]}\,\tilde{S}_{c_{m}}^{\kappa}\Big{[}\partial_{\Omega}+\log\Big{(}\frac{\mu_{s}}{Q\xi}\Big{)},\alpha_{s}(\mu_{s})\Big{]}\,, \tag{91}\] Here \(\tilde{J}_{\kappa}\) and \(\tilde{S}_{c_{m}}^{\kappa}\) are the Laplace transforms of the jet and ungroomed soft functions, and we have written them in a notation that makes the logarithms explicit: \[\tilde{J}_{\kappa}\big{[}\log(e^{\gamma_{E}}x\mu_{J}^{2}),\alpha_{s}(\mu_{J})\big{]}\equiv\tilde{J}_{\kappa}(x,\mu_{J})\,, \tag{92}\] \[\tilde{S}_{c_{m}}^{\kappa}\Big{[}\log\big{(}ue^{\gamma_{E}}\mu_{s}\big{)},\alpha_{s}(\mu_{s})\Big{]}\equiv\tilde{S}_{c_{m}}^{\kappa}\big{(}u,\mu_{s}\big{)}\,.\] We now turn to the remaining piece in Eq. (83). Since this piece involves fixed-order terms that are not related to a boundary condition of the RG evolution, it has to be treated differently, resulting in the formula \[\Delta\tilde{\mathcal{G}}_{\kappa}^{\rm plain}(\xi,r_{g},Q,Q_{\rm cut},\zeta,\mu_{\rm plain})=N_{\rm incl}^{\kappa}(Q,\mu_{N})e^{K_{N}}\Big{(}\frac{\mu_{N}}{Q}\Big{)}^{\omega_{N}} \tag{93}\] \[\quad\times\Bigg{(}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[Q\xi,Q]\big{)}\mathcal{J}_{\kappa}^{\rm plain}[\partial_{\Omega};\xi,Q,\mu_{\rm plain}]\mathcal{Q}_{\kappa}^{\rm plain}(\Omega,\xi,r_{g},\alpha_{s}(\mu_{s}))\] \[\quad+\Big{(}\frac{\rm d}{{\rm d}\,{\rm ln}\xi}\mathcal{S}_{\rm NGL}^{\kappa}\big{(}t[Q\xi,Q]\big{)}\Big{)}\mathcal{J}_{\kappa}^{\rm plain}[\partial_{\Omega};\xi,Q,\mu_{\rm plain}]\mathcal{Q}_{\kappa}^{\rm plain}(\Omega-1,\xi,r_{g},\alpha_{s}(\mu_{s}))\Bigg{)}\bigg{|}_{\Omega=\tilde{\omega}_{cs_{m}}(\mu_{s},\mu_{J})}\,,\] where the kernel \(\mathcal{Q}_{\kappa}^{\rm plain}\) is defined as the Laplace transform of the fixed-order soft function terms convolved with the RG resummation kernels: \[\mathcal{Q}_{\kappa}^{\rm plain}\equiv\mathcal{Q}_{\kappa}^{\rm sd}\big{(}\Omega,\xi,\alpha_{s}(\mu)\big{)}+\mathcal{Q}_{\kappa}^{r_{g}}\big{(}\Omega,\xi,r_{g},\alpha_{s}(\mu);a_{20}^{\rm max}\big{)}+\mathcal{Q}_{\kappa}^{({\rm sd},r_{g})}\big{(}\Omega,\xi,r_{g},\alpha_{s}(\mu)\big{)}\,. \tag{94}\] The first two terms in Eq. (94) involve the \(\mathcal{O}(\alpha_{s})\) soft function pieces in Eqs.
(62) and (69): \[\mathcal{Q}_{\kappa}^{\rm sd}\big{(}\Omega,\xi,\alpha_{s}(\mu)\big{)} \equiv\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}\int_{0}^{\infty} \rm d\ell^{+}\;\mathcal{L}_{0}^{-\Omega}\!\left(1-\frac{\ell^{+}}{Q\xi}\right) \Delta S_{\rm sd}^{\kappa}\big{(}\ell^{+},Q_{\rm cut},\zeta,\beta,\alpha_{s}( \mu)\big{)}\,,\] \[\mathcal{Q}_{\kappa}^{r_{g}}\big{(}\Omega,\xi,r_{g},\alpha_{s}( \mu);a_{20}^{\rm max}\big{)} \equiv\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}\int_{0}^{\infty} \rm d\ell^{+}\;\mathcal{L}_{0}^{-\Omega}\!\left(1-\frac{\ell^{+}}{Q\xi}\right) \!\left(1+\frac{\alpha_{s}(\mu_{cs})}{\pi}a_{20}^{\rm max}\right)\] \[\quad\times Q_{\rm cut}^{\frac{1}{1+\beta}}\Delta S_{r_{g}}^{ \kappa}\bigg{(}Q_{\rm cut}^{\frac{1}{1+\beta}}\ell^{+},r_{g},Q_{\rm cut},\zeta,\beta,\alpha_{s}(\mu)\bigg{)}\,, \tag{95}\] and the third is an \(\mathcal{O}(\alpha_{s}^{2})\) cross term: \[\mathcal{Q}_{\kappa}^{(\mathrm{sd},r_{g})}\big{(}\Omega,\xi,r_{g},Q_{ \mathrm{cut}},\zeta,\beta,\alpha_{s}(\mu)\big{)}\equiv\frac{e^{\gamma_{E}\Omega }}{\Gamma(-\Omega)}\int_{0}^{\infty}\mathrm{d}\ell_{1}^{+}\,\mathrm{d}\ell_{2} ^{+}\,\mathcal{L}_{0}^{-\Omega}\bigg{(}1-\frac{\ell_{1}^{+}+\ell_{2}^{+}}{Q \xi}\bigg{)} \tag{96}\] \[\qquad\qquad\qquad\times\Delta S_{\mathrm{sd}}^{\kappa}\big{(} \ell_{1}^{+},Q_{\mathrm{cut}},\zeta,\beta,\alpha_{s}(\mu)\big{)}\,Q_{\mathrm{ cut}}^{\frac{1}{1+\beta}}\Delta S_{r_{g}}^{\kappa}\bigg{(}Q_{\mathrm{cut}}^{\frac{1}{1+ \beta}}\ell_{2}^{+},r_{g},Q_{\mathrm{cut}},\zeta,\beta,\alpha_{s}(\mu)\bigg{)}\,.\] Here we have defined \[\mathcal{L}_{0}^{a}(x)\equiv\mathcal{L}^{a}(x)+\frac{1}{a}\delta(x)\,,\qquad \mathcal{L}^{a}(x)\equiv\Big{[}\frac{\Theta(x)}{x^{1-a}}\Big{]}_{+}\,,\qquad a \neq 0\,, \tag{97}\] and case with \(\Omega=0\) corresponds to turning off resummation: \[\lim_{\Omega\to 0}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}\mathcal{L}_{0} ^{-\Omega}(x)=\delta(x)\,. \tag{98}\] We note that the \(r_{g}\)-dependence in the cross section in the plain jet mass region arises at \(\mathcal{O}(\alpha_{s})\) through the fixed order correction \(\Delta S_{r_{g}}^{\kappa}\) in Eq. (62). However, as mentioned above, \(\Delta S_{r_{g}}^{\kappa}\) itself cannot provide the \(\mathcal{O}(\alpha_{s})\) boundary condition for NLL evolution (and for NNLL accuracy). Hence, as discussed in Ref. [71], we are required to consider cross terms with between \(\mathcal{O}(\alpha_{s})\) pieces in \(\mathcal{J}_{\kappa}^{\mathrm{plain}}\) and the normalization factor \(N_{\mathrm{incl}}^{\kappa}\), and the \(r_{g}\)-dependent piece in Eq. (62). We will see that the same applies to the cross section in the soft drop resummation region. On the other hand, the role of the piece \(\Delta S_{\mathrm{sd}}^{\kappa}\) in Eq. (69) is to account for effects of grooming on the soft drop jet mass which factorize into global-soft and collinear-soft pieces in the soft drop resummation region. As we will see below, we are required to include the cross term \(\mathcal{Q}_{\kappa}^{(\mathrm{sd},r_{g})}\) in Eq. (96) in order to ensure that the \(\mathcal{O}(\alpha_{s}^{2})\) terms in the plain jet mass region consistently match with \(\mathcal{O}(\alpha_{s}^{2})\) pieces in the soft drop resummation region. Finally, the computation of these kernels and others below is detailed in App. B. We note that unlike Ref. [71], we have chosen not to include additional \(\mathcal{O}(\alpha_{s}^{2})\) terms proportional to \(\beta_{0}\) in order to cancel running coupling effects in these kernels and render them \(\mu\)-independent. 
We have retained for simplicity only the minimal set of \(\mathcal{O}(\alpha_{s}^{2})\) terms required to achieve NNLL accuracy, and parameterized uncertainty due to the missing \(\mathcal{O}(\alpha_{s}^{2})\) corrections in terms of the nuisance parameter \(a_{20}^{\mathrm{max}}\) in Eq. (95). We assume that the missing two-loop pieces in Eq. (94) have the same functional form as the one-loop kernels with an unknown normalization parameterized by \(a_{20}^{\mathrm{max}}\in[-2\pi,2\pi]\). Additionally, we will separately consider below the effects of two-loop logarithmic terms that arise from RG evolution. #### 2.5.2 Max-\(R_{g}\) in soft drop resummation region In the soft drop resummation region, the relevant modes are shown in the top left case in Fig. 4. The factorized cross section reads \[\mathcal{G}_{\kappa}^{\mathrm{sd}\,\mathrm{res.}}\big{(}z,\xi,r_ {g},\mu\big{)} =\sum_{\kappa^{\prime}}\mathcal{H}_{\kappa^{\prime}\to\kappa}(z,Q,\mu)S _{G}^{\kappa}\big{(}Q_{\mathrm{cut}},\zeta,\beta,\mu\big{)}\mathcal{S}_{ \mathrm{NGL}}^{\kappa}\big{(}t\big{[}Q_{\mathrm{cut}},Q\big{]}\big{)} \tag{99}\] \[\qquad\times\int\mathrm{d}\tilde{k}\int\mathrm{d}s\,J_{\kappa} \big{(}s,\mu\big{)}S_{c}^{\kappa}\big{(}\tilde{k},r_{g},Q_{\mathrm{cut}},\beta,\mu\big{)}\delta\bigg{(}\xi-\frac{s}{Q^{2}}-\frac{\tilde{k}(Q_{\mathrm{cut}}) ^{\frac{-1}{1+\beta}}}{Q}\bigg{)}\,.\] This involves the global-soft and c-soft functions we discussed above in Eqs. (47) and (54). The NGLs are now independent of jet mass involving constant scales \(Q_{\rm cut}\) and \(Q\), such that we are able to write it directly as differential in groomed jet mass. The power corrections associated with this factorization are given by \[\mathcal{G}_{\kappa}\big{(}z,\xi,r_{g},\alpha_{s}(\mu)\big{)}=\mathcal{G}_{ \kappa}^{\rm sd\,res.}\big{(}z,\xi,r_{g},\mu\big{)}\Bigg{[}1+\mathcal{O}\bigg{(} \xi_{0},\frac{\xi}{r_{g}^{2}},\Big{(}\frac{\xi}{\xi_{0}}\Big{)}^{\frac{2}{2+ \beta}}\bigg{)}\Bigg{]}\,, \tag{100}\] We see that the jet mass related \(\mathcal{O}(\xi)\) power corrections in the plain jet mass region are now replaced by \(\mathcal{O}(\xi/r_{g}^{2})\), due to modification of the effective jet radius seen by collinear modes. In the plain jet mass region with \(r_{g}=1\), these corrections match with those in Eq. (82). Additionally, we have new power corrections related to soft drop that have resulted in the factorization of the wide-angle soft mode into a global soft and c-soft mode. Next we state the resummed formula for this regime for the jet mass dependent part of \(\mathcal{G}_{\kappa}\): \[\tilde{\mathcal{G}}_{\kappa}^{\rm sd\,res.} (\xi,r_{g},Q,Q_{\rm cut},\zeta,\beta,\mu_{\rm sd})=N_{\kappa}^{ \rm evol}\big{(}\mu_{N},\mu_{gs},Q,Q_{\rm cut},\zeta,\beta\big{)}\mathcal{S}_{ \rm NGL}^{\kappa}\big{(}t\big{[}Q_{\rm cut},Q\big{]}\big{)} \tag{101}\] \[\times\mathcal{J}_{\kappa}^{\rm sd\,res}[\partial_{\Omega};\xi,Q, Q_{\rm cut},\mu_{\rm sd}]\bigg{(}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}+ \mathcal{Q}_{\kappa}^{r_{g}}\big{(}\Omega,\xi,r_{g},\alpha_{s}(\mu_{cs});a_{2 0}^{\rm max}\big{)}\bigg{)}\Bigg{|}_{\Omega=\tilde{\omega}_{cs}(\mu_{cs},\mu_{ J})}\,.\] Here \(\mu_{\rm sd}\) stands for the set of scales in the max-\(R_{g}\) regime in the soft drop resummation region: \[\mu_{\rm sd}(\xi)\equiv\{\mu_{N},\mu_{gs},\mu_{J}(\xi),\mu_{cs}(\xi)\}\,. 
\tag{102}\] The normalization factor \(N_{\kappa}^{\rm evol}\) includes resummation of logarithms between the hard-collinear and global-soft scales: \[N_{\kappa}^{\rm evol}\big{(}\mu_{N},\mu_{gs},Q,Q_{\rm cut},\zeta,\beta\big{)} \equiv N_{\rm incl}^{\kappa}(Q,\mu_{N})S_{G}^{\kappa}\big{(}Q_{\rm cut}, \zeta,\beta,\mu_{gs}\big{)}\] \[\times e^{\big{[}K_{N}+K_{gs}\big{]}}\,\Big{(}\frac{\mu_{N}}{Q} \Big{)}^{\omega_{N}}\Big{(}\frac{\mu_{gs}}{Q_{\rm cut}}\Big{)}^{\omega_{gs}}\,.\] The function of the derivative operator is given by \[\mathcal{J}_{\kappa}^{\rm sd\,res}[\partial_{\Omega};\xi,Q,Q_{\rm cut },\mu_{\rm sd}]\equiv\frac{1}{\xi}\,e^{K_{cs}+K_{J}}\,\frac{\big{(}\mu_{J}^{2} \big{)}^{\omega_{J}}\big{(}Q\mu_{cs}\big{)}^{\omega_{cs}}}{(\xi Q^{2})^{\Omega }}\Bigg{(}\Big{(}\frac{\mu_{cs}}{Q_{\rm cut}}\Big{)}^{\frac{1}{1+\beta}} \Bigg{)}^{\omega_{cs}} \tag{104}\] \[\times\,\tilde{J}_{\kappa}\Big{[}\partial_{\Omega}+\log\Big{(} \frac{\mu_{J}^{2}}{Q^{2}\xi}\Big{)},\,\alpha_{s}(\mu_{J})\Big{]}\,\tilde{S}_{ c}^{\kappa}\Bigg{[}\partial_{\Omega}+\log\Bigg{(}\frac{\mu_{cs}}{Q\xi}\Big{(} \frac{\mu_{cs}}{Q_{\rm cut}}\Big{)}^{\frac{1}{1+\beta}}\Bigg{)},\alpha_{s}( \mu_{cs})\Bigg{]}\,,\] where the Laplace transforms are written analogously to Eq. (92). Similar to Eq. (90), the derivatives are evaluated at \(\tilde{\omega}_{cs}(\mu_{cs},\mu_{J})\) defined as \[\tilde{\omega}_{cs}(\mu_{cs},\mu_{J})\equiv\omega_{cs}(\mu,\mu_{cs})+\omega_{ J}(\mu,\mu_{J})\,. \tag{105}\] Finally, as in Eq. (93), we expand the above equation to \(\mathcal{O}(\alpha_{s}^{2})\) including cross terms between \(\mathcal{J}_{\kappa}^{\rm sd\,res}\) and the same kernel \(\mathcal{Q}_{\kappa}^{r_{g}}\) defined in Eq. (95) required for NNLL accuracy. #### 2.5.3 Min-\(R_{g}\) regime We now turn to the min-\(R_{g}\) regime. As seen in the rightmost column in Fig. 4, the cross section in the min-\(R_{g}\) regime involves combination of the hard collinear \(\mathcal{C}\) mode, the collinear soft mode \(\textsc{CS}_{\beta}\) and the global soft mode \(S_{G}\), and is given by \[\mathcal{G}_{\kappa}^{\text{min}}\big{(}z,\xi,r_{g},\mu\big{)} =\sum_{\kappa^{\prime}}\mathcal{H}_{\kappa^{\prime}\to\kappa}(z,Q, \mu)S_{G}^{\kappa}\big{(}Q_{\text{cut}},\zeta,\beta,\mu\big{)}\mathcal{S}_{ \text{NGL}}^{\kappa}\big{(}t\big{[}Q_{\text{cut}},Q\big{]}\big{)} \tag{106}\] \[\quad\times S_{c_{g}}^{\kappa}\big{(}Q_{\text{cut}}r_{g}^{1+\beta },\beta,\mu\big{)}\mathcal{S}_{\text{NGL}}^{\kappa}\Big{(}t\big{[}Q_{\text{ cut}}r_{g}^{1+\beta},Qr_{g}\big{]}\Big{)}\frac{1}{r_{g}^{2}}\mathcal{C}^{\kappa} \bigg{(}\frac{\xi}{r_{g}^{2}},Qr_{g},\mu\bigg{)}\,.\] Here we notice appearance of new NGLs between the scales associate with \(\textsc{CS}_{\beta}\) and \(\mathcal{C}\) modes due to an additional boundary of the groomed jet radius. The power corrections that are dropped in this formula are given by \[\mathcal{G}_{\kappa}\big{(}z,\xi,r_{g},\alpha_{s}(\mu)\big{)}=\mathcal{G}_{ \kappa}^{\text{min}}\big{(}z,\xi,r_{g},\mu\big{)}\Bigg{[}1+\mathcal{O}\bigg{(} \xi_{0},\Big{(}\frac{\xi}{\xi_{0}}\Big{)}^{\frac{2}{2+\beta}},r_{g}^{2+\beta} \frac{\xi_{0}}{\xi}\bigg{)}\Bigg{]}\,, \tag{107}\] As before, the soft drop factorization proceeds by dropping the \(\mathcal{O}((\xi/\xi_{0})^{\frac{2}{2+\beta}})\) power corrections. In contrast with the max-\(R_{g}\) regime in Eqs. (82) and (100) collinear function now includes the \(\mathcal{O}(\xi/r_{g}^{2})\) terms which become \(\mathcal{O}(1)\) in this regime. 
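To get a rough feeling for where each factorized description applies, the short sketch below (an illustration only; the actual regime boundaries are implemented through the weight functions introduced in Sec. 2.6) evaluates the parametric size of the expansion parameters appearing in the power corrections of Eqs. (100) and (107) at a given point \((\xi,r_{g})\).

```python
def expansion_parameters(xi, rg, xi0, beta):
    """Parametric size of the power corrections dropped in the max-Rg (Eq. (100))
    and min-Rg (Eq. (107)) factorizations; small values indicate a good expansion."""
    return {
        "xi0": xi0,                                    # overall soft drop expansion parameter
        "xi/rg^2": xi / rg**2,                         # jet mass correction at finite groomed radius
        "(xi/xi0)^(2/(2+beta))": (xi / xi0)**(2.0 / (2.0 + beta)),  # soft drop resummation region
        "rg^(2+beta)*xi0/xi": rg**(2.0 + beta) * xi0 / xi,          # dropped in the min-Rg regime
    }

# example: a point deep in the soft drop resummation region with a moderate groomed radius,
# for which the min-Rg parameter is O(1) and the max-Rg description is the appropriate one
print(expansion_parameters(xi=1e-3, rg=0.3, xi0=0.1, beta=1))
```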
However, in order to resum logarithms between scales associated with \(\textsc{CS}_{\beta}\) and \(\mathcal{C}\) modes, the power corrections of the form \(r_{g}^{2+\beta}\frac{\xi_{0}}{\xi}\) are dropped. The resummed result for the normalized cross section is given by \[\tilde{\mathcal{G}}_{\kappa}^{\text{min}}(\xi,r_{g},Q,Q_{\text{ cut}},\zeta,\beta,\mu_{\text{min}}) =N_{\kappa}^{\text{evol}}\big{(}\mu_{N},\mu_{gs},Q,Q_{\text{cut}},\zeta,\beta\big{)}\mathcal{S}_{\text{NGL}}^{\kappa}\big{(}t\big{[}Q_{\text{ cut}},Q\big{]}\big{)} \tag{108}\] \[\quad\times\mathcal{S}_{\text{NGL}}^{\kappa}\Big{(}t\big{[}Q_{ \text{cut}}r_{g}^{1+\beta},Qr_{g}\big{]}\Big{)}e^{K_{\mathcal{C}}+K_{cs_{g}}} \Big{(}\frac{\mu_{gs}}{Q_{\text{cut}}r_{g}^{1+\beta}}\Big{)}^{\omega_{cs_{g}} }\Big{(}\frac{\mu_{\mathcal{C}}}{Qr_{g}}\Big{)}^{\omega_{\mathcal{C}}}\] \[\quad\times S_{c_{g}}^{\kappa}\big{(}Q_{\text{cut}}r_{g}^{1+\beta },\beta,\mu_{cs_{g}}\big{)}\frac{1}{r_{g}^{2}}\mathcal{C}^{\kappa}\bigg{(} \frac{\xi}{r_{g}^{2}},Qr_{g},\mu_{\mathcal{C}};a_{20}^{\text{min}}\bigg{)}\,,\] with the set of profiles for this regimes being: \[\mu_{\text{min}}(r_{g})\equiv\{\mu_{N},\mu_{gs},\mu_{\mathcal{C}}(r_{g}),\mu_ {cs_{g}}(r_{g})\}\,. \tag{109}\] Here we have included additional \(\mathcal{O}(\alpha_{s}^{2})\) terms in the collinear function precisely as described in Ref. [71]: \[\frac{\xi}{r_{g}^{2}}\mathcal{C}^{q}\bigg{(}\frac{\xi}{r_{g}^{2}},Qr_ {g},\mu;a_{20}^{\min}\bigg{)} =\] \[+\] \[\frac{\xi}{r_{g}^{2}}\mathcal{C}^{g}\bigg{(}\frac{\xi}{r_{g}^{2}},Qr _{g},\mu;a_{20}^{\min}\bigg{)} =\] \[+\] Here \(a_{10}^{\mathcal{C}_{\kappa}}\frac{\alpha_{s}(\mu)}{\pi}\) is the \(\mathcal{O}(\alpha_{s})\) result of the collinear function in Eq. (2.40) after including \(\xi/r_{g}^{2}\) factor that sets the \(\delta(\xi/r_{g}^{2})\) terms to zero. The new \(\mathcal{O}(\alpha_{s}^{2})\) pieces in the square brackets provide the boundary condition for NLL resummation, as we saw above in the max-\(R_{g}\) case. In the second line we have included parameterized the uncertainty from missing \(\mathcal{O}(\alpha_{s})\) pieces in terms of the parameter \(a_{20}^{\min}\in[-2\pi,2\pi]\), while shifting the argument appropriately to take into account the different end-point of the jet mass spectrum at NLO [71] at \(r_{g}=8/5\sqrt{\xi}\) instead of \(r_{g}=2\sqrt{\xi}\) at LO. Additionally, we also include \(\mathcal{O}(\alpha_{s}^{2})\) cross terms from other pieces in Eq. (2.108). #### 2.5.4 Intermediate-\(R_{g}\) regime Finally, we describe the factorization in the intermediate \(R_{g}\) regime shown in the middle column in Fig. 4 which represents the most factorized scenario: \[\mathcal{G}_{\kappa}^{\rm int}(z,\xi,r_{g},\mu) = \tag{2.111}\] \[\times\frac{\mathrm{d}}{\mathrm{d}\xi}\Bigg{[}\mathcal{S}_{\rm NGL }^{\kappa}\bigg{(}t\bigg{[}Q_{\rm cut}r_{g}^{1+\beta},\frac{Q\xi}{r_{g}} \bigg{]}\bigg{)}\Sigma_{\rm int}^{\kappa}\bigg{(}\frac{\xi}{r_{g}^{2}},Qr_{g},\mu\bigg{)}\Bigg{]}\,,\] Here the \(r_{g}\)-dependent NGLs are analogous to the previous case but involve running between the scales associated with CS\({}_{g}\) and CS\({}_{m}\) modes. Since they do depend on the jet mass, we have written them as a derivative of the cumulative jet mass cross section. Here \(\Sigma_{\rm int}^{\kappa}\) is defined in terms of \(\Sigma_{\rm no\,sd}^{\kappa}\) in Eq. (2.88): \[\Sigma_{\rm int}^{\kappa}\bigg{(}\frac{\xi}{r_{g}^{2}},Qr_{g},\mu\bigg{)}= \Sigma_{\rm no\,sd}^{\kappa}\bigg{(}\xi\to\frac{\xi}{r_{g}^{2}},Q\to Qr_{g}, \mu\bigg{)}\,. 
\tag{2.112}\] The power corrections that are dropped in this regime are combinations of the previous cases: \[\mathcal{G}_{\kappa}\big{(}z,\xi,r_{g},\alpha_{s}(\mu)\big{)}=\mathcal{G}_{ \kappa}^{\rm int}\big{(}z,\xi,r_{g},\mu\bigg{)}\Bigg{[}1+\mathcal{O}\bigg{(} \xi_{0},\Big{(}\frac{\xi}{\xi_{0}}\Big{)}^{\frac{2}{2+\beta}},r_{g}^{2+\beta} \frac{\xi_{0}}{\xi},\frac{\xi}{r_{g}^{2}}\bigg{)}\Bigg{]}\,. \tag{2.113}\] Finally, we state the formula for the resummed cross section in this regime: \[\tilde{\mathcal{G}}_{\kappa}^{\text{int}}(\xi,r_{g},Q,Q_{\text{cut}},\zeta,\beta,\mu_{\text{int}})=N_{\kappa}^{\text{evol}}\big{(}\mu_{N},\mu_{gs},Q,Q_{\text{cut}},\zeta,\beta\big{)}\mathcal{S}_{\text{NGL}}^{\kappa}\big{(}t \big{[}Q_{\text{cut}},Q\big{]}\big{)} \tag{114}\] \[\times\Bigg{(}\mathcal{S}_{\text{NGL}}^{\kappa}\bigg{(}t\bigg{[}Q _{\text{cut}}r_{g}^{1+\beta},\frac{Q\xi}{r_{g}}\bigg{]}\bigg{)}\mathcal{J}_{ \kappa}^{\text{int}}[\partial_{\Omega};\xi,r_{g},Q,Q_{\text{cut}},\zeta,\beta, \mu_{\text{int}}]\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}\] \[+\left.\frac{\text{d}}{\text{dln}\xi}\mathcal{S}_{\text{NGL}}^{ \kappa}\bigg{(}t\bigg{[}Q_{\text{cut}}r_{g}^{1+\beta},\frac{Q\xi}{r_{g}} \bigg{]}\bigg{)}\times\mathcal{J}_{\kappa}^{\text{int}}[\partial_{\Omega}; \xi,r_{g},Q,Q_{\text{cut}},\zeta,\beta,\mu_{\text{int}}]\frac{e^{\gamma_{E} \Omega}}{\Gamma(1-\Omega)}\Bigg{)}\Bigg{|}_{\Omega=\tilde{\omega}(\mu_{s},\mu _{J})}\,.\] where \[\mathcal{J}_{\kappa}^{\text{int}}[\partial_{\Omega};\xi,r_{g},Q,Q_ {\text{cut}},\zeta,\beta,\mu_{\text{int}}]\equiv\frac{e^{K_{J}+K_{s}+K_{csg}} }{\xi}\Big{(}\frac{\mu_{csg}}{Q_{\text{cut}}r_{g}^{1+\beta}}\Big{)}^{\omega_{ csg}}\frac{\big{(}Qr_{g}\mu_{cs_{m}}\big{)}^{\omega_{cs_{m}}}(\mu_{J}^{2})^{ \omega_{J}}}{(\xi Q^{2})^{\Omega}}\] \[\times S_{c_{g}}^{\kappa}\big{(}Q_{\text{cut}}r_{g}^{1+\beta}, \beta,\mu_{cs_{g}}\big{)}\tilde{J}_{\kappa}\Big{[}\partial_{\Omega}+\log\Big{(} \frac{\mu_{J}^{2}}{Q^{2}\xi}\Big{)},\,\alpha_{s}(\mu_{J})\Big{]}\,\tilde{S}_{c _{m}}^{\kappa}\Big{[}\partial_{\Omega}+\log\Big{(}\frac{\mu_{cs_{m}}r_{g}}{Q \xi}\Big{)},\alpha_{s}(\mu_{cs_{m}})\Big{]}\,, \tag{115}\] and the intermediate-\(R_{g}\) profiles being \[\mu_{\text{int}}(\xi,r_{g})\equiv\left\{\mu_{N},\mu_{gs},\mu_{J}(\xi),\mu_{cs_ {m}}(\xi,r_{g}),\mu_{cs_{g}}(r_{g})\right\}. \tag{116}\] Unlike the previous two cases, here every function contributes to the NLL boundary condition, and hence we do not need to include any additional \(\mathcal{O}(\alpha_{s}^{2})\) pieces. ### Matched cross section We now combine the cross sections in the three regimes and the two jet mass regions to obtain the complete matched cross section. Figure 5: Matching for max-\(R_{g}\) cross section. The vertical lines denote the extent of the soft drop operator expansion region. The three curves correspond to choosing different intermediate values of groomed jet radius as a function of the jet mass. #### 2.6.1 Matching in the max-\(R_{g}\) regime We first begin with combining the max-\(R_{g}\) cross sections in Eqs. (83) and (101) to obtain a complete matched result in this regime. Our prescription for matching the two results is given by [51] \[\tilde{\mathcal{G}}_{\kappa}^{\rm max}(\xi,r_{g})\equiv\tilde{\mathcal{G}}_{ \kappa}^{\rm sd\,res.}(\xi,r_{g},\mu_{\rm sd\toplain})+\left[\tilde{\mathcal{G }}_{\kappa}^{\rm plain}(\xi,r_{g},\mu_{\rm plain})-\tilde{\mathcal{G}}_{ \kappa}^{\rm sd\,res.}(\xi,r_{g},\mu_{\rm plain})\right]. 
\tag{117}\] For simplicity we have suppressed dependence on kinematic and grooming parameters. The profile \(\mu_{\rm sd\toplain}\) is designed such that it transitions from \(\mu_{\rm sd}\) scales in Eq. (102) to \(\mu_{\rm plain}\) scales in Eq. (84) by merging the c-soft \(\mu_{cs}\) and global-soft \(\mu_{gs}\) scales for \(\xi\geq\xi_{0}\). As a result the two \(\tilde{\mathcal{G}}_{\kappa}^{\rm sd\,res.}\) terms cancel each other for \(\xi\geq\xi_{0}\) leaving behind the correct \(\tilde{\mathcal{G}}_{\kappa}^{\rm plain}\) cross section in this region. In the soft drop resummation region for \(\xi<\xi_{0}\), the term \(\tilde{\mathcal{G}}_{\kappa}^{\rm sd\,res.}(\xi,r_{g},\mu_{\rm plain})\) acts as subtraction piece for the \(\tilde{\mathcal{G}}_{\kappa}^{\rm plain}\) cross section evaluated with the same scale. By evaluating the collinear-soft and global-soft pieces at the same \(\mu_{s}\) scale, the difference between the two amounts to \((\xi/\xi_{0})^{\frac{2}{2+\beta}}\) soft drop related power corrections lacking in the \(\tilde{\mathcal{G}}_{\kappa}^{\rm sd\,res.}\) in Eq. (100). Furthermore, since we have chosen to employ the same kernel \(\mathcal{Q}_{\kappa}^{r_{g}}\) in Eq. (95) in all the three pieces, the matching of \(r_{g}\)-related piece simply amounts to choosing the right soft scale in the argument of \(\alpha_{s}\) and the resummation kernels \(\omega_{cs_{m},s}\) in Eqs. (90) and (105). Finally, as remarked above, including the cross term \(\mathcal{Q}_{\kappa}^{(\rm sd,r_{g})}\) defined in Eq. (96) in the plain jet mass cross section, Eq. (117) seamlessly implements matching of \(\mathcal{O}(\alpha_{s}^{2})\) terms as well. We show the result of matching for gluon and quark jets in Fig. 5. For now we will only include terms up to \(\mathcal{O}(\alpha_{s})\) and discuss the effects of including \(\mathcal{O}(\alpha_{s}^{2})\) cross terms below. Here we have taken the groomed jet radius cut \(r_{g}\) to lie somewhere between the maximum and minimum value of \(r_{g}\) for a given jet mass: \[r_{g}^{\rm min}(\xi)\equiv\sqrt{\xi}\,,\qquad r_{g}^{\rm max}(\xi)\equiv\min \left\{1,\left[\left(\frac{\xi_{0}}{\xi}\right)^{\frac{2}{2+\beta}}-\zeta^{2} \right]^{-\frac{1}{2}}\right\}. \tag{118}\] In Fig. 5 the vertical lines denote the extent of the soft drop operator expansion region. As explained above, the overlap curve \(\mathcal{G}_{\kappa}^{\rm sd\,res.}(\xi,r_{g},\mu_{\rm plain})\) merges with the un-factorized soft drop curve in for small jet masses and with the factorized one for \(\xi>\xi_{0}\). We see that the effect of matching close to the cusp is particularly noticeable for gluon jets. #### 2.6.2 Matched resummed cross section Having defined the max-\(R_{g}\) cross section, we now follow the same strategy as in Ref. 
[71] to obtain the matched resummed cross section valid across all three regimes: \[\tilde{\mathcal{G}}_{\kappa}^{\rm match}(\xi,r_{g}) \equiv\tilde{\mathcal{G}}_{\kappa}^{\rm int}(\xi,r_{g},\mu_{\rm hyb })+\left[\tilde{\mathcal{G}}_{\kappa}^{\rm max}(\xi,r_{g})-\tilde{\mathcal{G }}_{\kappa}^{\rm int}(\xi,r_{g},\mu_{\rm sd\toplain})\right] \tag{119}\] \[\quad+\left[\tilde{\mathcal{G}}_{\kappa}^{\rm min}(\xi,r_{g},\mu_ {\rm min})-\tilde{\mathcal{G}}_{\kappa}^{\rm int}(\xi,r_{g},\mu_{\rm min})\right].\] The new hybrid profile \(\mu_{\rm hyb}\) that appears in the first term interpolates between the three sets of profiles \(\mu_{\rm sd\toplain}\), \(\mu_{\rm min}\) and \(\mu_{\rm int}\), and is given by \[\mu_{\rm hyb}\equiv\left(\mu_{\rm min}\right)^{w_{\rm min}}\!\left(\mu_{\rm int }\right)^{w_{\rm int}}\!\left(\mu_{\rm sd\toplain}\right)^{w_{\rm sd\to plain }},\quad w_{\rm min}+w_{\rm int}+w_{\rm sd\toplain}=1\,. \tag{120}\] where \(w_{i}\) are weight functions that depend on both \(\xi\) and \(r_{g}\) and provide a prescription for demarcating boundaries between the three regimes, as shown in Fig. 3. The construction of the weight functions for soft drop resummation region was discussed in detail in Ref. [71]. Below in App. D we describe our implementation of the weight functions that extends them into the plain jet mass resummation region. In Fig. 6 we show the matching across the three regimes for quark and gluon jets. We can see that the overlap pieces obtained from the intermediate-\(R_{g}\) regime (shown as dotted lines) cancel the max-\(R_{g}\) cross section for \(r_{g}\ll r_{g}^{\rm max}(\xi)\) and likewise the min-\(R_{g}\) cross section for \(r_{g}\gg r_{g}^{\rm min}\). The intermediate-\(R_{g}\) cross section with hybrid profiles (dashed) interpolates between the two overlap pieces. The matched curve thus obtained agrees with the NNLL jet mass cross section at the end point \(r_{g}=r_{g}^{\rm max}\), such that \(\mathcal{G}_{\kappa}^{\rm match}(\xi,r_{g}^{\rm max}(\xi))=\mathcal{G}_{ \kappa}^{\rm NNLL}(\xi)\). However, note that \(\mathcal{G}_{\kappa}^{\rm match}(\xi,r_{g}^{\rm max}(\xi))\neq\mathcal{G}_{ \kappa}^{\rm max}(\xi,r_{g}^{\rm max}(\xi))\). This is because the \(\mathcal{G}_{\kappa}^{\rm max}\) cross section is derived from factorizing the jet mass measurement and lacks jet mass related power corrections which are supplied by the min-\(R_{g}\) cross section. #### 2.6.3 Perturbative uncertainty In Fig. 7 we show the estimate of perturbative uncertainty at NNLL through scale variations and nuisance parameters. In App. C we summarize the implementation of profile scales and their variation discussed in Refs. [51; 71]. The variation involves six parameters that test sensitivity of the cross section to a) a uniform up/down variation of all scales, b) trumpet variation of \(\xi\)- and \(r_{g}\)-dependent scales in the resummation region, c) the scale where the coupling is deemed non-perturbative and frozen to a constant value, and d) breaking of three canonical relations which are used to derive the jet and (hard-)collinear scales from the soft scales. In addition to scale variations, we also show uncertainty induced from lack of knowledge of two-loop non-logarithmic terms in the max-\(R_{g}\) and min-\(R_{g}\) cross sections via parameters \(a_{20}^{\rm max/min}\) in Eqs. (95) and (110). As discussed in Ref. [71] we limit these variations within their respective regime by multiplying them by the corresponding weight functions. 
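Since the same weight functions both build the hybrid profiles and confine the nuisance-parameter variations, a minimal sketch of the interpolation in Eq. (120) may be useful. The sigmoid weights below are placeholder choices for illustration only; the actual construction is described in App. D following Ref. [71].

```python
import numpy as np

def hybrid_scale(mu_min, mu_int, mu_sd2plain, w_min, w_int):
    """Weighted geometric mean of Eq. (120); the third weight is fixed by the sum rule."""
    w_sd2plain = 1.0 - w_min - w_int
    return mu_min**w_min * mu_int**w_int * mu_sd2plain**w_sd2plain

def toy_weights(rg, rg_lo, rg_hi, width=0.3):
    """Placeholder weights interpolating min-Rg -> intermediate -> max-Rg in ln(r_g)."""
    step = lambda x: 1.0 / (1.0 + np.exp(-x / width))   # smooth step function
    w_min = 1.0 - step(np.log(rg / rg_lo))
    w_sd2plain = step(np.log(rg / rg_hi))
    w_int = 1.0 - w_min - w_sd2plain
    return w_min, w_int, w_sd2plain

w_min, w_int, w_sd = toy_weights(rg=0.2, rg_lo=0.05, rg_hi=0.6)
print(hybrid_scale(mu_min=5.0, mu_int=15.0, mu_sd2plain=40.0, w_min=w_min, w_int=w_int))
```

Taking the weighted geometric mean in Eq. (120) ensures that the hybrid scale reduces smoothly to each regime's canonical scale when the corresponding weight approaches one.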
Figure 6: Matching across the three \(R_{g}\) regimes. The vertical lines denote the minimum and maximum \(r_{g}\) allowed for the specified \(\xi=10^{-2}\) value.

We see that the \(a_{20}^{\rm min}\) variation allows us to probe sensitivity to the order-dependent minimum \(r_{g}\) end point of the cross section, and that this variation is also the dominant uncertainty in the min-\(R_{g}\) regime. On the other hand, the variation due to \(a_{20}^{\rm max}\) is subdominant compared to scale variations in the max-\(R_{g}\) region. We note that both these variations vanish for \(r_{g}=r_{g}^{\rm max}\) and they do not impact the single differential jet mass cross section, whereas the scale variations are correlated between both single and doubly differential cross sections at each value of \(m_{J}\).

Figure 7: Profile and two-loop constant terms variation of the cumulative \(r_{g}\) cross section (top row) and the single differential jet mass cross section (bottom row) for quark (left) and gluon (right) jets. The vertical lines in the top row denote the minimum and maximum \(r_{g}\) allowed for the specified \(\xi=10^{-2}\) value.

#### 2.6.4 Effect of two-loop pieces for NNLL resummation

Next, we inspect the effect of including \(\mathcal{O}(\alpha_{s}^{2})\) cross-terms to implement the boundary condition for NNLL resummation. As noted above, there are two places where this is needed: firstly in the max-\(R_{g}\) cross sections in Eqs. (3.12) and (2.101), and secondly in the collinear function in the min-\(R_{g}\) regime as shown in Eq. (2.110). Since these two-loop pieces do not match as straightforwardly as the one-loop pieces above in Eq. (2.119), we include them in the cross section by multiplying these terms by the corresponding weight function. We show this in Fig. 8 where we plot \[w_{X}(\xi,r_{g})\,\xi\tilde{\mathcal{G}}_{\kappa X}^{[\mathcal{O}(\alpha_{s}^{2})]}(\xi,r_{g})\equiv w_{X}(\xi,r_{g})\Big{[}\xi\tilde{\mathcal{G}}_{\kappa X}(\xi,r_{g})-\xi\tilde{\mathcal{G}}_{\kappa X}^{[\mathcal{O}(\alpha_{s})]}(\xi,r_{g})\Big{]}\,, \tag{2.121}\]
#### 2.6.5 Non-singular corrections Finally, we can include the non-singular pieces in the resummed, matched cross section as \[\tilde{\mathcal{G}}_{\kappa}(\xi,r_{g})\equiv\tilde{\mathcal{G}}_{\kappa}^{ \text{match}}(\xi,r_{g})+\Delta\tilde{\mathcal{G}}_{\kappa}^{[\text{n.s.}]} \big{(}\xi,r_{g},\alpha_{s}(\mu_{N})\big{)}\,, \tag{122}\] where \[\Delta\tilde{\mathcal{G}}_{\kappa}^{[\text{n.s.}]}\big{(}\xi,r_{g},\alpha_{s} (\mu_{N})\big{)}\equiv\tilde{\mathcal{G}}_{\kappa}^{\text{FO}}(\xi,r_{g}, \alpha_{s}(\mu_{N}))-\tilde{\mathcal{G}}_{\kappa}^{\text{match}}(\xi,r_{g}, \mu_{i}\to\mu_{N}) \tag{123}\] In \(\tilde{\mathcal{G}}_{\kappa}^{\text{match}}(\xi,r_{g},\mu_{i}\to\mu_{N})\) all the \(\mu_{i}\) scales are set to the hard-collinear \(\mu_{N}\) scale, turning off all resummation and non-global logarithms. To see how the cancellation of singular pieces between the fixed order and the matched cross section takes place, we note that the measurement function in Eq. (72) can be written as \[\xi\delta_{\xi}^{\text{FO}}(x,y,r_{g},\xi_{0},\zeta)\simeq\xi \Big{[}\delta_{\xi}^{\zeta}(x,y)+\delta_{\xi}^{\text{CS}_{m}}(x,y,1)- \delta_{\xi}^{\text{CS}_{m}}(x,y,r_{g}) \tag{124}\] \[\quad\quad+\Delta\delta_{\xi,r_{g}}^{\text{CS, full}}(x,y,r_{g},\xi_{0},\zeta)+\Delta\delta_{\xi,\text{sd}}^{S_{\text{plain}}}(x,y,\xi_{0}, \zeta)\Big{]}\,,\] Figure 8: Size of two-loop pieces of the resummed cross section required for \(\text{NLL}^{\prime}\) boundary condition for quark and gluon jets. Here we have multiplied each piece in a given regime by corresponding weight function. where the approximate equality results from dropping power suppressed terms in the soft approximations on the right hand side. The measurement functions on the right hand side were defined above in Eqs. (38), (41), (61) and (67) respectively. It is not hard to see that setting all the scales in Eq. (119) to \(\mu_{N}\) will result in the above combination of factorization functions at one-loop. In Fig. 9 we show the non-singular correction for quark and gluon jets. In fact, we find a complete cancellation between the fixed order and the matched cross-section for \(r_{g}\leq r_{g}^{\rm max}\) defined in Eq. (118). This is because we have already taken into account the mass-power corrections in the collinear function in Eq. (40) and the remaining power corrections in Eq. (122) arise from only approximations made in the soft drop condition. For \(r_{g}\leq r_{g}^{\rm max}\) the \(r_{g}\)-constraint is stronger than soft drop constraint. This also serves as a strong cross check of the numerical implementation. On the other hand for \(r_{g}>r_{g}^{\rm max}\), the resummed-cross section saturates to single differential jet mass cross section governed by the soft drop constraint. It is here there is a non-trivial non-singular correction in Eq. (122). However, as can be seen in Fig. 9, this correction is numerically negligible. It is of the same order as the non-singular correction in single differential jet mass which was also shown to be numerically negligible in Ref. [51]. Hence, we will ignore the non-singular corrections from soft drop constraints entirely in our numerical analysis. #### 2.6.6 \(R_{g}\)-weighted jet mass cross section Having set up the matched cross section we can compute the \(r_{g}\)-weighted cross section differential in jet mass that are relevant phenomenologically as perturbative weights of hadronization and underlying event corrections. 
To facilitate comparison with Monte Carlo event generators we will normalize the weighted cross section with the usual jet mass cross section. To this end Figure 9: Non-singular correction related to approximations in the soft drop constraint. The vertical lines denote the range of groomed jet radius allowed for the specified jet mass value of \(\xi=10^{-2}\). The non-singular correction related to soft drop is non-zero only beyond this range since it is where the soft drop constraint is a stronger condition than \(r_{g}\)-constraint. we define \[C_{1\kappa}^{(n)}(\xi)\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d} \hat{\sigma}_{\kappa}}{\mathrm{d}\xi}\equiv\int_{0}^{1}\mathrm{d}r_{g}\,r_{g}^{ n}\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}^{2}\hat{\sigma}_{\kappa}}{ \mathrm{d}r_{g}\mathrm{d}\xi} \tag{2.125}\] Since we have access to the cross section cumulative in \(r_{g}\), we have \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{\mathrm{d}^{2}\hat{\sigma}_ {\kappa}}{\mathrm{d}r_{g}\mathrm{d}\xi}=\frac{\mathrm{d}\tilde{\mathcal{G}}_{ \kappa}^{\mathrm{match}}(\xi,r_{g})}{\mathrm{d}r_{g}}\,,\qquad\qquad\frac{1}{ \hat{\sigma}_{\kappa}}\frac{\mathrm{d}\hat{\sigma}_{\kappa}}{\mathrm{d}\xi}= \tilde{\mathcal{G}}_{\kappa}^{\mathrm{match}}(\xi,1)\equiv\tilde{\mathcal{G}} _{\kappa}^{\mathrm{match}}(\xi)\,. \tag{2.126}\] Hence, the moment \(C_{1}^{(n)}(\xi)\) becomes \[C_{1\kappa}^{(n)}(\xi) =\frac{1}{\tilde{\mathcal{G}}_{\kappa}^{\mathrm{match}}(\xi)} \int_{r_{g}^{\mathrm{min}}(\xi)}^{r_{g}^{\mathrm{max}}(\xi)}\mathrm{d}r_{g}\,r _{g}^{n}\frac{\mathrm{d}}{\mathrm{d}r_{g}}\,\tilde{\mathcal{G}}_{\kappa}^{ \mathrm{match}}(\xi,r_{g})\] \[=\frac{1}{\tilde{\mathcal{G}}_{\kappa}^{\mathrm{match}}(\xi)} \bigg{(}\big{[}r_{g}^{\mathrm{max}}(\xi)\big{]}^{n}\tilde{\mathcal{G}}_{ \kappa}^{\mathrm{match}}\big{(}\xi,r_{g}^{\mathrm{max}}(\xi)\big{)}-n\int_{r_{ g}^{\mathrm{min}}(\xi)}^{r_{g}^{\mathrm{max}}(\xi)}\mathrm{d}r_{g}\,r_{g}^{n-1}\, \tilde{\mathcal{G}}_{\kappa}^{\mathrm{match}}(\xi,r_{g})\bigg{)}\] \[=\big{[}r_{g}^{\mathrm{max}}(\xi)\big{]}^{n}-\frac{n}{\tilde{ \mathcal{G}}_{\kappa}^{\mathrm{match}}(\xi)}\int_{r_{g}^{\mathrm{min}}(\xi)}^ {r_{g}^{\mathrm{max}}(\xi)}\mathrm{d}r_{g}\,r_{g}^{n-1}\,\tilde{\mathcal{G}}_ {\kappa}^{\mathrm{match}}(\xi,r_{g})\,, \tag{2.127}\] Figure 10: \(C_{1\kappa}^{(n)}\) for \(n=1\) (top) and \(n=4\) (bottom) for quark and gluon jets. The vertical lines denote the extent of the SDOE region. where we have used the fact that the differential cross section only has support between \(r_{g}^{\rm min}(\xi)\) and \(r_{g}^{\rm max}(\xi)\). In Fig. 10 we show \(C_{1\kappa}^{(n)}\) for \(n=1,4\) cases, relevant for hadronization and underlying event corrections. Following the same color scheme as in Fig. 7 we find that the effect of scale variations on the normalized moment is very small. This is because the scale variation uncertainties are correlated between the doubly differential and the singly differential cross sections and cancel in the ratio. On the other hand, the dominant uncertainty in \(C_{1\kappa}^{(n)}\) results from variation of the \(a_{20}^{\rm min}\) and \(a_{20}^{\rm max}\) nuisance parameters. In Fig. 11 we inspect the effect of adding two-loop logarithmic pieces, that are required to implement consistently the NNLL boundary condition, and non-global logs at NLL (both separately) by considering the fractional deviation in the moment. 
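As an illustration of how these moments are evaluated in practice, the sketch below implements the integration-by-parts form of Eq. (2.127) together with the endpoints of Eq. (118). Here `G_match(xi, rg)` is a placeholder for the normalized cumulative cross section \(\tilde{\mathcal{G}}_{\kappa}^{\rm match}(\xi,r_{g})\); the toy function used in the example is not the NNLL result and serves only to make the snippet runnable.

```python
import numpy as np
from scipy.integrate import quad

def rg_min(xi):
    return np.sqrt(xi)                                     # Eq. (118)

def rg_max(xi, xi0, beta, zeta):
    val = (xi0 / xi)**(2.0 / (2.0 + beta)) - zeta**2       # Eq. (118)
    return min(1.0, val**-0.5) if val > 0.0 else 1.0

def C1_moment(n, xi, G_match, xi0, beta, zeta):
    """C_{1 kappa}^{(n)}(xi) from the cumulative-in-r_g cross section, Eq. (2.127)."""
    lo, hi = rg_min(xi), rg_max(xi, xi0, beta, zeta)
    norm = G_match(xi, 1.0)                                # single differential jet mass result
    integral, _ = quad(lambda rg: rg**(n - 1) * G_match(xi, rg), lo, hi)
    return hi**n - n * integral / norm

def G_toy(xi, rg, xi0=0.1, beta=0.0, zeta=0.0):
    """Toy cumulative cross section rising between r_g^min and r_g^max (illustration only)."""
    lo, hi = rg_min(xi), rg_max(xi, xi0, beta, zeta)
    t = np.clip((np.log(rg) - np.log(lo)) / (np.log(hi) - np.log(lo)), 0.0, 1.0)
    return 0.02 * (0.2 + 0.8 * t)

print(C1_moment(n=1, xi=1e-2, G_match=G_toy, xi0=0.1, beta=0.0, zeta=0.0))
```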
In Fig. 11 we also show for comparison the fractional deviation from adding two-loop non-logarithmic terms in the min-\(R_{g}\) cross section, obtained by varying the nuisance parameter \(a_{20}^{\rm min}\), which dominates both these effects. The \(a_{20}^{\rm max}\) variation (not shown) in the ungroomed region will similarly dominate the effect of two-loop logarithmic terms. Hence, in our final numerical analysis we can safely ignore the \(\mathcal{O}(\alpha_{s}^{2})\) logarithmic pieces and non-global logarithms.

Figure 11: Fractional deviation of \(C_{1}^{(1)}(\xi)\) upon including \(\mathcal{O}(\alpha_{s}^{2})\) pieces and non-global logarithms compared with that from varying the \(a_{20}^{\rm min}\) nuisance parameter.

## 3 Soft drop boundary cross section

We now consider the computation of the soft drop boundary cross section stated in Eq. (6). We are specifically interested in the projection of the triply differential cross section on the boundary of the soft drop condition: \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\oplus}}{{\rm d}r_{g}{\rm d}\xi}\equiv\int{\rm d}z_{g}\;\delta(z_{g}-\xi_{0}r_{g}^{\beta})\,\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}}{{\rm d}z_{g}{\rm d}r_{g}{\rm d}\xi} \tag{3.1}\] However, instead of directly computing the triply differential cross section, we can equivalently calculate the doubly differential cross section while including a small shift in the soft drop condition: \[\hat{\Theta}_{\rm sd}(x,y,\xi_{0},\zeta,\varepsilon) =\Theta\bigg{(}\frac{\min\{x_{1},x_{2}\}}{x_{1}+x_{2}}-\xi_{0}\Big{(}\frac{y}{4x_{1}x_{2}}\Big{)}^{\frac{\beta}{2}}+\varepsilon\bigg{)}\] \[=\hat{\Theta}_{\rm sd}(x,y,\xi_{0},\zeta)+\varepsilon\,\delta\bigg{(}\frac{\min\{x_{1},x_{2}\}}{x_{1}+x_{2}}-\xi_{0}\Big{(}\frac{y}{4x_{1}x_{2}}\Big{)}^{\frac{\beta}{2}}\bigg{)}+\ldots\,, \tag{3.2}\] and differentiate it with respect to \(\varepsilon\): \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\oplus}}{{\rm d}r_{g}{\rm d}\xi}=\frac{{\rm d}}{{\rm d}\varepsilon}\bigg{(}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}^{2}\hat{\sigma}_{\kappa,\varepsilon}}{{\rm d}r_{g}{\rm d}\xi}\bigg{)}\bigg{|}_{\varepsilon\to 0}\,, \tag{3.3}\] where \({\rm d}\sigma_{\kappa,\varepsilon}\) is calculated with the shifted soft drop condition. Additionally, as explained in Ref. [71], we only need to consider the max-\(R_{g}\) regime, which captures the correct two-pronged geometry, whereas the contributions from the intermediate and min-\(R_{g}\) regimes enter beyond the LL nonperturbative factorization in Eq. (3). In the discussion above for the doubly differential cross section we did not drop contributions from these regions because the effects of hadronization (and of ISR and the underlying event) involve positive powers of \(r_{g}\)-moments, and hence they are naturally suppressed. For the boundary cross section, however, we require the \(1/r_{g}\) moment, which receives large power corrections from the intermediate and min-\(R_{g}\) regions; these must be suppressed by hand when calculating with the shifted soft drop condition. As above, in practice, we will find it easier to compute first the soft matrix elements, and use them in the factorization formulae as well as in the subtractions for the fixed order pieces.
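As a concrete illustration of Eqs. (3.2) and (3.3), the minimal sketch below implements the shifted condition for a single two-prong configuration and builds the \(\varepsilon\)-derivative as a central finite difference; `d2sigma_eps` is a placeholder callable standing in for the doubly differential cross section evaluated with the shifted condition.

```python
def theta_sd_shifted(x1, x2, y, xi0, beta, eps=0.0):
    """Shifted soft drop condition of Eq. (3.2) for a two-prong configuration (sketch)."""
    zg = min(x1, x2) / (x1 + x2)
    return 1.0 if zg - xi0 * (y / (4.0 * x1 * x2))**(0.5 * beta) + eps > 0.0 else 0.0

def boundary_cross_section(d2sigma_eps, rg, xi, eps=1e-4):
    """Eq. (3.3): numerical eps-derivative of the shifted doubly differential cross section."""
    return (d2sigma_eps(+eps, rg, xi) - d2sigma_eps(-eps, rg, xi)) / (2.0 * eps)
```

In the analytic computation the derivative is of course taken before expanding in \(\varepsilon\); the finite-difference form above mirrors the Monte Carlo extraction described in Sec. 4.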
In the soft limit, the shifted soft drop condition reads \[\hat{\Theta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta,\varepsilon) =\Theta\bigg{(}y\zeta^{2}+x-\xi_{0}^{\frac{2}{2+\beta}}y^{\frac{ \beta}{2+\beta}}+\frac{2\varepsilon}{2+\beta}\Big{(}\frac{y}{\xi_{0}}\Big{)}^ {\frac{\beta}{2+\beta}}\Big{(}\zeta^{2}+\frac{x}{y}\Big{)}^{\frac{\beta}{2}}+\ldots \bigg{)}\] \[=\hat{\Theta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta)+\frac{2 \varepsilon}{2+\beta}\;\delta\Big{(}y\zeta^{2}+x-\big{(}\xi_{0}\big{)}^{\frac {2}{2+\beta}}(y)^{\frac{\beta}{2+\beta}}\Big{)}+\ldots\,. \tag{13}\] The same in collinear-soft limit is given by \[\hat{\Theta}_{\rm sd}^{(cs)}(x,y,\xi_{0},\varepsilon) =\Theta\Bigg{(}x-\xi_{0}^{\frac{2}{2+\beta}}y^{\frac{\beta}{2+ \beta}}+\frac{2\varepsilon}{2+\beta}\Big{(}\frac{y}{\xi_{0}}\Big{)}^{\frac{ \beta}{2+\beta}}\Big{(}\frac{x}{y}\Big{)}^{\frac{\beta}{2}}+\ldots\Bigg{)}\] \[=\hat{\Theta}_{\rm sd}^{(cs)}(x,y,\xi_{0})+\frac{2\varepsilon}{2+ \beta}\;\delta\big{(}x-\xi_{0}^{\frac{2}{2+\beta}}y^{\frac{\beta}{2+\beta}} \big{)}+\ldots\,. \tag{14}\] Thus, we define: \[\hat{\delta}_{\rm sd}^{(s)}(x,y,\xi_{0},\zeta) \equiv\frac{2}{2+\beta}\;\delta\Big{(}y\zeta^{2}+x-\big{(}\xi_{0} \big{)}^{\frac{2}{2+\beta}}(y)^{\frac{\beta}{2+\beta}}\Big{)}\,, \tag{15}\] \[\hat{\delta}_{\rm sd}^{(cs)}(x,y,\xi_{0}) \equiv\frac{2}{2+\beta}\;\delta\big{(}x-\xi_{0}^{\frac{2}{2+\beta }}y^{\frac{\beta}{2+\beta}}\big{)}\,.\] ### Plain jet mass region We first consider the plain jet mass region. Here the modification to the soft drop condition only affects the \(S_{\text{plain}}\) mode. Thus, the measurement function analogous to Eq. (65) is given by \[\delta^{S_{\text{plain}}}_{\xi,\varepsilon}(x,y,r_{g},\xi_{0},\zeta)\equiv \hat{\Theta}^{(s)}_{k_{T}}(x,y)\hat{\delta}^{(s)}_{\text{sd}}(x,y,\xi_{0}, \zeta)\big{[}\hat{\Theta}^{(s)}_{r_{g}}(x,y,r_{g})\delta(\xi-y)-\delta(x)\big{]}\,, \tag{110}\] where here, and everywhere below, we will include a subscript \(\varepsilon\) to distinguish functions relevant to the soft drop boundary cross section. Similar to Eq. (66) we can isolate the piece corresponding to \(r_{g}\) measurement: \[\delta^{S_{\text{plain}}}_{\xi,\varepsilon}(x,y,r_{g},\xi_{0},\zeta)=\Delta \delta^{\text{CS, full}}_{\xi,\,r_{g},\,\varepsilon}(x,y,r_{g},\xi_{0},\zeta)+ \Delta\delta^{S_{\text{plain}}}_{\xi,\,\text{sd},\,\varepsilon}(x,y,\xi_{0}, \zeta)\,. \tag{111}\] Unlike Eq. (66), there is no term corresponding to ungroomed measurement as we have differentiated with respect to \(\varepsilon\). 
These terms are given by \[\Delta\delta^{\text{CS, full}}_{\xi,\,r_{g},\,\varepsilon}(x,y,r_{g},\xi_{0}, \zeta) \equiv-\hat{\Theta}^{(s)}_{k_{T}}(x,y)\big{(}1-\hat{\Theta}^{(s)} _{r_{g}}(x,y,r_{g})\big{)}\hat{\delta}^{(s)}_{\text{sd}}(x,y,\xi_{0},\zeta) \delta(\xi-y)\,, \tag{112}\] \[\Delta\delta^{S_{\text{plain}}}_{\xi,\,\text{sd},\,\varepsilon}(x,y,\xi_{0}, \zeta) \equiv+\hat{\Theta}^{(s)}_{k_{T}}(x,y)\hat{\delta}^{(s)}_{\text{ sd}}(x,y,\xi_{0},\zeta)\big{[}\delta(\xi-y)-\delta(\xi)\big{]}\,.\] The fixed order correction that captures the dependence on the \(r_{g}\) measurement is given by \[Q^{\frac{1}{1+\beta}}_{\text{cut}}\Delta S^{\kappa}_{r_{g},\, \varepsilon}\bigg{(}\ell^{+}Q^{\frac{1}{1+\beta}}_{\text{cut}},r_{g},Q_{\text {cut}},\zeta,\beta,\alpha_{s}(\mu)\bigg{)}=\frac{1}{Q}\mathcal{S}^{[1]}_{ \kappa}\bigg{(}\xi=\frac{\ell^{+}}{Q},\Delta\delta^{\text{CS, full}}_{\xi,\,r_{g},\, \varepsilon},\mu\bigg{)} \tag{113}\] \[=\frac{\alpha_{s}C_{\kappa}}{\pi}\frac{\Theta(\ell^{+}-Q^{\prime }_{\text{cut}}v(r_{g}))\Theta(Q^{\prime}_{\text{cut}}-\ell^{+})}{(\ell^{+})^{2 }/Q}\bigg{(}\Big{(}\frac{Q_{\text{cut}}}{\ell^{+}}\Big{)}^{\frac{2}{2+\beta} }-\zeta^{2}\bigg{)}^{-1}\,,\] and analogously, the soft drop related fixed-order piece is given by \[\Delta S^{\kappa}_{\text{sd},\,\varepsilon}\big{(}\ell^{+},Q_{\text{cut}}, \zeta,\beta,\alpha_{s}(\mu)\big{)}=\frac{1}{Q}\mathcal{S}^{[1]}_{\kappa} \bigg{(}\xi=\frac{\ell^{+}}{Q},\Delta\delta^{S_{\text{plain}}}_{\xi,\,\text{ sd},\,\varepsilon},\mu\bigg{)} \tag{114}\] These fixed order corrections are combined with the rest of the factorized cross section in precisely the same manner as in Eq. (93) above \[\tilde{\mathcal{G}}^{\text{plain}}_{\kappa,\,\varepsilon}(\xi,Q,Q_{\text{cut }},\zeta,\mu_{\text{plain}})=N^{\kappa}_{\text{incl}}(Q,\mu_{N})e^{K_{N}} \Big{(}\frac{\mu_{N}}{Q}\Big{)}^{\omega_{N}} \tag{115}\] \[\quad\times\Bigg{(}\mathcal{S}^{\kappa}_{\text{NGL}}\big{(}t[Q\xi,Q]\big{)}\mathcal{J}^{\text{plain}}_{\kappa}[\partial_{\Omega};\xi,Q,\mu_{ \text{plain}}]\mathcal{Q}^{\text{plain}}_{\kappa,\varepsilon}(\Omega, \alpha_{s}(\mu_{s}))\] \[\quad\quad\quad+\Big{(}\frac{\text{d}}{\text{dln}\xi}\mathcal{S}^{ \kappa}_{\text{NGL}}\big{(}t[Q\xi,Q]\big{)}\Big{)}\mathcal{J}^{\text{plain}}_{ \kappa}[\partial_{\Omega};\xi,Q,\mu_{\text{plain}}]\mathcal{Q}^{\text{plain}}_{ \kappa,\varepsilon}(\Omega-1,\alpha_{s}(\mu_{s}))\Bigg{)}\Bigg{|}_{\Omega= \tilde{\omega}(\mu_{s},\mu_{J})}\,.\] Similar to Eq. (2.94), the kernels \(\mathcal{Q}^{\rm plain}_{\kappa,\varepsilon}\) include Laplace transforms of the soft functions in Eqs. (3.10) and (3.11), but also \(\mathcal{O}(\alpha_{s}^{2})\) cross terms that are required to consistently carry out NNLL resummation [71]. In Tab. 2 we summarize the various \(\mathcal{O}(\alpha_{s}^{2})\) cross terms that constitute the kernel \(\mathcal{Q}^{\rm plain}_{\kappa,\varepsilon}\). We have included cross terms involving \(\Delta S^{\kappa}_{\rm sd}\) pieces for consistent matching with the cross section in the soft drop resummation regime. Finally, there are also cross terms arising from \(\mathcal{O}(\alpha_{s})\) parts of \(\mathcal{J}^{\rm plain}_{\kappa}\) and \(\mathcal{O}(\alpha_{s})\) soft functions that we have not shown. ### Soft drop resummation region We now turn to the boundary cross section in the soft drop resummation region. 
Here, including shift in the boundary cross section modifies the measurement for the global soft and collinear-soft functions: \[\hat{\Theta}^{{}^{S_{G}}}_{\varepsilon}(x,y,\xi_{0},\zeta) \equiv-\hat{\Theta}^{(s)}_{k_{T}}(x,y)\hat{\delta}^{(s)}_{\rm sd}( x,y,\xi_{0},\zeta)+\hat{\delta}^{(s)}_{\rm sd}(x,y,\xi_{0},\zeta)\,, \tag{3.13}\] \[\delta^{\rm CS}_{\xi,\,\varepsilon}(x,y,r_{g},\xi_{0}) \equiv\hat{\delta}^{(cs)}_{\rm sd}(x,y,\xi_{0})\big{[}\delta( \xi-y)-\delta(\xi)\big{]}+\Delta\delta^{\rm CS}_{\xi,\,r_{g},\,\varepsilon}(x,y,r_{g},\xi_{0})\,,\] The corresponding soft functions are given by \[S^{\kappa}_{G,\varepsilon}\big{(}Q_{\rm cut},\zeta,\beta,\mu \big{)} \equiv\mathcal{S}^{[1]}_{k}\big{(}\cdot\,,\hat{\Theta}^{{}^{S_{G} }}_{\varepsilon},\mu\big{)}+\mathcal{O}(\alpha_{s}^{2})\,, \tag{3.14}\] \[S^{\kappa}_{c,\varepsilon}\big{(}\tilde{k},r_{g},Q_{\rm cut}, \beta,\mu\big{)} \equiv\frac{1}{QQ^{\frac{1}{1+\beta}}_{\rm cut}}\mathcal{S}^{[1] }_{\kappa}\bigg{(}\xi=\frac{\tilde{k}}{QQ^{\frac{1}{1+\beta}}_{\rm cut}}, \delta^{\rm CS}_{\xi,\,\varepsilon},\mu\bigg{)}+\mathcal{O}(\alpha_{s}^{2})\,.\] Here, the second term in Eq. (3.13) in the global soft measurement function \(\hat{\Theta}^{{}^{S_{G}}}_{\varepsilon}\) is required to appropriately cancel the UV divergences in of the virtual piece. In the collinear-soft measurement function, the measurement \(\Delta\delta^{\rm CS}_{\xi,\,r_{g},\,\varepsilon}\) is simply the collinear limit of the one in Eq. (3.9), which yields \[S_{c,\varepsilon}^{\kappa}\big{(}\tilde{k},r_{g},Q_{\rm cut},\beta,\mu\big{)}=S_{c,\varepsilon}^{\kappa}\big{(}\tilde{k},\beta,\mu\big{)}+\Delta S_{r_{g}, \varepsilon}^{\kappa}\big{(}\tilde{k},r_{g},Q_{\rm cut},\beta,\alpha_{s}(\mu )\big{)}\,, \tag{111}\] However, as we did above for the usual soft drop case, we will continue to include the power suppressed terms and employ the full measurement in Eq. (108). It is helpful to consider the \(\beta=0\) and \(\beta>0\) cases separately. For \(\beta=0\), the soft functions are given by \[S_{G,\varepsilon}^{\kappa,\rm bare}\big{(}Q_{\rm cut},\zeta,\beta=0,\mu \big{)} =\frac{1}{\xi_{0}}\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{(}\frac{1} {\epsilon}+2{\rm ln}\Big{(}\frac{\mu}{Q_{\rm cut}}\Big{)}\bigg{)}\,, \tag{112}\] \[S_{c,\varepsilon}^{\kappa,\rm bare}\big{(}\tilde{k},\beta=0,\mu\big{)} =\frac{\alpha_{s}C_{\kappa}}{\pi}\frac{1}{\xi_{0}}\bigg{[}-\frac{ \delta(\tilde{k})}{\varepsilon}+\mathcal{L}_{0}(\tilde{k},\mu^{2})\bigg{]}\,,\] and the same \(r_{g}\)-dependent correction as in Eq. (97). Here we find an extra log-divergence [71] which is unrelated to the soft drop factorization we considered above. The extra divergence leads to a nontrivial non-cusp anomalous dimension \(\varepsilon\,\gamma_{0}^{\varepsilon}(z_{\rm cut})\) where \[\gamma_{0}^{\varepsilon,S_{0}^{\kappa}}=-\gamma_{0}^{\varepsilon,S_{c}^{ \kappa}}\equiv\gamma_{0}^{\varepsilon}(z_{\rm cut})=\frac{8C_{\kappa}}{z_{ \rm cut}}\,,\qquad(\beta=0)\,. \tag{113}\] We can verify that for \(\mu_{cs}=\mu_{gs}=\mu\) the fixed order pieces add to yield the correct fixed order result in Eq. 
(97) in the limit \(\xi\ll\xi_{0}\) by noting that \[Q_{\rm cut}^{\frac{1}{1+\beta}}S_{c,\varepsilon}^{\kappa}\big{(}Q _{\rm cut}^{\frac{1}{1+\beta}}\ell^{+},\beta=0,\mu\big{)} =\frac{1}{\xi_{0}}\frac{\alpha_{s}C_{\kappa}}{\pi}Q_{\rm cut} \mathcal{L}_{0}\big{(}Q_{\rm cut}\ell^{+},\mu^{2}\big{)}\] \[=\frac{1}{\xi_{0}}\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{[} \mathcal{L}_{0}\big{(}\ell^{+},\mu\big{)}-{\rm ln}\Big{(}\frac{\mu}{Q_{\rm cut }}\Big{)}\delta(\ell^{+})\bigg{]} \tag{114}\] adding to this the contribution from global soft function, we find \[\delta(\ell^{+}) S_{G,\varepsilon}^{\kappa}\big{(}Q_{\rm cut},\zeta,\beta=0,\mu \big{)}+Q_{\rm cut}^{\frac{1}{1+\beta}}S_{c,\varepsilon}^{\kappa}\big{(}Q_{ \rm cut}^{\frac{1}{1+\beta}}\ell^{+},\beta=0,\mu\big{)}\] \[=\frac{1}{\xi_{0}}\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{[} \mathcal{L}_{0}\big{(}\ell^{+},\mu\big{)}+{\rm ln}\Big{(}\frac{\mu}{Q_{\rm cut }}\Big{)}\delta(\ell^{+})\bigg{]}\] \[=\frac{1}{\xi_{0}}\frac{\alpha_{s}C_{\kappa}}{\pi}\bigg{[}\frac{ \Theta(\ell^{+})}{\ell^{+}}\bigg{]}_{+}^{[Q_{\rm cut}]}\,.\] which is precisely the limit \(\ell^{+}/Q_{\rm cut}\ll\zeta\) of \(\Delta S_{{\rm sd},\varepsilon}^{\kappa}\) above for \(\beta=0\). On the other hand, for \(\beta>0\), there is no straightforward way to split \(\Delta S_{{\rm sd},\varepsilon}^{\kappa}\) in Eq. (97) into two pieces as the divergence is a power law and we simply include the piece as in the case of plain jet mass region. Thus, the resummed formula in soft drop resummation region is given by \[\tilde{\mathcal{G}}_{\kappa,\,\varepsilon}^{\rm sd\,res} (\xi,r_{g},Q,Q_{\rm cut},\zeta,\beta,\mu_{\rm sd})=N_{\kappa}^{ \rm evol}\big{(}\mu_{N},\mu_{gs},Q,Q_{\rm cut},\zeta,\beta\big{)}\mathcal{S}_ {\rm NGL}^{\kappa}\big{(}t\big{[}Q_{\rm cut},Q\big{]}\big{)} \tag{115}\] \[\times\mathcal{J}_{\kappa,\,\varepsilon}^{\rm sd\,res.}[\partial_ {\Omega};\xi,Q,Q_{\rm cut},\mu_{\rm sd}]\;\mathcal{Q}_{\kappa,\,\varepsilon}^ {\rm sd\,res.}\big{(}\Omega,\xi,r_{g},Q_{\rm cut},\zeta,\beta,\alpha_{s}(\mu_{ cs})\big{)}\Big{|}_{\Omega=\tilde{\omega}(\mu_{cs},\mu_{J})}\,.\] \[\mathcal{J}_{\kappa,\,\varepsilon}^{\rm sd\,res.}[\partial_{\Omega}] =\bigg{(}1+\delta_{\beta,0}\eta\big{(}\gamma_{0}^{\varepsilon}(z_{ \rm cut}),\mu_{cs},\mu_{gs}\big{)}\bigg{)}\mathcal{J}_{\kappa}^{\rm sd\,res}[ \partial_{\Omega}] \tag{3.21}\] \[\quad+\frac{\delta_{\beta,0}}{\xi\xi_{0}}\;e^{K_{cs}+K_{J}}\frac{ \big{(}\mu_{J}^{2}\big{)}^{\omega_{J}}\big{(}Q\mu_{cs}\big{)}^{\omega_{cs}}}{( \xi Q^{2})^{\Omega}}\Big{(}\frac{\mu_{cs}}{Q_{\rm cut}}\Big{)}^{\frac{\omega_ {cs}}{1+\beta}}\] \[\quad\times\left[\frac{2\alpha_{s}(\mu_{gs})C_{\kappa}}{\pi}{\rm ln }\Big{(}\frac{\mu_{gs}}{Q_{\rm cut}}\Big{)}-\frac{\alpha_{s}(\mu_{cs})C_{ \kappa}}{\pi}\left(\partial_{\Omega}+{\rm log}\left(\frac{\mu_{cs}}{Q\xi} \Big{(}\frac{\mu_{cs}}{Q_{\rm cut}}\Big{)}^{\frac{1}{1+\beta}}\right)\right) \right],\] Here \(\mathcal{J}_{\kappa}^{\rm sd\,res}[\partial_{\Omega}]\) is the same function that appeared above for the usual soft drop cross section in Eq. (2.104). The additional \(\mathcal{O}(\varepsilon)\) terms arise only for \(\beta=0\) case. The first of these involves \(\eta(\gamma_{0}^{\varepsilon}(z_{\rm cut}),\mu_{cs},\mu_{gs})\) which is the single log non-cusp resummation kernel arising from the new non-cusp anomlous dimension in Eq. (3.17): \[\eta\big{(}\gamma_{0}^{\varepsilon}(z_{\rm cut}),\mu_{cs},\mu_{gs}\big{)}=- \frac{\gamma_{0}^{\varepsilon}(z_{\rm cut})}{2\beta_{0}}{\rm ln}\Big{(}\frac {\alpha_{s}(\mu_{cs})}{\alpha_{s}(\mu_{gs})}\Big{)}\,. 
\tag{3.22}\] In the third line of Eq. (3.21) we have included the \(\mathcal{O}(\alpha_{s})\) logarithms associated with the factorized \(\beta=0\) boundary global-soft and collinear-soft functions in Eq. (3.16). For this reason these pieces are not included in the \(\mathcal{Q}_{\kappa,\varepsilon}^{\rm sd\,res.}\) kernel for \(\beta=0\). For \(\beta>0\), the non-factorized \(\Delta S_{\rm sd,\varepsilon}^{\kappa}\) contribution is included directly in \(\mathcal{Q}_{\kappa,\varepsilon}^{\rm sd\,res.}\) as indicated in Tab. 2. ### Fixed order cross section Finally, we consider the non-singular corrections from using the full soft drop condition without expansions in the soft limit. In analogy with Eq. (2.72) the measurement function is given by \[\delta_{\xi,\varepsilon}^{\rm FO}(x,y,r_{g},\xi_{0},\zeta)\equiv\hat{\Theta} _{k_{T}}(x,y)\delta_{\rm sd}(x,y,\xi_{0},\zeta)\big{[}\hat{\Theta}_{r_{g}}(x, y,r_{g})\delta(\xi-y)-\delta(\xi)\big{]}\,, \tag{3.23}\] Figure 12: Relative size of non-singular corrections to the boundary soft drop cross section. Using the full soft drop condition shifts the cusp slightly to the left of \(\xi_{0}^{\prime}\) (green-dashed vertical line). Again restricting to \(\xi>0\) for the differential cross section we can ignore the \(\delta(\xi)\) term. The two delta-functions fix both \(x\) and \(y\), such that the fixed order cross section is given by \[\xi\tilde{\mathcal{G}}^{\text{FO}[1]}_{\kappa,\varepsilon}(\xi,r_{g}) =(1+\delta_{\kappa,g})j(x^{*},\xi)\frac{\alpha_{s}}{2\pi}\hat{ \Theta}_{k_{T}}(x^{*},\xi)\hat{\Theta}_{r_{g}}\big{(}x^{*},\xi,r_{g}\big{)} \sum_{\kappa^{\prime}}\hat{P}_{\kappa^{\prime}\kappa}(x^{*}) \tag{3.24}\] \[\quad+\delta_{\kappa,q}|j(1-x^{*},\xi)|\frac{\alpha_{s}}{2\pi} \hat{\Theta}_{k_{T}}(x^{*},\xi)\hat{\Theta}_{r_{g}}\big{(}x^{*},\xi,r_{g}\big{)} \sum_{\kappa^{\prime}}\hat{P}_{\kappa^{\prime}\kappa}(1-x^{*})\,,\] where \(x^{*}\) is the solution of the soft drop constraint in Eq. (3.2) for \(y=\xi\) and \(j(x,y)\) is the Jacobian given by \[j(x,y)=\bigg{(}\frac{1+y\zeta^{2}}{1-y\zeta^{2}}\bigg{)}\bigg{(}1+\frac{\beta} {2}\frac{(1-2x)(1-y\zeta^{2})}{(1-x)+xy\zeta^{2}}\bigg{)}^{-1}\,. \tag{3.25}\] In the soft limit (\(x\sim y,x\to 0\)), \(j(x,y)=2/(2+\beta)\). The \(\delta_{\text{sd}}\) constraint can be easily solved for \(\beta=0\), for which we find \[x^{*}(y,\xi_{0},\beta=0)=1-\frac{1-\xi_{0}(1+y\zeta^{2})}{1-y\zeta^{2}}\,. \tag{3.26}\] For other values of \(\beta\) we employ the following approximation which fairly well reproduces the exact solution for the range of \(z_{\text{cut}}\) and \(\beta\) that we consider: \[x^{*}(y,\xi_{0},\beta>0)\approx y^{\frac{\beta}{2+\beta}}\xi_{0}^{\frac{2}{2+ \beta}}-y\zeta^{2}+\frac{1}{4}y\zeta^{2}\bigg{(}\frac{z_{\text{cut}}}{0.1} \bigg{)}\,. \tag{3.27}\] Since the \(r_{g}\)-dependence in the fixed order cross section in Eq. (3.24) only arises from the measurement theta-function and collinear matrix elements, to gauge the size of non-singular corrections we can simply consider the boundary soft drop cross section for \(r_{g}=1\). To this end we define \[\Delta\tilde{\mathcal{G}}^{\text{n.s.}[1]}_{\kappa,\,\text{sd}, \varepsilon}\big{(}\xi,\alpha_{s}(\mu_{N})\big{)}\equiv\tilde{\mathcal{G}}^{ \text{FO}}_{\kappa,\varepsilon}\big{(}\xi,r_{g}=1\big{)}-Q\Delta S^{\kappa}_{ \text{sd},\varepsilon}\big{(}Q\xi,Q_{\text{cut}},\zeta,\beta,\alpha_{s}(\mu_{ N})\big{)}\,. \tag{3.28}\] In Fig. 12 we show the size of the non-singular correction relative to the fixed-order result. 
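The ingredients entering the fixed order boundary cross section in Eq. (3.24) are simple enough to collect in a short sketch, shown below for illustration with representative placeholder values of the kinematic parameters; it implements the Jacobian of Eq. (3.25), the exact \(\beta=0\) solution of Eq. (3.26) and the approximate \(\beta>0\) solution of Eq. (3.27).

```python
import numpy as np

def jacobian(x, y, zeta, beta):
    """j(x, y) of Eq. (3.25); reduces to 2/(2+beta) in the soft limit x ~ y, x -> 0."""
    pref = (1.0 + y * zeta**2) / (1.0 - y * zeta**2)
    corr = 1.0 + 0.5 * beta * (1.0 - 2.0 * x) * (1.0 - y * zeta**2) / ((1.0 - x) + x * y * zeta**2)
    return pref / corr

def x_star(y, xi0, zeta, beta, zcut):
    """Solution of the soft drop boundary constraint: exact for beta = 0 (Eq. (3.26)),
    approximate for beta > 0 (Eq. (3.27))."""
    if beta == 0:
        return 1.0 - (1.0 - xi0 * (1.0 + y * zeta**2)) / (1.0 - y * zeta**2)
    return (y**(beta / (2.0 + beta)) * xi0**(2.0 / (2.0 + beta))
            - y * zeta**2 + 0.25 * y * zeta**2 * (zcut / 0.1))

y = 1e-2                                   # representative jet mass value xi
for beta in [0.0, 1.0, 2.0]:
    xs = x_star(y, xi0=0.1, zeta=1.0, beta=beta, zcut=0.1)
    print(beta, xs, jacobian(xs, y, zeta=1.0, beta=beta))
```

These expressions determine the fixed order curve against which the non-singular correction shown in Fig. 12 is assessed.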
We see that the deviation is below \(5\%\) which we will find to be well within the perturbative uncertainties. ### \(R_{g}\)-weighted boundary soft drop cross section Having discussed the construction of the soft drop boundary cross section we now use it to compute the \(r_{g}\) moments. The first step is to combine the results in the plain jet mass and soft drop resummation regions to compute the matched cross section. This is straightforwardly obtained by repeating the steps for the max-\(R_{g}\) cross section in Eq. (2.117): \[\tilde{\mathcal{G}}^{\text{match}}_{\kappa,\varepsilon}(\xi,r_{g})\equiv \tilde{\mathcal{G}}^{\text{sd}}_{\kappa,\varepsilon}(\xi,r_{g},\mu_{\text{ sd}\to\text{plain}})+\big{[}\tilde{\mathcal{G}}^{\text{plain}}_{\kappa, \varepsilon}(\xi,r_{g},\mu_{\text{plain}})-\tilde{\mathcal{G}}^{\text{sd}}_{ \kappa,\varepsilon}(\xi,r_{g},\mu_{\text{plain}})\big{]}\,. \tag{3.29}\] Using this result, we can compute the \(C^{(n)}_{2\kappa}(\xi)\) moments: \[C^{(n)}_{2\kappa}(\xi) =\frac{\xi}{\tilde{\mathcal{G}}^{\rm match}_{\kappa}(\xi)}\int_{r _{g}^{\rm min}(\xi)}^{r_{g}^{\rm max}(\xi)}{\rm d}r_{g}\,r_{g}^{n}\,w_{\rm sd \rightarrow\rm plain}(\xi,r_{g})\,\frac{{\rm d}\tilde{\mathcal{G}}^{\rm match }_{\kappa,\varepsilon}(\xi,r_{g})}{{\rm d}r_{g}}\,. \tag{113}\] Next, we note that the cross section \(\partial_{r_{g}}\tilde{\mathcal{G}}^{\rm match}_{\kappa,\varepsilon}(\xi,r_{g})\) is proportional to \(\Psi(\xi,r_{g})\) defined in Eq. (110): \[\xi\frac{{\rm d}\tilde{\mathcal{G}}^{\rm match}_{\kappa,\varepsilon}(\xi,r_{g })}{{\rm d}r_{g}}=\Psi(\xi,r_{g})\,\mathcal{F}^{\rm match}_{\kappa,\varepsilon }(\xi,r_{g})\,, \tag{114}\] and have factored out the pre-factor in Eq. (109), \[\Psi(\xi,r_{g})\equiv\frac{\xi}{\xi_{0}^{\prime}}\Big{(}\frac{r_{g}}{v(r_{g}) }\Big{)}^{2}\partial_{r_{g}}v(r_{g})\,. \tag{115}\] such that using Eq. (107) \(C^{(n)}_{2\kappa}(\xi)\) in Eq. (113) becomes \[C^{(n)}_{2\kappa}(\xi) =\frac{1}{\tilde{\mathcal{G}}^{\rm match}(\xi)}\int_{\gamma_{\rm min }(\xi)}^{\min\{1,\gamma_{\rm max}(\xi)\}}{\rm d}\gamma\,\big{[}r_{g}(\gamma) \big{]}^{n}\Big{(}\frac{r_{g}^{\rm max}(\xi\gamma)}{\gamma}\Big{)}^{2} \mathcal{F}^{\rm match}_{\kappa,\varepsilon}\big{(}\xi,r_{g}^{\rm max}(\xi \gamma)\big{)}\,. \tag{116}\] where \[r_{g}(\xi,\gamma)\equiv r_{g}^{\rm max}(\xi\gamma)\,,\qquad\quad\gamma_{\rm min }(\xi)\equiv\frac{\xi_{0}^{\prime}v(\sqrt{\xi})}{\xi}\,,\qquad\quad\gamma_{ \rm max}(\xi)=\frac{\xi_{0}^{\prime}}{\xi}\,. \tag{117}\] In Fig. 13 we demonstrate how the matching for \(C^{(-1)}_{2\kappa}\) works. We see that the overlap piece completely cancels the soft drop resummed cross section in the plain jet mass region and the plain jet mass cross section in the soft drop resummation region. In Fig. 14 we show the impact of scale variations and the variation of the nuisance parmaeter \(a_{20,\varepsilon}^{\rm max}\) for two-loop non-logarithmic pieces for \(C^{(-1,2)}_{2\kappa}\) moments. Unlike the doubly differential cross section Figure 13: Matching the result for \(C^{(n)}_{2\kappa}\) for \(n=-1\) for quark and gluon jets across the soft drop resummation and plain jet mass regions. The vertical lines denote the extent of the SDOE region. considered in the previous section the boundary cross section is not directly related to single differential jet mass cross section. As a result we find a significant impact of scale variation left in the ratio, which dominates the nuisance parmaeter variation. 
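For completeness, a quadrature sketch of the weighted moment defined above is given below; all physics inputs (the derivative of the matched boundary cross section, its normalization and the weight function) are passed in as placeholder callables, and the only point being illustrated is how \(w_{\rm sd\to plain}\) restricts the integral to the max-\(R_{g}\) regime.

```python
from scipy.integrate import quad

def C2_moment(n, xi, dGeps_drg, G_norm, w_sd2plain, rg_lo, rg_hi):
    """xi / G(xi) * int dr_g r_g^n w_{sd->plain}(xi, r_g) dG_eps/dr_g, a sketch of the
    weighted moment; dGeps_drg, G_norm and w_sd2plain are placeholder callables."""
    integrand = lambda rg: rg**n * w_sd2plain(xi, rg) * dGeps_drg(xi, rg)
    integral, _ = quad(integrand, rg_lo, rg_hi)
    return xi * integral / G_norm(xi)
```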
As also seen in Fig. 14, the variations for \(C_{2\kappa}^{(2)}\) are somewhat smaller than those of the \(C_{2\kappa}^{(-1)}\) moment. This is because for positive moments such as \(n=2\), the contribution from the int-\(R_{g}\) and min-\(R_{g}\) regions is naturally power suppressed. The scale variations for small \(r_{g}\) tend to be larger due to smaller scales being probed.

Figure 14: \(C_{2\kappa}^{(n)}\) for \(n=-1\) (top) and \(n=2\) (bottom) for quark and gluon jets. The vertical lines denote the extent of the SDOE region.

## 4 Numerical results and comparison with simulations

In this section we show a comparison of the numerical implementation of the NNLL calculation of the moments above against simulations of quark and gluon jets in \(e^{+}e^{-}\) and \(pp\) collisions in Pythia 8.306 and Herwig 7.2.3. In the simulations we reconstruct anti-\(k_{T}\)[88] jets using Fastjet[89], and analyze them using the jet analysis software JETlib[90].

### Results for \(C_{1}\)

In Fig. 15 we compare the analytical prediction for \(C_{1\kappa}^{(n)}\) for the phenomenologically relevant cases of \(n=1,4\) for quark jets in the \(e^{+}e^{-}\to q\bar{q}\) parton level process simulated at leading order, for all combinations of \(z_{\rm cut}\in\{0.05,0.1,0.2\}\) (rows) and \(\beta\in\{0,1,2\}\) (columns). We will stick to a leading order treatment of the hard scattering since the main goal of this work is the hadronization model and its interface with the parton shower, which can be well isolated with the help of soft drop in the SDOE region. Within the SDOE region shown between the vertical lines we find the MC extraction to agree very well with the analytical result, and we expect this to hold even when higher order hard matching corrections are included. Beyond the cusp, however, we expect the prediction for \(\xi_{0}<\xi\lesssim 1\) to be modified with more precise matching to account for additional jets, though we continue to find very good agreement for \(e^{+}e^{-}\) collisions with our LO simulations. We also simulate the \(e^{+}e^{-}\to h_{0}\to gg\) process to study gluon jets in isolation, shown in Fig. 16, and find results similar to the quark jets. In Figs. 17 and 18 we compare the analytical prediction for quark and gluon jets in \(pp\) collisions against simulations. Here we simulate the \(pp\to Z+q/g\) jet process and sample the leading jet. We see that both MC simulations yield almost identical results and agree with the NNLL result in the SDOE region, though there are some small noticeable differences for the \(\beta=2\) case. We find that despite the differences in the theory setup for inclusive jets and the jet selection criteria in simulations, they remain close to the analytical result in the plain jet mass region for many of the soft drop combinations.

### Results for \(C_{2}\)

Next, we show a comparison of NNLL results for \(C_{2\kappa}^{(n)}\) for \(n=-1,2\) against parton level simulations for the same processes as considered above. Here we use an improved method for extracting the \(C_{2\kappa}^{(n)}\) moments compared to what was originally employed in Ref. [70]. In Ref. [70] the soft drop condition was shifted by a small amount \(\varepsilon/r_{g}\) to result in a new constraint \(\Theta(z-\xi_{0}r_{g}^{\beta}+\varepsilon/r_{g})\), which upon numerically differentiating with respect to \(\varepsilon\) resulted in the \(C_{2\kappa}^{(-1)}\) moment.
This approach, however, fails for the positive moments that we consider here, because for small angles and \(n>0\) the required shift \(\varepsilon r_{g}^{n}\) becomes too small for numerical differentiation. Instead, we calculate \(C_{2\kappa}^{(n)}\) here via a weighted cross section given by \[C_{2\kappa}^{\text{MC},(n)}(\xi)=\frac{\xi}{\text{d}\hat{\sigma}_{\kappa}^{\text{MC}}/\text{d}\xi}\int\text{d}r_{g}\;\sum_{\varepsilon_{i}}\frac{a_{i}^{\text{fd}}}{\big{(}r_{g}^{\text{max}}(\xi)\big{)}^{\beta}\varepsilon}\,r_{g}^{n}w_{\text{sd}\to\text{plain}}(\xi,r_{g})\frac{\text{d}\hat{\sigma}_{\kappa}^{\text{MC}}(\varepsilon_{i})}{\text{d}r_{g}\text{d}\xi} \tag{100}\] where \(\frac{\text{d}\hat{\sigma}_{\kappa}^{\text{MC}}(\varepsilon_{i})}{\text{d}r_{g}\text{d}\xi}\) is obtained by analyzing the jets with the shifted soft drop condition \[\Theta_{\text{sd}}^{\text{MC}}(\varepsilon)\equiv\Theta\bigg{(}z-\xi_{0}r_{g}^{\beta}+\varepsilon\big{(}r_{g}^{\text{max}}(\xi)\big{)}^{\beta}\bigg{)}\,, \tag{101}\] for a range of uniformly spaced \(\{\varepsilon_{i}\}\) choices; the corresponding finite difference coefficients \(\{a_{i}^{\text{fd}}\}\) build in the numerical derivative. Doing so ensures that the shift term remains commensurate in size with the \(\xi_{0}r_{g}^{\beta}\) term.

The results for the \(e^{+}e^{-}\to q\bar{q}\) process are shown in Fig. 19. Unlike in the case of \(C_{1\kappa}^{(n)}\), the simulations in Pythia and Herwig do not agree with each other for \(\beta>0\). Nevertheless, for all combinations of \(z_{\text{cut}}\) and \(\beta\) shown, the simulations agree with the analytical calculations within the NNLL uncertainty band, with the Herwig result being closer to the central curve.

Figure 15: Comparison of NNLL prediction of \(C_{1}^{(1)}\) (top) and \(C_{1}^{(4)}\) (bottom) against parton level simulations of quark jets in \(e^{+}e^{-}\) collisions in Pythia 8.306 and Herwig 7.2.3.

Figure 16: Comparison of NNLL prediction of \(C_{1}^{(1)}\) and \(C_{1}^{(4)}\) against parton level simulations of gluon jets in isolation in Pythia 8.306 and Herwig 7.2.3.

Figure 17: Comparison of NNLL prediction of \(C_{1}^{(1)}\) (top) and \(C_{1}^{(4)}\) (bottom) against parton level simulations of quark jets in \(pp\) collisions in Pythia 8.306 and Herwig 7.2.3.

Figure 18: Comparison of NNLL prediction of \(C_{1}^{(1)}\) and \(C_{1}^{(4)}\) against parton level simulations of gluon jets in \(pp\) collisions in Pythia 8.306 and Herwig 7.2.3.

Figure 19: Comparison of NNLL prediction of \(C_{2\kappa}^{(-1)}\) and \(C_{2\kappa}^{(2)}\) against parton level simulations in Pythia 8.306 and Herwig 7.2.3.

Figure 20: Comparison of NNLL prediction of \(C_{2\kappa}^{(-1)}\) and \(C_{2\kappa}^{(2)}\) against parton level simulations of gluon jets in isolation in Pythia 8.306 and Herwig 7.2.3.

Figure 21: Comparison of NNLL prediction of \(C_{2\kappa}^{(-1)}\) and \(C_{2\kappa}^{(2)}\) against parton level simulations of quark jets in Pythia 8.306 and Herwig 7.2.3 in \(pp\) collisions. The additional contribution beyond the cusp region arises from ungroomed initial state radiation.

Figure 22: Comparison of NNLL prediction of \(C_{2\kappa}^{(-1)}\) and \(C_{2\kappa}^{(2)}\) against parton level simulations of gluon jets in Pythia 8.306 and Herwig 7.2.3 in \(pp\) collisions. The additional contribution beyond the cusp region arises from ungroomed initial state radiation.

Comparing \(n=-1\) and \(n=2\) we see a better agreement between the analytical results and simulations for \(n=2\).
This is due to the fact that for \(n=-1\) there is larger weight associated with smaller angles in Eq. (101) which we need to manually eliminate using the weight function to capture only the max-\(R_{g}\) regime contribution. We have also displayed for comparison the NLL\({}^{\prime}\) result from Ref. [71] where the power corrections close to the cusp were systematically neglected. We see that including these power corrections results in a more accurate location of the cusp and agrees better with simulations. This is particularly noticeable for \(\beta=1,2\). In Fig. 20 we show the results for gluon jets simulated in \(e^{+}e^{-}\) collisions and can draw similar conclusions as the quark case. In Figs. 21 and 22 comparison of analytical results with simulations in \(pp\) collisions for quark and gluon jets is shown. Contrary to the \(e^{+}e^{-}\) case the simulations differ significantly in the plain jet mass region due to additional inevitable contribution from the initial state radiation. In the SDOE region, the analytical result agrees with the simulations, except however for the \(n=-1\) moment, there is a disagreement for jet masses closer to the cusp. These differences for \(pp\) collisions between our analytical results and MC simulations are however not troublesome since we expect the contribution of ISR to nonperturbative effects to be proportional to \(C_{2\kappa}^{(2)}\), which is suppressed relative to that of the collinear-soft radiation within the jet. Thus, as far as the nonperturbative corrections associated with boundary soft drop are concerned, the same moment \(C_{2\kappa}^{(-1)}\) describes both \(e^{+}e^{-}\) and \(pp\) scenario. For \(n=2\) moment, we however find a comparatively better agreement between MC and analytical calculation in the SDOE region. ## 5 Conclusion In this paper we have calculated the key ingredients required to describe hadronization and underlying event corrections to the groomed jet mass spectrum in a model independent framework. These ingredients are differential jet mass cross sections weighted by certain moments of groomed jet radius and occur in combination with universal nonperturbative \(\mathcal{O}(\Lambda_{\text{QCD}})\) constants for hadronization corrections and analogous parameters for underlying event contribution to the groomed jet mass. A precise knowledge of these perturbative weights is crucial for precision measurements with soft drop jet mass as higher order calculations become available. The results for the moments associated with hadronization derived in this paper were used in Ref. [51] to ascertain the ultimate precision possible on the strong coupling constant determination when all the nonperturbative power corrections are left unconstrained. At the same time, this field-theory based formalism imposes highly nontrivial constraints on the form and functional dependence of power corrections on jet kinematics and grooming parameters. This property of nonperturbative corrections can be used as a benchmark tool for improving modeling of hadronization and its interface with parton showers in event generators. The NNLL predictions for the perturbative weights calculated in this paper were employed in Ref [78] for a detailed calibration of hadronization models. We calculated the perturbative weights associated with power corrections by computing the NNLL result for the doubly differential cross section and the boundary soft drop cross section. We improved upon the earlier calculation in Ref. 
[71] and extended the previous result into the ungroomed region and performed a more accurate treatment of the prediction near soft drop cusp. We also accounted for the \(\mathcal{O}(\alpha_{s})\) non-singular corrections and parameterized the effect of two-loop non-logarithmic pieces via nuisance parameters. We compared the phenomenologically relevant \(R_{g}\)-moments of these cross sections against parton level extractions from Pythia 8.306 and Herwig 7.2.3 simulations. The simulations agreed well within the uncertainty bands for \(e^{+}e^{-}\to q\bar{q}\) and \(e^{+}e^{-}\to gg\) processes within the SDOE region. We saw some disagreement near the cusp for \(C_{2\kappa}^{(-1)}\) moment for \(pp\) collisions due to inevitable contribution from the initial state radiation in simulations. On the other hand, the moments that are relevant for ISR and the underlying event were found to be in better agreement in the SDOE region for simulations of jets in \(pp\) collisions. In summary, we believe that this paper provides motivation for carrying out analyses with unfolded data from colliders. With the prospects of high quality data to be delivered from high-luminosity phase of the LHC, it will be very exciting to use the approach presented in this paper to further our understanding of hadronization effects as well as carry out precision measurements using groomed observables. ## Acknowledgements I am grateful to Anna Ferdinand and Kyle Lee for their collaboration during initial stages. I thank Anna Ferdinand for cross checking the results of the computations against Monte Carlo simulations. I thank Kyle Lee for discussions on the effective field theory regions and modes, and cross checking various intermediate calculations. I thank Simon Platzer for technical support with Herwig. A numerical implementation of the NNLL calculation described above in C++ building on core classes of SCETlib[91] will be made available as a part of the scetlib::sd module [92]. I thank Johannes Michel for support with above mentioned software. I acknowledge support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF. I was previously a member of the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics, which is supported by the UK Science and Technology Facilities Council (STFC) under grant number ST/T001038/1. ## Appendix A Anomalous dimensions In this appendix we consolidate the anomalous dimensions up to \(\mathcal{O}(\alpha_{s}^{2})\) for NNLL resummation of the various factorization functions appearing in our analysis as well as results of certain functions in Laplace space. We refer the reader to App. A of Ref. [71] for a details of the notation for anomalous dimension and RG evolution kernels. 
The cusp anomalous dimensions are given by \[\Gamma_{N^{\kappa}_{\rm incl}}[\alpha_{s}] =-2\,C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] (A.1) \[\Gamma_{J^{\kappa}}[\alpha_{s}] =+2\,C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] \[\Gamma_{\mathcal{C}^{\kappa}}[\alpha_{s}] =+2\,C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] \[\Gamma_{S^{\kappa}_{em}}[\alpha_{s}] =-2C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] \[\Gamma_{S^{\kappa}_{G}}[\alpha_{s}] =+2C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] \[\Gamma_{S^{\kappa}_{eg}}[\alpha_{s}] =-2C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] \[\Gamma_{S^{\kappa}_{c}}[\alpha_{s}] =-2C_{\kappa}\Gamma^{\rm cusp}[\alpha_{s}]\,,\] where with the convention \[\Gamma^{\rm cusp}[\alpha_{s}]=\sum_{n=0}^{\infty}\Gamma^{\rm cusp}_{n}\Big{(} \frac{\alpha_{s}}{4\pi}\Big{)}^{n+1}\,,\] (A.2) up to NNLL we have [93; 94; 95] \[\Gamma^{\rm cusp}_{0} =4\,,\] (A.3) \[\Gamma^{\rm cusp}_{1} =8\bigg{[}\Big{(}\frac{67}{18}-\frac{\pi^{2}}{6}\Big{)}C_{A}- \frac{5}{9}n_{f}\bigg{]}\] \[\Gamma^{\rm cusp}_{2} =16\bigg{[}\Big{(}\frac{245}{24}-\frac{67}{54}+\frac{11\pi^{4}}{1 80}+\frac{11}{6}\Big{)}C_{A}^{2}+\Big{(}-\frac{209}{108}+\frac{5\pi^{2}}{27}- \frac{7}{3}\zeta_{3}\Big{)}C_{A}n_{f}\] \[\qquad\qquad+\Big{(}\frac{-55}{27}+2\zeta_{3}\Big{)}C_{F}n_{f}- \frac{1}{27}n_{f}^{2}\bigg{]}\,.\] Next, we state the non-cusp anomalous dimensions following the same series expansion as in Eq. (A.2). The non-cusp anomalous dimensions of the normalization factor \(N^{\kappa}_{\rm incl}\) for quark and gluon jets are given by \[\gamma_{0}^{Nq}=-6C_{F}\,,\qquad\gamma_{0}^{Ng}=-2\beta_{0}\,,\] (A.4) \[\gamma_{1}^{Nq}=\Big{(}-3+4\pi^{2}-48\zeta_{3})C_{F}^{2}+\bigg{(}-\frac{961}{ 27}-\frac{11\pi^{2}}{3}+52\zeta_{3}\bigg{)}C_{F}C_{A}+\bigg{(}\frac{260}{27}+ \frac{4\pi^{2}}{3}\bigg{)}C_{F}n_{f}T_{F}\,,\] \[\gamma_{1}^{Ng}=\bigg{(}-\frac{118}{9}+4\zeta_{3}\bigg{)}C_{A}^{2}+\bigg{(}- \frac{38}{9}+\frac{\pi^{2}}{3}\bigg{)}C_{A}\beta_{0}-4\beta_{1}\,.\] (A.5) For the jet function we have \[\gamma_{0}^{J_{q}}=6C_{F}\,,\qquad\gamma_{0}^{J_{g}}=2\beta_{0}\,,\] (A.6) \[\gamma_{1}^{J_{q}} =C_{F}\left[C_{F}\left(3-4\pi^{2}+48\zeta_{3}\right)+C_{A}\left(\frac {1769}{27}+\frac{22\pi^{2}}{9}-80\zeta_{3}\right)+T_{R}n_{f}\left(-\frac{484}{2 7}-\frac{8\pi^{2}}{9}\right)\right]\,,\] \[\gamma_{1}^{J_{g}} =C_{A}^{2}\left(\frac{2192}{27}-\frac{22\pi^{2}}{9}-32\zeta_{3} \right)+C_{A}T_{R}n_{f}\left(-\frac{736}{27}+\frac{8\pi^{2}}{9}\right)-8C_{F}T _{R}n_{f}\,. \tag{111}\] The one-loop non-cusp anomalous dimension of all the soft functions are zero (with the exception of the boundary soft drop non-cusp anomalous dimension for \(\beta=0\) in Eq. (109)). For \(S_{c_{m}}^{\kappa}\) (and \(S_{\rm plain}^{\kappa}\)), we have \[\gamma_{1}^{S_{c_{m}}^{\kappa}}=C_{\kappa}\bigg{(}C_{A}\Big{(}-\frac{808}{27} +\frac{11\pi^{2}}{9}+28\zeta_{3}\Big{)}+n_{f}T_{F}\Big{(}\frac{224}{27}-\frac {4\pi^{2}}{9}\Big{)}\bigg{)}\,. 
\tag{112}\] At two-loops a numerical approximation of the global soft anomalous dimension reads [77; 51]: \[\gamma_{1}^{S_{G}^{\kappa}}(\beta)=\frac{C_{\kappa}}{1+\beta}\bigg{(}\gamma_{ C_{F}}^{S_{G}^{\kappa}}(\beta)+n_{f}\gamma_{T_{F}}^{S_{G}^{\kappa}}(\beta)+ \gamma_{C_{A}}^{S_{G}^{\kappa}}(\beta)\bigg{)}\,, \tag{113}\] where \[\gamma_{C_{F}}^{S_{G}^{\kappa}}(\beta) =C_{F}\big{(}0.00563338\beta^{3}-0.621462\beta^{2}-1.11337\beta+ 16.9974\big{)}\,, \tag{114}\] \[\gamma_{T_{F}}^{S_{G}^{\kappa}}(\beta) =T_{F}\big{(}-0.26041\beta^{3}+2.01765\beta^{2}+3.48117\beta-10.93 41\big{)}\,,\] \[\gamma_{C_{A}}^{S_{G}^{\kappa}}(\beta) =C_{A}\big{(}+0.640703\beta^{3}+3.37308\beta^{2}+3.68876\beta-20.4 351\big{)}\,.\] The anomalous dimension of the hard-collinear function \(\mathcal{C}^{\kappa}(\xi,r_{g},Q,\mu)\) is simply the negative of that of the normalization factor \(N_{\rm incl}^{\kappa}\). Likewise the anomalous dimension of \(S_{c_{g}}^{\kappa}\) is negative of that of the global soft function: \[\gamma_{\mathcal{C}^{\kappa}}[\alpha_{s}] =-\gamma_{N^{\kappa}}[\alpha_{s}]\,, \tag{114}\] \[\gamma_{S_{cg}^{\kappa}}[\alpha_{s}] =-\gamma_{S_{G}^{\kappa}}[\alpha_{s}]\,.\] Finally, the non-cusp anomalous dimension of the collinear soft function \(S_{c}^{\kappa}\) can be obtained using RG consistency of the max-\(R_{g}\) cross section: \[\gamma_{1}^{S_{c}^{\kappa}}(\beta)=-\gamma_{1}^{N^{\kappa}}-\gamma_{1}^{J_{ \kappa}}-\gamma_{1}^{S_{G}^{\kappa}}(\beta)\,. \tag{115}\] ## Appendix B Computing the resummation kernels Here we outline the computation of the resummation kernels that appear due to non-logarithmic fixed order corrections in soft functions that are not simply constants and unrelated to the boundary condition of the RG evolution of the soft function. For compactness, we write all the relevant soft functions as \[a_{\kappa}\mathcal{S}_{\kappa}^{(\text{sd})}(\xi) \equiv Q\Delta\mathcal{S}_{\text{sd}}^{\kappa[1]}\big{(}Q\xi,\alpha_ {s}(\mu)\big{)}\,,\quad a_{\kappa}\mathcal{S}_{\kappa}^{(r_{g})}(\xi) \equiv QQ_{\text{cut}}^{\frac{1}{1+\beta}}\Delta\mathcal{S}_{r_{g}}^{\kappa[1 ]}\Big{(}\xi QQ_{\text{cut}}^{\frac{1}{1+\beta}},r_{g},\alpha_{s}(\mu)\Big{)}\,,\] \[a_{\kappa}\mathcal{S}_{\kappa,\varepsilon}^{(\text{sd})}(\xi) \equiv Q\Delta\mathcal{S}_{\text{sd},\,\varepsilon}^{\kappa[1]}\big{(}Q \xi,\alpha_{s}(\mu)\big{)}\,,\quad a_{\kappa}\mathcal{S}_{\kappa,\varepsilon}^ {(r_{g})}(\xi) \equiv QQ_{\text{cut}}^{\frac{1}{1+\beta}}\Delta\mathcal{S}_{r_{g},\,\varepsilon}^{\kappa[1]}\Big{(}\xi QQ_{\text{cut}}^{\frac{1}{1+\beta}},r_{g },\alpha_{s}(\mu)\Big{)}\,. \tag{113}\] where we have defined \(a_{\kappa}\equiv\alpha_{s}(\mu)C_{\kappa}/\pi\), and have left the \(r_{g}\)-dependence of \(\mathcal{S}_{\kappa}^{(r_{g})}\) and \(\mathcal{S}_{\kappa,\varepsilon}^{(r_{g})}\) implicit. 
Next, we write \[\mathcal{Q}_{\kappa}^{[1]X}(\Omega) \equiv a_{\kappa}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)} \Big{(}\delta_{X,\text{plain}}\mathcal{R}_{\kappa}^{(\text{sd})}\big{(} \Omega\big{)}+\mathcal{R}_{\kappa}^{(r_{g})}\big{(}\Omega\big{)}\Big{)}\,, \tag{114}\] \[\mathcal{Q}_{\kappa}^{[2]X}(\Omega) \equiv a_{\kappa}^{2}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)} \delta_{X,\text{plain}}\mathcal{R}_{\kappa}^{(\text{sd},r_{g})}\big{(}\Omega \big{)}\,,\] \[\partial_{r_{g}}\mathcal{Q}_{\kappa,\varepsilon}^{[1]X}\big{(}\Omega \big{)} \equiv a_{\kappa}\frac{2}{2+\beta}\frac{e^{\gamma_{E}\Omega}}{ \Gamma(-\Omega)}\frac{\Psi(\xi,r_{g})}{\xi}\mathcal{R}_{\kappa,\varepsilon}^{ \prime(r_{g})}\big{(}\Omega\big{)}\,,\] \[\partial_{r_{g}}\mathcal{Q}_{\kappa,\varepsilon}^{[2]X}\big{(}\Omega \big{)} \equiv a_{\kappa}^{2}\frac{2}{2+\beta}\frac{e^{\gamma_{E}\Omega}}{ \Gamma(-\Omega)}\frac{\Psi(\xi,r_{g})}{\xi}\sum_{AB\in\{\text{sd},\,r_{g}\}_{ X}}\mathcal{R}_{\kappa,\varepsilon}^{\prime AB}(\Omega)\,,\] where \(X\in\{\text{plain, sd res.}\}\) and the set \(\{\text{sd},r_{g}\}_{X}\) refers to the combinations of soft functions in Eq. (113) relevant for plain jet mass and soft drop jet mass resummation region, as summarized in Tab. 2. Here we have explicitly included a derivative with respect to \(r_{g}\) required for \(r_{g}\)-differential cross section since we do intend to match the max-\(R_{g}\) cross section with other regimes. Considering directly the \(r_{g}\)-differential cross section helps us to avoid considering some of the more complicated cross terms. The \(r_{g}\)-derivatives of the soft functions are given by \[Q_{\text{cut}}^{\frac{1}{1+\beta}}\partial_{r_{g}}\Delta S_{r_{g }}^{\kappa[1]}\Big{(}\ell^{+}Q_{\text{cut}}^{\frac{1}{1+\beta}},r_{g},\alpha_ {s}(\mu)\Big{)} =\frac{2\alpha_{s}C_{\kappa}}{\pi}\frac{\Theta(\ell^{+}-Q_{\text {cut}}^{\prime}v(r_{g}))}{\ell^{+}}\,, \tag{115}\] \[Q_{\text{cut}}^{\frac{1}{1+\beta}}\partial_{r_{g}}\Delta S_{r_{g },\varepsilon}^{\kappa[1]}\Big{(}\ell^{+}Q_{\text{cut}}^{\frac{1}{1+\beta}},r_{g },\alpha_{s}(\mu)\Big{)} =\frac{2}{2+\beta}\frac{\alpha_{s}C_{\kappa}}{\pi}\frac{1}{\xi_{ 0}^{\prime}}\Big{(}\frac{r_{g}}{v(r_{g})}\Big{)}^{2}\partial_{r_{g}}v(r_{g}) \delta(\ell^{+}-Q_{\text{cut}}^{\prime}v(r_{g}))\,.\] In the last two lines, for the soft drop boundary cross section we have factored out the pre-factor defined in Eq. (3.3). In terms of the rescaled-soft functions defined in Eq. (113), from Eq. 
(95) we have \[\mathcal{R}_{\kappa}^{(\text{sd}/r_{g})}\big{(}\Omega\big{)} =\int_{0}^{\infty}\text{d}y\;\mathcal{L}_{0}^{-\Omega}\big{(}1-y \big{)}\,\xi\mathcal{S}_{\kappa}^{(\text{sd}/r_{g})}(\xi y)\,, \tag{110}\] \[\mathcal{R}_{\kappa}^{(\text{sd},r_{g})}\big{(}\Omega\big{)} =\int_{0}^{\infty}\text{d}y_{A}\text{d}y_{B}\;\mathcal{L}_{0}^{ -\Omega}\big{(}1-(y_{A}+y_{B})\big{)}\,\xi\mathcal{S}_{\kappa,\varepsilon}^{( \text{sd})}(\xi y_{A})\,\xi\mathcal{S}_{\kappa}^{(r_{g})}(\xi y_{B})\,,\] \[\frac{\Psi(\xi,r_{g})}{\xi}\frac{2}{2+\beta}\mathcal{R}_{\kappa, \varepsilon}^{\prime(r_{g})}(\Omega) =\int_{0}^{\infty}\text{d}y\;\mathcal{L}_{0}^{-\Omega}\big{(}1- y\big{)}\,\xi\partial_{r_{g}}\mathcal{S}_{\kappa,\varepsilon}^{r_{g}}(\xi y)\,,\] \[\frac{\Psi(\xi,r_{g})}{\xi}\frac{2}{2+\beta}\mathcal{R}_{\kappa, \varepsilon}^{\prime AB}(\Omega) =\int_{0}^{\infty}\text{d}y_{A}\text{d}y_{B}\,\mathcal{L}_{0}^{ -\Omega}\big{(}1-(y_{A}+y_{B})\big{)}\partial_{r_{g}}\Big{[}\xi\mathcal{S}_{ \kappa,\varepsilon}^{A}(\xi y_{A})\,\xi\mathcal{S}_{\kappa}^{B}(\xi y_{B}) \Big{]}\,.\] To proceed further we define the functions, \[\gamma(\xi,r_{g})\equiv\frac{\xi_{0}^{\prime}v(r_{g})}{\xi}\,, \quad f(w)\equiv\frac{\text{ln}({r_{g}}^{2})}{w}\,,\quad h(w)\equiv\frac{1}{ w}\text{ln}\bigg{(}(1+\zeta^{2})\Big{(}\frac{\gamma_{\text{max}}}{w}\Big{)}^{ \frac{2}{2+\beta}}-\zeta^{2}\bigg{)}\,, \tag{111}\] \[\gamma_{\text{max}}(\xi)\equiv\frac{\xi_{0}^{\prime}}{\xi}\,, \qquad f^{\prime}(w)\equiv\frac{2}{wr_{g}}\,,\qquad g(w)\equiv\frac{1}{w^{2} }\bigg{(}(1+\zeta^{2})\Big{(}\frac{\gamma_{\text{max}}}{w}\Big{)}^{\frac{2}{2 +\beta}}-\zeta^{2}\bigg{)}^{-1}\,.\] We see that in terms of \(\gamma(\xi,r_{g})\) the prefactor \(\Psi(\gamma,r_{g})\) using Eq. (32) becomes \[\Psi(\gamma,r_{g})=\Big{(}\frac{r_{g}}{\gamma}\Big{)}^{2}\partial_{r_{g}}\gamma (\xi,r_{g})\,. \tag{112}\] In terms of these functions we have \[\xi\mathcal{S}_{\kappa}^{(\text{sd})}(\xi w) =-\big{[}\Theta(\gamma_{\text{max}}-w)h(w)\big{]}_{+}^{[\gamma_{ \text{max}}]}\,,\] \[\xi\mathcal{S}_{\kappa}^{(r_{g})}(\xi w) =\Theta(w-\gamma)\Big{[}f(w)+\Theta(\gamma_{\text{max}}-w)h(w) \Big{]}\,, \tag{113}\] \[\xi\mathcal{S}_{\kappa,\varepsilon}^{(\text{sd})}(\xi w) =\frac{2}{2+\beta}\big{[}\Theta(\gamma_{\text{max}}-w)g(w)\big{]}_ {+}^{[\gamma_{\text{max}}]}\,,\] \[\xi\mathcal{S}_{\kappa,\varepsilon}^{(r_{g})}(\xi w) =-\frac{2}{2+\beta}\Theta(\gamma_{\text{max}}-w)\Theta(w-\gamma)g (w)\,,\] such that from Eq. 
(110), the one-loop kernels are given by \[\mathcal{R}_{\kappa}^{(\text{sd})}(\Omega) =-\int_{0}^{\infty}\text{d}w\;\mathcal{L}_{0}^{-\Omega}\big{(}1-w \big{)}\big{[}\Theta(\gamma_{\text{max}}-w)\Theta(w)h(w)\big{]}_{+}^{[\gamma_{ \text{max}}]}\,, \tag{114}\] \[\mathcal{R}_{\kappa}^{(r_{g})}(\Omega) =\int_{0}^{\infty}\text{d}w\;\mathcal{L}_{0}^{-\Omega}\big{(}1-w \big{)}\Theta(w-\gamma)\bigg{[}f(w)+\Theta(\gamma_{\text{max}}-w)h(w)\bigg{]}\,,\] \[\mathcal{R}_{\kappa,\varepsilon}^{\prime(r_{g})}(\Omega) =\mathcal{L}_{0}^{-\Omega}\big{(}1-\gamma\big{)}\,,\] and the two-loop kernels \({\cal R}^{(\text{sd},r_{g})}_{\kappa}\) and \({\cal R}^{\prime AB}_{\kappa,\varepsilon}\) being \[{\cal R}^{(\text{sd},r_{g})}_{\kappa} =-\int_{0}^{\infty}\text{d}w\,\text{d}u\,{\cal L}_{0}^{-\Omega}(1- w-u)\big{[}\Theta(\gamma_{\text{max}}-u)\Theta(u)h(u)\big{]}_{+}^{[\gamma_{\text{ max}}]} \tag{114}\] \[\qquad\qquad\times\Theta(w-\gamma)\Big{[}f(w)+\Theta(\gamma_{ \text{max}}-w)h(w)\Big{]}\,,\] \[{\cal R}^{\prime(\text{sd},r_{g})}_{\kappa,\varepsilon}(\Omega) =-\int_{0}^{\infty}\text{d}w\,{\cal L}_{0}^{-\Omega}\big{(}1-w- \gamma\big{)}\big{[}\Theta(\gamma_{\text{max}}-w)\Theta(w)h(w)\big{]}_{+}^{[ \gamma_{\text{max}}]}\,,\] \[{\cal R}^{\prime(r_{g},\text{sd})}_{\kappa,\varepsilon}(\Omega) =+\Psi^{-1}\int_{0}^{\infty}\text{d}w\,\text{d}u\,{\cal L}_{0}^{ -\Omega}\big{(}1-u-w\big{)}f^{\prime}(w)\Theta(w-\gamma)\big{[}\Theta(\gamma_ {\text{max}}-u)\Theta(u)g(u)\big{]}_{+}^{[\gamma_{\text{max}}]}\,,\] \[{\cal R}^{\prime(r_{g},r_{g})}_{\kappa,\varepsilon}(\Omega) =-\Psi^{-1}\int_{0}^{\infty}\text{d}w\,\text{d}u\,{\cal L}_{0}^{- \Omega}\big{(}1-u-w\big{)}f^{\prime}(w)\Theta(w-\gamma)\Theta(\gamma_{\text{ max}}-u)\Theta(u-\gamma)g(u)\] \[\quad+\int_{0}^{\infty}\text{d}w\,{\cal L}_{0}^{-\Omega}\big{(}1 -w-\gamma\big{)}\Theta(w-\gamma)\bigg{[}f(w)+\Theta(\gamma_{\text{max}}-w)h(w )\bigg{]}\,.\] Note that we have not included \({\cal R}^{(\text{sd},\text{sd})}_{\kappa,\varepsilon}\) in the list above as it does not carry any \(r_{g}\) dependence and vanishes upon taking \(r_{g}\)-derivative. 
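Before compactifying these kernels it is worth noting the only numerically delicate point: the integrable endpoint singularity of \(\mathcal{L}_{0}^{-\Omega}\) at \(w\to 1-u\), which is handled by subtracting the integrand's endpoint value and treating the subtracted piece analytically. The closed-form transforms introduced next do exactly this; as a rough quadrature sketch (treating \(\Omega\) as a fixed nonzero real parameter and using a placeholder smooth weight \(F\), so this is illustrative rather than the actual implementation):

```python
from scipy.integrate import quad
import numpy as np

def q_transform(F, u, u_min, Omega):
    """Subtracted evaluation of  int dw L_0^{-Omega}(1-u-w) F(w) Theta(w-u_min).

    The endpoint value F(1-u) is handled analytically (plus-prescription boundary
    term), the smooth remainder numerically.  Sketch only: Omega real, nonzero and
    below one; F smooth on [u_min, 1-u].
    """
    if 1.0 - u_min - u <= 0.0:
        return 0.0
    f_end = F(1.0 - u)
    boundary = f_end / (-Omega * (1.0 - u_min - u) ** Omega)
    bulk, _ = quad(lambda w: (F(w) - f_end) / (1.0 - w - u) ** (1.0 + Omega),
                   u_min, 1.0 - u, limit=200)
    return boundary + bulk

# toy check with a smooth placeholder weight
print(q_transform(lambda w: np.log(1.0 + w), u=0.0, u_min=0.2, Omega=0.3))
```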
To further compactify these expressions we introduce the following transforms acting on a function or a distribution \(F(w)\): \[q\{F\}(u,u_{\text{min}}) \equiv\Theta(1-u_{\text{min}}-u)\int_{0}^{\infty}\text{d}w\,{ \cal L}_{0}^{-\Omega}\big{(}1-u-w\big{)}F(w)\Theta(w-u_{\text{min}})\,, \tag{115}\] \[q_{+}\{F\}(u,\,u_{\text{max}}) \equiv\Theta(1-u)\int_{0}^{\infty}\text{d}w\,{\cal L}_{0}^{- \Omega}\big{(}1-u-w\big{)}\Big{[}\Theta(u_{\text{max}}-w)\Theta(w)F(w)\Big{]} _{+}^{[u_{\text{max}}]}\,.\] In terms of these transforms the above expressions become \[{\cal R}^{(\text{sd})}_{\kappa}(\Omega) =-q_{+}\{h\}(0,\gamma_{\text{max}})\,, \tag{116}\] \[{\cal R}^{(r_{g})}_{\kappa}(\Omega) =q\{f+h\}(0,\gamma)-q\{h\}(0,\gamma_{\text{max}})\,,\] \[{\cal R}^{(\text{sd},r_{g})}_{\kappa}(\Omega) =-\int_{\gamma}^{\text{min}\{\gamma_{\text{max}},1\}}\text{d}w\,q _{+}\{h\}\big{(}w,\gamma_{\text{max}}\big{)}\Big{[}f(w)+\Theta(\gamma_{\text{ max}}-w)h(w)\Big{]}\,,\] \[{\cal R}^{\prime(\text{sd},r_{g})}_{\kappa,\varepsilon}(\Omega) =-q_{+}\{h\}(\gamma,\gamma_{\text{max}})\,,\] \[{\cal R}^{\prime(r_{g},\text{sd})}_{\kappa,\varepsilon}(\Omega) =\Psi^{-1}\int_{0}^{\gamma_{\text{max}}}\text{d}u\,g(u)\Big{[}q \{f^{\prime}\}(u,\gamma)-q\{f^{\prime}\}(0,\gamma)\Big{]}\,,\] \[{\cal R}^{\prime(r_{g},r_{g})}_{\kappa,\varepsilon}(\Omega) =-\Psi^{-1}\int_{\gamma}^{\gamma_{\text{max}}}\text{d}u\,g(u)\,q \{f^{\prime}\}(u,\gamma)+\Big{[}q\{f+h\}(\gamma,\gamma)-q\{h\}(\gamma,\gamma_{ \text{max}})\Big{]}\,.\] The transform on the combination \(f+h\) in the second and last line is necessary to stabilize the numerical integration. The transforms simplify to the following expressions: \[q\{F\}(u,\,u_{\rm min}) =\left[\frac{F(1-u)}{-\Omega\big{(}1-u_{\rm min}-u\big{)}^{\Omega}}+ \int_{u_{\rm min}}^{1-u}{\rm d}w\;\frac{F(w)-F(1-u)}{\big{(}1-w-u\big{)}^{1+ \Omega}}\right],\] \[q_{+}\{F\}(u,\,u_{\rm max}) =\Theta(1-u-u_{\rm max})\int_{0}^{u_{\rm max}}{\rm d}w\,F(w) \Big{[}\big{(}1-w-u\big{)}^{-1-\Omega}-\big{(}1-u\big{)}^{-1-\Omega}\Big{]}\] \[\quad+\Theta(u_{\rm max}-1+u)\Bigg{[}\int_{0}^{1-u}{\rm d}w\; \bigg{(}\frac{\big{(}F(w)-F(1-u)\big{)}}{\big{(}1-w-u\big{)}^{1+\Omega}}- \frac{F(w)}{\big{(}1-u\big{)}^{1+\Omega}}\bigg{)}\] \[\quad-\int_{1-u}^{u_{\rm max}}{\rm d}w\;\frac{F(w)}{\big{(}1-u \big{)}^{1+\Omega}}+\frac{F(1-u)}{-\Omega\big{(}1-u\big{)}^{\Omega}}\Bigg{]}\,.\] (B.12) In the first expression, the \(w\) integration always sees the zero of the argument of \(\mathcal{L}_{0}^{-\Omega}\) for every value of \(u\). In the second expression, we see that only in the second and third line is the plus prescription of \(\mathcal{L}_{0}^{-\Omega}\) necessary. In writing the integrals in the form shown above, we regulate singularities in \(F(w)\) at \(w\to 0\) and in \(\mathcal{L}_{0}^{-\Omega}(1-u)\) for \(u\to 1\). Finally, we also state the results for \(\Omega\to 0\) limit: \[\lim_{\Omega\to 0}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}q\{F\}(u,\,u_{ \rm min}) =\Theta(1-u_{\rm min}-u)F(1-u)\,,\] (B.13) \[\lim_{\Omega\to 0}\frac{e^{\gamma_{E}\Omega}}{\Gamma(-\Omega)}q_{+}\{F \}(u,\,u_{\rm max}) =\Big{[}\Theta\big{(}u_{\rm max}-(1-u)\big{)}\Theta(1-u)F(1-u) \Big{]}_{+}^{[u_{\rm max}]}\,.\] ## Appendix C Profile functions Here we discuss the profile functions that incorporate the appropriate canonical scales for various factorization functions as well as enable us to estimate perturbative uncertainty through their varaitions. Here we simply summarize the final formulae of original implementation discussed in great detail in Ref. 
[71] in the soft drop resummation region, which was further extended into the plain jet mass region in Ref. [51]. Firstly, the hard scale and global-soft scales are defined as \[\mu_{N}\equiv e_{N}Q\,,\qquad\mu_{gs}\equiv e_{N}Q_{\rm cut}\,,\qquad e_{N} \in[0.5,2]\,,\] (C.1) where the parameter \(e_{N}\) is varied in the range shown. In the fixed-order cross section we use the same \(\mu_{N}\) scale. We vary the two scales with the same parameters so as to be consistent with matching at the soft drop cusp by ensuring that \(\mu_{gs}/\mu_{N}=\xi_{0}\). ### Plain jet mass profiles Next, we summarize the profile scales and their variations associated with the plain jet mass cross section as described in Ref. [51]: \[\tilde{\mu}_{s}^{\text{plain}}(\xi;\lambda) \equiv\mu_{N}\Big{[}f_{\text{vary}}^{\text{plain}}(\xi)\Big{]}^{ \lambda}f_{\text{run}}^{\text{plain}}(\xi)\,, \tag{108}\] \[\mu_{s}^{\text{plain}}(\xi;\lambda) \equiv f_{\text{freeze}}\big{[}\tilde{\mu}_{s}^{\text{plain}}(\xi; \lambda)\big{]}\,,\] \[\mu_{J}^{\text{plain}}(\xi;\lambda,\gamma) \equiv\mu_{N}^{\frac{1}{2}+\gamma}\big{(}\mu_{s}^{\text{plain}}( \xi;\lambda)\big{)}^{\frac{1}{2}-\gamma}\,.\] The jet and soft scales being proportional to \(\mu_{N}\) inherit the \(e_{N}\) norm-variation in Eq. (106). The interpolation of the canonical ungroomed soft scales between various regions is governed by \(f_{\text{run}}^{\text{plain}}(\xi)\) given by \[f_{\text{run}}^{\text{plain}}(\xi)\equiv\left\{\begin{array}{ll}x_{0}\big{(} 1+\frac{\xi^{2}}{4x_{0}^{2}}\big{)}&\xi\leq 2x_{0}\\ \xi&2x_{0}<\xi\leq x_{1}\\ \xi+\frac{(2-x_{2}-x_{3})(\xi-x_{1})^{2}}{2(x_{2}-x_{1})(x_{3}-x_{1})}&x_{1}< \xi\leq x_{2}\\ 1-\frac{(2-x_{1}-x_{2})(\xi-x_{3})^{2}}{2(x_{3}-x_{1})(x_{3}-x_{2})}&x_{2}<\xi \leq x_{3}\\ 1&x_{3}<\xi\leq 1\end{array}\right.\,. \tag{109}\] The regions \(\xi<x_{0}\), \(2x_{0}<\xi<x_{1}\) and \(x_{3}<\xi<1\) respectively correspond to the ungroomed-nonperturbative, ungroomed-resummation and fixed order region. \(x_{0}\) is determined by the point where the soft scale freezes to a nonperturbative scale, \[x_{0}=\frac{n_{0}}{(\mu_{N}/1\text{GeV})}\,, \tag{110}\] where we take the default value of \(n_{0}=1\). Additionally, varying this parameter tests sensitivity of the cross section to nonperturbative effects: \[\text{Vary }\alpha_{s}\text{ freezing scale:}\qquad\qquad\qquad n_{0}\in[0.75,1.2 5]\,. \tag{111}\] The other three parameters take different values depending on the transition point \(\xi_{0}\). Firstly, \(x_{3}\) is given by \[\xi_{0}<0.2 \qquad\qquad\qquad:\qquad\qquad\qquad x_{3}=0.2\,, \tag{112}\] \[0.2\leq\xi_{0}<0.25 \qquad\qquad\qquad:\qquad\qquad\qquad x_{3}=\frac{0.25+\xi_{0}}{2 }\,,\] \[0.25\leq\xi_{0} \qquad\qquad\qquad:\qquad\qquad\qquad\qquad x_{3}=\min\{\xi_{0}+0. 1,1\}\,,\] and \(x_{1}\) and \(x_{2}\) are given by \[x_{1}=\Theta(x_{3}-1.15\xi_{0})1.15\xi_{0}+\Theta(1.15\xi_{0}-x_{3})\xi_{0}\,, \qquad x_{2}=\frac{x_{1}+x_{3}}{2}\,. \tag{113}\] Having defined the canonical soft scale \(\tilde{\mu}_{s}(\xi)\) we implement its variation in the plain jet mass resummation region alone, different than the overall normalization probed by \(e_{N}\), using the trumpet function \[f_{\rm vary}^{\rm plain}(\xi)\equiv\left\{\begin{array}{ll}1&0<\xi\leq\xi_{0} \\ \zeta\big{(}\xi,\xi_{0},x_{\rm mid};1,2\big{)}\,,&\xi_{0}\leq\xi<x_{\rm mid}\\ \zeta\big{(}\xi,x_{\rm mid},x_{3};2,1\big{)}\,,&x_{\rm mid}\leq\xi\leq x_{3}\\ 1&x_{3}<\xi\leq 1\end{array}\right.. 
\tag{112}\] where \[x_{\rm mid}\equiv\frac{\xi_{0}+x_{3}}{2}\,, \tag{113}\] and \[\zeta\big{(}\xi,x_{\rm start},x_{\rm end};a_{1},a_{2}\big{)}\equiv\left\{ \begin{array}{ll}a_{1}&\xi\leq x_{\rm start}\\ a_{1}+\frac{2(a_{2}-a_{1})(\xi-x_{\rm start})^{2}}{(x_{\rm end}-x_{\rm start})^ {2}}\,,&x_{\rm start}\leq\xi<\frac{x_{\rm start}+x_{\rm end}}{2}\\ a_{2}-\frac{2(a_{2}-a_{1})(x_{\rm end}-\xi)^{2}}{(x_{\rm end}-x_{\rm start})^ {2}}\,,&\frac{x_{\rm start}+x_{\rm end}}{2}\leq\xi\leq x_{\rm end}\\ a_{2}&x_{\rm end}<\xi\end{array}\right.. \tag{114}\] This variation is controlled via the parameter \(\lambda\): \[\text{Trumpet variation in plain jet mass region:}\qquad\quad\lambda\in[-1,0.3]\,. \tag{115}\] Finally we avoid the scale varying below the nonperturbative scale \(n_{0}\) by re-freezing via the function \[f_{\rm freeze}[\mu]\equiv\left\{\begin{array}{ll}\mu\,,&\mu\geq 2n_{0}\\ n_{0}\Big{(}1+\frac{\mu^{2}}{4n_{0}^{2}}\Big{)}\,,&\mu<2n_{0}\end{array}\right.. \tag{116}\] The jet scale in Eq. (110) is derived from the ungroomed soft scale using the canonical see-saw relation with \(\gamma=0\). Breaking this canonical relation defines another variation: \[\text{Break jet-hard-soft see-saw canonical relation:}\qquad\quad\gamma\in[-0.1,0.1]\,, \tag{117}\] We have written a "plain" subscript on the jet scale as we use a sightly different prescription for freezing in the nonperturbative region for the jet scale in the soft drop resummed cross section below. ### Soft drop profiles We first summarize min-\(R_{g}\) profiles: \[\tilde{\mu}_{cs_{g}}(r_{g};\alpha) \equiv\left(f_{\rm vary}^{\rm sd\,res.}(r_{g}^{1+\beta})\right)^{ \alpha}\mu_{gs}\,f_{\rm run}^{\rm sd\,res.}\!\left(r_{g}^{1+\beta}\right), \tag{118}\] \[\mu_{cs_{g}}(r_{g};\alpha) \equiv f_{\rm freeze}\big{[}\tilde{\mu}_{cs_{g}}(r_{g};\alpha)\big{]}\,,\] \[\mu_{\mathcal{C}}(r_{g};\alpha,\gamma) \equiv\mu_{N}\bigg{(}\frac{\mu_{cs_{g}}(r_{g};\alpha)}{\mu_{gs}} \bigg{)}^{\frac{1-\gamma}{1+\gamma}\frac{1}{1+\beta}},\] Here the soft scale depends on the groomed jet radius \(r_{g}\) instead of the jet mass, and the interpolating function is defined to be \[f_{\rm run}^{\rm sd\,res.}(r_{g})\equiv\left\{\begin{array}{ll}y_{0}\Big{(}1+ \frac{r_{g}^{2}}{4y_{0}^{2}}\Big{)}&\qquad r_{g}\leq 2y_{0}\\ r_{g}&\qquad 2y_{0}<r_{g}\leq 1\end{array}\right.\,. \tag{111}\] Here, the parameter governing the transition into the (soft drop) nonperturbative region is given by \[y_{0}\equiv\frac{n_{0}}{(\mu_{gs}/1\,{\rm GeV})}>\frac{\Lambda_{\rm QCD}}{Q_{ \rm cut}}\,. \tag{112}\] As above, the trumpet variation that vanishes at the end points is governed by \[f_{\rm vary}^{\rm sd\,res.}(r_{g})=\left\{\begin{array}{ll}2(1-r_{g}^{2})\,, &\qquad r_{g}<0.5\\ 1+2(1-r_{g})^{2}\,,&\qquad 0.5\leq r_{g}\leq 1\end{array}\right.\,, \tag{113}\] With \(\alpha=0\) in Eq. (110) the new variation we have is given by Trumpet variation in the soft drop resummation region: \[\alpha\in[-1,1]\,.\] (114) As in the case of jet scale above, the hard-collinear scale \(\mu_{\mathcal{C}}\) is derived from the \(\mu_{cs_{g}}\) scale as shown in Eq. (110) using the canonical relation for \(\gamma=0\). To ensure that the jet scale in the max-\(R_{g}\) and int-\(R_{g}\) region merges with the hard-collinear scale for \(\gamma\neq 0\), we are required to use the same parameter for the two and vary them together as in Eq. (110). 
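Since all of the profile scales above are assembled from the same few piecewise building blocks, a direct transcription can be useful as a cross-check. The sketch below (plain Python; the breakpoints \(x_{0},x_{1},x_{2},x_{3}\), the cusp location \(\xi_{0}\) and the norm scale \(\mu_{N}\) are assumed to be supplied as described above) implements the interpolating function \(\zeta\), the running shape \(f_{\rm run}^{\rm plain}\), the two trumpet factors and the freezing function.

```python
def zeta(xi, x_start, x_end, a1, a2):
    """Piecewise-quadratic interpolation from a1 to a2 between x_start and x_end."""
    if xi <= x_start:
        return a1
    if xi > x_end:
        return a2
    mid = 0.5 * (x_start + x_end)
    if xi < mid:
        return a1 + 2.0 * (a2 - a1) * (xi - x_start) ** 2 / (x_end - x_start) ** 2
    return a2 - 2.0 * (a2 - a1) * (x_end - xi) ** 2 / (x_end - x_start) ** 2

def f_run_plain(xi, x0, x1, x2, x3):
    """Canonical ungroomed soft-scale shape interpolating between the
    nonperturbative, resummation and fixed-order regions."""
    if xi <= 2.0 * x0:
        return x0 * (1.0 + xi ** 2 / (4.0 * x0 ** 2))
    if xi <= x1:
        return xi
    if xi <= x2:
        return xi + (2.0 - x2 - x3) * (xi - x1) ** 2 / (2.0 * (x2 - x1) * (x3 - x1))
    if xi <= x3:
        return 1.0 - (2.0 - x1 - x2) * (xi - x3) ** 2 / (2.0 * (x3 - x1) * (x3 - x2))
    return 1.0

def f_vary_plain(xi, xi0, x3):
    """Trumpet factor for scale variation in the plain jet mass region."""
    x_mid = 0.5 * (xi0 + x3)
    if xi <= xi0 or xi > x3:
        return 1.0
    return zeta(xi, xi0, x_mid, 1.0, 2.0) if xi < x_mid else zeta(xi, x_mid, x3, 2.0, 1.0)

def f_vary_sd(rg):
    """Trumpet factor for scale variation in the soft drop resummation region."""
    return 2.0 * (1.0 - rg ** 2) if rg < 0.5 else 1.0 + 2.0 * (1.0 - rg) ** 2

def f_freeze(mu, n0=1.0):
    """Quadratic freezing of a scale onto the nonperturbative scale n0 (in GeV)."""
    return mu if mu >= 2.0 * n0 else n0 * (1.0 + mu ** 2 / (4.0 * n0 ** 2))

def mu_s_plain(xi, mu_N, lam, x0, x1, x2, x3, xi0, n0=1.0):
    """Frozen plain-jet-mass soft scale built from the pieces above."""
    tilde = mu_N * f_vary_plain(xi, xi0, x3) ** lam * f_run_plain(xi, x0, x1, x2, x3)
    return f_freeze(tilde, n0)
```

The jet and hard-collinear scales then follow from the see-saw relations quoted above.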
Next, we summarize the implementation of profiles for max-\(R_{g}\) cross section in soft drop resummation region: \[\tilde{\mu}_{s}^{\rm sd\,res.}(\xi) \equiv\mu_{gs}\left[f_{\rm run}^{\rm sd\,res.}\bigg{(}\Big{(} \frac{\xi}{\xi_{0}}\Big{)}^{\frac{1+\beta}{2+\beta}}\bigg{)}\right]^{\frac{2+ \beta}{1+\beta}}, \tag{115}\] \[\tilde{\mu}_{cs}(\xi;\alpha,\rho) \equiv\left[f_{\rm vary}^{\rm sd\,res.}\bigg{(}\Big{(}\frac{\xi}{ \xi_{0}}\Big{)}^{\frac{1+\beta}{2+\beta}}\bigg{)}\right]^{\alpha}\big{(} \tilde{\mu}_{s}^{\rm sd\,res.}(\xi)\big{)}^{\frac{1+\beta+\rho}{2+\beta}} \big{(}\mu_{gs}\big{)}^{\frac{1-\rho}{2+\beta}}\,,\] \[\mu_{cs}(\xi;\alpha,\rho) \equiv f_{\rm freeze}\big{[}\tilde{\mu}_{cs}(\xi;\alpha,\rho)\big{]}\,,\] \[\mu_{J}(\xi;\alpha,\gamma,\rho) \equiv\big{[}\mu_{\mathcal{C}}(r_{g};\alpha,\gamma)\big{]}^{ \frac{1}{2}+\gamma}\Bigg{[}\mu_{cs}(\xi;\alpha,\rho)\left(\frac{\mu_{cs}(\xi; \alpha,\rho)}{\mu_{gs}}\right)^{\frac{1}{1+\beta}}\Bigg{]}^{\frac{1}{2}-\gamma}.\] In addition to the functions and variation parameters discussed already above, we have a new canonical relation between the ungroomed soft scale, collinear-soft scale and global-soft scale. This relation allows us to define the c-soft scale in terms of the other two, whereas the auxiliary "ungroomed-soft scale" \(\tilde{\mu}_{s}^{\rm sd\,res.}(\xi)\) itself is derived using the \(\mu_{cs_{g}}\) scale. Thus, the variation of \(\rho\) with default value \(\rho=0\) now probes the effect of breaking this canonical relation: Break \(\mu_{gs}\), \(\mu_{cs}\) and plain soft see-saw canonical relation: \[\rho\in[-0.1,0.1]\,.\] (116) Lastly, in the intermediate region, we use the \(\mu_{J},\mu_{gs},\mu_{cs_{g}}\) and the hard scale above, and use the soft drop canonical see-saw relation to define the \(\mu_{cs_{m}}\) scale: \[\mu_{cs_{m}}(\xi,r_{g};\alpha,\rho)\equiv\mu_{cs}(\xi;\alpha,\rho) \left(\frac{\mu_{cs}(\xi;\alpha,\rho)}{\mu_{gs}}\right)^{\frac{1}{1+\beta}} \!\left(\frac{\mu_{cs_{g}}(r_{g};\alpha)}{\mu_{gs}}\right)^{\frac{-(1-\rho)}{1 +\rho+\beta}}. \tag{102}\] This concludes the summary of profile scales implementation and their variations. ## Appendix D Weight functions Here we describe how the weight functions constructed in Ref. [71] are extended into the plain jet mass resummation region. We first list down the relevant canonical profile scales \[\mu_{J}^{\text{can.}}=Q\sqrt{\xi}\,,\qquad\mu_{gs}^{\text{can.}} =Q_{\text{cut}}\,,\qquad\mu_{cs_{m}}^{\text{can.}}=Q\xi/r_{g}\,,\qquad\mu_{cs_ {g}}^{\text{can.}}=Q_{\text{cut}}r_{g}^{1+\beta}\,, \tag{103}\] Using these scales and the size of power corrections in the intermediate-\(R_{g}\) regime in Eq. (113) we can define parameters that control transition from intermediate to min and max-\(R_{g}\) regimes. For transition from intermediate to min-\(R_{g}\) regime, we define \[\lambda_{\text{min}}\equiv\frac{\mu_{cs_{m}}}{\mu_{J}}=\frac{ \sqrt{\xi}}{r_{g}}\,. \tag{104}\] In transitioning to the max-\(R_{g}\) regime, we define two parameters for cases \(\xi<\xi_{0}^{\prime}\) and \(\xi>\xi_{0}^{\prime}\): \[\lambda_{\text{max}}^{\text{sd.res.}}\equiv\frac{\mu_{cs_{g}}}{ \mu_{cs_{m}}}=\frac{\xi_{0}}{\xi}r_{g}^{2+\beta}\,,\qquad\qquad\lambda_{ \text{max}}^{\text{plain}}\equiv\frac{\mu_{cs_{g}}}{\mu_{gs}}=r_{g}^{1+\beta }\,. 
\tag{105}\] Next, to determine whether or not the intermediate regime is required, we calculate the angle \(r_{g,t}\) for which \(\lambda_{\text{min}}=\lambda_{\text{max}}\): \[r_{g,t}^{\text{sd.res.}}(\xi)\equiv\Big{[}\frac{\xi}{\xi_{0}} \sqrt{\xi}\Big{]}^{\frac{1}{3+\beta}}\,,\qquad r_{g,t}^{\text{plain}}(\xi) \equiv\big{(}\sqrt{\xi}\big{)}^{\frac{1}{2+\beta}}\,. \tag{106}\] which can be combined as \[r_{g,t}(\xi)\equiv\Big{(}\frac{\xi}{\xi_{0}}\Big{)}^{a_{\text{sd.}}(\log_{10}\xi)}(\sqrt{\xi})^{a_{\text{plain}}(\log_{10}\xi)}\,, \tag{107}\] where the exponents are modified as a function of \(\xi\), and are given by \[a_{\text{sd.}}(\log_{10}\xi)\equiv a\Big{(}\log_{10}(\xi);\;\frac{1}{3+\beta},0\Big{)}\,, \tag{108}\] \[a_{\text{plain}}(\log_{10}\xi)\equiv a\Big{(}\log_{10}(\xi);\;\frac{1}{3+\beta },\frac{1}{2+\beta}\Big{)}\,,\] where \(a(\log_{10}(\xi);x,y)\) is a function that smoothly transitions from the value \(x\) to \(y\) as \(\xi\) is increased past \(\xi=\xi_{0}^{\prime}\), and is defined as \[a(\log_{10}(\xi);\,x,y)\equiv\zeta\big{(}\log_{10}\xi,\;\big{[} \log_{10}(\xi_{0}^{\prime})-\delta\xi\big{]},\;\big{[}\log_{10}(\xi_{0}^{ \prime})+\delta\xi\big{]};\,x,y\big{)}\,, \tag{109}\] where the function \(\zeta(\xi,x_{\rm start},x_{\rm end};a_{1},a_{2})\) was defined above in Eq. (111) and we set \(\delta\xi=0.75\). We will take \(\lambda=1/3\) as the reference power correction to determine the validity of the intermediate regime; i.e. if \(\lambda_{\rm max},\lambda_{\rm min}<\lambda\) then we implement resummation in the intermediate region defined by \[r_{g,{\rm PC}}^{\rm min}(\xi,\lambda)\leq r_{g}\leq r_{g,{\rm PC}}^{\rm max}( \xi,\lambda)\,, \tag{112}\] where \[r_{g,{\rm PC}}^{\rm min}(\xi,\lambda)\equiv\frac{\sqrt{\xi}}{ \lambda}\,, \tag{113}\] Now for the max-\(R_{g}\) regime, we have two cases: \[r_{g,{\rm PC}}^{\rm sd.res.}(\xi,\lambda)\equiv\Big{(}\frac{ \xi}{\xi_{0}}\lambda\Big{)}^{\frac{1}{2+\beta}}\,,\qquad r_{g,{\rm PC}}^{\rm plain }(\xi,\lambda)\equiv\lambda^{\frac{1}{1+\beta}}\,, \tag{114}\] the following combination of which defines \(r_{g,{\rm PC}}^{\rm max}\) in Eq. (112): \[r_{g,{\rm PC}}^{\rm max}(\xi,\lambda)\equiv\Big{(}\frac{\xi}{ \xi_{0}}\Big{)}^{a_{\rm sd}^{\rm PC}(\log_{10}\xi)}\lambda^{a_{\rm plain}^{ \rm PC}(\log_{10}\xi)}\,, \tag{115}\] where \[a_{\rm sd}^{\rm PC}(\log_{10}\xi)\equiv a\Big{(}\log_{10}\xi;\; \frac{1}{2+\beta},0\Big{)}\,,\] \[a_{\rm plain}^{\rm PC}(\log_{10}\xi)\equiv a(\log_{10}\xi;\; \frac{1}{2+\beta},\frac{1}{1+\beta}\Big{)}\,. \tag{116}\] We finally introduce a transition function \(X(r_{g},r_{g,t})\) for a given value of \(\xi\): \[X(r_{g},r_{g,t})=\frac{1}{2}\bigg{(}1+\tanh\Big{(}x_{t}\frac{r_ {g}-r_{g,t}}{r_{g}^{\rm max}(\xi)-r_{g}^{\rm min}(\xi)}\Big{)}\bigg{)}\,, \qquad x_{t}=20\,, \tag{117}\] and design the weight functions for the three EFT regimes as \[\text{2-EFT:}\quad w_{\rm max}=X(r_{g},r_{g,t}(\xi))\,,\qquad\quad w _{\rm int}=0\,, \tag{118}\] \[\text{3-EFT:}\quad w_{\rm max}=X(r_{g},r_{g,{\rm PC}}^{\rm max}( \xi,\lambda))\,,\quad w_{\rm int}=\big{[}1-X(r_{g},r_{g,{\rm PC}}^{\rm max}( \xi,\lambda))\big{]}X(r_{g},r_{g,{\rm PC}}^{\rm min}(\xi,\lambda))\,,\] In either of these cases \(w_{\rm min}=1-w_{\rm int}-w_{\rm max}\). With the above construction, these weight functions turn on in their respective regime.
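Because these weights are what stitches the EFT regimes together in the final matched prediction, a direct transcription may again be helpful. In the sketch below the kinematic endpoints \(r_{g}^{\rm min}(\xi)\), \(r_{g}^{\rm max}(\xi)\) and the transition angles \(r_{g,t}\), \(r_{g,{\rm PC}}^{\rm min}\), \(r_{g,{\rm PC}}^{\rm max}\) are assumed to be supplied by the surrounding code, exactly as defined above.

```python
import numpy as np

def transition(rg, rg_t, rg_min, rg_max, x_t=20.0):
    """Smooth tanh switch X(rg, rg_t) between neighbouring EFT regimes."""
    return 0.5 * (1.0 + np.tanh(x_t * (rg - rg_t) / (rg_max - rg_min)))

def eft_weights(rg, rg_min, rg_max, rg_t=None, rg_pc_min=None, rg_pc_max=None):
    """Return (w_max, w_int, w_min).

    2-EFT case: only rg_t is given and the intermediate regime is dropped.
    3-EFT case: rg_pc_min / rg_pc_max delimit the intermediate-Rg regime.
    """
    if rg_pc_max is None:                     # 2-EFT matching
        w_max = transition(rg, rg_t, rg_min, rg_max)
        w_int = 0.0
    else:                                     # 3-EFT matching
        w_max = transition(rg, rg_pc_max, rg_min, rg_max)
        w_int = (1.0 - w_max) * transition(rg, rg_pc_min, rg_min, rg_max)
    return w_max, w_int, 1.0 - w_int - w_max
```

By construction the three weights are smooth in \(r_{g}\) and sum to one.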
2303.10206
Non-stationary $α$-fractal functions and their dimensions in various function spaces
In this article, we study the novel concept of non-stationary iterated function systems (IFSs) introduced by Massopust in 2019. At first, using a sequence of different contractive operators, we construct non-stationary $\alpha$-fractal functions on the space of all continuous functions. Next, we provide some elementary properties of the fractal operator associated with the nonstationary $\alpha$-fractal functions. Further, we show that the proposed interpolant generalizes the existing stationary interpolant in the sense of IFS. For a class of functions defined on an interval, we derive conditions on the IFS parameters so that the corresponding non-stationary $\alpha$-fractal functions are elements of some standard spaces like bounded variation space, convex Lipschitz space, and other function spaces. Finally, we discuss the dimensional analysis of the corresponding non-stationary $\alpha$-fractal functions on these spaces.
Anarul Islam Mondal, Sangita Jha
2023-03-17T18:57:41Z
http://arxiv.org/abs/2303.10206v1
# Non-stationary \(\alpha\)-fractal functions and their dimensions in various function spaces ###### Abstract. In this article, we study the novel concept of non-stationary iterated function systems (IFSs) introduced by Massopust in 2019. At first, using a sequence of different contractive operators, we construct non-stationary \(\alpha\)-fractal functions on the space of all continuous functions. Next, we provide some elementary properties of the fractal operator associated with the non-stationary \(\alpha\)-fractal functions. Further, we show that the proposed interpolant generalizes the existing stationary interpolant in the sense of IFS. For a class of functions defined on an interval, we derive conditions on the IFS parameters so that the corresponding non-stationary \(\alpha\)-fractal functions are elements of some standard spaces like bounded variation space, convex Lipschitz space, and other function spaces. Finally, we discuss the dimensional analysis of the corresponding non-stationary \(\alpha\)-fractal functions on these spaces. Key words and phrases:Fractal functions (primary) and Attractor and Non-stationary iterated function system, and Function spaces and Fractal dimension 2020 Mathematics Subject Classification: 28A80 (primary) 26A18 and 35B41 and 41A30 and 46B70 ## 1. Introduction Traditional interpolants, such as polynomials, trigonometric, rational, and spline functions, are always differentiable many times, with the possible exception of a finite set of points. On the other hand, real-world problems are complicated and rarely exhibit a sense of smoothness in their traces. Fractal functions are designed to approximate complicated, extremely irregular structures. However, despite its natural appearance, this approach received significantly less attention until 1986. Barnsley [4, 5] first introduced fractal interpolation functions (FIFs) to address this issue. FIFs, in general, are self-similar/affine and the Hausdorff Besicovitch dimensions of their graphs are non-integers. The main advantage here is the free choice of scaling factors and the self-referentiality character of fractal functions. The free choice allows us to select either a smooth or a non-smooth approximant. One can generalize classical interpolation techniques using smooth FIFs, for instance, see [7]. Inspired by Barnsley's construction of FIFs and targeting the non-smooth approximants, Navascues [20] introduced a family of fractal functions associated with a given continuous \(f\) and the IFS parameters. This method of fractal perturbation provides a bounded linear operator, known as the \(\alpha\)-fractal operator [23, 31], which links the theory of FIF to the area of Functional analysis, Operator theory, Harmonic analysis, and Approximation theory. Also, many researchers have looked into the theory of dimensional and analytical aspects of FIFs in various directions and domains (for example, see the contribution of Chand[7], Viswanathan[31], Vijender[30], Verma[29], Ruan[25], and others [2, 20, 24, 27] and the references therein ). A useful method to construct fractals is by obtaining the fixed points of contractive operators for a special type of IFS [13]. In the existing literature, the fractal defined as the attractor of a single IFS is self-similar, that is, its local shape is consistent under certain contraction maps. However, it has been observed that a sequence of IFSs is used in non-stationary subdivision schemes [10]. Recently, Levin, et al. 
[15] have introduced a broader category of sequences comprising several contractive operators. As a generalisation of the Banach fixed point theorem, they study the trajectories of contraction mappings in [10, 15]. It creates limit attractors with varying shapes or features at various scales. Up to now, researchers have used one contractive operator and iterated it finite or infinite times to get a stationary fractal function, which may not always provide a new class of fractal functions. By utilising the idea of forward and backward trajectories, Massopust [19] introduces new types of fractal functions with various local and global behaviours and expands fractal interpolation to a new and more adaptable environment. In this paper, we define the aforementioned fractal operator on the set of all continuous functions but now in the non-stationary setting. As we use different contractive operators, this may give new fractal functions. We also study the essential properties of such an operator. The study of the fractal operators in various function spaces helps in investigating shape-preserving approximation in those function spaces. We continue to explore the aforementioned fractal operator with the non-stationary setting on the space of functions of bounded variations \(\mathcal{BV}(I)\), function space \(V_{\beta}([0,1])\) and on the convex Lipschitz space \(\mathcal{V}^{\theta}(I)\). The study of computing the dimension of fractal sets is one of the open problems in fractal geometry. Recently, several researchers made serious efforts to compute the box/Hausdorff dimension of fractal functions and \(\alpha\)-fractal functions. We refer the reader to [3, 8] for studying the fractal dimension of stationary \(\alpha\)-fractal functions in a few function spaces and in [26] for studying the box-dimension of general recurrent fractal functions. An effort on calculating the fractal dimension of the Riemann-Liouville fractional integral of 1-dimensional continuous functions was made by Liang [16, 17]. In the present article, we attempt to find a bound of the box dimension of the proposed interpolant by constructing the non-stationary \(\alpha\)-fractal functions in suitable function spaces. Also, we point out that our results are generalizing certain existing results for appropriate parameters. FIFs have been found to have greater advantages than traditional interpolants when it comes to fitting and approximating naturally occurring functions with self-similarity. In practice, the FIF method has been used in disciplines such as image compression [1], signal processing [9], and physics [6] as an alternative to traditional interpolation methods. The stationary FIF can have local or global data point dependence, and FIF maintains self-referentiality. In addition, the non-stationary settings advanced fractal functions to incorporate the scale and location dependent features also. These motivate the study of non-stationary \(\alpha\)-fractal functions and we believe that the work in the present article will also find many applications to the best of our knowledge. The rest of the article is organized as follows. We review the concepts of trajectories and IFSs that are necessary to build non-stationary FIFs in Section 2. We describe the construction of the non-stationary \(\alpha\)-fractal functions in Section 3. The related fractal operator on \(\mathcal{C}(I)\) is explored in Section 4. 
In the final part, we define non-stationary \(\alpha\)-fractal functions on various function spaces and study their dimensions. ## 2. Notation and Preliminaries For a fixed \(k\in\mathbb{N}\), we shall write \[\mathbb{N}_{k}=\{1,2,3,\ldots,k\}\text{ and }\ \mathbb{N}_{k}^{0}=\{0,1,2,3, \ldots,k\}.\] Let \((X,d)\) be a complete metric space. For a map \(w:X\longrightarrow X\), let \[Lip(w)=\sup\left\{\frac{d(w(x),w(y))}{d(x,y)}:x,y\in X,x\neq y\right\}\] denote the Lipschitz constant of \(w\). If \(Lip(w)<\infty\), then \(w\) is called a Lipschitz function, and if \(Lip(w)<1\), then \(w\) is called a contraction. Let \(\mathcal{H}(X)\) denote the collection of all non-empty compact subsets of \(X\). For \(C_{1},C_{2}\in\mathcal{H}(X)\), define their Hausdorff distance as \[h(C_{1},C_{2})=max\{d(C_{1},C_{2}),d(C_{2},C_{1})\},\] where \(d(C_{1},C_{2})=\underset{x\in C_{1}}{\sup}\underset{y\in C_{2}}{\inf}\,d(x,y)\). The space \((\mathcal{H}(X),h)\) is a complete metric space known as the space of fractals. **Definition 2.1**.: An iterated function system(IFS) \(\mathcal{I}=\{X;w_{i}:i\in\mathbb{N}_{N}\}\) consists of a complete metric space \((X,d)\) with \(N\) continuous maps \(w_{i}:X\longrightarrow X\). The IFS \(\mathcal{I}\) is hyperbolic if each \(w_{i}\) in \(\mathcal{I}\) is a contraction. For a hyperbolic IFS \(\mathcal{I}\), the set valued Hutchinson map \(W:\mathcal{H}(X)\longrightarrow\mathcal{H}(X)\) is defined as \[W(B)=\bigcup_{i=1}^{N}w_{i}(B).\] It is known that \(W\) is a contraction map on \(\mathcal{H}(X)\) with the Lipschitz constant \(Lip(W)=\max\{Lip(w_{i}):i=1,2,\ldots,N\}.\) By using the Banach fixed point theorem, there exists a unique \(A\in\mathcal{H}(X)\) such that \(A=W(A)\). This \(A\) is called the attractor of the IFS. The attractor \(A\) can be obtained as the limit of the iterative process \(A_{k}=W(A_{k-1});k\in\mathbb{N}\), where \(A_{0}\in\mathcal{H}(X)\) is any arbitrary set. Notice that as \(A\) satisfies the self-referential equation \[A=W(A)=\bigcup_{i=1}^{N}w_{i}(A),\] the attractor is, in general, a fractal set. Let \((X,d)\) be a complete metric space and \(\{T_{m}\}_{m\in\mathbb{N}}\) be a sequence of transformations on \(X\). **Definition 2.2**.: A subset \(\mathcal{P}\) of \(X\) is called an invariant set of the sequence \(\{T_{m}\}_{m\in\mathbb{N}}\) if for all \(m\in\mathbb{N}\) and for all \(x\in\mathcal{P},T_{m}(x)\in\mathcal{P}\). We shall look at the following result to determine how to obtain an invariant set from a sequence of transformations \(\{T_{m}\}_{m\in\mathbb{N}}\). **Lemma 2.3** ([15]).: _Let \(\{T_{m}\}_{m\in\mathbb{N}}\) be a sequence of transformations on \((X,d)\). Suppose there exists a \(q\in X\) such that for all \(x\in X\)_ \[d(T_{m}(x),q)\leq\mu d(x,q)+M,\mu\in[0,1),M>0.\] _Then the ball \(B_{r}(q)\) of radius \(r=\frac{M}{1-\mu}\) centered at \(q\) is an invariant set for \(\{T_{m}\}_{m\in\mathbb{N}}\)._ **Definition 2.4**.: (Forward and Backward Trajectories) Let \(\{T_{m}\}_{m\in\mathbb{N}}\) be a sequence of Lipschitz maps on the metric space \(X\). The Forward and backward procedures are defined as \[\phi_{m}:=T_{m}\ o\ T_{m-1}\ o\ \ldots\ o\ T_{1}\ \ \text{and}\ \ \psi_{m}:=T_{1}\ o\ T_{2}\ o\ \ldots\ o\ T_{m}.\] The limits of forward trajectories might not always produce new fractal classes, as was noticed in [15]. 
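As a toy illustration of the two composition orders (the alternating maps below are chosen purely for illustration and are not taken from any reference), the forward trajectory keeps tracking the most recently applied map and need not settle, whereas the backward trajectory stabilizes because the accumulated Lipschitz constants shrink to zero:

```python
from functools import reduce

# illustrative alternating contractions of [0, 1]; the sequence {T_m} never
# settles down, so forward and backward trajectories behave very differently
def T(m):
    return (lambda x: 0.5 * x) if m % 2 else (lambda x: 0.5 * x + 0.5)

def forward(m, x):    # phi_m = T_m o ... o T_1
    return reduce(lambda y, k: T(k)(y), range(1, m + 1), x)

def backward(m, x):   # psi_m = T_1 o ... o T_m
    return reduce(lambda y, k: T(k)(y), range(m, 0, -1), x)

print([round(forward(m, 0.2), 4) for m in (10, 11, 20, 21)])   # oscillates near 2/3 and 1/3
print([round(backward(m, 0.2), 4) for m in (10, 11, 20, 21)])  # settles to ~1/3, independent of x
```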
However, backward trajectories converge under relatively moderate conditions, even when forward trajectories do not converge to a (contractive) IFS, and may lead to the generation of new classes of fractal sets. We summarize the result in the following theorem. **Theorem 2.5**.: _[_15_]_ _Let \(\{W_{m}\}_{m\in\mathbb{N}}\) be a family of set-valued maps of the form_ \[W_{m}(A_{0}):=\bigcup_{i=1}^{n_{m}}w_{i,m}(A_{0}),\ \ A_{0}\in\mathcal{H}(X),\] _where the elements are collections \(W_{m}=\{w_{i,m}:i\in\mathbb{N}_{n_{m}}\}\) of contractions constituting an IFS on the complete metric space \((X,d)\). Assume that_ 1. _there exists a nonempty closed invariant set_ \(\mathcal{P}\subset X\) _for_ \(\{w_{i,m}\},\ i\in\mathbb{N}_{n_{m}},m\in\mathbb{N};\)__ _and_ 2. \(\sum\limits_{m=1}^{\infty}\prod\limits_{j=1}^{m}Lip(W_{j})<\infty.\)__ _Then the backward trajectories \(\{\psi_{m}(A_{0})\}\) converge to a unique attractor \(A\subseteq\mathcal{P}\) for any initial \(A_{0}\subseteq\mathcal{P}\)._ We now recall the two notions of fractal dimensions. For more details, readers are encouraged to study the book [11]. **Definition 2.6**.: Let \(N_{\rho}(E)\) be the least number of sets with the diameter at most \(\rho\) that can cover \(E\), where \(E\) is a non-empty bounded subset of \(\mathbb{R}^{n}\). The upper box dimension and lower box dimension of \(E\), respectively, are defined as \[\overline{\dim}_{B}(E)=\limsup_{\delta\to 0}\frac{\log N_{\rho}(E)}{\log \frac{1}{\rho}},\] \[\underline{\dim}_{B}(E)=\liminf_{\rho\to 0}\frac{\log N_{\rho}(E)}{\log \frac{1}{\rho}}.\] **Definition 2.7**.: The diameter of a non-empty set \(U\subset\mathbb{R}^{n}\) is defined as \[|U|=\sup\{|x-y|:x,y\in U\}.\] Let \(E\) be a non-empty bounded subset of \(\mathbb{R}^{n}\). We say that \(\{U_{i}\}\) is a \(\rho\)-cover of \(E\) if \(\{U_{i}\}\) is a countable collection of sets of diameter at most \(\rho\) which covers \(E\). Let \(s\geq 0\). For any \(\rho>0\), we define \[H^{s}_{\rho}(E)=\inf\left\{\sum_{i=1}^{\infty}|U_{i}|^{s}:\{U_{i}\}\text{ is a }\rho-\text{cover of }F\right\}.\] The \(s\)-dimensional Hausdorff measure of \(E\) is defined by \[H^{s}(E)=\lim_{\rho\to 0}H^{s}_{\rho}(E).\] **Definition 2.8**.: Let \(s\geq 0\). The Hausdorff dimension of a set \(E\subset\mathbb{R}^{n}\) is defined by \[\dim_{H}(E)=\sup\{s:H^{s}(E)=\infty\}=\inf\{s:H^{s}(E)=0\}.\] ## 3. Construction of non-stationary \(\alpha\) -fractal function Let \(I=[a,b]\) and \(f:I\longrightarrow\mathbb{R}\) be a continuous function. Define a partition \(\Delta\) by \[\Delta=\{(x_{0},x_{1},\ldots,x_{N}):a=x_{0}<x_{1}<\cdots<x_{N}=b\}.\] For \(i\in\mathbb{N}_{N}\), let \(I_{i}=[x_{i-1},x_{i}]\). Suppose the affine maps \(l_{i}:I\longrightarrow I_{i}\) are defined as follows \(l_{i}(x)=a_{i}x+e_{i},\ i\in\mathbb{N}_{N}\), where \(a_{i},e_{i}\) are chosen in such a way that the maps \(l_{i}\) satisfy \(l_{i}(x_{0})=x_{i-1},\ l_{i}(x_{N})=x_{i}.\) Let \(m\in\mathbb{N}\) and set \(\mathbf{K}=I\times\mathbb{R}\). 
We use the following notation: \[\alpha_{m}:=(\alpha_{1,m},\alpha_{2,m},\ldots,\alpha_{N,m}),\ \ \alpha:=\{\alpha_{m}\}_{m\in\mathbb{N}}\ \text{ and }\ b:=\{b_{m}\}_{m\in\mathbb{N}}.\] We define \(F_{i,m}:\mathbf{K}\longrightarrow\mathbb{R}\) by \[F_{i,m}(x,y)=\alpha_{i,m}(x)y+f(l_{i}(x))-\alpha_{i,m}(x)b_{m}(x),\] where \(\alpha_{i,m}:I\longrightarrow\mathbb{R}\) are continuous functions such that \[\|\alpha\|_{\infty}=\sup\{\|\alpha_{m}\|_{max}:m\in\mathbb{N}\}<1,\ \text{ where }\ ||\alpha_{m}||_{max}=\max\{||\alpha_{i,m}||_{\infty}:i\in\mathbb{N}_{N}\},\] and \(b_{m}\in\mathcal{C}(I)\) such that \[b_{m}\neq f,\ b_{m}(x_{0})=f(x_{0})\ \ and\ \ b_{m}(x_{N})=f(x_{N}).\] For each \(i\in\mathbb{N}_{N}\), we define \[W_{i,m}:\mathbf{K}\longrightarrow I_{i}\times\mathbb{R}\ \text{by}\ W_{i,m}(x,y)=(l_{i}(x),F_{i,m}(x,y)).\] Now we have a sequence of IFSs \[\mathcal{I}_{m}=\{\mathbf{K};W_{i,m}:i\in\mathbb{N}_{N}\}.\] One can show that for each \(m\in\mathbb{N}\), the IFS \(\mathcal{I}_{m}\) has a unique attractor \(G\), and it is the graph of a continuous function that interpolates the given data [4]. **Proposition 3.1**.: _[_22_, Proposition 2.6]_ _Let \(\{T_{m}\}_{m\in\mathbb{N}}\) be a sequence of Lipschitz maps on a complete metric space \((X,d)\) with Lipschitz constant \(\delta_{m}\). If there exists \(x^{*}\) in the space such that the sequence \(\{d(x^{*},T_{m}(x^{*}))\}\) is bounded, and \(\sum_{m=1}^{\infty}\prod_{i=1}^{m}\delta_{i}<\infty\), then the sequence \(\{\psi_{m}(x)\}\) converges for all \(x\in X\) to a unique limit \(\bar{x}\)._ Now, let \(C_{f}(I)=\{g\in\mathcal{C}(I):g(x_{i})=f(x_{i}),\ i=0,N\}\). Then \(C_{f}(I)\) is a complete metric space. For \(m\in\mathbb{N}\), we define a sequence of Read-Bajraktarevic (RB) operators \(T^{\alpha_{m}}:C_{f}(I)\longrightarrow C_{f}(I)\) by \[(T^{\alpha_{m}}g)(x)=F_{i,m}\ (\ {l_{i}}^{-1}(x),\ g({l_{i}}^{-1}(x))),\ x\in I_{i}, \ i\in\mathbb{N}_{N}.\] **Proposition 3.2**.: _[_22_, Proposition 2.9.]_ _The above operators \(T^{\alpha_{m}}:C_{f}(I)\longrightarrow C_{f}(I)\) are well defined for each \(m\in\mathbb{N}\)._ Using similar ideas from [14, 22], we have the following result. **Theorem 3.3**.: _Consider the sequence of operators \(\{T^{\alpha_{m}}\}_{m\in\mathbb{N}}\) on \(C_{f}(I)\) defined above with the conditions described. Then for every \(h\in C_{f}(I)\), the sequence \(\{T^{\alpha_{1}}\ o\ T^{\alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}h\}\) converges to a map \(f_{b}^{\alpha}\) of \(C_{f}(I)\)._ **Definition 3.4**.: The function \(f_{b}^{\alpha}\) is called a non-stationary \(\alpha\)-fractal function with respect to \(f,\alpha,b\) and the partition \(\Delta\) as described above. _Remark 3.5_.: Note that, as each \(T^{\alpha_{m}}\) is a contraction, there is a unique stationary function \(f_{m}^{\alpha}\) such that \(T^{\alpha_{m}}(f_{m}^{\alpha})=f_{m}^{\alpha}\) and it satisfies the functional equation: \[f_{m}^{\alpha}(x)=F_{i,m}\ (\ Q_{i}(x),\ f_{m}^{\alpha}(Q_{i}(x)))\ \ \forall\ \ x\in I_{i},\] where \(Q_{i}(x):={l_{i}}^{-1}(x)\). That is, \[f_{m}^{\alpha}(x)=f(x)+\alpha_{i,m}(Q_{i}(x)).f_{m}^{\alpha}(Q_{i}(x))-\alpha_ {i,m}(Q_{i}(x))b_{m}(Q_{i}(x)).\] ## 4. 
Associated fractal operator on \(\mathcal{C}(I)\) Let \(||\alpha||_{\infty}=\sup\limits_{m\in\mathbb{N}}||\alpha_{m}||_{max}<1\) and \(||b||_{\infty}:=\sup\limits_{m\in\mathbb{N}}||b_{m}||_{\infty}<\infty.\) We consider \(b_{m}=L_{m}f\) such that \(L_{m}:\mathcal{C}(I)\rightarrow\mathcal{C}(I)\) is a linear bounded operator satisfying \(L_{m}f(x_{i})=f(x_{i})\) for \(m\in\mathbb{N},\ i=0,N,\) and \(||L||_{\infty}:=\sup\limits_{m\in\mathbb{N}}||L_{m}||<\infty.\) Let \(f\in\mathcal{C}(I)\). We define the \(\alpha\)-fractal operator \(\mathcal{F}_{b}^{\alpha}\equiv\mathcal{F}_{\Delta,b}^{\alpha}\) as \[\mathcal{F}_{b}^{\alpha}:\mathcal{C}(I)\rightarrow\mathcal{C}(I),\ \ \mathcal{F}_{b}^{\alpha}(f)=f_{b}^{\alpha}.\] **Lemma 4.1**.: _Let \(X\) be a Banach space and \(T:X\to X\) be a bounded linear operator. If \(||T||<1\), \((Id-T)^{-1}\) exists and bounded, where \(Id\) denotes the identity operator on \(X\)._ **Lemma 4.2**.: _Let \(X\) be a normed linear space and \(T:X\to X\) be a bounded linear operator, and \(S:X\to X\) be a compact operator. Then \(ST\) and \(TS\) are compact operators._ We now describe a few properties of the non-stationary \(\alpha\)-fractal operator. Note that the following properties are also studied in the literature for the stationary case [20, 21]. However, the approach in the non-stationary setting is different, and for better understanding and completeness, we study the following properties. Note that we have considered \(L_{m}\in\mathcal{C}(I)\) and \(||L||_{\infty}:=\sup\limits_{m\in\mathbb{N}}||L_{m}||<\infty\). Hence \(C_{L}:=\sup\limits_{m\in\mathbb{N}}\{||Id-L_{m}||\}<\infty.\) **Theorem 4.3**.: _Let \(||\alpha||_{\infty}<1\) and \(Id\) be the identity operator on \(\mathcal{C}(I)\)._ 1. _For_ \(f\in\mathcal{C}(I)\)_, the perturbation error satisfies the following inequality:_ \[||f_{b}^{\alpha}-f||_{\infty}\leq\frac{||\alpha||_{\infty}}{1-||\alpha||_{ \infty}}\sup\limits_{m\in\mathbb{N}}\{||f-L_{m}f||_{\infty}\}\leq\frac{|| \alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}||f||_{\infty}.\] 2. _If_ \(\alpha=0\)_, then_ \(\mathcal{F}_{b}^{\alpha}\) _is norm preserving. Infact it holds that_ \(\mathcal{F}_{b}^{0}\equiv Id\)_._ 3. _The fractal operator_ \(\mathcal{F}_{b}^{\alpha}:\mathcal{C}(I)\longrightarrow\mathcal{C}(I)\) _is linear and bounded with respect to the uniform norm._ _._ 4. _For a suitable value of the scaling function, the operator_ \(\mathcal{F}_{b}^{\alpha}\) _is an approximation type operator._ 5. _For_ \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\) _, the fractal operator_ \(\mathcal{F}_{b}^{\alpha}\) _is bounded below. In particular,_ \(\mathcal{F}_{b}^{\alpha}\) _is one to one._ 6. _If_ \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\)_, then_ \(\mathcal{F}_{b}^{\alpha}\) _has a bounded inverse and consequently a topological isomorphism._ 7. _The fixed points of_ \(L_{m}\) _are also the fixed points of_ \(\mathcal{F}_{b}^{\alpha}\)_._ 8. _If_ \(1\) _belongs to the spectrum of_ \(L_{m}\)_, then_ \(1\leq||\mathcal{F}_{b}^{\alpha}||\)_._ 9. _For_ \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\) _, the fractal operator_ \(\mathcal{F}_{b}^{\alpha}\) _is not a compact operator._ 10. _For_ \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\) _, the fractal operator_ \(\mathcal{F}_{b}^{\alpha}\) _has a closed range._ Proof.: 1. From the definition of RB operators, we have (4.1) \[(T^{\alpha_{m}}g)(x)=f(x)+\alpha_{i,m}(Q_{i}(x)).g(Q_{i}(x))-\alpha_{i,m}(Q_{i }(x))b_{m}(Q_{i}(x))\] for all \(x\in I_{i}\), \(i\in\mathbb{N}_{N}\) and \(m\in\mathbb{N}\). 
Now \(\forall\ x\in I_{i}\), \[T^{\alpha_{1}}\;o\;T^{\alpha_{2}}\;o\ldots o\;T^{\alpha_{m}}f(x)-f(x)=\alpha_ {i,1}(Q_{i}(x))(T^{\alpha_{2}}\;o\;T^{\alpha_{3}}\;o\ldots o\;T^{\alpha_{m}}f -b_{1})(Q_{i}(x)).\] Inductively, we get (4.2) \[T^{\alpha_{1}}\;o\;T^{\alpha_{2}}\;o\ldots o\;T^{\alpha_{m}}f(x)-f(x)=\sum_{l =1}^{m}\alpha_{i,1}(Q_{i}(x))\ldots\alpha_{i,l}(Q_{i}^{l}(x))(f-b_{l})(Q_{i}^ {l}(x),\] where \(Q_{i}^{l}\) is a suitable finite composition of maps \(Q_{i}\). Taking limit as \(m\to\infty\), we get (4.3) \[f_{b}^{\alpha}(x)-f(x) =\lim_{m\to\infty}\sum_{l=1}^{m}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,l}(Q_{i}^{l}(x))(f-b_{l})(Q_{i}^{l}(x))\] \[=\lim_{m\to\infty}\sum_{l=1}^{m}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,l}(Q_{i}^{l}(x))(f-L_{l}f)(Q_{i}^{l}(x)).\] So, \[||f_{b}^{\alpha}-f||_{\infty} \leq\lim_{m\to\infty}\sum_{l=1}^{m}||\alpha||_{\infty}^{l}||f-L_{ l}f||_{\infty}\] \[\leq\lim_{m\to\infty}\sum_{l=1}^{m}||\alpha||_{\infty}^{l}\sup_{m \in\mathbb{N}}||f-L_{m}f||_{\infty}\] \[=\sum_{l=1}^{\infty}||\alpha||_{\infty}^{l}\sup_{m\in\mathbb{N}}|| f-L_{m}f||_{\infty}\] \[=\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}||f||_{ \infty}.\] (4.4) \[||f_{b}^{\alpha}-f||_{\infty}\leq\frac{||\alpha||_{\infty}}{1-|| \alpha||_{\infty}}C_{L}||f||_{\infty}.\] 2. From equation (4.4), we have \[||f_{b}^{\alpha}-f||_{\infty}\leq\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}} C_{L}||f||_{\infty}\] \[\implies||\mathcal{F}_{b}^{\alpha}(f)-f||_{\infty}\leq\frac{||\alpha||_{ \infty}}{1-||\alpha||_{\infty}}C_{L}||f||_{\infty}.\] If \(\alpha=0\), then \(||\mathcal{F}_{b}^{\alpha}(f)-f||_{\infty}=0\). Therefore, \(\mathcal{F}_{b}^{\alpha}(f)=f\quad\Longrightarrow\ \mathcal{F}_{b}^{0}=Id\). 3. Let \(f,g\in\mathcal{C}(I)\) and \(c,d\in\mathbb{R}\). Then from equation (4.3), we have for all \(x\in I_{i}\), \[(cf)_{b}^{\alpha}(x)=(cf)(x)+\lim_{m\rightarrow\infty}\sum_{l=1}^{m}\alpha_{ i,1}(Q_{i}(x))\ldots\alpha_{i,l}(Q_{i}^{l}(x))(cf-L_{l}(cf))(Q_{i}^{l}(x)),\] \[(dg)_{b}^{\alpha}(x)=(dg)(x)+\lim_{m\rightarrow\infty}\sum_{l=1}^{m}\alpha_{ i,1}(Q_{i}(x))\ldots\alpha_{i,l}(Q_{i}^{l}(x))(dg-L_{l}(dg))(Q_{i}^{l}(x)).\] As \(L_{l}\) is linear, so that \[(cf)_{b}^{\alpha}(x)+(dg)_{b}^{\alpha}(x)\] \[=(cf+dg)(x)+\lim_{m\rightarrow\infty}\sum_{l=1}^{m}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,l}(Q_{i}^{l}(x))[cf+dg-L_{l}(cf+dg)](Q_{i}^{l}(x)).\] Also, \[(cf+dg)_{b}^{\alpha}(x) =(cf+dg)(x)\] \[+\lim_{m\rightarrow\infty}\sum_{l=1}^{m}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,l}(Q_{i}^{l}(x))\times[cf+dg-L_{l}(cf+dg)](Q_{i}^{l}(x)).\] Hence we deduce that, \[(cf+dg)_{b}^{\alpha}(x)=(cf)_{b}^{\alpha}(x)+(dg)_{b}^{\alpha}(x).\] That is, \((cf+dg)_{b}^{\alpha}=(cf)_{b}^{\alpha}+(dg)_{b}^{\alpha}\quad\Longrightarrow \ \mathcal{F}_{b}^{\alpha}(cf+dg)=c\mathcal{F}_{b}^{\alpha}(f)+d\mathcal{F}_{b}^{ \alpha}(g)\). This proves the linearity of the operator \(\mathcal{F}_{b}^{\alpha}\). From (4.4), we have \[||f_{b}^{\alpha}||_{\infty}-||f||_{\infty}\leq||f_{b}^{\alpha}-f||_{\infty} \leq\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}||f||_{\infty}.\] That is, \[||\mathcal{F}_{b}^{\alpha}(f)||_{\infty}\leq\left(1+\frac{||\alpha||_{\infty}} {1-||\alpha||_{\infty}}C_{L}\right)||f||_{\infty}.\] \[\implies||\mathcal{F}_{b}^{\alpha}||\leq\left(1+\frac{||\alpha||_{\infty}}{1- ||\alpha||_{\infty}}C_{L}\right).\] Therefore the operator \(\mathcal{F}_{b}^{\alpha}\) is bounded. 4. Let \(\epsilon>0\). We choose the scaling sequence \(\alpha\) such that \(||\alpha||_{\infty}<\frac{\epsilon}{\epsilon+C_{L}||f||_{\infty}}\). 
Using equation (4.4), we obtain \[||f_{b}^{\alpha}-f||_{\infty}<\epsilon\] \[\implies||\mathcal{F}_{b}^{\alpha}(f)-f||_{\infty}<\epsilon.\] Consequently, the operator \(\mathcal{F}_{b}^{\alpha}\) is of approximation type. 5. If \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\), then we have from equation (4.4) \[||f||_{\infty}-||f_{b}^{\alpha}||_{\infty}\leq||f_{b}^{\alpha}-f||_{\infty}\leq\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}||f||_{\infty}.\] \[\implies\left(1-\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}\right)||f||_{\infty}\leq||\mathcal{F}_{b}^{\alpha}(f)||_{\infty}.\] \[\implies\left(\frac{1-||\alpha||_{\infty}(1+C_{L})}{1-||\alpha||_{\infty}}\right)||f||_{\infty}\leq||\mathcal{F}_{b}^{\alpha}(f)||_{\infty}.\] This shows that \(\mathcal{F}_{b}^{\alpha}\) is bounded from below. Consequently, \(\mathcal{F}_{b}^{\alpha}\) is an injection. 6. From equation (4.4), we have \[||Id-\mathcal{F}_{b}^{\alpha}||\leq\frac{||\alpha||_{\infty}}{1-||\alpha||_{\infty}}C_{L}.\] As \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\), we get \(||Id-\mathcal{F}_{b}^{\alpha}||<1\). Since \(\mathcal{F}_{b}^{\alpha}\) is bounded, \((Id-\mathcal{F}_{b}^{\alpha})\) is also bounded. Hence by using Lemma 4.1, \(\mathcal{F}_{b}^{\alpha}=Id-(Id-\mathcal{F}_{b}^{\alpha})\) is invertible and the inverse is bounded. 7. Suppose that, for each \(m\in\mathbb{N}\), \(f\) is a fixed point of \(L_{m}\), that is, \(L_{m}(f)=f\) for each \(m\in\mathbb{N}\). From equation (4.3), we have \(f_{b}^{\alpha}(x)-f(x)=0\). So that \(\mathcal{F}_{b}^{\alpha}f(x)=f(x)\). This implies that \(\mathcal{F}_{b}^{\alpha}(f)=f\). 8. Let \(g\in\mathcal{C}(I)\) with \(||g||_{\infty}=1\) and \(L_{m}g=g\) for each \(m\in\mathbb{N}\). Then item (7) gives \(\mathcal{F}_{b}^{\alpha}g=g\). Consequently, \(||\mathcal{F}_{b}^{\alpha}g||_{\infty}=||g||_{\infty}.\) From the definition of operator norm, we have \(1\leq||\mathcal{F}_{b}^{\alpha}||\). 9. From item (6), we have for \(||\alpha||_{\infty}<\frac{1}{1+C_{L}}\), the operator \(\mathcal{F}_{b}^{\alpha}:\mathcal{C}(I)\longrightarrow\mathcal{C}(I)\) is one-one. Note that the range space \(\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\) is infinite dimensional. We define the inverse map \((\mathcal{F}_{b}^{\alpha})^{-1}:\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\longrightarrow\mathcal{C}(I)\). For the choice of \(\alpha\), we know from item (5) that \(\mathcal{F}_{b}^{\alpha}\) is bounded below and hence it follows that \((\mathcal{F}_{b}^{\alpha})^{-1}\) is a bounded linear operator. If possible, let \(\mathcal{F}_{b}^{\alpha}\) be a compact operator. Then by Lemma 4.2, we conclude that the operator \(Id=(\mathcal{F}_{b}^{\alpha})(\mathcal{F}_{b}^{\alpha})^{-1}:\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\longrightarrow\mathcal{C}(I)\) is a compact operator, which is a contradiction to the infinite dimensionality of the space \(\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\). Hence \(\mathcal{F}_{b}^{\alpha}\) is not a compact operator. 10. Let \(\{f_{b,r}^{\alpha}\}_{r\in\mathbb{N}}\) be a sequence in \(\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\), say \(f_{b,r}^{\alpha}=\mathcal{F}_{b}^{\alpha}(f_{r})\) with \(f_{r}\in\mathcal{C}(I)\), such that \(f_{b,r}^{\alpha}\to g\). Then \(\{f_{b,r}^{\alpha}\}_{r\in\mathbb{N}}\) is a Cauchy sequence in \(\mathcal{C}(I)\). Thus from \[||f_{r}-f_{s}||_{\infty}\leq\frac{1-||\alpha||_{\infty}}{1-(1+C_{L})||\alpha||_{\infty}}||f_{b,r}^{\alpha}-f_{b,s}^{\alpha}||_{\infty},\] the sequence \(\{f_{r}\}\) is a Cauchy sequence in \(\mathcal{C}(I)\). Consequently, there exists \(f\in\mathcal{C}(I)\) such that \(f_{r}\to f\).
Using the continuity of the operator \(\mathcal{F}_{b}^{\alpha}\), we obtain \(g=\lim_{r\to\infty}\mathcal{F}_{b}^{\alpha}(f_{r})=\mathcal{F}_{b}^{\alpha}(f)\), so \(g\in\mathcal{F}_{b}^{\alpha}(\mathcal{C}(I))\) and the range is closed. _Remark 4.4_.: Let \(\alpha_{i,m}(x)=\alpha_{i}\) and \(b_{m}=L_{m}f=Lf\) for all \(m\in\mathbb{N}\). Then from equation (4.2), we have \[(T^{\alpha})^{m}f(x)-f(x)=\sum_{l=1}^{m}\alpha_{i}^{l}(f-Lf)(Q_{i}^{l}(x)).\] Taking limit as \(m\to\infty\), we get \[f^{\alpha}(x)-f(x)=\lim_{m\to\infty}\sum_{l=1}^{m}\alpha_{i}^{l}(f-Lf)(Q_{i}^{l}(x)).\] So, \[||f^{\alpha}-f||_{\infty} \leq\lim_{m\to\infty}\sum_{l=1}^{m}|\alpha|_{\infty}^{l}||f-Lf||_{\infty}\] \[=(\sum_{l=1}^{\infty}|\alpha|_{\infty}^{l})\cdot||f-Lf||_{\infty}\] \[\leq\frac{|\alpha|_{\infty}}{1-|\alpha|_{\infty}}(1+||L||)||f||_{\infty}\] \[\implies||f^{\alpha}||_{\infty}-||f||_{\infty}\ \leq\ ||f^{\alpha}-f||_{\infty}\ \leq\ \frac{|\alpha|_{\infty}}{1-|\alpha|_{\infty}}(1+||L||)||f||_{\infty}\] So that, \[||f^{\alpha}||_{\infty}\leq\frac{1+|\alpha|_{\infty}||L||}{1-|\alpha|_{\infty}}\cdot||f||_{\infty},\] which is the same upper bound as for the stationary \(\alpha\)-fractal functions given in Proposition 2 of [21]. ## 5. Non-stationary fractal functions on different function spaces The aim of this section is to study the non-stationary \(\alpha\)-fractal functions in different function spaces. We start with the space of functions of bounded variation. ### Space of functions of bounded variation on \(I\) **Definition 5.1**.: Let \(f:I\to\mathbb{R}\) be a function. For each partition \(P_{I}:x_{0}<x_{1}<\cdots<x_{N}\) of the interval \(I\), we define \[V_{P_{I}}(f,I)=\sum_{i=1}^{N}|f(x_{i})-f(x_{i-1})|\] and \[V(f,I)=\sup_{P_{I}}V_{P_{I}}(f,I)=\sup_{P_{I}}\sum_{i=1}^{N}|f(x_{i})-f(x_{i-1})|,\] where the supremum is taken over all the partitions \(P_{I}\) of the interval \(I\). If the total variation of \(f\) is finite, i.e., \(V(f,I)<\infty\), we say \(f\) is of bounded variation on \(I\). **Theorem 5.2**.: _[_16_]_ _If \(f\) is a continuous function of bounded variation on an interval \(I\), then \(dim_{H}(Graph\ (f))=dim_{B}(Graph\ (f))=1.\)_ Let \(\mathcal{BV}(I)\) denote the set of all functions of bounded variation on \(I\). On \(\mathcal{BV}(I)\), we define a norm \(||.||_{\mathcal{BV}}\) by \[||f||_{\mathcal{BV}}=|f(x_{0})|+V(f,I).\] We know that with respect to this norm, \(\mathcal{BV}(I)\) is a Banach space. Let \(f\in\mathcal{BV}(I)\) and define \(\mathcal{BV}_{f}(I)=\{g\in\mathcal{BV}(I):\ f(x_{0})=g(x_{0}),\ f(x_{N})=g(x_{N})\}.\) Note that \(\mathcal{BV}_{f}(I)\) is also complete with respect to the metric induced by the norm \(||.||_{\mathcal{BV}}\). Let \(\alpha_{m}:=(\alpha_{1,m},\alpha_{2,m},\ldots,\alpha_{N,m}),\ \ \alpha:=\{\alpha_{m}\}_{m\in\mathbb{N}},\ \ b:=\{b_{m}\}_{m\in\mathbb{N}},\) and \(||\alpha||_{\infty}=\sup\limits_{m\in\mathbb{N}}||\alpha_{m}||_{max}<1,\ \ ||b||_{\mathcal{BV}}:=\sup\limits_{m\in\mathbb{N}}||b_{m}||_{\mathcal{BV}}<\infty.\) With all these setups, we have the following result: **Theorem 5.3**.: _Let \(f,b_{m}\in\mathcal{BV}(I)\) be such that \(b_{m}(x_{0})=f(x_{0}),\ b_{m}(x_{N})=f(x_{N})\) and the sequence of scaling functions \(\alpha_{i,m}\in\mathcal{BV}(I)\) be such that \(||\alpha_{i,m}||_{\mathcal{BV}}<\dfrac{1}{2N}\). Then the following hold._ 1. _The RB operator_ \(T^{\alpha_{m}}\) _defined in equation (_4.1_) is well defined on_ \(\mathcal{BV}_{f}(I)\)_._ 2. \(T^{\alpha_{m}}:\mathcal{BV}_{f}(I)\longrightarrow\mathcal{BV}_{f}(I)\subset\mathcal{BV}(I)\) _is, in fact, a contraction map._ 3.
_There exists a unique function_ \(f^{\alpha}_{b,\mathcal{BV}}\in\mathcal{BV}_{f}(I)\) _such that the backward trajectories_ \(T^{\alpha_{i}}\) \(o\) \(T^{\alpha_{2}}\) \(o\) \(\ldots\)_o_ \(T^{\alpha_{m}}g\) _of_ \((T^{\alpha_{m}})\) _converges to the map_ \(f^{\alpha}_{b,\mathcal{BV}}\) _for every_ \(g\in\mathcal{BV}_{f}(I)\)_._ _Furthermore, we have \(dim_{H}(Graph\ (f^{\alpha}_{b}))=dim_{B}(Graph\ (f^{\alpha}_{b}))=1\)._ Proof.: 1. From the definition of RB operators, we have for \(m\in\mathbb{N}\) \[(T^{\alpha_{m}}g)(x)=f(x)+\alpha_{i,m}(Q_{i}(x)).g(Q_{i}(x))-\alpha_{i,m}(Q_{i }(x))b_{m}(Q_{i}(x)),\ \ x\in I_{i},\ i\in\mathbb{N}_{N}.\] As \(f,\alpha_{i,m},g,b_{m}\in\mathcal{BV}_{f}(I)\), so that \((T^{\alpha_{m}}g)(x)\in\mathcal{BV}_{f}(I)\) whenever \(g\in\mathcal{BV}_{f}(I)\). Therefore the RB operator \(T^{\alpha_{m}}\) is well defined on \(\mathcal{BV}_{f}(I)\). 2. Let \(P:x_{i-1}=t_{0,i}<t_{1,i}<t_{2,i}<\cdots<t_{k_{i},i}=x_{i}\) be a partition of the interval \(I_{i}=[x_{i-1},x_{i}]\), where \(k_{i}\in\mathbb{N}\). Now, \[|(T^{\alpha_{m}}g-T^{\alpha_{m}}h)(t_{j,i})-(T^{\alpha_{m}}g-T^{ \alpha_{m}}h)(t_{j-1,i})|\] \[= |\alpha_{i,m}(Q_{i}(t_{j,i}))(g-h)(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q _{i}(t_{j-1,i}))(g-h)(Q_{i}(t_{j-1,i}))|\] \[\leq \bigg{|}\alpha_{i,m}(Q_{i}(t_{j,i}))\bigg{(}(g-h)(Q_{i}(t_{j,i})) -(g-h)(Q_{i}(t_{j-1,i}))\bigg{)}\bigg{|}\] \[+\bigg{|}\bigg{(}\alpha_{i,m}(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}( t_{j-1,i}))\bigg{)}(g-h)(Q_{i}(t_{j-1,i}))\bigg{|}\] \[\leq \|\alpha_{i,m}\|_{\mathcal{BV}}|(g-h)(Q_{i}(t_{j,i}))-(g-h)(Q_{i} (t_{j-1,i}))|\] \[+|\alpha_{i,m}(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))||g- h\|_{\mathcal{BV}}\] Taking sum over \(j=1\) to \(k_{i}\), we have \[\sum\limits_{j=1}^{k_{i}}|(T^{\alpha_{m}}g-T^{\alpha_{m}}h)(t_{j,i})-(T^{\alpha_{m}}g-T^{\alpha_{m}}h)(t_{j-1,i})|\] \[\leq ||\alpha_{i,m}||_{\mathcal{BV}}\sum\limits_{j=1}^{k_{i}}|(g-h)(Q_ {i}(t_{j,i}))-(g-h)(Q_{i}(t_{j-1,i}))|\] \[+\|g-h\|_{\mathcal{BV}}\sum\limits_{j=1}^{k_{i}}|\alpha_{i,m}(Q_ {i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))|\] Since \(x_{0}=Q_{i}(t_{0,i})<Q_{i}(t_{1,i})<\cdots<Q_{i}(t_{k_{i},i})=x_{N}\) is a partition of the interval \(I=[x_{0},x_{N}]\), so that \[\sum_{j=1}^{k_{i}}|(T^{\alpha_{m}}g-T^{\alpha_{m}}h)(t_{j,i})-(T^{ \alpha_{m}}g-T^{\alpha_{m}}h)(t_{j-1,i})|\] \[\leq ||\alpha_{i,m}||_{\mathcal{B}\mathcal{V}}V(g-h,I)+\|g-h\|_{ \mathcal{B}\mathcal{V}}V(\alpha_{i,m},I)\] \[\leq ||\alpha_{i,m}||_{\mathcal{B}\mathcal{V}}\|g-h\|_{\mathcal{B} \mathcal{V}}+\|g-h\|_{\mathcal{B}\mathcal{V}}||\alpha_{i,m}||_{\mathcal{B} \mathcal{V}}\] \[= 2||\alpha_{i,m}||_{\mathcal{B}\mathcal{V}}\|g-h\|_{\mathcal{B} \mathcal{V}}\] This inequality holds for every partition \(P\) of \(I_{i}\). Hence \[V(T^{\alpha_{m}}g-T^{\alpha_{m}}h,I_{i})\leq 2||\alpha_{i,m}||_{\mathcal{B} \mathcal{V}}\|g-h\|_{\mathcal{B}\mathcal{V}}.\] As \[V(T^{\alpha_{m}}g-T^{\alpha_{m}}h,I)=\sum_{i=1}^{N}V(T^{\alpha_{m}}g-T^{\alpha _{m}}h,I_{i})\leq 2N||\alpha_{i,m}||_{\mathcal{B}\mathcal{V}}\|g-h\|_{ \mathcal{B}\mathcal{V}}\] and \(T^{\alpha_{m}}g(x_{0})=T^{\alpha_{m}}h(x_{0})=f(x_{0})\), we have \[||T^{\alpha_{m}}g-T^{\alpha_{m}}h||_{\mathcal{B}\mathcal{V}}\leq 2N||\alpha_{i,m}|| _{\mathcal{B}\mathcal{V}}\|g-h\|_{\mathcal{B}\mathcal{V}}.\] Since \(||\alpha_{i,m}||_{\mathcal{B}\mathcal{V}}<\dfrac{1}{2N}\), each \(T^{\alpha_{m}}\) is a contraction map on the complete metric space \(\mathcal{B}\mathcal{V}_{f}(I)\). 3. Let \(g\in\mathcal{B}\mathcal{V}_{f}(I)\) be arbitrary. 
We check that \(\{||T^{\alpha_{m}}g-g||_{\mathcal{B}\mathcal{V}}\}\) is bounded. To do this we first calculate \(V(T^{\alpha_{m}}g-g,I)\). Now \[|(T^{\alpha_{m}}g-g)(t_{j,i})-(T^{\alpha_{m}}g-g)(t_{j-1,i})|\] \[= |(f-g)(t_{j,i})-(f-g)(t_{j-1,i})+\alpha_{i,m}(Q_{i}(t_{j,i}))(g-b_{m})(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))(g-b_{m})(Q_{i}(t_{j-1,i}))|\] \[\leq \left|(f-g)(t_{j,i})-(f-g)(t_{j-1,i})\right|\] \[+\left|\alpha_{i,m}(Q_{i}(t_{j,i}))\bigg{(}(g-b_{m})(Q_{i}(t_{j,i}))-(g-b_{m})(Q_{i}(t_{j-1,i}))\bigg{)}\right|\] \[+\left|\bigg{(}\alpha_{i,m}(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))\bigg{)}(g-b_{m})(Q_{i}(t_{j-1,i}))\right|\] \[\leq |(f-g)(t_{j,i})-(f-g)(t_{j-1,i})|+\|\alpha_{i,m}\|_{\mathcal{B}\mathcal{V}}|(g-b_{m})(Q_{i}(t_{j,i}))-(g-b_{m})(Q_{i}(t_{j-1,i}))|\] \[+\left|\alpha_{i,m}(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))\right|\|g-b_{m}\|_{\mathcal{B}\mathcal{V}}\] Taking sum over \(j=1\) to \(k_{i}\), we have \[\sum_{j=1}^{k_{i}}|(T^{\alpha_{m}}g-g)(t_{j,i})-(T^{\alpha_{m}}g-g)(t_{j-1,i})|\] \[\leq\sum_{j=1}^{k_{i}}|(f-g)(t_{j,i})-(f-g)(t_{j-1,i})|\] \[\quad+||\alpha_{i,m}||_{\mathcal{BV}}\sum_{j=1}^{k_{i}}|(g-b_{m})(Q_{i}(t_{j,i}))-(g-b_{m})(Q_{i}(t_{j-1,i}))|\] \[\quad+\|g-b_{m}\|_{\mathcal{BV}}\sum_{j=1}^{k_{i}}\left|\alpha_{i,m}(Q_{i}(t_{j,i}))-\alpha_{i,m}(Q_{i}(t_{j-1,i}))\right|\] \[\leq V(f-g,I_{i})+||\alpha_{i,m}||_{\mathcal{BV}}V(g-b_{m},I)+\|g-b_{m}\|_{\mathcal{BV}}V(\alpha_{i,m},I)\] \[\leq V(f-g,I_{i})+||\alpha_{i,m}||_{\mathcal{BV}}||g-b_{m}||_{\mathcal{BV}}+\|g-b_{m}\|_{\mathcal{BV}}||\alpha_{i,m}||_{\mathcal{BV}}.\] By a similar argument as in item (2), we get \[V(T^{\alpha_{m}}g-g,I_{i})\leq V(f-g,I_{i})+2||\alpha_{i,m}||_{\mathcal{BV}}||g-b_{m}||_{\mathcal{BV}}.\] Using the conditions on the sequence of scaling functions, we get \[V(T^{\alpha_{m}}g-g,I) =\sum_{i=1}^{N}V(T^{\alpha_{m}}g-g,I_{i})\] \[\leq V(f-g,I)+2||g-b_{m}||_{\mathcal{BV}}\sum_{i=1}^{N}||\alpha_{i,m}||_{\mathcal{BV}}\] \[\leq\|f-g\|_{\mathcal{BV}}+\|g-b_{m}\|_{\mathcal{BV}}.\] Also, \[|(T^{\alpha_{m}}g-g)(x_{0})|=|f(x_{0})+\alpha_{1,m}(g-b_{m})(x_{0})-g(x_{0})|=0.\] Now, \[||T^{\alpha_{m}}g-g||_{\mathcal{BV}}\] \[=|(T^{\alpha_{m}}g-g)(x_{0})|+V(T^{\alpha_{m}}g-g,I)\] \[\leq\|f-g\|_{\mathcal{BV}}+\|g-b_{m}\|_{\mathcal{BV}}\] \[\leq||f||_{\mathcal{BV}}+||g||_{\mathcal{BV}}+||g||_{\mathcal{BV}}+||b_{m}||_{\mathcal{BV}}\] \[\leq||f||_{\mathcal{BV}}+2||g||_{\mathcal{BV}}+||b||_{\mathcal{BV}}.\] Clearly, the bound is independent of \(m\). Applying Theorem 3.3, \(\exists\) a unique \(f^{\alpha}_{b,\mathcal{BV}}\in\mathcal{BV}_{f}(I)\) such that \(f^{\alpha}_{b,\mathcal{BV}}=\lim\limits_{m\to\infty}T^{\alpha_{1}}\ o\ T^{\alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}g\) for any \(g\in\mathcal{BV}_{f}(I)\). Also, applying Theorem 5.2, we have \(dim_{H}(Graph\ (f^{\alpha}_{b}))=dim_{B}(Graph\ (f^{\alpha}_{b}))=1\). _Remark 5.4_.: If we take \(\alpha_{i,m}=\alpha_{i},\ b_{m}=b\) for all \(m\in\mathbb{N}\), then we get \[f^{\alpha}_{b,\mathcal{BV}}=\lim\limits_{m\to\infty}T^{\alpha}\ o\ T^{\alpha}\ o\ldots o\ T^{\alpha}g=\lim\limits_{m\to\infty}(T^{\alpha})^{m}g\] for any \(g\in\mathcal{BV}_{f}(I).\) Hence \(f^{\alpha}_{b,\mathcal{BV}}=f^{\alpha}_{\mathcal{BV}}\), where \(f^{\alpha}_{\mathcal{BV}}\) is the stationary \(\alpha\)-fractal function on the space \(\mathcal{BV}(I)\) that appeared in [28]. That is, the non-stationary \(\alpha\)-fractal function coincides with the stationary \(\alpha\)-fractal function on \(\mathcal{BV}(I)\) for this particular choice of IFS parameters.
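To make the backward trajectories \(T^{\alpha_{1}}\ o\ T^{\alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}g\) of Theorem 3.3 concrete, here is a minimal numerical sketch in Python. It is only an illustration under assumptions that are not part of the results above: a uniform partition of \(I=[0,1]\) with \(N=4\), constant scalings \(\alpha_{i,m}=0.4/m\), the germ \(f(x)=\sin(2\pi x)\), and \(b_{m}=L_{m}f\) taken to be the average of \(f\) and its endpoint interpolant.

```python
import numpy as np

# Illustrative choices only (not from the paper): uniform partition, constant
# scalings alpha_{i,m} = 0.4/m, germ f(x) = sin(2*pi*x), and b_m = L_m f given
# by averaging f with its endpoint interpolant, so b_m(x_0)=f(x_0), b_m(x_N)=f(x_N).

x0, xN, N = 0.0, 1.0, 4
nodes = np.linspace(x0, xN, N + 1)                  # partition x_0 < x_1 < ... < x_N
f = lambda x: np.sin(2.0 * np.pi * x)

def line_f(x):                                      # endpoint interpolant of f on I
    return f(x0) + (f(xN) - f(x0)) * (x - x0) / (xN - x0)

def b(m):                                           # b_m = L_m f
    return lambda x: 0.5 * (f(x) + line_f(x))

def alpha(i, m):                                    # sup_m ||alpha_m||_max = 0.4 < 1
    return 0.4 / m

def Q(i, x):                                        # Q_i = l_i^{-1} : I_i -> I
    return x0 + (xN - x0) * (x - nodes[i - 1]) / (nodes[i] - nodes[i - 1])

def T(m, g):                                        # RB operator T^{alpha_m}, equation (4.1)
    def Tg(x):
        i = max(1, min(int(np.searchsorted(nodes, x, side="right")), N))   # x lies in I_i
        q = Q(i, x)
        return f(x) + alpha(i, m) * (g(q) - b(m)(q))
    return np.vectorize(Tg)

M = 12                                              # truncation level of the backward trajectory
g = f                                               # any seed in C_f(I)
for m in range(M, 0, -1):                           # innermost operator first:
    g = T(m, g)                                     # g  <-  T^{alpha_m} g

xs = np.linspace(x0, xN, 2001)
print(np.max(np.abs(g(xs) - f(xs))))                # compare with the bound in Theorem 4.3(1)
```

Replacing \(0.4/m\) by a fixed constant \(\alpha_{i}\) (and \(b_{m}\) by a fixed \(b\)) turns the loop into the stationary iteration \((T^{\alpha})^{m}g\) of Remark 5.4.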
### Function space \(V_{\beta}[0,1]\) Let \(\beta\in[1,2]\), and define \[C_{\beta}[0,1]=\{f\in\mathcal{C}[0,1]:\overline{dim}_{B}\ G_{f}\leq\beta\}.\] Let \(f\in\mathcal{C}[0,1]\) and \(S\subset[0,1]\). We define the range of \(f\) on \(S\) as \[R_{f}(S)=\sup_{x,y\in S}|f(x)-f(y)|.\] For \(\delta>0\), let \(\Delta_{\delta}\) be the family of intervals \[\Delta_{\delta}=\Big\{[n\delta,(n+1)\delta]:n=0,1,\ldots,\lceil\delta^{-1}\rceil-1\Big\},\] and set \(R(\delta,f)=\sum_{S\in\Delta_{\delta}}R_{f}(S)\). It follows that \[\delta^{-1}\sum_{S\in\Delta_{\delta}}R_{f}(S)\leq N_{\delta}(G_{f})\leq 2(\delta^{-1}+1)+\delta^{-1}\sum_{S\in\Delta_{\delta}}R_{f}(S),\] where \(N_{\delta}(G_{f})\) denotes the number of squares of the \(\delta\)-mesh that intersect the graph \(G_{f}\) of \(f\). _Remark 5.5_.: Note that the above estimation is a useful technique in fractal geometry for calculating the box dimension of fractal functions [11]. **Theorem 5.6**.: _Let \(g,f\in\mathcal{C}[0,1]\) and \(\lambda\) be a real number. Then for \(0<\delta\leq 1,\)_ 1. \(R(\delta,\lambda f)=|\lambda|\cdot R(\delta,f)\)_,_ 2. \(R(\delta,f+g)\leq R(\delta,f)+R(\delta,g)\)_,_ 3. \(R(\delta,fg)\leq||g||_{\infty}\cdot R(\delta,f)+||f||_{\infty}\cdot R(\delta,g).\)__ Proof.: 1. For \(\lambda\in\mathbb{R}\), we have \[R(\delta,\lambda f) =\sum_{S\in\Delta_{\delta}}R_{\lambda f}(S)\] \[=\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}|\lambda f(x)-\lambda f(y)|\] \[=|\lambda|\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}|f(x)-f(y)|\] \[=|\lambda|\sum_{S\in\Delta_{\delta}}R_{f}(S)=|\lambda|\cdot R(\delta,f).\] 2. For the second one the proof is as follows: \[R(\delta,f+g) =\sum_{S\in\Delta_{\delta}}R_{f+g}(S)\] \[=\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}|(f+g)(x)-(f+g)(y)|\] \[\leq\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}(|f(x)-f(y)|+|g(x)-g(y)|)\] \[=\sum_{S\in\Delta_{\delta}}R_{f}(S)+\sum_{S\in\Delta_{\delta}}R_{g}(S)\] \[=R(\delta,f)+R(\delta,g).\] 3. The proof of the last one is as follows: \[R(\delta,fg) =\sum_{S\in\Delta_{\delta}}R_{fg}(S)\] \[=\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}|(fg)(x)-(fg)(y)|\] \[=\sum_{S\in\Delta_{\delta}}\sup_{x,y\in S}|f(x)g(x)-f(y)g(x)+f(y)g(x)-f(y)g(y)|\] \[\leq\sum_{S\in\Delta_{\delta}}\{\sup_{x,y\in S}|f(x)-f(y)||g(x)|+\sup_{x,y\in S}|f(y)||g(x)-g(y)|\}\] \[\leq\sum_{S\in\Delta_{\delta}}\{||g||_{\infty}\sup_{x,y\in S}|f(x)-f(y)|+||f||_{\infty}\sup_{x,y\in S}|g(x)-g(y)|\}\] \[=||g||_{\infty}\sum_{S\in\Delta_{\delta}}R_{f}(S)+||f||_{\infty}\sum_{S\in\Delta_{\delta}}R_{g}(S)\] \[=||g||_{\infty}\cdot R(\delta,f)+||f||_{\infty}\cdot R(\delta,g).\] For \(\beta\geq 1\), we define a function space \[V_{\beta}[0,1]=\{g\in\mathcal{C}[0,1]:||g||_{\beta}<\infty\},\] where \(||g||_{\beta}=||g||_{\infty}+\sup_{0<\delta\leq 1}\frac{R(\delta,g)}{\delta^{1-\beta}}\). **Proposition 5.7**.: For \(f\in V_{\beta}[0,1]\), we define \(||f||_{\beta}=||f||_{\infty}+\sup_{0<\delta\leq 1}\frac{R(\delta,f)}{\delta^{1-\beta}}\). Then \(||.||_{\beta}\) forms a norm on \(V_{\beta}[0,1]\). Proof.: 1. Let \(f=0\). Then \(||f||_{\beta}=||0||_{\beta}=0\). Conversely, let \(||f||_{\beta}=0.\) Then \(||f||_{\infty}+\sup_{0<\delta\leq 1}\frac{R(\delta,f)}{\delta^{1-\beta}}=0\) \[\implies||f||_{\infty}=0\text{ and }\sup_{0<\delta\leq 1}\frac{R(\delta,f)}{\delta^{1-\beta}}=0\implies f=0.\] 2. Let \(\lambda(\neq 0)\in\mathbb{R}\). Then \[||\lambda f||_{\beta} =||\lambda f||_{\infty}+\sup_{0<\delta\leq 1}\frac{R(\delta,\lambda f)}{\delta^{1-\beta}}\] \[=|\lambda|\cdot||f||_{\infty}+|\lambda|\cdot\sup_{0<\delta\leq 1}\frac{R(\delta,f)}{\delta^{1-\beta}}\] \[=|\lambda|\cdot||f||_{\beta}.\] 3. Let \(f,g\in V_{\beta}(I)\).
Then \[||f+g||_{\beta} =||f+g||_{\infty}+\sup_{0<\delta\leq 1}\frac{R(\delta,f+g)}{ \delta^{1-\beta}}\] \[\leq||f||_{\infty}+||g||_{\infty}+\sup_{0<\delta\leq 1}\frac{R( \delta,f)}{\delta^{1-\beta}}+\sup_{0<\delta\leq 1}\frac{R(\delta,g)}{\delta^{1- \beta}}\] \[=||f||_{\beta}+||g||_{\beta}.\] **Lemma 5.8**.: _Let \(\beta\in[1,2]\). Then \((V_{\beta}[0,1],||.||_{\beta})\) is a Banach space._ Proof.: The lemma follows from Lemma 3.1. of [12]. Now, we define non-stationary fractal functions on the space \((V_{\beta}[0,1],||.||_{\beta})\). For notational simplicity we denote the space \(V_{\beta}[0,1]\) by \(\mathcal{V}(I)\), where \(I=[0,1]\). **Theorem 5.9**.: _Let \(f\in\mathcal{V}(I)\) and define_ \[\mathcal{V}_{f}(I)=\{g\in\mathcal{V}(I):g(x_{0})=f(x_{0}),g(x_{N})=f(x_{N})\}.\] _Let \(b_{m}\in\mathcal{V}_{f}(I)\) be such that \(\|b\|_{\beta}:=\sup_{m\in\mathbb{N}}\|b_{m}\|_{\beta}<\infty\). Also, assume that the scaling functions \(\alpha_{i,m}\) are constants such that \(|\alpha|_{\infty}=\sup_{m\in\mathbb{N}}\{|\alpha_{m}|_{max}\}=\sup_{m\in \mathbb{N}}\{\max_{i\in\mathbb{N}_{N}}|\alpha_{i,m}|\}<1\). Then the following hold._ 1. _The RB operator_ \(T^{\alpha_{m}}\) _defined in equation (_4.1_) is well defined on_ \(\mathcal{V}_{f}(I)\)_._ 2. _In fact,_ \(T^{\alpha_{m}}:\mathcal{V}_{f}(I)\longrightarrow\mathcal{V}_{f}(I)\subset \mathcal{V}(I)\) _is a contraction map._ 3. _There exists a unique function_ \(f^{\alpha}_{b,V}\in\mathcal{V}_{f}(I)\) _such that the sequence_ \(\{T^{\alpha_{1}}\ o\ T^{\alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}g\}\) _converges to the map_ \(f^{\alpha}_{b,V}\) _for every_ \(g\in\mathcal{V}_{f}(I)\)_._ Proof.: 1. We have, \[||T^{\alpha_{m}}g||_{\beta} =||T^{\alpha_{m}}g||_{\infty}+\sup_{0<\delta\leq 1}\frac{R( \delta,T^{\alpha_{m}}g)}{\delta^{1-\beta}}\] \[=||T^{\alpha_{m}}g||_{\infty}+\sup_{0<\delta\leq 1}\frac{\sum \limits_{S\in\Delta_{\delta}}R_{T^{\alpha_{m}}g}(S)}{\delta^{1-\beta}}\] Now, \[R_{T^{\alpha_{m}}g}(S)\] \[=\sup_{x,y\in S}|T^{\alpha_{m}}g(x)-T^{\alpha_{m}}g(y)|\] \[\leq\sup_{x,y\in S}|f(x)-f(y)|+\max_{i\in\mathbb{N}_{N}}\sup_{x,y \in S_{i}}|\alpha_{i,m}.(g-b_{m})(Q_{i}(x))-\alpha_{i,m}.(g-b_{m})(Q_{i}(y))|,\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{where $S_{i}$ is a subset of $I_{i}$}\] \[=\sup_{x,y\in S}|f(x)-f(y)|\] \[\qquad\qquad+\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)} \sup_{x,y\in S_{i}}|\left[(g(Q_{i}(x))-g(Q_{i}(y)))-(b_{m}(Q_{i}(x))-b_{m}(Q_{ i}(y)))\right]|\] \[\leq\sup_{x,y\in S}|f(x)-f(y)|+\max_{i\in\mathbb{N}_{N}}\Big{(}| \alpha_{i,m}|\Big{)}\sup_{\tilde{x},\tilde{y}\in S}(|g(\tilde{x})-g(\tilde{y })|+|b_{m}(\tilde{x})-b_{m}(\tilde{y})|)\] \[=R_{f}(S)+\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}(R _{g}(S)+R_{b_{m}}(S)).\] Taking sum over \(\Delta_{\delta}\), we get \[\sum_{S\in\Delta_{\delta}}R_{T^{\alpha_{m}}g}(S)\leq\sum_{S\in \Delta_{\delta}}R_{f}(S)+\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)} \left(\sum_{S\in\Delta_{\delta}}R_{g}(S)+\sum_{S\in\Delta_{\delta}}R_{b_{m}}(S )\right)\] \[\implies\sup_{0<\delta\leq 1}\frac{\sum_{S\in\Delta_{\delta}}R_{T^{ \alpha_{m}}g}(S)}{\delta^{1-\beta}}\] \[\leq\sup_{0<\delta\leq 1}\frac{\sum_{S\in\Delta_{\delta}}R_{f}(S)}{ \delta^{1-\beta}}+\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}\left( \sup_{0<\delta\leq 1}\frac{\sum_{S\in\Delta_{\delta}}R_{g}(S)}{ \delta^{1-\beta}}+\sup_{0<\delta\leq 1}\frac{\sum_{S\in\Delta_{\delta}}R_{b_{m}}(S)}{ \delta^{1-\beta}}\right).\] Therefore, using the definition of the norm \(||.||_{\beta}\), we get 
\[||T^{\alpha_{m}}g||_{\beta}-||T^{\alpha_{m}}g||_{\infty}\leq||f||_{\beta}-||f|| _{\infty}+\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}(||g||_{\beta}- ||g||_{\infty}+||b_{m}||_{\beta}-||b_{m}||_{\infty})\] \[\implies||T^{\alpha_{m}}g||_{\beta}\leq||f||_{\beta}+\max_{i\in\mathbb{N}_{N} }\Big{(}|\alpha_{i,m}|\Big{)}(||g||_{\beta}+||b_{m}||_{\beta}).\] Since \(f,g,b_{m}\in\mathcal{V}(I)\), the previous estimate ensures that \(||T^{\alpha_{m}}g||_{\beta}<\infty\) and hence that \(T^{\alpha_{m}}g\in\mathcal{V}(I)\). Also \(T^{\alpha_{m}}g(x_{0})=f(x_{0})\) and \(T^{\alpha_{m}}g(x_{N})=f(x_{N})\), so that \(T^{\alpha_{m}}g\in\mathcal{V}_{f}(I)\). Therefore the RB operator is well defined on \(\mathcal{V}_{f}(I)\). 1. Let \(g_{1},g_{2}\in\mathcal{V}_{f}(I)\). For \(x\in I_{i}\), \[|(T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2})(x)| =|\alpha_{i,m}||(g_{1}-g_{2})(Q_{i}(x))|\] \[\leq\max_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}||g_{1}-g _{2}||_{\infty},\] and hence \[||(T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2})||_{\infty}\leq\max_{i\in\mathbb{N} _{N}}\Big{(}|\alpha_{i,m}|\Big{)}\cdot||g_{1}-g_{2}||_{\infty}.\] Along lines similar to the estimation of \(R_{T^{\alpha_{m}}g}(S)\), we get \[R_{T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2}}(S)\leq\max_{i\in\mathbb{N}_{N}} \Big{(}|\alpha_{i,m}|\Big{)}R_{g_{1}-g_{2}}(S).\] Therefore, \[\sup_{0<\delta\leq 1}\frac{\sum\limits_{S\in\Delta_{\delta}}R_{T^{\alpha_{m}}g_{1} -T^{\alpha_{m}}g_{2}}(S)}{\delta^{1-\beta}}\leq\max\limits_{i\in\mathbb{N}_{N}} \Big{(}|\alpha_{i,m}|\Big{)}\sup_{0<\delta\leq 1}\frac{\sum\limits_{S\in\Delta_{ \delta}}R_{g_{1}-g_{2}}(S)}{\delta^{1-\beta}}.\] \[\implies||T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2}||_{\beta}-||(T^ {\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2})||_{\infty}\] \[\qquad\leq\max\limits_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}| \Big{)}\Big{(}||g_{1}-g_{2}||_{\beta}-||g_{1}-g_{2}||_{\infty}\Big{)}.\] \[||T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2}||_{\beta} \leq\max\limits_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)} (||g_{1}-g_{2}||_{\beta}-||g_{1}-g_{2}||_{\infty})\] \[\qquad+||(T^{\alpha_{m}}g_{1}-T^{\alpha_{m}}g_{2})||_{\infty}\] \[\leq\max\limits_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)} (||g_{1}-g_{2}||_{\beta}-||g_{1}-g_{2}||_{\infty})\] \[\qquad+\max\limits_{i\in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)} |g_{1}-g_{2}||_{\infty}\] \[\leq|\alpha|_{\infty}||g_{1}-g_{2}||_{\beta}.\] The assumption on the scaling sequence ensures that \(T^{\alpha_{m}}\) is a contraction on \(\mathcal{V}_{f}(I)\). 3. Let \(g\in\mathcal{V}_{f}(I)\) be arbitrary. We show that \(\{||T^{\alpha_{m}}g-g||_{\beta}\}\) is bounded. 
Adapting similar calculation as in the estimation of \(R_{T^{\alpha_{m}}g}(S)\), we get \[R_{T^{\alpha_{m}}g-g}(S)\leq R_{f}(S)+\max\limits_{i\in\mathbb{N}_{N}}\Big{(}| \alpha_{i,m}|\Big{)}R_{b_{m}}(S).\] Thus, \[\sup_{0<\delta\leq 1}\frac{\sum\limits_{S\in\Delta_{\delta}}R_{T^{ \alpha_{m}}g-g}(S)}{\delta^{1-\beta}}\leq\sup_{0<\delta\leq 1}\frac{\sum \limits_{S\in\Delta_{\delta}}R_{f}(S)}{\delta^{1-\beta}}+\max\limits_{i\in \mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}\sup_{0<\delta\leq 1}\frac{\sum \limits_{S\in\Delta_{\delta}}R_{b_{m}}(S)}{\delta^{1-\beta}}.\] \[\implies||T^{\alpha_{m}}g-g||_{\beta}-||(T^{\alpha_{m}}g-g)||_{ \infty}\leq||f||_{\beta}-||f||_{\infty}+\max\limits_{i\in\mathbb{N}_{N}}\Big{(} |\alpha_{i,m}|\Big{)}(||b_{m}||_{\beta}-||b_{m}||_{\infty}).\] \[\implies||T^{\alpha_{m}}g-g||_{\beta}\leq||f||_{\beta}+\max\limits_{i \in\mathbb{N}_{N}}\Big{(}|\alpha_{i,m}|\Big{)}||b_{m}||_{\beta}\leq||f||_{ \beta}+|\alpha|_{\infty}||b||_{\beta}.\] Applying Theorem 3.3, \(\exists\) a unique \(f^{\alpha}_{b,V}\in\mathcal{V}_{f}(I)\) such that \(f^{\alpha}_{b,V}=\lim\limits_{m\rightarrow\infty}T^{\alpha_{1}}\)\(o\)\(T^{\alpha_{2}}\)\(o\ldots o\)\(T^{\alpha_{m}}g\) for any \(g\in\mathcal{V}_{f}(I)\). _Remark 5.10_.: In [12], Falconer and Fraser proved that for \(\beta\in[1,2),C_{\beta}[0,1]=\bigcap\limits_{k\in\mathbb{N}}V\limits_{\beta+ \dfrac{1}{k}}[0,1].\) It is not known whether \(C_{\beta}[0,1]\) is a complete normed space. A suitable norm on \(C_{\beta}[0,1]\) may attract the researchers to define non-stationary fractal function on \(C_{\beta}[0,1]\) and calculate its dimension. That is, for any \(\beta\in[1,2)\), one can find a fractal function with dimension less than or equal to \(\beta\) with respect to a suitable norm. We will construct a fractal function on the convex Lipschitz space and calculate its dimension in the next subsection. ### Convex Lipschitz space **Definition 5.11**.: Let \(I=[a,b]\) and \(\theta:\mathbb{R}^{+}\longrightarrow\mathbb{R}^{+}\). A function \(g\) is called convex Lipschitz of order \(\theta\) on an interval \(I\) provided there exists a constant \(M\) such that \[|\Delta(u,v,\delta)|:=|g(u+\delta v)-(\delta g(u+v)+(1-\delta)g(u))|\leq M \theta(v),\] for \(a\leq u<u+v\leq b\) and \(0\leq\delta\leq 1\). The set of all convex Lipschitz functions of order \(\theta\) on \(I\) is denoted by \(\mathcal{V}^{\theta}(I)\). That is, \[\mathcal{V}^{\theta}(I)=\{g:I\longrightarrow\mathbb{R}:g\text{ is convex Lipschitz of order }\theta\}.\] It is simple to verify that \(\mathcal{V}^{\theta}\) is a vector space over the field \(\mathbb{R}\), which we call the convex Lipschitz space of order \(\theta\). For \(g\in\mathcal{V}^{\theta}(I)\), we define \(||g||_{\mathcal{V}^{\theta}}=||g||_{\infty}+[g]^{*}\), where \[[g]^{*}=\sup_{a\leq u<u+v\leq b}\frac{|\Delta(u,v,\delta)|}{\theta(v)}=\sup_{ a\leq u<u+v\leq b}\frac{|g(u+\delta v)-(\delta g(u+v)+(1-\delta)g(u))|}{ \theta(v)}.\] It is simple to verify that \(||.||_{\mathcal{V}^{\theta}}\) defines a norm on \(\mathcal{V}^{\theta}(I)\). **Theorem 5.12**.: _Let \(f\) be a convex Lipschitz function of order \(\theta\). Then_ 1. \([\alpha f]^{*}=|\alpha|[f]^{*}\)__ 2. \([f\pm g]^{*}\leq[f]^{*}+[g]^{*}\)_._ Proof.: It follows from the definition of \([f]^{*}\). **Proposition 5.13**.: [8] The convex Lipschitz space \(\mathcal{V}^{\theta}(I)\) with respect to the norm \(||.||_{\mathcal{V}^{\theta}}\) forms a complete metric space. 
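To fix ideas, here is a simple worked example (not taken from the references above): every Lipschitz function on \(I\) is convex Lipschitz of order \(\theta(v)=v\). Indeed, if \(|g(x)-g(y)|\leq\Lambda|x-y|\) for all \(x,y\in I\), then for \(a\leq u<u+v\leq b\) and \(0\leq\delta\leq 1\), \[|\Delta(u,v,\delta)|=|g(u+\delta v)-g(u)-\delta(g(u+v)-g(u))|\leq|g(u+\delta v)-g(u)|+\delta|g(u+v)-g(u)|\leq 2\Lambda\delta v\leq 2\Lambda v,\] so \(g\in\mathcal{V}^{\theta}(I)\) with \([g]^{*}\leq 2\Lambda\). For an affine \(g\) the expression \(\Delta(u,v,\delta)\) vanishes identically, so \([g]^{*}=0\).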
Our following proposition is collected from [18]; it will help us to calculate the dimension of the non-stationary fractal function, which will occur in the upcoming theorem. **Proposition 5.14**.: Let \(\theta:\mathbb{R}^{+}\longrightarrow\mathbb{R}^{+}\) be a continuous map such that: 1. For \(s>0,\ \theta(s)>0\); 2. \(\limsup_{s\to 0}[\frac{s}{\theta(s)}]<\infty\) and 3. there exists \(\gamma\geq 0\) such that \(\lim_{s\to 0}\frac{\theta(cs)}{\theta(s)}=c^{\gamma}\) for all \(c>0\). If \(g\in\mathcal{C}[0,1]\cap\mathcal{V}^{\theta}(I)\), then 1. for \(\theta(s)=s^{\epsilon}\), \(dim_{H}(Graph\ (g))\leq\overline{dim}_{B}(Graph\ (g))\leq 2-\epsilon\). 2. for \(\theta(s)=-s\ln s\), \(dim_{H}(Graph\ (g))=dim_{B}(Graph\ (g))=1\). **Theorem 5.15**.: _Suppose that \(f\in\mathcal{V}^{\theta}(I)\) and define_ \[\mathcal{V}^{\theta}_{f}(I)=\{g\in\mathcal{V}^{\theta}(I):g(x_{0})=f(x_{0}),g (x_{N})=f(x_{N})\}.\] _Suppose \(b_{m}\in\mathcal{V}^{\theta}_{f}(I)\) be such that \([b]^{*}:=\sup_{m\in\mathbb{N}}[b_{m}]^{*}<\infty\) and the scaling functions \(\alpha_{i,m}\) are constants such that \(S:=\max\{\max_{i}|\alpha_{i,m}|,\ \max_{i}|\alpha_{i,m}|\frac{\theta(Y)}{\theta(a_{i}Y)}\}<1,\) where \(Y=\frac{y}{a_{i}}\). Then the following hold._ 1. _The RB operator_ \(T^{\alpha_{m}}\) _defined in equation (_4.1_) is well defined on_ \(\mathcal{V}^{\theta}_{f}(I)\)_._ 2. _In fact,_ \(T^{\alpha_{m}}:\mathcal{V}^{\theta}_{f}(I)\longrightarrow\mathcal{V}^{\theta }_{f}(I)\subset\mathcal{V}^{\theta}(I)\) _is a contraction map._ _._ 3. _There exists a unique function_ \(f^{\alpha}_{b,\mathcal{V}^{\theta}}\in\mathcal{V}^{\theta}_{f}(I)\) _such that for every_ \(g\in\mathcal{V}^{\theta}_{f}(I)\) _the sequence_ \(\{T^{\alpha_{1}}\ o\ T^{\alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}g\}\) _converges to the map_ \(f^{\alpha}_{b,\mathcal{V}^{\theta}}\) _._ Proof.: 1. Clearly, \(\mathcal{V}^{\theta}_{f}(I)\subset\mathcal{V}^{\theta}(I)\) is a closed subset of \(\mathcal{V}^{\theta}(I)\) and so that with respect to the metric induced by the norm \(||.||_{\mathcal{V}^{\theta}}\), \(\mathcal{V}^{\theta}_{f}(I)\) is a complete metric space. We have, \[(T^{\alpha_{m}}g)(x)=f(x)+\alpha_{i,m}.(g-b_{m})(Q_{i}(x)),\] where \(Q_{i}(x)=l_{i}^{-1}(x)=(x-e_{i})/a_{i}\). As \(f,g,b_{m}\in\mathcal{V}^{\theta}_{f}(I),(T^{\alpha_{m}}g)(x)\in\mathcal{V}^{ \theta}_{f}(I)\). Therefore the RB-operator is well defined on \(\mathcal{V}^{\theta}_{f}(I)\). 2. For \(g,h\in\mathcal{V}^{\theta}_{f}(I)\), \[||T^{\alpha_{m}}g-T^{\alpha_{m}}h||_{\mathcal{V}^{\theta}}=||T^{\alpha_{m}}g- T^{\alpha_{m}}h||_{\infty}+[T^{\alpha_{m}}g-T^{\alpha_{m}}h]^{*}.\] Let \(Q_{i}(x)=X\) and \(y/a_{i}=Y\). 
Now, \[[T^{\alpha_{m}}g-T^{\alpha_{m}}h]^{*}\] \[= \sup_{a\leq x<x+y\leq b}\left[\frac{|(T^{\alpha_{m}}g-T^{\alpha_{ m}}h)(x+\delta y)-(\delta(T^{\alpha_{m}}g-T^{\alpha_{m}}h)(x+y)+(1-\delta)(T^{ \alpha_{m}}g-T^{\alpha_{m}}h)(x))|}{\theta(y)}\right]\] \[= \max_{i}\sup_{a\leq a_{i}X+e_{i}<a_{i}(X+Y)+e_{i}\leq b}\left[ \frac{|\alpha_{i,m}||(g-h)(X+\delta Y)-(\delta(g-h)(X+Y)+(1-\delta)(g-h)(X))|} {\theta(Y)}\times\right.\] \[\left.\frac{\theta(Y)}{\theta(a_{i}Y)}\right]\] \[= \max_{i}|\alpha_{i,m}|\times\frac{\theta(Y)}{\theta(a_{i}Y)}\times [g-h]^{*}.\] Also, \[||T^{\alpha_{m}}g-T^{\alpha_{m}}h||_{\infty}=|\alpha_{i,m}||(g-h)(Q_{i}(x))| \leq\max_{i}|\alpha_{i,m}|\cdot||g-h||_{\infty}.\] Therefore, \[||T^{\alpha_{m}}g-T^{\alpha_{m}}h||_{\mathcal{V}^{\theta}} \leq\max_{i}|\alpha_{i,m}|\cdot||g-h||_{\infty}+\max_{i}|\alpha_{ i,m}|\times\frac{\theta(Y)}{\theta(a_{i}Y)}\times[g-h]^{*}\] \[\leq S\cdot||g-h||_{\infty}+S\cdot[g-h]^{*}\] \[=S||g-h||_{\mathcal{V}^{\theta}}.\] Since \(S<1\), \(T^{\alpha_{m}}\) is a contraction map for each \(m\in\mathbb{N}\). 3. Let us take an arbitrary function \(g\in\mathcal{V}^{\theta}(I)\). We have to check if the sequence \(\{||T^{\alpha_{m}}g-g||_{\mathcal{V}^{\theta}}\}\) is bounded. \[||T^{\alpha_{m}}g-g||_{\mathcal{V}^{\theta}}=||T^{\alpha_{m}}g-g||_{\infty}+[T ^{\alpha_{m}}g-g]^{*}.\] Now, \[[T^{\alpha_{m}}g-g]^{*}\] \[=\sup_{a\leq x<x+y\leq b}\left[\frac{|(T^{\alpha_{m}}g-g)(x+\delta y )-(\delta(T^{\alpha_{m}}g-g)(x+y)+(1-\delta)(T^{\alpha_{m}}g-g)(x))|}{\theta(y)}\right]\] \[\leq\sup_{a\leq x<x+y\leq b}\left[\frac{|(f-g)(x+\delta y)-( \delta(f-g)(x+y)+(1-\delta)(f-g)(x))|}{\theta(y)}\right]\] \[+\max_{i}\sup_{a\leq a_{i}X+e_{i}<a_{i}(X+Y)+e_{i}\leq b}\] \[\left[\frac{|\alpha_{i,m}|(g-b_{m})(X+\delta Y)-(\delta(g-b_{m})( X+Y)+(1-\delta)(g-b_{m})(X))|}{\theta(Y)}\times\frac{\theta(Y)}{\theta(a_{i}Y)}\right]\] \[=[f-g]^{*}+\max_{i}|\alpha_{i,m}|\times\frac{\theta(Y)}{\theta(a_ {i}Y)}\times[g-b_{m}]^{*}\] \[\leq[f]^{*}+[g]^{*}+S\cdot([g]^{*}+[b_{m}]^{*})\] \[\leq[f]^{*}+(1+S)\cdot[g]^{*}+S\cdot[b]^{*}.\] So the bound is independent of \(m\). Applying Theorem 3.3, there exists a unique \(f^{\alpha}_{b,\mathcal{V}^{\theta}}\in\mathcal{V}^{\theta}_{f}(I)\) such that \(f^{\alpha}_{b,\mathcal{V}^{\theta}}=\lim_{m\to\infty}T^{\alpha_{1}}\ o\ T^{ \alpha_{2}}\ o\ldots o\ T^{\alpha_{m}}g\) for any \(g\in\mathcal{V}^{\theta}_{f}(I)\). _Remark 5.16_.: If we take \(\alpha_{i,m}=\alpha_{i}\) for all \(m\in\mathbb{N}\), then we get \[f^{\alpha}_{b,\mathcal{V}^{\theta}}=\lim_{m\to\infty}T^{\alpha}\ o\ T^{\alpha} \ o\ldots o\ T^{\alpha}g=\lim_{m\to\infty}(T^{\alpha})^{m}\] for any \(g\in\mathcal{V}^{\theta}_{f}(I)\). So that, \(f^{\alpha}_{b,\mathcal{V}^{\theta}}\longrightarrow f^{\alpha}_{\mathcal{V}^{ \theta}},\ \text{as}\ m\to\infty\), where \(f^{\alpha}_{\mathcal{V}^{\theta}}\) is the stationary \(\alpha\)-fractal function on convex Lipschitz space \(\mathcal{V}^{\theta}(I)\) that appeared in [8]. **Theorem 5.17**.: _Let \(f,b_{m}(m\in\mathbb{N})\in\mathcal{V}^{\theta}_{f}(I)\) and \(\alpha_{i,m}\) are constants satisfying all the hypotheses of Theorem 5.15. Also let \(f\) be continuous on \([0,1]\). Then_ 1. _for_ \(\theta(t)=t^{\epsilon}\)_,_ \(dim_{H}(Graph\ (f^{\alpha}_{b,\mathcal{V}^{\theta}}))\leq\overline{dim}_{B}( Graph\ (f^{\alpha}_{b,\mathcal{V}^{\theta}}))\leq 2-\epsilon\)_._ 2. 
_for_ \(\theta(t)=-t\ln t\)_,_ \(dim_{H}(Graph\ (f^{\alpha}_{b,\mathcal{V}^{\theta}}))=dim_{B}(Graph\ (f^{\alpha}_{b, \mathcal{V}^{\theta}}))=1\)_._ Proof.: For the given \(S<1\), we have from Theorem 5.15, \(f^{\alpha}_{b,\mathcal{V}^{\theta}}\in\mathcal{V}^{\theta}(I)\). We conclude the above results by applying Proposition 5.14.
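As a rough numerical companion to the dimension statements above (Theorem 5.2 and Proposition 5.14), the following self-contained Python sketch estimates \(dim_{B}(Graph\ (g))\) for a sampled function by the column-oscillation count from the two-sided estimate stated before Remark 5.5. The sample function below is just an arbitrary continuous function of bounded variation chosen for illustration, so the fitted slope should come out close to \(1\).

```python
import numpy as np

def box_dim_estimate(xs, ys, deltas):
    """Estimate dim_B(Graph(g)) from samples (xs, ys) on [0, 1]: for each delta,
    count the delta-boxes meeting the graph column by column (cf. the estimate
    before Remark 5.5), then fit log N_delta against log(1/delta)."""
    counts = []
    for d in deltas:
        n_cols = int(np.ceil(1.0 / d))
        cols = np.minimum((xs / d).astype(int), n_cols - 1)   # column index of each sample
        n = 0
        for c in np.unique(cols):
            col_y = ys[cols == c]
            n += int((col_y.max() - col_y.min()) / d) + 1     # boxes hit over this column
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(deltas)), np.log(counts), 1)
    return slope

xs = np.linspace(0.0, 1.0, 200001)
ys = np.sin(2.0 * np.pi * xs) + 0.3 * np.abs(xs - 0.5)        # continuous, of bounded variation
deltas = [2.0 ** (-k) for k in range(4, 11)]
print(box_dim_estimate(xs, ys, deltas))                       # expect a value close to 1 (Theorem 5.2)
```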
2308.12763
Eventually Constant and stagnating functions in non-Lindelöf spaces
Inspired by recent work of A. Mardani which elaborates on the elementary fact that for any continuous function $f:\omega_1\times\mathbb{R}\to\mathbb{R}$, there is an $\alpha\in\omega_1$ such that $f(\langle\beta,x\rangle) = f(\langle\alpha,x\rangle)$ for all $\beta\ge\alpha$ and $x\in\mathbb{R}$, we introduce four properties $\mathsf{P}(X,Y)$, $\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}$, which are different formalizations of the idea vaguely stated as "given a continuous $f:X\to Y$, there is a small subspace of $X$ outside of which $f$ does not do anything much new". We say that the spaces $X,Y$ satisfy the property $\mathsf{EC}(X,Y)$ (resp. $\mathsf{S}(X,Y)$) [resp. $\mathsf{L}(X,Y)$] iff given $f:X\to Y$, then there is a Lindel\"of $Z\subset X$ such that $f(X-Z)$ is a singleton (resp. there is a retraction $r:X\to Z$ such that $f\circ r = f$) [resp. $f(Z) = f(X)$]. ($\mathsf{BR}(X,Y)$ is defined similarly.) We investigate the relations between these four and other classical topological properties. Two variants of each property are given depending on whether $Z$ can be chosen to be closed. Here is a sample of our results. An uncountable subspace $T$ of a tree of height $\omega_1$ is $\omega_1$-compact iff $\mathsf{S}(T,Y)$ holds for any metrizable space $Y$ of cardinality $>1$. If $M$ is an $\aleph_1$-strongly collectionwise Hausdorff non-metrizable manifold satisfying either a weakening of $\mathsf{S}(M,\mathbb{R})$ or $\mathsf{EC}(M,\mathbb{R})$, then $M$ is $\omega_1$-compact. The property $\mathsf{L}(M,\mathbb{R})$ holds for any manifold while $\mathsf{L}(M,\mathbb{R}^2)$ does not. Under PFA, a locally compact countably tight space $Y$ for which $\mathsf{EC}(\omega_1,Y)$ holds is isocompact, while there are counterexamples under $\clubsuit_C$. Some of our results are restatements of other researchers' work put in our context.
Mathieu Baillif
2023-08-24T13:11:51Z
http://arxiv.org/abs/2308.12763v3
# Eventually Constant and stagnating functions in non-Lindelof spaces ###### Abstract Inspired by recent work of A. Mardani which elaborates on the elementary fact that for any continuous function \(f:\omega_{1}\times\mathbb{R}\to\mathbb{R}\), there is an \(\alpha\in\omega_{1}\) such that \(f(\langle\beta,x\rangle)=f(\langle\alpha,x\rangle)\) for all \(\beta\geq\alpha\) and \(x\in\mathbb{R}\), we introduce four properties \(\mathsf{P}(X,Y)\), \(\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}\), which are different formalizations of the idea vaguely stated as "given a continuous \(f:X\to Y\), there is a small subspace of \(X\) outside of which \(f\) does not do anything much new". More precisely, we say that the spaces \(X,Y\) satisfy the property \(\mathsf{EC}(X,Y)\) (resp. \(\mathsf{S}(X,Y)\)) [resp. \(\mathsf{L}(X,Y)\)] iff given \(f:X\to Y\), then there is a Lindelof \(Z\subset X\) such that \(f(X-Z)\) is a singleton (resp. there is a retraction \(r:X\to Z\) such that \(f\circ r=f\)) [resp. \(f(Z)=f(X)\)]. (\(\mathsf{BR}(X,Y)\) is defined similarly.) We investigate the relations between these four and other classical topological properties. Actually, two variants \(\mathsf{P},\mathsf{P}_{\mathsf{cl}}\) of each property are given depending on whether \(Z\) can be chosen to be closed. To get an idea of what our results look like, here is a sample. An uncountable subspace \(T\) of a tree of height \(\omega_{1}\) is \(\omega_{1}\)-compact iff \(\mathsf{S}(T,Y)\) holds for any metrizable space \(Y\) of cardinality \(>1\). (The case \(Y=\mathbb{R}\) and \(T\) a Suslin tree was proved by Steprans long ago.) If \(M\) is a \(\aleph_{1}\)-strongly collectionwise Hausdorff non-metrizable manifold satisfying either (a weakening of) \(\mathsf{S}(M,\mathbb{R})\) or \(\mathsf{EC}(M,\mathbb{R})\), then \(M\) is \(\omega_{1}\)-compact. The property \(\mathsf{L}(M,\mathbb{R})\) holds for any manifold while \(\mathsf{L}(M,\mathbb{R}^{2})\) does not. Under **PFA**, a locally compact countably tight space \(Y\) for which \(\mathsf{EC}(\omega_{1},Y)\) holds is isocompact, while there are counterexamples under \(\clubsuit_{C}\). Some of our results are (more or less elaborate) restatements of other researchers work put in our context. ###### Contents * 1 Introduction * 2 Does a lazy constant broken record stagnate? * 3 Consecuences of \(\mathsf{EC}(X,\mathbb{R})\); \(\mathsf{EC}(\omega_{1},Y)\) and isocompactness of \(Y\) * 4 \(\mathsf{S}(T,\mathbb{R})\) and Suslin trees * 5 \(\mathsf{BR}(X,Y)\), \(\mathsf{L}(X,Y)\) when \(Y\) is metric and \(X\) an increasing chain of 'nice' subspaces * 6 A word on products on the target space * 7 Type I manifolds: when \(\mathsf{EC}\not\Rightarrow\mathsf{S}\). * 8 \(w\mathsf{S}(M,\mathbb{R})\), normality, collectionwise Hausdorff and \(\omega_{1}\)-compactness for manifolds * 9 If'small' means 'compact' **10 Summary and tables** ## 1 Introduction In this note, by'space' we mean topological Hausdorff space (in particular'regular' and 'normal' imply Hausdorff) and every function is assumed continuous, unless specified. We denote ordered pairs with brackets \(\langle\cdot,\cdot\rangle\), reserving parenthesis for intervals in (totally) ordered spaces. We refer to [10] for standard topological notions not defined in our text. Notice however that we sometimes depart from the conventions of [10] about including or not separation axioms into properties like pseudocompactness. We will specify when this is the case. 
Our starting point is the following well known elementary result. (For a proof, combine Lemma B.30 in [11] with the fact that \(\mathbb{R}\) is separable.) As usual \(\omega_{1}\) is the first uncountable ordinal endowed with the order topology. **Theorem 1.1**.: _Let \(f:\omega_{1}\times\mathbb{R}\to\mathbb{R}\) be continuous. Then there is \(\alpha\in\omega_{1}\) such that \(f(\langle\beta,x\rangle)=f(\langle\alpha,x\rangle)\) for all \(\beta\geq\alpha\) and \(x\in\mathbb{R}\)._ To summarize in a very imprecise catchphrase: _Outside of a small subspace, \(f\) does not do anything much new.1_ Here of course the small subspace is \([0,\alpha]\times\mathbb{R}\). The purpose of this note is to investigate the notions obtained when 'small' and 'nothing much new' are interpreted in various ways. Well, not so various as to the former: 'Small' will almost always mean 'Lindelof' (the only exception being the last section). This might be surprising at first since compactness is arguably a more natural measure of smallness of a subspace. But we believe that our choice yields more interesting results, as replacing it by compactness as we do in Section 9 seems to (almost always) confine us to pseudocompact spaces, at least for real valued maps. The phrase '\(f\) does not do anything much new' gives way to more varied interpretations (at least to the extent permitted by our imagination). A rather general interpretation is that there is a small subset whose image is equal to that of the whole domain space. In Theorem 1.1, \(f([0,\alpha]\times\mathbb{R})=f(\omega_{1}\times\mathbb{R})\). Figuratively speaking, \(f\) (or \(\omega_{1}\times\mathbb{R}\)) is a lazy explorer of \(\mathbb{R}\): after a small exploration it remains at the same places forever.2 Looking at it from the other side, we can say that outside of a small subset, any value taken by \(f\) will be taken again and again, like a broken record, as we go further away (horizontally in this case) in the domain space. In Theorem 1.1, if \(\beta>\alpha\), there is \(\gamma>\beta\) such that \(f(\langle\gamma,t\rangle)=f(\langle\beta,t\rangle)\) (this actually holds for each \(\gamma\)). Another more restrictive (a priori) interpretation is that the map \(f\)_stagnates_ outside of a small subspace; that is, there is a retraction \(r\) of \(X\) onto a small subset such that \(f\circ r=f\). A _retraction_ is a map \(r:X\to Z\subset X\) whose restriction to \(Z\) is the identity. In Theorem 1.1 we may define it as \(r(\langle\beta,t\rangle)=\langle\min\{\alpha,\beta\},t\rangle\). Lastly, looking only at horizontal lines, we see that any \(g:\omega_{1}\to\mathbb{R}\) is _eventually constant_, in the sense that \(g\) is constant outside of the small subset \([0,\alpha]\). These four interpretations are formalized in the following definition. Footnote 1: Another phrasing is \(f\)_does not cause much ruckus outside of a very small world._ We feel that this phrase can also be seen as an overstatement of this article's fate. **Definition 1.2**.: _Let \(X,Y\) be spaces with \(X\) non-Lindelof. Then \(X,Y\) satisfy the left side of the table below iff for each \(f:X\to Y\), there is a Lindelof subset \(Z\subset X\) such that the right side holds.
The middle boxes contain the shorthands for each property._ \begin{tabular}{|l|l|l|} \hline \(X\) _is eventually constant in \(Y\)_ & \(\mathsf{EC}(X,Y)\) & \(f(X-Z)\) _is a singleton_ \\ \hline \(X\) _stagnates in \(Y\)_ & \(\mathsf{S}(X,Y)\) & _there is a retraction_ \\ & & \(r:X\to Z\) _satisfying_ \(f=f\circ r\) \\ \hline \(X\) _is a lazy explorer of \(Y\)_ & \(\mathsf{L}(X,Y)\) & \(f(Z)=f(X)\) \\ \hline \(X\) _is a broken record in \(Y\)_ & \(\mathsf{BR}(X,Y)\) & _for each Lindelof \(W\supset Z\),_ \\ & & \(f(X-W)=f(X-Z)\) \\ \hline \end{tabular} _If \(Z\) can be chosen to be closed, we write \(\mathsf{EC_{cl}}(X,Y)\), etc, for the stronger properties._ (We already stress that there are spaces \(X\) for which, really, nothing much new happens outside of some closed Lindelof subset for any real valued map, but none of these properties with \(Y=\mathbb{R}\) hold, see Example 2.10 (c).) Notice in passing that any space \(X\) with the property that any real valued fonction on \(X\) is constant satisfies \(\mathsf{P_{cl}}(X,\mathbb{R})\) for each \(\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}\). Such spaces do exist, some being even regular (for instance [29, Example 92]). Spaces \(Y\) satisfying \(\mathsf{EC}(\omega_{1},Y)\) are called \(\omega_{1}\)-squat by A. Mardani, whose PhD thesis [17] contains results about this and related classes of spaces which motivated this work. The term'squat' was first used by D. Gauld (in particular in [11]) for almost the same property. This note contains our musings about the interplays these eight notions have with each other and other topological properties in various classes of spaces. It is organized in sections of variable length whose titles hint to their contents. We were driven by pure curiosity and the pleasure of wandering in this landscape3. In some cases we have tried to obtain quite general results (for instance in sections 2-3), while in others we have concentrated our efforts on particular classes of spaces, such as set-theoretic trees (the entire section 4) and non-metrizable manifolds (all of sections 7-8 and some of section 5). As such, this note is more akin to a little stroll in the garden, with rusty old tools in hands, peeking below scattered rocks and looking for strange insects; than to securing the foundations for a (future) twenty storeys tower. We do not know if our results will seem appealing to other researchers who tend to enjoy the scenery differently from us, but we hope that the reader's mood is bucolic enough to at least enjoy some of them. The sections are somewhat independent, although the later ones tend to refer to the previous ones (what a surprise). Section 2 contains basic results used pervasively, and section 10 contains tables which briefly summarize the properties of the strange insects we encountered. Footnote 3: We believe that it is important for the reader to keep that in mind. Despite the length of this paper, we have a leisurely approach and do not claim any grand result. We end this introduction by recalling definitions that are either not-so-standard or slightly different from the usual ones. By _cover_ of a space \(X\) is understood a family of open sets whose union contains \(X\). A cover is a _chain cover_ if it is linearly ordered by the inclusion relation. Any non-trivial chain cover (that is, one without the whole space as a maximal element) has a subcover indexed by a regular cardinal whose members are pairwise distinct. 
We always implicitely take such a subcover and use such indexing. A space \(X\) is of _Type I_ (Nyikos [20]) iff \(X=\cup_{\alpha<\omega_{1}}X_{\alpha}\), where \(X_{\alpha}\) is open, \(\overline{X_{\alpha}}\subset X_{\beta}\) whenever \(\alpha<\beta\), and \(\overline{X_{\alpha}}\) is Lindelof for each \(\alpha\). Although it it not included in the usual definition, for simplicity we assume in this note that Type I spaces are _not_ Lindelof, that is, \(X\neq X_{\alpha}\) for each \(\alpha\). If \(\cup_{\alpha<\beta}X_{\alpha}=X_{\beta}\) whenever \(\beta\) is limit, the cover \(\{X_{\alpha}\,:\,\alpha<\omega_{1}\}\) is called _canonical_. Any chain cover of a Type I space has a subcover indexed by \(\omega_{1}\) which can be made canonical by adding the missing \(X_{\alpha}\)'s. Any two canonical covers agree on a _club_ (i.e. closed and unbounded) subset of \(\omega_{1}\), as easily seen. Hence, if \(X\) is a Type I space, \(X_{\alpha}\) will always denote the \(\alpha\)th member of some (often implicit) canonical cover. We borrow the usual vocabulary used in \(\omega_{1}\) for Type I spaces: a subset of \(X\) is _bounded_ iff it is contained in some \(X_{\alpha}\) and _unbounded_ otherwise, and _club_ means closed and unbounded. The closure of a Lindelof subset is Lindelof in a Type I space. By _manifold_ or _\(n\)-manifold_ (when we want to emphasize the dimension) we mean a connected space locally homeomorphic to \(\mathbb{R}^{n}\). A _surface_ is a 2-manifold. Connectedness is often an indispensable property in our results about manifolds. Manifolds with boundary have some points (those in the manifold boundary) with open neighborhoods homeomorphic to \(\mathbb{R}_{\geq 0}\times\mathbb{R}^{n-1}\) (and no open neighborhood homeomorphic to \(\mathbb{R}^{n}\)). Unless specified, the word 'boundary' alone means 'topological boundary' and not'manifold boundary'. The longray \(\mathbb{L}_{\geq 0}\) is the 1-manifold with boundary \(\omega_{1}\times[0,1)\) with lexicographic order topology (_not_ the product topology). We often view \(\omega_{1}\) as a subset of \(\mathbb{L}_{\geq 0}\) by identifying \(\alpha\) with \(\langle\alpha,0\rangle\), and hence write for instance \(\mathbb{L}_{\geq 0}-\omega_{1}\) instead of \(\mathbb{L}_{\geq 0}-\omega_{1}\times\{0\}\). The open longray \(\mathbb{L}_{+}\) is obtained by deleting the 0 point in \(\mathbb{L}_{\geq 0}\). A space is _\(\omega\)-bounded_ iff any countable subset has a compact closure. For Type I spaces this is equivalent to being countably compact. A manifold is \(\omega\)-bounded iff it is countably compact and of Type I [11, Thm 4.10]. A _longpipe_[11, Def. 4.11] is an \(\omega\)-bounded (hence Type I) surface \(S=\cup_{\alpha\in\omega_{1}}S_{\alpha}\) such that \(S_{\alpha+1}\) is homeomorphic to the cylinder \(\mathbb{S}^{1}\times[0,1)\) and the topological boundary of \(S_{\alpha+1}\) in \(S_{\beta}\) is homeomorphic to the circle for each \(\alpha\) and \(\beta\geq\alpha+2\). (This may not be true if \(\alpha\) is a limit ordinal, though.) For more on non-metrizable manifolds and longpipes, see [11] and [20]. Finally, recall that a space is _\(\omega_{1}\)-compact_ or has _countable spread_ iff its closed discrete subspaces are at most countable. The following classical facts (see e.g. [10, Thm 4.1.15, Thm 4.1.17]) will be useful in some proofs. 
**Lemma 1.3**.: _In a metrizable space, the properties \(\omega_{1}\)-compact, Lindelof, hereditarily Lindelof, separable and hereditarily separable are all equivalent, and compactness is equivalent to countable compactness._ ## 2 Does a lazy constant broken record stagnate? An astute reader probably suspects that the answer to the question in this section's title is _absolutely yes, but sometimes no, and reciprocally_. This section is devoted to giving more details about this answer (and the question itself). When there is no risk of confusion, we abbreviate the statement 'for all Hausdorff spaces \(X,Y\), \(\mathsf{P}_{1}(X,Y)\Rightarrow\mathsf{P}_{2}(X,Y)\)' by '\(\mathsf{P}_{1}\Rightarrow\mathsf{P}_{2}\)'. '\(\mathsf{P}_{1}\not\Rightarrow\mathsf{P}_{2}\)' and '\(\mathsf{P}_{1}\Leftrightarrow\mathsf{P}_{2}\)' are to be understood similarly. Notice that for \(\mathsf{S}(X,Y)\), \(Z\) has to be closed in Definition 1.2 since it is the set of fixed points of the retraction and the spaces are Hausdorff, so \(\mathsf{S}\Leftrightarrow\mathsf{S}_{\mathrm{cl}}\). The others implications in the lemma below are immediate from the definitions. (For aesthetical reasons, we tend to denote implication by single arrows \(\longrightarrow\) in somewhat complicated diagrams, and by double arrows \(\Longrightarrow\) in one-liners formulas. We hope that it does not cause confusion.) **Lemma 2.1**.: _The following implications hold._ Implications not shown actually do not hold for all spaces \(X,Y\). In particular, none of \(\mathsf{L}\), \(\mathsf{EC}\), \(\mathsf{BR}\) imply their \(\mathsf{cl}\)-counterpart in general. It is however the case for Type I spaces. **Lemma 2.2**.: _For Type I spaces \(X\), \(\mathsf{P}(X,Y)\Leftrightarrow\mathsf{P}_{\mathrm{cl}}(X,Y)\) for any space \(Y\) and \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR},\mathsf{S}\}\)._ Proof.: For \(\mathsf{S}\), this always holds regardless of whether \(X\) is Type I. Notice that in the definitions of properties \(\mathsf{EC},\mathsf{L},\mathsf{BR}\) in 1.2, if the right side holds for the Lindelof subset \(Z\) then it holds for any Lindelof subset containing \(Z\). If \(X\) is Type I, a Lindelof subspace of \(X\) is contained in \(\overline{X_{\alpha}}\) (Lindelof itself) for some \(\alpha\), which yields the result. Let us now have a look at arrows not in Lemma 2.1. Our first example is trivial. **Example 2.3**.: _None of \(\mathsf{S}_{\mathsf{cl}},\mathsf{L}_{\mathsf{cl}},\mathsf{BR}_{\mathsf{cl}}\) imply \(\mathsf{EC}\): A space \(X\) consisting of the disjoint union of two copies of \(\omega_{1}\) satisfies the first three properties for real maps but not the last one._ Details.: By Theorem 1.1. Taking a map sending one copy of \(\omega_{1}\) on \(0\) and the other on \(1\) shows that \(\mathsf{EC}(X,\mathbb{R})\) does not hold. Other examples as 2.3 are the _long line_\(\mathbb{L}\) made of two copies of \(\mathbb{L}_{\geq 0}\) glued at their \(0\)-point, and the space \(\omega_{1}\times\mathbb{R}\). We now show that \(\mathsf{P}\not\Rightarrow\mathsf{P}_{\mathsf{cl}}\) for \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR}\}\). The idea is actually quite simple and can be infered from the two next lemmas. The first one is trivial. **Lemma 2.4**.: _Let \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR}\}\) and \(X\), \(Y\) be spaces such that \(X=A\cup B\) where \(A\) is Lindelof and \(\mathsf{P}(B,Y)\) holds. 
Then \(\mathsf{P}(X,Y)\) holds._ **Lemma 2.5**.: _Let \(X\) be a space with a dense countable subspace of isolated points which we identify with the integers \(\omega\). If \(X-\omega\) is not Lindelof, then \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) and \(\mathsf{BR}_{\mathsf{cl}}(X,\mathbb{R})\) do not hold._ Proof.: Let \(f:X\to\mathbb{R}\) be defined by \(f(n)=1/n\) for \(n\in\omega\) and \(f(x)=0\) for \(x\not\in\omega\). Then \(f\) is continuous. Any \(Z\subset X\) such that \(f(Z)=f(X)\) must contain all of \(\omega\), hence its closure is all of \(X\), which is not Lindelof. It follows that \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) does not hold. The same function shows that \(\mathsf{BR}_{\mathsf{cl}}(X,\mathbb{R})\) does not hold: given \(Z\subset X\), if \(\omega\not\subset Z\), for any \(x\in\omega-Z\) we have \[f(x)\in f(X-Z)\neq f(X-(Z\cup\{x\}))\not\ni f(x).\] Hence, \(Z\supset\omega\) and its closure must again be the entire space. Hence, spaces as in this lemma are good candidates to show that \(\mathsf{P}\not\to\mathsf{P}_{\mathsf{cl}}\). To complete the task, the following result is another tool. Recall that \(\beta X\) is the Cech-Stone compactification of the space \(X\). **Theorem 2.6**.: _Let \(X\) be Tychonoff and let \(\beta X\) be its Cech-Stone compactification. (a) If \(|\beta X-X|=1\), then \(\mathsf{EC}(X,\mathbb{R})\) holds. (b) If \(\beta X-X\) is at most countable and \(X\) is locally compact, then \(\mathsf{L}(X,\mathbb{R})\) holds._ Recall that a \(0\)_-set_ (resp. a _co-\(0\)-set_) in a space \(X\) is a preimage of \(\{0\}\) (resp. of \((0,1]\)) under a map \(X\to[0,1]\). Proof.: If \(A,B\) are disjoint \(0\)-sets in \(X\), then their closure is disjoint in \(\beta X\). (All claimed properties of \(\beta X\) in this proof can be found in [10, Section 3.6].) This implies that \(|\beta X-X|=1\) is equivalent to the property that given two disjoint \(0\)-sets in \(X\), then at least one is compact. We show in Theorem 3.3 below that this implies \(\mathsf{EC}(X,\mathbb{R})\), which proves (a). Assume as in (b) that \(\beta X-X\) is at most countable, with \(X\) locally compact, and let \(f:X\to\mathbb{R}\) be given. We may assume that the range of \(f\) is contained in \([0,1]\). Let \(\beta f:\beta X\to[0,1]\) be the extension of \(f\) to all of \(\beta X\) (which always exists, see e.g. [10, Theorem 3.6.1]). By local compactness, \(X\) is open in \(\beta X\) (see e.g. [10, Theorem 3.5.8]). Then \(C=\beta f(\beta X-X)\) is compact and countable in \([0,1]\). If \(E\subset[0,1]\) is closed and disjoint from \(C\), then \(f^{-1}(E)\) is compact in \(X\), otherwise its closure in \(\beta X\) intersects \(\beta X-X\), and hence \(E\cap C\neq\varnothing\). Since any open subset of \([0,1]\) is an \(F_{\sigma}\), \(Z_{0}=f^{-1}([0,1]-C)\) is Lindelof. Add to \(Z_{0}\) one preimage of each \(c\in C\) such that \(f^{-1}(C)\cap X\neq\varnothing\), to obtain a Lindelof subset \(Z\) with full image. **Example 2.7** (S. 
Mrowka, in effect).: _There is a locally compact first countable pseudocompact separable space \(X\) such that: (a) \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds and hence so do \(\mathsf{L}(X,\mathbb{R})\) and \(\operatorname{\mathsf{BR}}(X,\mathbb{R})\), (b) \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) does not hold and thus neither do \(\operatorname{\mathsf{EC}}_{\mathsf{cl}}(X,\mathbb{R})\) and \(\mathsf{S}(X,\mathbb{R})\), (c) \(\operatorname{\mathsf{BR}}_{\mathsf{cl}}(X,\mathbb{R})\) does not hold._ A space \(X\) is _pseudocompact_ iff every real valued map defined on \(X\) has a bounded image. We do _not_ assume that pseudocompact spaces are Tychonoff, departing from the definition given in [10, p. 208]. See e.g. [30] for more on pseudocompact spaces. Recall also that a family of sets is _almost disjoint_ iff the intersection of any two members is finite. Details.: Recall that a \(\psi\)-space is the union of an open countable discrete space (which we may take to be \(\omega\)) and an uncountable discrete subspace whose points are given by a maximal family \(\mathcal{R}\) of almost disjoint subsets of \(\omega\). A neighborhood basis of \(A\in\mathcal{R}\) is given by \(\{A\}\cup(A-F)\) where \(F\) is finite. All \(\psi\)-spaces are pseudocompact and locally compact. S. Mrowka [18, Theorem 3.11] shows how to construct a \(\psi\)-space \(X\) whose Cech-Stone compactification is the one-point compactification. This implies that \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds by Theorem 2.6. Of course, \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) and \(\operatorname{\mathsf{BR}}_{\mathsf{cl}}(X,\mathbb{R})\) do not hold by Lemma 2.5. Notice in passing that a disjoint union of two spaces as in Example 2.7 yields a space satisfying \(\mathsf{L}(X,\mathbb{R})\) and \(\operatorname{\mathsf{BR}}(X,\mathbb{R})\) but neither their cl-versions nor \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\). Another way of obtaining \(\mathsf{P}\not\Rightarrow\mathsf{P}_{\mathsf{cl}}\) is the following. For the definition of the uncountable cardinal \(\mathfrak{p}\) and more on the subject, see e.g. [8, Chapter 3]. **Example 2.8** (P. Nyikos).: _There is a locally compact first countable separable normal space \(X\) such that \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds but \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) and \(\operatorname{\mathsf{BR}}_{\mathsf{cl}}(X,\mathbb{R})\) do not. If \(\mathfrak{p}=\omega_{1}\), the space can be made to be moreover countably compact._ Details.: If \(X\) is as in Lemma 2.5 but such that \(X-\omega\) is homeomorphic to \(\omega_{1}\) (in the order topology), then \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds (by Lemma 2.4) while \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) and \(\operatorname{\mathsf{BR}}_{\mathsf{cl}}(X,\mathbb{R})\) do not. There are many ways to obtain normal first countable and locally compact such spaces, see for instance [23]. Nyikos ([21], Theorem 2.1 and Example 3.4) shows how to obtain a countably compact example whenever \(\mathfrak{p}=\omega_{1}\). A variation of this space can be adapted to obtain a surface, see section 9. The next examples show in particular that \(\mathsf{L}\not\Rightarrow\operatorname{\mathsf{BR}}\) and \(\operatorname{\mathsf{BR}}\not\Rightarrow\mathsf{L}\). It is convenient to first introduce the following notation. If \(f:X\to Y\) is a map with \(X\) of Type I, set: \[\operatorname{\mathsf{Bd}}(f)=\{y\in Y\,:\,f^{-1}(\{y\})\text{ is Lindelof in }X\}.\] **Lemma 2.9**.: _Let \(X,Y\) be spaces.
Then_ \[\operatorname{\mathsf{BR}}(X,Y)\text{ holds }\Longleftrightarrow f^{-1}( \operatorname{\mathsf{Bd}}(f))\text{ is contained in a Lindelof subset for any }f:X\to Y.\] Proof.: Essentially by unraveling the definitions. Here are some details for the skeptics. Suppose that \(\operatorname{\mathsf{BR}}(X,Y)\) holds. Let \(f:X\to Y\) be given and let \(Z\) be Lindelof such that for any Lindelof \(W\supset Z\), \(f(X-W)=f(X-Z)\). Let \(y\in\operatorname{\mathsf{Bd}}(f)\), then \(W=f^{-1}(\{y\})\cup Z\) is Lindelof. If \(y\) has a preimage outside of \(Z\), then \(f(X-W)\) does not contain \(y\) while \(f(X-Z)\) does, a contradiction. Hence \(f^{-1}(\operatorname{\mathsf{Bd}}(f))\subset Z\). Conversely, let \(f:X\to Y\) be given such that \(f^{-1}(\operatorname{\mathsf{Bd}}(f))\) is contained in a Lindelof subset of \(X\). Hence \(f^{-1}(\operatorname{\mathsf{Bd}}(f))\subset\overline{X_{\alpha}}\) for some \(\alpha\). Let \(W\supset\overline{X_{\alpha}}\) be Lindelof. If \(y\in f(X-\overline{X_{\alpha}})\), then \(f^{-1}(\{y\})\) is unbounded, and hence \(f^{-1}(\{y\})\not\subset W\). It follows that \(y\in f(X-W)\). This shows that \(\operatorname{\mathsf{BR}}(X,Y)\) holds. **Example 2.10**.: _There are Type I locally metrizable spaces \(H^{-}\) and \(H^{+}\) such that: (a) \(\mathsf{S_{cl}}(H^{-},\mathbb{R})\) and thus \(\mathsf{L_{cl}}(H^{-},\mathbb{R})\) hold but \(\mathsf{BR}(H^{-},\mathbb{R})\) does not, (b) \(\mathsf{BR_{cl}}(H^{+},\mathbb{R})\) holds but \(\mathsf{L}(H^{+},\mathbb{R})\) does not, (c) \(H=H^{-}\sqcup H^{+}\), the topological disjoint sum of \(H^{-}\) and \(H^{+}\), has a partition into closed non-Lindelof subsets such that for each map \(f:H\to\mathbb{R}\) there is a Lindelof subset \(Z\) such that \(f\) is constant on each member of the partition outside of \(Z\) but none of \(\mathsf{P}(H,\mathbb{R})\) holds for \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR},\mathsf{S}\}\)._ (Recall that for Type I spaces the properties are equivalent to their \(\mathsf{cl}\)-counterparts, we worded the theorem in the strongest form.) DetailsThe examples are simple subspaces of \(\omega_{1}\times[0,1]\). Take an uncountable subset \(S=\{s_{\alpha}\,:\,\alpha\in\omega_{1}\}\) of \([0,1]\) with dense complement (all the \(s_{\alpha}\) are distinct). Then set: \[H^{+}=\bigcup_{\alpha\in\omega_{1}}[\alpha,\omega_{1})\times\{s _{\alpha}\}\] \[H^{-}=\omega_{1}\times[0,1]-H^{+}.\] (The intervals \([\alpha,\omega_{1})\) are taken in \(\omega_{1}\).) Then both are a Type I spaces with canonical covers \(H^{-}_{\alpha},H^{+}_{\alpha}\) given by the intersection of the space with \([0,\alpha)\times[0,1]\). (a) Let \(f:H^{-}\to[0,1]\) be the projection on the second coordinate. Then \(\mathsf{Bd}(f)=S\) and \(f^{-1}(\mathsf{Bd}(f))\) is unbounded (it contains \([0,\alpha)\times\{s_{\alpha}\}\) for each \(\alpha\)); hence, \(\mathsf{BR}(H^{-},\mathbb{R})\) does not hold. To see that \(\mathsf{S}(H^{-},\mathbb{R})\) does hold, take a countable dense subset \(Q=\{q_{n}\,:\,n\in\omega\}\) of \([0,1]-S\). Since \(S\) has dense complement, \(\overline{Q}=[0,1]\). Given a map \(g:H^{-}\to\mathbb{R}\), there is \(\beta\) such that \(g\) is constant above \(\beta\) on each horizontal \(\omega_{1}\times\{q_{n}\}\), and thus by density on every horizontal line, even those who do not go the entire length. Hence the retraction \(r(\langle\alpha,t\rangle)=\langle\min\{\alpha,\beta\},t\rangle\) satisfies \(g\circ r=g\). (b) The fact that \(S\) has dense complement is actually irrelevant for this part. 
The projection on the second coordinate shows that \(\mathsf{L}(H^{+},\mathbb{R})\) does not hold: new values are introduced as far as one wants. To see that \(\mathsf{BR}(H^{+},\mathbb{R})\) does hold, use the same argument as in (a) with a countable dense subset \(\{s_{\alpha_{n}}\,:\,n\in\omega\}\) of \(S\) to show that there is \(\beta\) such that \(f\) is eventually constant above \(\beta\) on every horizontal line. Take \(Z=\overline{H^{+}_{\beta}}\), then any Lindelof \(W\supset Z\) satisfies \(f(H^{+}-Z)=f(H^{+}-W)\). (c) Notice that while \(H^{+}\cup H^{-}=\omega_{1}\times[0,1]\), the topological disjoint sum \(H=H^{+}\sqcup H^{-}\) has a finer topology, as both \(H^{+}\) and \(H^{-}\) are clopen in \(H\). Partition \(H\) into its horizontal lines. Then the claimed properties are immediate from the arguments above. It is not possible to find a manifold having the same properties as \(H^{+}\): Theorem 5.1 implies that \(\mathsf{L}(X,\mathbb{R})\) holds for any manifold \(M\). We however do not know the answer to the following question. **Question 2.11**.: _Is there a manifold \(M\) such that \(\mathsf{S}(M,\mathbb{R})\) holds but not \(\mathsf{BR}(M,\mathbb{R})\)?_ See section 5 for more on \(\mathsf{BR}\) and sections 7-8 for more on \(\mathsf{S}\). While \(\mathsf{EC}(X,Y)\) (and a fortiori \(\mathsf{EC_{cl}}(X,Y)\)) seems to be a stronger property than \(\mathsf{S}(X,Y)\), it does not imply it since \(X\) might lack retractions on sufficiently large subspaces (see Example 2.8). Actually, \(\mathsf{EC}(X,Y)\) does imply the negation of a property weaker than \(\mathsf{S}(X,Y)\) if \(X\) is a longpipe, see Theorem 7.6 and Example 7.7. This was the only implication not in Lemma 2.1 and not ruled out by our examples so far. Let us end this section by getting away with general (almost) trivialities. Firstly, the following obviously holds (as already noted in [17, Lemma 4.3.34] for a space eventually constant in another): **Lemma 2.12**.: _Let \(X,Y,Z\) be spaces such that there is a continuous \(1\)-to-\(1\) map \(Y\to Z\). Then \(\mathsf{P}(X,Z)\Longrightarrow\mathsf{P}(X,Y)\) for \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR},\mathsf{S}\}\) or the \(\mathsf{cl}\) versions._ **Corollary 2.13**.: _Let \(\tau\supset\rho\) be Hausdorff topologies on \(Y\). Then \(\mathsf{P}(X,\langle Y,\rho\rangle)\Longrightarrow\mathsf{P}(X,\langle Y,\tau\rangle)\) for \(\mathsf{P}\in\{\mathsf{EC},\mathsf{L},\mathsf{BR},\mathsf{S}\}\) or the \(\mathsf{cl}\) versions._ Proof.: \(id:\langle Y,\tau\rangle\rightarrow\langle Y,\rho\rangle\) is a \(1\)-to-\(1\) continuous map. Finally, the next lemmas are also almost immediate. **Lemma 2.14**.: _Let \(X=\cup_{\alpha\in\omega_{1}}X_{\alpha}\) be a Type I space and \(Y\) be a countable space with no separation axiom assumed. Then \(\mathsf{L}(X,Y)\) and \(\mathsf{BR}(X,Y)\) hold._ Proof.: Let \(f:X\to Y\) be given. Then \(f(X_{\alpha})\) and \(f(X-X_{\alpha})\) are respectively increasing and decreasing \(\omega_{1}\)-sequences of countable sets, they must then stagnate above some \(\alpha\). **Lemma 2.15**.: _Let \(X\) be a space containing a clopen uncountable discrete subspace \(A\) and let \(Y\) be a space. Then the following hold. (a) If \(|Y|\geq\aleph_{1}\), then neither \(\mathsf{L}(X,Y)\) nor \(\mathsf{BR}(X,Y)\) do hold. (b) If \(|Y|\geq 2\), then neither \(\mathsf{EC}(X,Y)\) nor \(\mathsf{S}(X,Y)\) do hold._ Proof.: (a) By assumption, we can define a map \(f:X\to Y\) which is \(1\)-to-\(1\) on \(A\) and constant on \(X-A\). 
Then \(f\) contradicts \(\mathsf{L}(X,Y)\) and \(\mathsf{BR}(X,Y)\). (b) Partition \(A\) into two clopen discrete uncountable subsets \(A_{0},A_{1}\). Define \(f:X\to Y\) that sends \(A_{0}\) to one of the points of \(Y\) and \(X-A_{0}\) to the other one. This defines a continuous map which contradicts \(\mathsf{S}(X,Y)\) and \(\mathsf{EC}(X,Y)\). ## 3 Consequences of \(\mathsf{EC}(X,\mathbb{R})\); \(\mathsf{EC}(\omega_{1},Y)\) and isocompactness of \(Y\) Theorem 1.1 shows in particular that \(\mathsf{EC}(\omega_{1},\mathbb{R})\) holds. In this section, we first investigate which spaces satisfy \(\mathsf{EC}(X,\mathbb{R})\), and when \(\mathsf{EC}(X,\mathbb{R})\) implies \(\mathsf{EC}(X,Y)\). (Our results have similarities with those in [24, Section 7].) Then, we look for properties of \(Y\) that imply or are implied by \(\mathsf{EC}(Z,Y)\) when \(Z\) is similar (in some way) to \(\omega_{1}\). The notions of C-closedness and isocompactness will be central (see definitions below). It is well known and easy to prove that finite unions and at most countable intersections of \(0\)-sets are \(0\)-sets, and finite intersections and countable unions of co-\(0\)-sets are co-\(0\)-sets. If \(Y\) is perfectly normal (in particular, metric), then the preimage of a closed subset of \(Y\) is a \(0\)-set. **Lemma 3.1**.: _Let \(X\) be a space, and \(E\subset D\subset X\) be subspaces such that \(E\) is Lindelof and \(D\) is non-Lindelof. Then the following hold. (a) There is an open \(U\supset E\) such that \(D-U\) is non-Lindelof. (b) If \(X\) is Tychonoff and \(D\) is a \(0\)-set, then there is an open \(U\supset E\) such that \(D-U\) is a non-Lindelof \(0\)-set._ Proof.: Since \(D\) is non-Lindelof and \(E\) Lindelof, let \(\mathcal{U}\) be an open cover of \(D\) without countable subcover and let \(\mathcal{U}_{0}\subset\mathcal{U}\) be a countable subcover of \(E\). Then \(D-\cup\mathcal{U}_{0}\) is non-Lindelof, which proves (a). For (b), since \(X\) is Tychonoff, for each \(x\in E\) we may fix \(g_{x}:X\rightarrow[0,1]\) such that \(g_{x}(x)=1\) and \(g_{x}\) is \(0\) outside of \(\cup\mathcal{U}_{0}\). Let \(\{x_{n}\,:\,n\in\omega\}\) be such that \(\mathcal{W}=\{g_{x_{n}}^{-1}((0,1])\,:\,n\in\omega\}\) is a cover of \(E\). Then \(\cup\mathcal{W}\) is a countable union of co-\(0\)-sets and hence a co-\(0\)-set and is included in \(\cup\mathcal{U}_{0}\). It follows that \(D-\cup\mathcal{W}\) is a \(0\)-set which is non-Lindelof since it contains the closed non-Lindelof subset \(D-\cup\mathcal{U}_{0}\). **Definition 3.2**.: _We say that a space \(X\) has property \(\mathsf{IC}\) (resp. \(\mathsf{IO}\)) iff given two disjoint closed subsets (resp. \(0\)-sets) of \(X\), at least one of them is Lindelof._ The following theorem was essentially proved for Type I spaces in the preprint [3, Thm 6.1 and Lemma 6.2]. **Theorem 3.3**.: _Let \(X\) be a space.
Then the properties below are related as follows._ _(1)\(\Longleftrightarrow\)(2)\(\Longleftrightarrow\)(3a)\(\Longleftarrow\)(3b)\(\Longleftarrow\)(4a)\(\Longleftrightarrow\)(4b)_ _Moreover, if \(X\) is Tychonoff then (3a)\(\Longleftrightarrow\)(3b) and if \(X\) is normal, all properties are equivalent._ _(1) \(\mathsf{EC}(X,\mathbb{R})\) holds,_ _(2) \(\mathsf{EC}(X,Y)\) holds when \(Y\) is a metric space,_ _(3a) \(X\) satisfies \(\mathsf{IO}\),_ _(3b) given two non-Lindelof \(0\)-sets of \(X\), their intersection is non-Lindelof,_ _(4a) \(X\) satisfies \(\mathsf{IC}\),_ _(4b) given two closed non-Lindelof subsets of \(X\), their intersection is non-Lindelof._ Proof.: If \(X\) is Lindelof, we have nothing to do, hence we assume that \(X\) is not Lindelof in what follows. (2)\(\Rightarrow\)(1), (3b)\(\Rightarrow\)(3a), (4b)\(\Rightarrow\)(3b), (4b)\(\Rightarrow\)(4a) are all immediate. (1)\(\Rightarrow\)(3a). Let \(f:X\to\mathbb{R}\) be given. If \(A=f^{-1}(\{0\})\) is non-Lindelof, then since \(\mathsf{EC}(X,\mathbb{R})\) holds, \(f\) must be eventually constant on \(0\). Hence, since \(f^{-1}(\mathbb{R}-\{0\})\) is a countable union of \(0\)-sets, it is contained in a Lindelof subset \(Z\). Any closed \(B\) disjoint from \(A\) is contained in \(Z\) and hence Lindelof. (3a)\(\Rightarrow\)(2). Let \(f:X\to Y\) be given, with \(Y\) a metric space. Then \(f(X)\) has countable spread since any uncountable closed discrete subspace \(D\subset f(X)\) can be partitioned into two disjoint such subspaces whose preimages yield disjoint non-Lindelof \(0\)-sets of \(X\). By Lemma 1.3, \(f(X)\) is hereditarily Lindelof. Let \(B(y,\epsilon)\) denote the open ball of radius \(\epsilon\) around \(y\) in \(f(X)\). For each \(\epsilon>0\), by Lindelofness there is at least one \(y\in f(X)\) such that \(f^{-1}(B(y,\epsilon))\) is non-Lindelof. Moreover, if \(f^{-1}(\overline{B(y,\epsilon)})\) is non-Lindelof, then \(f^{-1}(f(X)-\overline{B(y,\epsilon)})\) is Lindelof, because it is a countable union of \(0\)-sets disjoint from \(f^{-1}(\overline{B(y,\epsilon)})\), each of them Lindelof by (3a). For each \(n\in\omega\), choose \(y_{n}\in f(X)\) such that \(f^{-1}(\overline{B(y_{n},1/n)})\) is non-Lindelof and \(y_{n+1}\in B(y_{n},1/n)\). Then \(f(X)\cap\bigcap_{n\in\omega}\overline{B(y_{n},1/n)}\) is non-empty (otherwise \(f(X)=\cup_{n\in\omega}(f(X)-\overline{B(y_{n},1/n)})\) would have Lindelof preimage) and thus contains exactly one point \(y\). By construction, the complement of \(\{y\}\) has a Lindelof preimage, hence \(f\) is eventually constant on \(y\). (4a)\(\Rightarrow\)(4b). Let \(C_{1},C_{2}\subset X\) be closed and non-Lindelof. By way of contradiction assume that \(C_{1}\cap C_{2}\) is (at most) Lindelof. By Lemma 3.1 (a), there are open \(U,V\) both containing \(C_{1}\cap C_{2}\) such that \(C_{1}-U\) and \(C_{2}-V\) are disjoint non-Lindelof closed subsets of \(X\), contradicting \(\mathsf{IC}\). (3a)\(\Rightarrow\)(3b) when \(X\) is Tychonoff. Same proof as (4a)\(\Rightarrow\)(4b), using Lemma 3.1 (b). (3a)\(\Rightarrow\)(4a) if \(X\) is normal. Given two disjoint closed sets \(E,F\) in \(X\) we obtain two disjoint \(0\)-sets \(A\supset E,B\supset F\) with a Urysohn function. If both \(E\) and \(F\) are non-Lindelof, then so are \(A\) and \(B\), contradicting (3a). Half of the next result is a direct consequence of Theorem 3.3.
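As a concrete illustration of Theorem 3.3, recall the standard fact that \(\omega_{1}\) satisfies \(\mathsf{IC}\): two disjoint closed subsets of \(\omega_{1}\) cannot both be unbounded (interleave increasing sequences taken alternately from each; the common supremum would lie in both), and a bounded closed subset of \(\omega_{1}\) is compact, hence Lindelof. Since \(\omega_{1}\) is normal, Theorem 3.3 then yields \(\mathsf{EC}(\omega_{1},Y)\) for every metric space \(Y\), recovering in particular the fact that \(\mathsf{EC}(\omega_{1},\mathbb{R})\) holds (Theorem 1.1).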
**Theorem 3.4**.: _If \(\mathsf{EC}(X,\mathbb{R})\) holds and \(X\) is either normal or \(\aleph_{1}\)-scwH and Tychonoff, then \(X\) is \(\omega_{1}\)-compact._ Recall that a space is \(\kappa\)_-(strongly) collectionwise Hausdorff_, abbreviated \(\kappa\)_-(s)cwH_, iff for any closed discrete subspace \(D\) of cardinality \(\leq\kappa\) there is a disjoint (resp. discrete) collection of open sets \(\mathcal{U}=\{U_{d}\,:\,d\in D\}\) such that \(d\in U_{d}\) for each \(d\in D\). Such a \(\mathcal{U}\) is called a _disjoint (resp. discrete) expansion_ of \(D\). A normal \(\kappa\)-cwH space is \(\kappa\)-scwH, as well known. Proof.: Suppose that \(X\) is not \(\omega_{1}\)-compact and let \(D\) be a closed discrete subset of cardinality \(\aleph_{1}\). It can be partitioned into two such subsets \(D_{1},D_{2}\), which are disjoint, closed and non-Lindelof, contradicting \(\mathsf{IC}\), which holds by Theorem 3.3 if \(X\) is normal. This proves the result in the normal case. If \(X\) is Tychonoff and \(\aleph_{1}\)-scwH, then \(D\) is contained in a \(0\)-set \(E\) which is the union of a discrete family \(\{E_{d}\,:\,d\in D\}\), where \(d\in E_{d}\) for each \(d\in D\). Indeed, take a discrete expansion \(\mathcal{U}=\{U_{d}\,:\,d\in D\}\) of \(D\) and \(g_{d}:X\to[0,1]\) which is \(1\) on \(d\) and \(0\) outside of \(U_{d}\). By discreteness, \(g_{i}=\sum_{d\in D_{i}}g_{d}\) (\(i=1,2\)) is a continuous function, and \(E_{i}=g_{i}^{-1}(\{1\})\) (\(i=1,2\)) contradict \(\mathsf{IO}\). Notice that there are non-\(\omega_{1}\)-compact spaces \(X\) such that \(\mathsf{EC}(X,\mathbb{R})\) holds; Example 2.7 is one; Example 3.5 below is another. **Example 3.5** (Nyikos, in effect).: _There is a non-\(\omega_{1}\)-compact and non-normal Type I surface \(M\) such that \(\mathsf{EC_{cl}}(M,\mathbb{R})\) and \(\mathsf{S}(M,\mathbb{R})\) hold._ Details.: This example is due to Nyikos and is described in [24, section 6 & 7]. Every property we will claim in this proof is proved in Nyikos' paper, to which we refer for details. Let us give a quick description. We first consider a tangent bundle \(T\mathbb{L}_{+}\) of \(\mathbb{L}_{+}\) given by a smoothing. The removal of the \(0\)-section \(L_{0}\) of \(T\mathbb{L}_{+}\), which is a copy of \(\mathbb{L}_{+}\), disconnects \(T\mathbb{L}_{+}\) into two homeomorphic connected submanifolds \(T^{+}\) and \(T^{-}\). Since \(T\mathbb{L}_{+}\) is not trivial, \(T^{+}\) does not contain a copy of \(\mathbb{L}_{+}\); actually there is no unbounded map \(\mathbb{L}_{+}\to T^{+}\). Write \(\pi:T\mathbb{L}_{+}\to\mathbb{L}_{+}\) for the bundle projection. Both \(T\mathbb{L}_{+}\) and \(T^{+}\) are Type I manifolds, a canonical cover being given by the fibers \(\{U_{\alpha}=\pi^{-1}(0,\alpha)\,:\,\alpha\in\omega_{1}\}\) (and their intersections with \(T^{+}\) for this latter space). The choice of the smoothing is important since \(T\mathbb{L}_{+}\) and \(T^{+}\) can exhibit quite different topological properties depending on it. Here, as we need \(\mathsf{EC_{cl}}(T^{+},\mathbb{R})\) (and thus \(\mathsf{EC_{cl}}(T^{-},\mathbb{R})\)) to hold, we take what Nyikos calls a _smoothing of class 7_, see [24, Section 6 & 7] for the construction and the proof. Notice that \(T^{+}\) is not normal. Set \(M\) to be \(T\mathbb{L}_{+}-\omega_{1}\), where \(\omega_{1}\) is seen as a subset of \(L_{0}\). Then \(M\) is neither \(\omega_{1}\)-compact nor normal but \(\mathsf{EC_{cl}}(M,\mathbb{R})\) holds. Indeed, any \(f:M\to\mathbb{R}\) is eventually constant on both \(T^{+}\) and \(T^{-}\); by density and connectedness it is eventually constant on all of \(M\).
By construction, \(U_{\alpha}\) is homeomorphic to \((0,\alpha)\times\mathbb{R}\) for each \(\alpha\). Given \(\alpha\), one may easily define \(r_{\alpha}:T\mathbb{L}_{+}\to U_{\alpha+1}\) which is the identity on \(\overline{U_{\alpha}}\) and sends all of \(T\mathbb{L}_{+}-U_{\alpha+1}\) to a point in \(U_{\alpha+1}-\overline{U_{\alpha}}\), as seen in Figure 1. Then \(r_{\alpha}\) is well defined on \(M\) for each \(\alpha\). Together with \(\mathsf{EC_{cl}}(M,\mathbb{R})\), it implies that \(\mathsf{S}(M,\mathbb{R})\) holds as well. Figure 1: The retraction \(r_{\alpha}\) of Example 3.5. It is the identity on the blue region. We now investigate for which spaces \(X\), \(Y\) the property \(\mathsf{EC}(X,\mathbb{R})\) implies \(\mathsf{EC}(X,Y)\). It might be the right time for a trivial remark: if \(Y\supset X\) and \(X\) is not Lindelof, then the inclusion \(i:X\to Y\) shows that \(\mathsf{EC}(X,Y)\) does not hold. Since any Tychonoff space is a subspace of a compact space (namely: its Cech-Stone compactification), it would be silly to try to show that \(\mathsf{EC}(X,Y)\) holds for some class of non-Lindelof Tychonoff spaces \(X\) by relying only on covering properties of \(Y\); we also need some local properties. We shall see that having \(G_{\delta}\) points plays a central role. A space is _C-closed_ iff any countably compact subspace is closed. The class of C-closed spaces contains in particular the hereditarily meta-Lindelof spaces, the sequential spaces and the regular spaces with \(G_{\delta}\) points, see e.g. [14]. The following lemma is immediate, although it implies for instance [11, Lemma A.33]. **Lemma 3.6**.: _Let \(X\) be countably compact and \(Y\) be C-closed. Then any \(f:X\to Y\) is a closed map._ Theorem 3.8 below is a slight generalization of [11, Lemmas B.29-30], with a simpler proof. We first state a useful technical lemma from [5, Lemma 2.2]. Given a chain cover \(\mathcal{U}=\{U_{\alpha}\,:\,\alpha\in\kappa\}\) of \(X\), a subset of \(X\) is \(\mathcal{U}\)-unbounded iff it is not contained in any \(U_{\alpha}\). It is well known that a space is compact iff any chain cover is trivial, that is, has a maximal element which is the whole space. (To our knowledge, this is due to Alexandroff and Urysohn [1] in 1929.) **Lemma 3.7** ([5, Lemma 2.2]).: _Let \(f:X\to Y\) be such that \(X\) is countably compact and noncompact, \(f(X)\subset Y\) is compact, and \(Y\) is C-closed. Let \(\mathcal{U}=\{U_{\alpha}\,:\,\alpha\in\kappa\}\), \(\kappa\geq\omega_{1}\), be a chain cover of \(X\). Then, there is \(c\in Y\) such that \(f^{-1}(\{c\})\) is \(\mathcal{U}\)-unbounded (and hence non-Lindelof)._ We include a proof for completeness. Proof.: For each \(\alpha\in\kappa\) choose \(x_{\alpha}\in X-U_{\alpha}\) and set \(E_{\alpha}=\{x_{\beta}\,:\,\beta\geq\alpha\}\). Then \(\overline{E_{\alpha}}\cap U_{\alpha}=\varnothing\) and each \(E_{\alpha}\) is \(\mathcal{U}\)-unbounded. Since \(\overline{E_{\alpha}}\) is countably compact, \(f(X)\) is compact and \(Y\) is C-closed, it follows that \(f(\overline{E_{\alpha}})\subset f(X)\) is compact. Any finite intersection of the \(E_{\alpha}\) is nonempty (they are decreasing), hence by compactness \(\cap_{\alpha\in\kappa}f(\overline{E_{\alpha}})\neq\varnothing\). Take \(c\) in this intersection; then \(f^{-1}(\{c\})\cap\overline{E_{\alpha}}\neq\varnothing\) for each \(\alpha\), which shows that \(f^{-1}(\{c\})\) is \(\mathcal{U}\)-unbounded. We state (and prove) the next theorem in an almost absurd level of generality.
The _linear Lindelof number_\(\ell L(X)\) of a space \(X\) is the smallest \(\kappa\) such that any chain cover of \(X\) has a subcover of cardinality \(\leq\kappa\). A space is _\([\aleph_{0},\kappa]\)-compact_ iff any cover by at most \(\kappa\) open sets has a countable subcover. **Theorem 3.8**.: _Let \(X\) be a countably compact non-compact space with linear Lindelof number \(\kappa\geq\omega_{1}\). Let \(Y\) be a \([\aleph_{0},\kappa]\)-compact C-closed space. Then the following holds. (a) For each \(f:X\to Y\), \(f(X)\) is compact and there is \(c\in Y\) with a non-Lindelof preimage in \(X\). (b) If \(X\) satisfies \(\mathsf{IC}\) and \(Y\) has \(G_{\delta}\) points, then \(\mathsf{EC}(X,Y)\) holds. (c) If \(X\) satisfies \(\mathsf{IO}\) and \(Y\) is Tychonoff with \(G_{\delta}\) points, then \(\mathsf{EC}(X,Y)\) holds._ This theorem is very similar to Lemma 6.11 in [5]. We will however repeat the proof for completeness. Recall that as said earlier a regular space with \(G_{\delta}\) points is C-closed, hence in (c) there is some redundancy in the assumptions. Actually, the image of \(X\) in \(Y\) is first countable in (b) and (c) since a countably compact space with \(G_{\delta}\) points is first countable. (This actually holds even for feebly compact spaces, see [27, Prop. 2.2] for more on the subject.) Typical examples of \(X\) satisfying the hypothesis of (b) are \(\omega_{1}\) and \(\mathbb{L}_{\geq 0}\). Notice that if \(X\) is Type I, then \(\kappa=\omega_{1}\), hence \(Y\) may be assumed to be \([\aleph_{0},\aleph_{1}]\)-compact, which is equivalent to the property that for any uncountable subset \(B\subset Y\) there is a point of \(Y\) any of whose neighborhoods contains uncountably many members of \(B\) - such a point is called a _condensation point of \(B\)_. An immediate corollary of Theorems 3.3 & 3.8 is the following. **Corollary 3.9**.: _Let \(X\) be countably compact and \(Y\) be a Tychonoff Lindelof space with \(G_{\delta}\) points. Then \(\mathsf{EC}(X,\mathbb{R})\) is equivalent to \(\mathsf{EC}(X,Y)\)._ Proof.: \(Y\) is \([\aleph_{0},\kappa]\)-compact for each \(\kappa\) by Lindelofness, and if \(\mathsf{EC}(X,\mathbb{R})\) holds then by Theorem 3.3\(X\) satisfies \(\mathsf{IO}\). Then \(\mathsf{EC}(X,Y)\) follows by Theorem 3.8 (c). Proof of Theorem 3.8.: (a) Clearly, \(f(X)\) is countably compact and closed in \(Y\) and \(\ell L(f(X))\leq\ell L(X)=\kappa\). Given a chain cover \(\mathcal{U}\) of \(f(X)\), there is a subcover of cardinality \(\leq\kappa\). By \([\aleph_{0},\kappa]\)-compactness of \(Y\), there is a countable subcover and hence a finite one, so \(f(X)\) is compact. Lemma 3.7 yields the result. (b) Given \(f:X\to Y\), by (a) there is \(c\in Y\) with a non-Lindelof preimage. Let \(V_{n}\subset Y\) be open sets such that \(\{c\}=\cap_{n\in\omega}V_{n}\). By \(\mathsf{IC}\), \(f^{-1}(Y-V_{n})\) must be Lindelof, and hence \(f^{-1}(Y-\{c\})=\cup_{n\in\omega}f^{-1}(Y-V_{n})\) is also Lindelof. (c) Define \(V_{n}\) as in (b) and take \(g:Y\to\mathbb{R}\) which is \(0\) on \(c\) and \(1\) on \(Y-V_{n}\). Since \((g\circ f)^{-1}(\{0\})\supset f^{-1}(\{c\})\) is non-Lindelof, \((g\circ f)^{-1}(\{1\})\supset f^{-1}(Y-V_{n})\) must be Lindelof, and we conclude as in (b). A quick corollary of Theorem 3.8 is the following. **Corollary 3.10**.: _Let \(X\) be a countably compact space satisfying \(\mathsf{IC}\). 
Then \(\mathsf{EC}(X,Y)\) holds whenever \(Y\) is a C-closed isocompact space with \(G_{\delta}\) points._ A space is _isocompact_ iff every closed countably compact subset is compact. Hence a space is C-closed and isocompact iff each countably compact subset is compact. Such a space is sometimes called _hereditarily isocompact_. For more on isocompact spaces, see for instance [2, 7]. Note for instance that a meta-Lindelof space is isocompact. Proof of Corollary 3.10.: Let \(f:X\to Y\). Since \(Y\) is C-closed, \(Z=f(X)\) is a closed countably compact subspace of \(Y\). Thus \(Z\) is compact, and in particular \([\aleph_{0},\kappa]\)-compact. We may thus apply (b) of Theorem 3.8. A particular case of Corollary 3.10 is the following: **Corollary 3.11** (A. Mardani, Prop. 4.3.12 and 4.3.23 in [17]).: _Let \(Y\) be either a realcompact space with \(G_{\delta}\) points or a discrete space. Then \(\mathsf{EC}(\omega_{1},Y)\) holds._ Proof.: If \(Y\) is discrete, each countably compact subspace is finite and hence closed. Recall that a realcompact space is Tychonoff and hence C-closed if it has \(G_{\delta}\) points. A countably compact closed subset of a realcompact space is compact, see e.g. [10, 3.11.1 & 3.11.4]. It follows that a realcompact space is isocompact. Let us now show that if \(Y\) is confined to regular spaces, then there is a way to obtain (b) (and (a)) of Theorem 3.8 without the assumption that \(X\) is countably compact whenever \(X\) is 'narrow' enough. The proof of the next theorem is essentially done in the preprint [5, Lemmas 6.10-6.11], but again, we repeat it for completeness. **Theorem 3.12**.: _Let \(X\) be a space such that any countable family of closed non-Lindelof subsets of \(X\) has a non-Lindelof intersection. Let \(Y\) be a regular \([\aleph_{0},\kappa]\)-compact space with \(G_{\delta}\) points, where \(\kappa=\ell L(X)\). Then \(\mathsf{EC}(X,Y)\) holds._ A typical example for \(X\) is a stationary subset of \(\omega_{1}\) with subspace topology. Proof.: If \(X\) is Lindelof, there is nothing to do, hence we assume \(X\) to be non-Lindelof. By assumption \(X\) satisfies \(\mathsf{IC}\). The proof of (b) in Theorem 3.8 above does not use countable compactness and only relies on \(\mathsf{IC}\), the fact that points are \(G_{\delta}\) and that there is \(c\in Y\) with non-Lindelof preimage. We now show that the latter holds in our case as well. Let thus \(\mathcal{U}=\{U_{\alpha}\,:\,\alpha\in\lambda\}\) (\(\omega_{1}\leq\lambda\leq\kappa\)) be a chain cover of \(X\) such that \(X\not\subset U_{\alpha}\) for each \(\alpha\), with \(\lambda\) a regular cardinal. Take \(x_{\alpha}\) in \(X-U_{\alpha}\) and set \(A_{\alpha}=\{x_{\beta}\,:\,\beta>\alpha\}\). Then \(\overline{A_{\alpha}}\cap U_{\alpha}=\varnothing\), each \(\overline{A_{\alpha}}\) is \(\mathcal{U}\)-unbounded, and a countable intersection of \(\overline{A_{\alpha}}\)'s is nonempty. If \(|f(A_{0})|<\lambda\), there is some \(c\) such that \(f^{-1}(\{c\})\) is \(\mathcal{U}\)-unbounded. Else, let \(c\) be a point of \(f(A_{0})\) such that any neighborhood of \(c\) contains \(\lambda\)-many points of \(f(A_{0})\) (which exists by \([\aleph_{0},\kappa]\)-compactness). Since \(Y\) is regular and points are \(G_{\delta}\), there are open \(U_{n}\subset Y\), \(n\in\omega\), such that \(\{c\}=\cap_{n\in\omega}\overline{U_{n}}\). Then \(f^{-1}(\overline{U_{n}})\) is \(\mathcal{U}\)-unbounded (and thus non-Lindelof) in \(X\) for each \(n\). 
Hence, their intersection is non-Lindelof as well and equal to \(f^{-1}(\{c\})\). It seems difficult to significantly weaken the hypothesis about \(G_{\delta}\) points in Theorem 3.8, as the simple next example shows. **Example 3.13**.: _The quotient space \(Y=\omega_{1}/\Lambda\) (where \(\Lambda\) is the subspace of limit ordinals) is Frechet-Urysohn (hence \(C\)-closed), compact (hence isocompact), but \(\mathsf{EC}(\omega_{1},Y)\) does not hold, as shown by the quotient map._ This example is also treated in [17, Ex. 4.2.42]. The only non-isolated point of \(Y\) is not a \(G_{\delta}\) due to the pressing down lemma (see any book on set theory, e.g. [16, Lemma 6.15]). A canonical case in Corollary 3.10 is \(X=\omega_{1}\). We now go the other way and ask: Does \(\mathsf{EC}(\omega_{1},Y)\), along with some'mild' properties of \(Y\), imply isocompactness of \(Y\)? By a mild property we mean something that is shared by many spaces but does not render the question trivial4. An interesting case is that of locally compact countably tight spaces, for which the answer depends on the axioms of set theory. Firstly, Theorem 3.15 just below shows that a 'yes' is consistent. It is yet another example of the "ubiquity" of \(\omega_{1}\) in countably compact non-compact spaces under the proper forcing axiom **PFA**, and is actually only a restatement of other authors old results from our point of view. Secondly, Example 3.16 below (also due to another author) shows that 'no' is also consistent. We first show a small lemma. Footnote 4: We are aware that this phrase is very similar to “By something yellow we mean something containing yellowness”, which does not bring much enlightment. **Lemma 3.14**.: _If \(X=\cup_{\alpha\in\omega_{1}}X_{\alpha}\) is of Type I and countably compact, then \(X_{\alpha}\neq\overline{X_{\alpha}}\) for limit \(\alpha\), so \(\cup_{\alpha\in\Lambda}(\overline{X_{\alpha}}-X_{\alpha})\) is a perfect preimage of \(\omega_{1}\)._ Recall that a map is _perfect_ iff it is closed and points have compact preimages and that \(\Lambda\subset\omega_{1}\) is the subset of limit ordinals. Proof.: Take a strictly increasing sequence \(\alpha_{n}\) whose limit is \(\alpha\) and for each \(n\) some \(x_{n}\in X_{\alpha_{n+1}}-X_{\alpha_{n}}\). This sequence cannot have an accumulation point in \(X_{\alpha}\). This shows that \(X_{\alpha}\) is not countably compact when \(\alpha\) is limit. If \(X\) itself is countably compact, then \(X_{\alpha}\) is not closed for limit \(\alpha\), hence \(\overline{X_{\alpha}}-X_{\alpha}\neq\varnothing\). The map \(\cup_{\alpha\in\Lambda}(\overline{X_{\alpha}}-X_{\alpha})\to\omega_{1}\) sending each point in \(\overline{X_{\alpha}}-X_{\alpha}\) to \(\alpha\) is perfect and its image is homeomorphic to \(\omega_{1}\). **Theorem 3.15** (**Pfa**).: _Let \(Y\) be a countably tight space such that \(\mathsf{EC}(\omega_{1},Y)\) holds. If \(Y\) is moreover either of Type I or locally compact, then \(Y\) is isocompact._ Proof.: We assume first that \(Y=\cup_{\alpha\in\omega_{1}}Y_{\alpha}\) is of Type I. Suppose that \(Y\) is not isocompact, then \(Y\) has a countably compact closed subset \(Z\) which is not compact. Since each \(\overline{Y_{\alpha}}\) is Lindelof, \(Z\not\subset\overline{Y_{\alpha}}\) so \(Z\) is actually a Type I countably compact space with \(Z_{\alpha}=Z\cap Y_{\alpha}\). By Lemma 3.14\(\cup_{\alpha\in\Lambda}(\overline{Z_{\alpha}}-Z_{\alpha})\) is a perfect preimage of \(\omega_{1}\) which is moreover countably tight. 
(The inclusion \(\overline{Z_{\alpha}}\subset Z\cap\overline{Y_{\alpha}}\) could be strict, but \(\overline{Z_{\alpha}}\) is compact anyway.) By [9] it contains a copy of \(\omega_{1}\) under **PFA**, and there is a non-eventually constant map \(\omega_{1}\to Y\). Assume now that \(Y\) is locally compact. A closed subset of \(Y\) is also locally compact. By [6, Thm 2.6], under **PFA** a locally compact countably compact non-compact space contains a perfect preimage of \(\omega_{1}\), and we conclude as above. The full force of **PFA** is probably not needed in the proof, but it seems that more than **MA+\(\neg\)CH** is, as the following examples (both due to Nyikos in [19]) show. The axiom \(\clubsuit_{C}\) is a weakening of \(\Diamond\) compatible with **MA+\(\neg\)CH**. See Nyikos article for details. (It is worth noting that [19] is still a preliminary draft at the time of writing, but there is a similar construction of a surface under \(\Diamond\) in [20, Ex. 6.17], and we did not find anything dubious in Nyikos' construction.) **Example 3.16** (\(\clubsuit_{C}\)).: _There is a a longpipe (thus \(\omega\)-bounded \(2\)-manifold) \(Y\) which is not isocompact but \(\mathsf{EC}(\omega_{1},Y)\) holds. Also, there is a \(2-1\) closed preimage of \(\omega_{1}\)\(P\) with the same properties._ Details.: The surface is described in [19, Section 5] and the so called'sprat' \(P\) is given by Theorem 2.1 in the same paper and the remarks after. Both are regular and first countable and hence C-closed, Type I, countably compact, non-compact, and neither contain a copy of \(\omega_{1}\). Actually, both satisfy a property stronger than (c) of Theorem 3.3: any club subset of \(Y\) (resp. of \(P\)) contains \(\overline{Y_{\alpha}}-Y_{\alpha}\) (resp. \(\overline{P_{\alpha}}-P_{\alpha}\)) for a club set of \(\alpha\). Moreover any bounded subset of \(Y\) or \(P\) embeds in \(\mathbb{R}^{2}\). (This is a detail, but in Nyikos construction \(Y_{\alpha}\) is homeomorphic to \(\mathbb{R}^{2}\), however choosing a small closed disk in \(Y_{0}\) and removing its interior, we obtain \(Y_{\alpha}\) homeomorphic to the cylinder, as in our definition of longpipe.) The next lemma shows that a map \(f\) from \(\omega_{1}\) to \(Y\) or \(P\) must have a bounded image, and since any bounded subset of \(Y\) or \(P\) embeds in \(\mathbb{R}^{2}\), \(f\) must be eventually constant. **Lemma 3.17**.: _If \(Y\) is a Type I C-closed space and \(f:\omega_{1}\to Y\) is unbounded, then there is a copy of \(\omega_{1}\) in \(Y\)._ Proof.: The image of \(f\) is club by Lemma 3.6, hence by a routine argument similar to that of [11, Lemma 1.19], the set \(C=\{\alpha\in\omega_{1}\,:\,f(\alpha)\in\overline{Y_{\alpha}}-Y_{\alpha}\}\) is club as well. Thus \(C\) embeds in \(Y\) and is a copy of \(\omega_{1}\). We note that there are longpipes \(Y\) (which are of course non isocompact) satisfying \(\mathsf{EC}(\mathbb{L}_{\geq 0},Y)\) in **ZFC**, see Example 7.7 below. ## 4 \(\mathsf{S}(T,\mathbb{R})\) and Suslin trees A tree \(T\) is a partially ordered set such that each point has a well ordered set of predecessors. We usually denote the order by \(<\), and \(>,\leq,\geq\) are defined as usual. Here all trees are endowed with the order topology (also called interval topology): a basis is given by the intervals \(\{z\in T\,:\,x<z\leq y\}\) for each \(x,y\in T\). We assume that our trees are Hausdorff, that is, if \(x,y\in T\) are at a limit level and have the same predecessors, then \(x=y\). 
It follows that any tree is a \(0\)-dimensional space. A _chain_ is a totally ordered subset and an _antichain_ a subset with pairwise incomparable elements. The \(\alpha\)-th level of \(T\) consists of the members whose set of predecessors has order type \(\alpha\). Points in the \(\alpha\)-th level are often said to be at height \(\alpha\). The height of \(T\) is the smallest ordinal \(\beta\) such that the \(\beta\)-th level of \(T\) is empty. An \(\omega_{1}\)_-tree_ has countable levels and height \(\omega_{1}\). A tree is _Suslin_ if it has height \(\omega_{1}\) and its chains and antichains are at most countable. Recall that Suslin trees do not exist in **ZFC** alone, but do exist under **V=L** or \(\Diamond\), for instance. A subset \(D\subset T\) is _order dense_ iff given any \(x\in T\) there is \(y\in D\) with \(y>x\). For a tree \(T\) and \(t\in T\), write \(T_{\geq t}=\{s\in T\,:\,s\geq t\}\), \(t\mid\alpha\) for the unique predecessor of \(t\) at level \(\alpha\) (if \(t\) is below the \(\alpha\)-th level, \(t\upharpoonright\alpha=t\)) and \(T_{<\alpha}\) for the subset of elements at level \(<\alpha\). If \(T\) is an \(\omega_{1}\)-tree then it is a Type I space and the \(T_{<\alpha}\) form a canonical cover, moreover each \(T_{<\alpha}\) embeds in \(\mathbb{R}\). (For the latter claim, notice that \(T_{<\alpha}\) is a second countable \(0\)-dimensional space, hence embeds in the Cantor set.) Given an ordinal \(\alpha\), define \(r_{\alpha}:T\to T_{<\alpha+1}\) as \(r_{\alpha}(t)=t\upharpoonright\alpha\). Notice that if \(\beta\geq\alpha\) and \(f\circ r_{\alpha}=f\) for some \(f:T\to Y\), then \(f\circ r_{\beta}=f\) as well. If \(A\subset T\), denote by \(A^{\Downarrow}\) its downward closure \(\{x\in T\,:\,\exists y\in A\;x\leq y\}\). If \(x\in A\subset T\) we let \(A_{\geq x}\) be \(A\cap T_{\geq x}\). Since we are going to look at \(\omega_{1}\)-compact subspaces of trees, let us recall the following (more or less classical) facts. **Lemma 4.1**.: _Let \(T\) be a tree of height \(\omega_{1}\) and \(S\subset T\) be uncountable and \(\omega_{1}\)-compact in the subspace topology. Then the following hold. (a) A closed discrete subset of \(T\) is a countable union of antichains, and an antichain of \(T\) is closed discrete. (b) If \(A\subset T\), \(A\) has an uncountable antichain if and only if \(A^{\downarrow}\) has one. (c) \(S^{\downarrow}\) is the union of a countable set, at most countably many copies of \(\omega_{1}\) and a Suslin tree. (d) There is \(\alpha\in\omega_{1}\) such that \(|(S^{\downarrow})_{\geq x}|=|S_{\geq x}|\geq\aleph_{1}\) when \(x\) is above level \(\alpha\) in \(S^{\downarrow}\). (e) \(S\) intersects a stationary subset of levels of \(S^{\downarrow}\). Moreover, any closed subset of \(S^{\downarrow}\) intersects a stationary subset of levels. (f) Let \(E,F\subset S\) be closed (in \(S\)). If \(|E\cap F|\leq\aleph_{0}\), then \(|E^{\downarrow}\cap F^{\downarrow}|\leq\aleph_{0}\). (g) If \(A\subset S^{\downarrow}\) is uncountable, there is \(x\in S^{\downarrow}\) such that \((S^{\downarrow})_{\geq x}\subset A^{\downarrow}\). (h) If \(D\subset S^{\downarrow}\) is order-dense and upward-closed, then \(D\supset S^{\downarrow}-(S^{\downarrow})_{\beta}\) for some \(\beta\). 
(i) If \(U\subset S\) is open in \(S\) such that \(U\) intersects stationary many levels of \((S^{\downarrow})_{\geq x}\) for each \(x\in S\), then \(U\supset(S^{\downarrow}-(S^{\downarrow})_{\beta})\cap S\) for some \(\beta\)._ Proof.: All are probably part of the folklore, actually they are well known facts when \(T=S\) is a Suslin tree. Proofs of items (b) to (g) can be found in the preprint [5, Lemmas 6.6-6.7], and (a) in [25, Theorem 4.11], for instance. By considering minimal elements, it is easy to see that (h) holds if \(S^{\downarrow}\) is Suslin, hence it follows by (c). Let us prove (i). Suppose that \(U\) is as in the statement of (i). Let \(\alpha\) be given by (d). By removing the \(\alpha\)-th first levels, we may assume that \(S_{\geq x}\) is uncountable for all \(x\in S\). By (h) it is enough to show that \(F=\{x\in S\,:\,S_{\geq x}\subset U\}\) is order-dense in \(S\) (or equivalently in \(S^{\downarrow}\)) above level \(\alpha\). Fix \(z\in S\) and let \(W\) be open in \(S^{\downarrow}\) such that \(W\cap S=U\). Then \(W\) intersects stationary many limit levels of \((S^{\downarrow})_{\geq z}\). For each \(x\in W\) at a limit level with \(x>z\), let \(\sigma(x)<x\) be such that \(\{u\,:\,\sigma(x)\leq u\leq x\}\subset W\). By the pressing-down lemma for \(\omega_{1}\)-trees (see, e.g., [12, p. 154]) there is some \(y\in S^{\downarrow}\) such that \(E=\sigma^{-1}(\{y\})\) meets stationary many levels of \((S^{\downarrow})_{\geq z}\). For each \(x\in E\) the segment between \(y\) and \(x\) is contained in \(W\). We may assume that \(y\geq z\). By (g), there is some \(x\geq y\) such that \((S^{\downarrow})_{\geq x}\subset E^{\downarrow}\). But this means that \((S^{\downarrow})_{\geq x}\subset W\), hence \(S_{\geq x}\subset U\). We thus proved that there is a point \(x\) of \(F\) above \(z\), and hence \(F\) is order-dense. The following was proved by Steprans [31] when \(S=T\). (His statement is weaker, but he gives two proofs which actually show more than stated.) A proof for \(S\neq T\), adapted from Steprans', is given in [5, Lemma 6.7 (i)]. We present another proof, by forcing, also based on Steprans' ideas. **Theorem 4.2**.: _Let \(T\) be a tree of height \(\omega_{1}\), \(S\subset T\) be uncountable and \(Y\) be a space. Then the following hold. (a) If \(S\) is \(\omega_{1}\)-compact for the subspace topology and \(Y\) is submetrizable, then \(\mathsf{S}(S,Y)\) holds. If \(S=T\) then the retraction is given by \(r_{\alpha}\) for some \(\alpha\). (b) If \(|Y|\geq 2\) is a space and \(\mathsf{S}(S,Y)\) holds, then \(S\) is \(\omega_{1}\)-compact._ Recall that a space is _submetrizable_ iff it has a coarser metrizable topology. Proof.: We prove (b) first. (b) Suppose that \(S\) is not \(\omega_{1}\)-compact. By Lemma 4.1 (a)-(b) and (d), \(S\) contains an uncountable antichain \(A\) consisting of isolated points of \(S\) and thus a clopen discrete uncountable subspace. Then, apply Lemma 2.15 (b). (a) We may assume that \(Y\) is metrizable by Corollary 2.13. By Lemma 4.1 (c), the set \(E\) containing the minimal points of \[\{x\in S\,:\,S_{\geq x}\mbox{ is uncountable and totally ordered}\}\] is at most countable. If \(x\in E\), then \(S_{\geq x}\) is (homeomorphic to) an \(\omega_{1}\)-compact and hence stationary subset of \(\omega_{1}\). Recall that hereditary Lindelofness and spread are equal in a metrizable space (see Lemma 1.3); hence, \(f(S)\) is Lindelof and by Theorem 3.12\(f\) is eventually constant on \(S_{\geq x}\) for each \(x\in E\). 
We may thus assume by Lemma 4.1 (c) that \(S^{\downarrow}\) is a Suslin tree. We may take out the set of \(x\) such that \(|f(S_{\geq x})|=1\). If what remains is countable, we are done; otherwise by Lemma 4.1 (d) we may assume that for each \(x\in S\), \(S_{\geq x}\) is uncountable and \(|f(S_{\geq x})|>1\). We can also assume that \(S^{\downarrow}\) is rooted by adding a common root below its minimal elements. We now use a forcing argument, and force with \(S^{\downarrow}\) with the reverse order. Let \(G\) be a generic filter. Recall that if \(\langle X,\tau\rangle\) is a topological space in the ground model, then \(\tau\) serves as a base for the topology \(\tau(G)\) of \(X\) in a forcing extension by \(G\). Thus, any function that is continuous in the ground model remains so in the forcing extension, and \(Y\) remains metrizable. Since \(D_{\alpha}=\{x\in S^{\downarrow}\,:\,\mathrm{height}(x)\geq\alpha\}\) is dense for each \(\alpha\), \(G\) is a new uncountable branch in \(S^{\downarrow}\). Forcing with a Suslin tree preserves stationarity and cardinals; actually, it adds no new countable sets to the universe, see for instance [16, Exercises VII H1-H2 and Theorems VII.5.10 & VII.8.4]. Hence, \(\omega_{1}^{V}=\omega_{1}^{V[G]}\). Since \(S\) intersects stationary many levels of \(S^{\downarrow}\) in \(V\) by Lemma 4.1 (e), it does so in \(V[G]\) as well. Hence, \(S\) intersects stationary many levels of \(G\) and \(S\cap G\) is homeomorphic to a stationary subset of \(\omega_{1}\). By Theorem 3.12, \(f\) is constant on \(G\) above some height; hence, there must be \(\alpha\in\omega_{1}\), \(s\in S\), \(u\in Y\) with \(s\Vdash\check{f}\left(\dot{G}-\check{S}_{<\check{\alpha}}\right)=\{\check{u}\}\). But since \(|f(S_{\geq x})|>1\) for any \(x\in S\), \(\{z\in S\,:\,f(z)\neq u\}\) is order-dense in \(S^{\downarrow}\), and we have that \(1\Vdash\check{f}\left(\dot{G}-\check{S}_{<\check{\alpha}}\right)\neq\{\check{u}\}\), a contradiction. It follows that \(|f(S_{\geq x})|=1\) if \(x\) is above some level \(\alpha\). Hence, given \(y\) at level \(>\alpha\), we may set \(r(y)=\min\{x\leq y\,:\,\mathrm{height}(x)\geq\alpha\}\). If \(S=T\), it is enough to set \(r(y)=y\upharpoonright\alpha\) (i.e. \(r=r_{\alpha}\)) since there are points at every level. One consequence of Theorem 4.2 is the following. **Lemma 4.3**.: _Let \(T\) be an \(\omega_{1}\)-compact tree of height \(\omega_{1}\) (in particular a Suslin tree), \(Y\) be a space and assume that \(\mathsf{S}(T,Y)\) holds. Then, given \(f:T\to Y\), the retraction witnessing \(\mathsf{S}(T,Y)\) can be chosen to be \(r_{\alpha}\) for some \(\alpha\)._ Proof.: Let \(f:T\to Y\) be given and \(r:T\to Z\subset T_{<\alpha+1}\) be a retraction such that \(f\circ r=f\). Since \(T\) is \(\omega_{1}\)-compact, by Theorem 4.2, \(\mathsf{S}(T,\mathbb{R})\) holds. Since \(T_{<\alpha+1}\) can be embedded in \(\mathbb{R}\), there is some \(\beta\) such that \(r\circ r_{\beta}=r\); hence, \[f\circ r_{\beta}=(f\circ r)\circ r_{\beta}=f\circ(r\circ r_{\beta})=f\circ r=f.\] This has the following corollary. **Corollary 4.4**.: _Let \(Y_{n}\), \(n\in\omega\), be spaces and \(T\) be a tree of height \(\omega_{1}\). Then \(\mathsf{S}(T,Y_{n})\) holds for each \(n\in\omega\) iff \(\mathsf{S}(T,\prod_{n\in\omega}Y_{n})\) holds._ Proof.: The reverse implication is immediate. Assume that \(\mathsf{S}(T,Y_{n})\) holds for each \(n\in\omega\). Let \(\pi_{n}\) be the projection on the \(n\)-th factor. If \(|Y_{n}|=1\) for each \(n\), there is nothing to prove.
Since \(\mathsf{S}(T,Y_{n})\) holds and \(|Y_{n}|\geq 2\) for some \(n\), by Theorem 4.2 (b) \(T\) is \(\omega_{1}\)-compact. Let \(f:T\to\prod_{n\in\omega}Y_{n}\). By Lemma 4.3, for each \(n\in\omega\) there is \(\alpha_{n}\) such that \(\pi_{n}\circ f\circ r_{\alpha_{n}}=\pi_{n}\circ f\). Hence \(f\circ r_{\alpha}=f\) for \(\alpha=\sup_{n}\alpha_{n}\). Another simple consequence of Theorem 4.2 and Lemma 4.3 is that \(\mathsf{S}(T,Y)\) implies \(\mathsf{BR}(T,Y)\) when \(T\) is an \(\omega_{1}\)-tree. **Proposition 4.5**.: _Let \(T\) be a tree of height \(\omega_{1}\) and let \(Y\) be any space. Then the following hold. (a) \(\mathsf{S}(T,Y)\Rightarrow\mathsf{BR}(T,Y)\). (b) If \(|Y|\geq\aleph_{1}\), then \(\mathsf{S}(T,Y)\Leftrightarrow\mathsf{BR}(T,Y)\Leftrightarrow\mathsf{L}(T,Y)\)._ Proof.: (a) If \(|Y|=1\) there is nothing to prove, so we assume \(|Y|>1\). If \(\mathsf{S}(T,Y)\) holds, then by Theorem 4.2 (b) \(T\) cannot have an uncountable antichain; hence, \(T\) is \(\omega_{1}\)-compact. By Lemma 4.1 (d), there is \(\gamma\in\omega_{1}\) such that \(T_{\geq x}\) is uncountable if \(x\) is above level \(\gamma\). Given \(f:T\to Y\), by Lemma 4.3 there is some \(\alpha\) such that \(f\circ r_{\alpha}=f\). We can take \(\alpha\geq\gamma\). Then, for each \(\beta\geq\alpha\), \(f(T-T_{<\beta})=f(T-T_{<\alpha})=f(\{x\,:\,\mathrm{height}(x)=\alpha\})\). This shows that \(\mathsf{BR}(T,Y)\) holds. (b) By (a) and Lemma 2.1, it is enough to show that \(\mathsf{L}(T,Y)\Rightarrow\mathsf{S}(T,Y)\). By Lemma 2.15, if \(|Y|\geq\aleph_{1}\) and \(T\) has an uncountable antichain \(A\), then \(\mathsf{L}(T,Y)\) does not hold. Hence, if \(\mathsf{L}(T,Y)\) holds, then \(T\) is \(\omega_{1}\)-compact. It follows that \(\mathsf{S}(T,Y)\) holds by Theorem 4.2. Notice in passing that, in all generality, \(\mathsf{S}\) and \(\mathsf{BR}\) are not equivalent for \(\omega_{1}\)-trees. **Example 4.6**.: _Let \(T\) be a tree containing an uncountable antichain and \(Y\) be a countable space. Then \(\mathsf{BR}(T,Y)\) holds but \(\mathsf{S}(T,Y)\) does not._ (The property \(\mathsf{BR}(T,Y)\) holds by Lemma 2.14.) When trying to generalize Theorem 4.2 beyond the metrizable case, we are held back by the following fact: **Example 4.7**.: _Let \(T\) be a Suslin tree such that each element has infinitely many immediate successors. Then, there is a \(1\)-to-\(1\) continuous \(f:T\to Y\) where \(Y\) is a hereditarily Lindelof first countable monotonically normal space._ Details.: \(Y\) is actually the tree \(T\) itself with the topology given by a lexicographical ordering of \(T\) chosen so that the resulting space is a Suslin line (see just below). The lexicographical order \(\leq_{\ell}\) in \(T\) is obtained by first considering a total order \(\prec\) on \(T\) and then letting \(y<_{\ell}x\) if and only if \(y<x\) or \(y\perp x\) and \(y\upharpoonright\alpha\prec x\upharpoonright\alpha\), where \(\alpha\) is minimal such that \(y\upharpoonright\alpha\neq x\upharpoonright\alpha\). Denote the topologies induced by \(<,<_{\ell}\) on \(T\) by \(\tau_{<},\tau_{<_{\ell}}\). The \(<\)-minimal elements of a \(<_{\ell}\)-interval \(\{y\in T\,:\,x<_{\ell}y<_{\ell}z\}\) cannot be at limit height because the tree is Hausdorff. It follows that any such \(<_{\ell}\)-interval is a union of branches starting at a successor level and is thus \(\tau_{<}\)-open in \(T\). Hence \(id:\langle T,\tau_{<}\rangle\rightarrow\langle T,\tau_{<_{\ell}}\rangle\) is continuous.
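As a simple illustration of the definition: if \(x\) and \(y\) are distinct immediate successors of a common node \(z\), then the minimal \(\alpha\) with \(y\upharpoonright\alpha\neq x\upharpoonright\alpha\) is the level of \(x\) and \(y\) themselves, so \(y<_{\ell}x\) iff \(y\prec x\); moreover, any \(z^{\prime}\leq z\) satisfies \(z^{\prime}<_{\ell}x\) and \(z^{\prime}<_{\ell}y\).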
It is well known that if \(\prec\) orders the immediate successors of any given point as \(\mathbb{Q}\), then \(\langle T,<_{\ell}\rangle\) is a Suslin line; see for instance [15, Lemma 14.21] for a proof. We recall that a Suslin line is a linearly ordered topological space which is ccc but not separable. Every linearly ordered space is monotonically normal and every Suslin line is first countable and hereditarily Lindelof. Recall that a regular hereditarily Lindelof space is also perfect. This shows in particular that Theorems 3.8 and 3.12 with \(\mathsf{S}\) instead of \(\mathsf{EC}\) do not hold for Suslin trees. But we can still say something if the conclusion of Lemma 3.7 holds. **Lemma 4.8**.: _Let \(Y\) be a regular space with \(G_{\delta}\) points such that for each Suslin tree \(T\) and each \(f:T\to Y\), there is \(y\in Y\) with an uncountable preimage. Then \(\mathsf{S}(T,Y)\) holds for any Suslin tree \(T\)._ Proof.: Let \(T\) be a Suslin tree. By Lemma 4.1 (d) we may assume that \(T_{\geq x}\) is uncountable for each \(x\in T\). By Lemma 4.1 (h), it is enough to show that \(E=\{x\in T\,:\,\exists y\in Y\text{ with }f(T_{\geq x})=\{y\}\}\) is order-dense in \(T\). Let thus \(z\in T\). Since \(T_{\geq z}\) is Suslin there is \(y\) such that \(A=f^{-1}(\{y\})\) is uncountable. Hence, by Lemma 4.1 (g) there is \(w\in T_{\geq z}\) such that \(T_{\geq w}\subset A^{\downarrow}\). Since \(A\) is closed, by Lemma 4.1 (e) it intersects a stationary subset of levels of \(T_{\geq u}\) for each \(u\geq w\). Let \(U_{n}\subset Y\) be open such that \(\cap_{n\in\omega}U_{n}=\{y\}\). Then each \(f^{-1}(U_{n})\) is open, order-dense and intersects stationary many levels of \(T_{\geq u}\) for each \(u\geq w\). By Lemma 4.1 (i), \(A=f^{-1}(\{y\})=\cap_{n\in\omega}f^{-1}(U_{n})\) contains all of \(T_{\geq w}\) above some level. This shows that \(E\) is order-dense. We do not know whether this lemma is of any use, that is, if there are spaces satisfying its hypotheses but not those of Theorem 4.2. ## 5 \(\mathsf{BR}(X,Y)\), \(\mathsf{L}(X,Y)\) when \(Y\) is metric and \(X\) an increasing chain of 'nice' subspaces The goal of this section is to prove in particular the following theorem. We recall that manifolds are assumed to be connected. **Theorem 5.1**.: _Let \(M\) be a manifold. Then \(\mathsf{L}(M,\mathbb{R})\) holds._ The proof relies on properties of increasing chains of subsets of \(\mathbb{R}\), and more generally of metric spaces. We start by specifying some notation. Let \(\mathcal{P}\) be a topological property and \(\gamma\) be an ordinal. We say that a space \(X\) is a \(\mathcal{P}\)_-chain of length \(\gamma\)_ iff \(X=\cup_{\alpha<\gamma}H_{\alpha}\) with \(H_{\alpha}\subset H_{\beta}\) if \(\alpha<\beta\) and such that \(H_{\alpha}\) has property \(\mathcal{P}\) for each \(\alpha\). If \(H_{\alpha}\subsetneq H_{\beta}\) for each \(\alpha<\beta\), we say that the chain is strict. If the property \(\mathcal{P}\) cannot be described in one word, we may use brackets. For instance, left separated spaces are strict [closed-and-countable]-chains of length \(\omega_{1}\). Type I spaces are both strict [closed-and-Lindelof]-chains and strict open-chains of length \(\omega_{1}\). A subspace \(E\subset X\) such that \(X-E\) satisfies \(\mathcal{P}\) is said to be co-\(\mathcal{P}\). We will use the following classical fact. **Lemma 5.2**.: _Let \(Y\) be a metric space. (a) \(Y\) does not contain a strict compact-chain or a strict [co-compact]-chain of length \(\geq\omega_{1}\).
(b) If \(Y\) is separable, then \(Y\) does contain neither a strict open-chain nor a strict closed-chain of length \(\geq\omega_{1}\)._ We provide a proof for completeness. Lemma 1.3 is used several times implicitely. Proof of Lemma 5.2.: We show (b) first. (b) \(Y\) is hereditarily Lindelof and hereditarily separable. A strict open-chain of length \(\omega_{1}\) is not Lindelof and a strict closed-chain of length \(\omega_{1}\) contains a non-separable subspace (see the proof of Theorem 3.1 in [28]). (a) If \(\langle H_{\alpha}\,:\,\alpha<\omega_{1}\rangle\), is a compact-chain of subspaces, then \(H=\cup_{\alpha<\omega_{1}}H_{\alpha}\) is countably compact and hence compact. Indeed, any countable subset of \(H\) is contained in some compact \(H_{\alpha}\) and has thus an accumulation point. Hence \(H\) is separable, let \(D\) be a countable dense subset. Then \(D\) is contained in some \(H_{\alpha}\), by closedness \(H=\overline{D}=H_{\alpha}\) and the chain cannot be strict. If \(\langle H_{\alpha}\,:\,\alpha<\omega_{1}\rangle\), is a chain of co-compact subspaces, then \(H_{\alpha}\cap W\) is open in \(W=Y-H_{0}\) which is compact and hence separable. By (b) the open chain \(\langle H_{\alpha}\cap W\,:\,\alpha<\omega_{1}\rangle\) in \(W\) cannot be strict. Hence the chain of \(H_{\alpha}\) is not strict either. **Corollary 5.3**.: \(\mathbb{R}\) _does contain neither a strict connected-chain nor a strict [co-connected]-chain of length \(\omega_{1}\)._ Proof.: Since the connected subsets of \(\mathbb{R}\) are the intervals, an infinite strict connected-chain or [co-connected]-chain contains a strict open-chain of the same length. We say that \(X\) is a _slowpen chain_ if is a strict open-chain \(X=\cup_{\alpha\in\gamma}U_{\alpha}\), such that \(\cup_{\beta<\alpha}U_{\beta}=U_{\alpha}\) when \(\alpha\) is limit and \(U_{\alpha+1}=U_{\alpha}\cup E_{\alpha}\), where \(E_{\alpha}\) is Lindelof for each \(\alpha\). **Lemma 5.4**.: _Let \(X=\cup_{\alpha<\gamma}U_{\alpha}\) be a slowpen chain. If \(U_{\alpha}\) is connected for each \(\alpha\), then \(\mathsf{L}(X,\mathbb{R})\) holds._ Proof.: If \(\gamma<\omega_{1}\), \(X\) is Lindelof and there is nothing to prove. Assume that \(\gamma\geq\omega_{1}\). Let \(f:X\to\mathbb{R}\) be given. Let \(U_{\alpha},E_{\alpha}\), \(\alpha<\gamma\) be as in the definition of a slowpen chain, hence \(X=\cup_{\alpha<\gamma}U_{\alpha}\). We prove by induction on \(\alpha\) that there is a Lindelof subset \(L_{\alpha}\subset X\) such that \(f(L_{\alpha})=f(U_{\alpha})\). This yields the theorem when \(\alpha=\gamma+1\). If \(\alpha\) is countable, \(U_{\alpha}\) is Lindelof hence we may set \(L_{\alpha}=U_{\alpha}\). If \(\alpha=\beta+1\), then the Lindelof subset \(L_{\alpha}=L_{\beta}\cup E_{\beta}\) satisfies \(f(U_{\alpha})=f(L_{\alpha})\). If \(\mathrm{cof}(\alpha)=\omega\), choose an \(\omega\)-sequence \(\beta_{n}\nearrow\alpha\) and set \(L_{\alpha}=\cup_{n\in\omega}L_{\beta_{n}}\). Then \(L_{\alpha}\) is Lindelof and \(f(U_{\alpha})=f(L_{\alpha})\). Assume now that \(\operatorname{cof}(\alpha)>\omega\). Since \(U_{\beta}\) is connected for each \(\beta<\alpha\), \(f(U_{\beta})\subset\mathbb{R}\) is connected. Hence, \(\langle f(U_{\beta}):\beta<\alpha\rangle\) does not have a cofinal strict subchain by Corollary 5.3. It follows that there is some \(\beta<\alpha\) such that \(f(U_{\alpha})=f(U_{\beta})=f(L_{\beta})\). **Lemma 5.5**.: _Let \(M\) be a manifold. 
Then \(M\) is a slowpen chain \(\cup_{\alpha<\gamma}U_{\alpha}\) of length \(\gamma<\mathfrak{c}^{+}\) with each \(U_{\alpha}\) connected. Moreover, each \(U_{\alpha}\) is a submanifold of \(M\) and \(M\) is metrizable iff \(\gamma\) is countable._ Proof.: Our argument is very similar to the proof of Theorem 2.9 in [20] where it is shown that the cardinality of a manifold is the continuum. Recall that (hereditary) Lindelofness and metrizability are equivalent for manifolds (see e.g. [11, Thm 2.1]). Fix a Euclidean connected open subset \(U_{0}\). We proceed by induction and assume that the chain of connected open sets \(\cup_{\beta<\alpha}U_{\beta}\) is a slowpen chain. If \(\alpha\) is limit, we let \(U_{\alpha}=\cup_{\beta<\alpha}U_{\beta}\). Given \(U_{\alpha}\), if \(U_{\alpha}=M\), we are done. If not, by connectedness of \(M\) we have that \(\overline{U_{\alpha}}\neq U_{\alpha}\), so we may choose a point \(x\in\overline{U_{\alpha}}-U_{\alpha}\). We then let \(U_{\alpha+1}\) be the union of \(U_{\alpha}\) and a Euclidean connected set \(E_{\alpha}\) containing \(x\). Connectedness of \(U_{\alpha}\) for each \(\alpha\) is immediate. By construction the chain is slowpen and strict up to \(\alpha\). Since \(U_{\alpha}\) is open, it is a submanifold of \(M\). Since \(|M|=\mathfrak{c}\), there is some \(\gamma<\mathfrak{c}^{+}\) such that \(M=U_{\gamma}\). If \(\gamma\) is countable, \(M\) is Lindelof and hence metrizable. If \(\gamma\) is uncountable, it contains the non-Lindelof subset \(\cup_{\alpha<\omega_{1}}U_{\alpha}\), which shows that \(M\) is not metrizable. Theorem 5.1 follows immediately by Lemmas 5.4-5.5. Notice however that \(\mathbb{R}\) cannot be replaced by \(\mathbb{R}^{2}\), as shown by Example 6.4 below. Also, we cannot replace \(\mathsf{L}\) by \(\mathsf{BR}\). **Example 5.6**.: _There is a surface (without boundary) \(P\) such that \(\mathsf{BR}(P,\mathbb{R})\) does not hold._ Details.: This example is classical: Take \(P\) to be the Prufer surface (separable version with boundary) \(\mathbb{R}\times\mathbb{R}_{>0}\cup\cup_{a\in\mathbb{R}}\mathbb{R}_{a}\), where each \(\mathbb{R}_{a}\) is a (distinct) copy of \(\mathbb{R}\). See [11, Ex. 1.25] for a complete description of the topology. This surface can be seen as taking \(\mathbb{R}\times\mathbb{R}_{\geq 0}\) and 'blowing up' each point \(\langle a,0\rangle\) in the bottom boundary into the open interval \(\mathbb{R}_{a}\). This gives again a surface whose boundary components are the \(\mathbb{R}_{a}\). If \(A\subset\mathbb{R}\), then \(\mathbb{R}\times\mathbb{R}_{>0}\cup\cup_{a\in A}\mathbb{R}_{a}\) is an open submanifold of \(P\), hence a Lindelof subset of \(P\) intersects at most countably many \(\mathbb{R}_{a}\). Any continuous \(f:\mathbb{R}\times\mathbb{R}_{\geq 0}\to\mathbb{R}\) yields a continuous \(f_{P}:P\to\mathbb{R}\) by letting \(f_{P}\) be constant with value \(f(\langle a,0\rangle)\) on \(\mathbb{R}_{a}\) and equal to \(f\) on \(\mathbb{R}\times\mathbb{R}_{>0}\). Let \(f:\mathbb{R}\times\mathbb{R}_{\geq 0}\to\mathbb{R}\) be the projection on the first coordinate. Then \(f_{P}\) shows that \(\mathsf{BR}(P,\mathbb{R})\) does not hold. Indeed, if \(Z\subset P\) is Lindelof, choose \(a\in\mathbb{R}\) such that \(\mathbb{R}_{a}\cap Z=\varnothing\) and set \(W=Z\cup\mathbb{R}_{a}\cup\mathbb{R}\times\mathbb{R}_{>0}\). Then \(W\) is Lindelof and \(f_{P}(P-Z)\ni a\not\in f_{P}(P-W)\).
We can obtain a boundaryless version by gluing copies of \([0,1)\times\mathbb{R}\) to each boundary component \(\mathbb{R}_{a}\) and extending \(f_{P}\) the obvious way. The following proposition can be proved by exactly the same argument that we used in the proof of Lemma 5.4. **Proposition 5.7**.: _Let \(X=\cup_{\alpha\in\omega_{1}}H_{\alpha}\) be a strict Lindelof-chain (for instance, a Type I space). (a) If \(H_{\alpha}\) is connected for uncountably many \(\alpha\) then \(\mathsf{L}(X,\mathbb{R})\) holds. If \(H_{\alpha}\) is moreover closed for uncountably many \(\alpha\), then \(\mathsf{L}_{\mathsf{d}}(X,\mathbb{R})\) holds. (b) If \(X-H_{\alpha}\) is connected for uncountably many \(\alpha\) then \(\mathsf{BR}(X,\mathbb{R})\) holds. If \(H_{\alpha}\) is moreover closed for uncountably many \(\alpha\), then \(\mathsf{BR}_{\mathsf{d}}(X,\mathbb{R})\) holds._ Finally, along the same lines we have: **Theorem 5.8**.: _Let \(X\) be a space, \(Y\) be a metrizable space and \(\kappa\) be an infinite cardinal. (a) If \(X\) is a strict compact-chain of length \(\kappa\), then \(\mathsf{L}_{\mathsf{d}}(X,Y)\) holds. (b) If \(X\) is countably compact and a strict [open-and-contained-in-a-Lindelof-set]-chain of length \(\kappa\), then \(\mathsf{BR}(X,Y)\) holds._ _(c) In particular, if \(X\) is a countably compact Type I space, then both \(\mathsf{BR}_{\mathsf{cl}}(X,Y)\) and \(\mathsf{L}_{\mathsf{cl}}(X,Y)\) hold._ Proof.: Let us write \(X=\cup_{\alpha<\kappa}H_{\alpha}\) and fix \(f:X\to Y\). There is nothing to prove if \(\kappa\) has countable cofinality because \(X\) is then Lindelof. We thus assume that \(\mathrm{cf}(\kappa)\) is uncountable. Lemma 1.3 will be used again implicitely in what follows. (a) Assume that each \(H_{\alpha}\) is compact. Then \(f(H_{\alpha})\) is also compact; hence, Lemma 5.2 (a) implies that \(f(H_{\alpha})=f(X)\) for some \(\alpha\). (b) We may assume that \(Y=f(X)\). By assumption \(Y\) is compact and thus separable (by Lemma 1.3). Since \(H_{\alpha}\) is open, \(X-H_{\alpha}\) is countably compact, \(f(X-H_{\alpha})\) is compact and thus closed. By Lemma 5.2 (b) the chain \(U_{\alpha}=Y-f(X-H_{\alpha})\) cannot contain a strict cofinal chain; hence, it must stagnate above some \(\alpha\). It follows that \(f(X-H_{\alpha})=f(X-H_{\beta})\) for each \(\beta\geq\alpha\). Notice that since the \(H_{\alpha}\) are open, any Lindelof subset of \(X\) must be contained in some \(H_{\gamma}\) (otherwise they form a cover without countable subcover). Hence \(\mathsf{BR}(X,Y)\) holds if \(H_{\alpha}\) is contained in a Lindelof set. (c) If \(X\) is of Type I, \(X=\cup_{\alpha<\omega_{1}}X_{\alpha}=\cup_{\alpha<\omega_{1}}\overline{X_{ \alpha}}\). By countable compactness \(\overline{X_{\alpha}}\) is compact, hence \(\mathsf{L}_{\mathsf{cl}}(X,Y)\) follows by (a). Lemma 2.2 and (b) imply that \(\mathsf{BR}_{\mathsf{cl}}(X,Y)\) holds. We could have proved that \(\mathsf{L}_{\mathsf{cl}}(X,Y)\) holds in (c) with the following lemma (since the image of \(X\) in \(Y\) is hereditarily separable). **Lemma 5.9**.: _Let \(Y\) be hereditarily separable. If \(X\) is an \(\omega\)-bounded space (in particular, a Type I countably compact space), then \(\mathsf{L}_{\mathsf{cl}}(X,Y)\) holds and \(f(X)\) is compact for any \(f:X\to Y\)._ Proof.: Given \(f:X\to Y\), define \(E\subset X\) by taking one preimage of each point in a countable dense subset \(D\) of \(f(X)\). 
By \(\omega\)-boundedness \(\overline{E}\) and thus \(f(\overline{E})\) are compact, so in particular \(f(\overline{E})\) is closed. It follows that \(f(\overline{E})=\overline{f(E)}=\overline{D}=f(X)\), proving that \(f(X)\) is compact and that \(\mathsf{L}_{\mathsf{cl}}(X,Y)\) holds. Notice that we cannot weaken the assumption to '\(X\) is countably compact', as shown by the next example. **Example 5.10** (Folklore).: _There is a countably compact space \(X\) such that neither \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) nor \(\mathsf{BR}_{\mathsf{cl}}(X,\mathbb{R})\) hold._ Details.: The idea is to obtain a countably compact non-Lindelof space with a countable dense subset of isolated points, and to apply Lemma 2.5. Let \(\beta\omega\) be the Cech-Stone compactification of the integers \(\omega\). (The integers are given the discrete topology.) The closure of any infinite set in \(\beta\omega\) has cardinality \(2^{\mathfrak{c}}\) (see e.g. [33, Lemma 0.1]), in particular if \(p\in\beta\omega-\omega\), then \(\beta\omega-\{p\}\) is a countably compact non-compact (since non-closed) subspace of \(\beta\omega\) in which the integers are dense. Under additional axioms, there are examples with more properties. For instance, an Ostaszewski space (built with \(\diamondsuit\), see [26]) is itself a locally compact hereditarily separable countably compact non-compact space; the identity map thus violates \(\mathsf{L}(X,X)\). Example 2.8 (when \(\mathfrak{p}=\omega_{1}\)) is another one with the same properties except hereditary separability. **Question 5.11**.: _Is there a countably compact space \(X\) such that \(\mathsf{L}(X,\mathbb{R})\) or \(\mathsf{BR}(X,\mathbb{R})\) does not hold?_ Notice in passing that in Theorem 5.8 (c) and Lemma 5.9, the stronger property \(\mathsf{L}_{\mathsf{cpt}}(X,Y)\) (defined in Section 9) holds.

## 6 A word on products on the target space

Let us have a quick look at which properties are easily seen to be preserved under products in the target space. The first observation is obvious: **Lemma 6.1**.: _Let \(\mathsf{P}\) be any property in Definition 1.2, \(X\) and \(Y_{j}\) be non empty spaces for \(j\) in some index set \(J\). Then \(\mathsf{P}(X,\prod_{j\in J}Y_{j})\Rightarrow\mathsf{P}(X,Y_{k})\) for each \(k\in J\)._ Proof.: Each \(Y_{k}\) is homeomorphic to a subspace of \(\prod_{j\in J}Y_{j}\). Apply Lemma 2.12. The next lemma is also almost immediate. **Lemma 6.2**.: _Let \(X\) and \(Y_{n}\), \(n\in E\), be spaces. (a) If \(E\) is countable and \(\mathsf{EC}(X,Y_{n})\) holds for each \(n\in E\), then \(\mathsf{EC}(X,\prod_{n\in E}Y_{n})\) holds. (b) If \(E\) is finite and \(\mathsf{EC}_{\mathsf{cl}}(X,Y_{n})\) holds for each \(n\in E\), then \(\mathsf{EC}_{\mathsf{cl}}(X,\prod_{n\in E}Y_{n})\) holds._ Proof.: (a) Given \(f:X\to\prod_{n\in E}Y_{n}\), for each \(n\in E\) there is a Lindelof subspace \(Z_{n}\subset X\) such that the projection \(\pi_{n}\circ f:X\to Y_{n}\) is constant outside of \(Z_{n}\). Hence \(f\) is constant outside of the Lindelof set \(\cup_{n\in E}Z_{n}\). (b) As in (a), noting that a finite union of closed Lindelof subspaces is again closed and Lindelof. For \(\mathsf{S}\), such a simple argument does not work even for \(2\) factors, as it is not clear a priori whether the retractions corresponding to each projection can be combined. It is the case for \(\omega_{1}\)-trees, as seen in Corollary 4.4 above.
**Question 6.3**.: _Are there spaces \(X,Y_{1},Y_{2}\) such that \(\mathsf{S}(X,Y_{i})\) holds for \(i=1,2\) and \(\mathsf{S}(X,Y_{1}\times Y_{2})\) does not?_ On the other hand, \(\mathsf{L}\) and \(\mathsf{BR}\) behave badly under products, even when the domain space is a manifold. **Example 6.4**.: _Let \(M\) be the surface_ \[\mathbb{L}_{\geq_{0}}\times[-2,1]\,-\,\cup_{\alpha\in\omega_{1}}\{\alpha\} \times[-1,1].\] _Then \(\mathsf{L}_{\mathsf{cl}}(M,\mathbb{R})\) and \(\mathsf{BR}_{\mathsf{cl}}(M,\mathbb{R})\) hold but \(\mathsf{L}(M,\mathbb{R}^{2})\) and \(\mathsf{BR}(M,\mathbb{R}^{2})\) do not. Moreover, \(\mathsf{S}(M,\mathbb{R})\) does not hold either._ Figure 2: Example 6.4. _Details._ Since \(M\) is a manifold, \(\mathsf{L}(M,\mathbb{R})\) holds by Theorem 5.1 above and \(\mathsf{L}_{\mathsf{cl}}(M,\mathbb{R})\) follows since \(M\) is Type I. Define \(M_{\alpha}\) to be the subset of points of \(M\) with first coordinate \(<\alpha\). Since \(M-M_{\alpha}\) is connected for each \(\alpha\), \(\mathsf{BR}(M,\mathbb{R})\) (hence, \(\mathsf{BR}_{\mathsf{cl}}(M,\mathbb{R})\)) holds by Proposition 5.7 (b). To show that \(\mathsf{L}(M,\mathbb{R}^{2})\) and \(\mathsf{BR}(M,\mathbb{R}^{2})\) do not hold, let \(\{e_{\alpha}\,:\,\alpha\in\omega_{1}\}\) (all distinct) be a subset of the unit circle centered at the origin in \(\mathbb{R}^{2}\). Let \(I_{\alpha}\) be the line segment joining the origin and \(e_{\alpha}\). Define \(f:M\to\mathbb{R}^{2}\) as follows. If \(x\in\mathbb{L}_{\geq 0}\times[-2,0]\cap M\), \(f(x)\) is the origin. If \(x=\langle u,t\rangle\) with \(\alpha<u<\alpha+1\) and \(t\in(0,1]\), then \(f(x)\) is the point of \(I_{\alpha}\) at distance \(t\) from the origin. It is easy to check that \(f\) is continuous and violates \(\mathsf{L}(M,\mathbb{R}^{2})\) and \(\mathsf{BR}(M,\mathbb{R}^{2})\) since the preimage of \(I_{\alpha}\) minus the origin is \((\alpha,\alpha+1)\times(0,1]\). To finish, observe that \(\cup_{\alpha<\omega_{1}}(\alpha,\alpha+1)\times(0,1)\) is a discrete collection of open subsets of \(M\) (see the first paragraph of Section 8 for a reminder of the definition). We show in Section 8 (Lemma 8.5) that this implies that \(\mathsf{S}(M,\mathbb{R})\) (and the weaker property \(w\mathsf{S}(M,\mathbb{R})\) defined in Section 7) does not hold. ## 7 Type I manifolds: when \(\mathsf{EC}\not\Rightarrow\mathsf{S}\). As written before, \(\mathsf{EC}\) has the looks of a being stronger property than \(\mathsf{S}\), as being eventually constant is a rather strong assumption for a map. But \(\mathsf{S}\) asks for retractions, and some spaces do lack retractions on smaller subspaces. Actually, even a weaker property (almost stagnation, defined just next) might fail while \(\mathsf{EC}\) holds. The next definition could be subtitled "two new ways to procrastinate". **Definition 7.1**.: _Given spaces \(X,Y\), we say that \(X\) almost stagnates (resp. weakly stagnates) in \(Y\), written \(a\mathsf{S}(X,Y)\) (resp. \(w\mathsf{S}(X,Y)\)), iff for each \(f:X\to Y\) there is a Lindelof \(Z\subset X\) and \(r:X\to Z\) such that \(f\circ r=f\) (resp. \(f\circ r\upharpoonright(X-Z)=f\upharpoonright(X-Z)\))._ We could have chosen \(a\mathsf{S}\) as the "official" definition for stagnation instead of the one with retractions, but all our counterexamples in this and the next section are actually counter-examples to \(a\mathsf{S}\) and not only to \(\mathsf{S}\), and Suslin trees in Section 4 do satisfy the stronger property. 
The next lemma is immediate from the definitions (we again abbreviate \(\forall X,Y\,\mathsf{EC}(X,Y)\) simply by \(\mathsf{EC}\), etc.). **Lemma 7.2**.: _\(\mathsf{EC}\Rightarrow w\mathsf{S}\), and \(\mathsf{S}\Rightarrow a\mathsf{S}\Rightarrow w\mathsf{S}\)._ It is natural to ask whether some of these arrows do reverse. We already know that \(\mathsf{S}\not\Rightarrow\mathsf{EC}\); hence, \(w\mathsf{S}\not\Rightarrow\mathsf{EC}\) as well. The main result of this section implies that \(\mathsf{EC}(M,\mathbb{R})\not\Rightarrow a\mathsf{S}(M,\mathbb{R})\) for \(M\) a manifold, and hence \(w\mathsf{S}\not\Rightarrow a\mathsf{S}\) as well. As a small warm up, let us show that some manifolds do lack retractions. This is weaker than what we really need for the main theorem of this section, but the proof contains one idea that we will use again, which is contained in the following lemma. It might be interesting to note that this is the only result from basic algebraic topology that we need. **Lemma 7.3**.: _Let \(\mathbb{S}^{1}=[0,1]/0\sim 1\) be the circle viewed as the interval with identified endpoints. Let \(C=[a,b]\times\mathbb{S}^{1}\) be the cylinder, let \(\pi:C\to\mathbb{S}^{1}\) be the projection on the second coordinate, and let \(d_{0},d_{1}:\mathbb{S}^{1}\to C\) be such that \(\pi\circ d_{0}=\pi\circ d_{1}=id_{\mathbb{S}^{1}}\). Let \(I_{0}=[0,\frac{1}{3}]\), \(I_{1}=[\frac{1}{3},\frac{2}{3}]\) and \(I_{2}=[\frac{2}{3},1]\), seen as subsets of \(\mathbb{S}^{1}\). Let \(r:C\to C\) be given and set \(s_{i}=\pi\circ r\circ d_{i}:\mathbb{S}^{1}\to\mathbb{S}^{1}\), \(i=0,1\). If \(s_{0}(I_{k})\subset I_{k}\) for \(k=0,1,2\), then \(s_{1}\) is not (homotopic to) a constant map._ Proof.: The assumptions imply that \(s_{0}(I_{k})=I_{k}\). Sneak into the first few classes of a course on the fundamental group to conclude. **Proposition 7.4**.: _If \(X=\cup_{\alpha\in\omega_{1}}X_{\alpha}\) is a longpipe such that \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds, then \(X\) does not retract on any \(\overline{X_{\alpha}}\)._ Proof.: First, notice that \(\operatorname{\mathsf{EC}}(X,\mathbb{R}^{2})\) holds by Lemma 6.2. Suppose that \(r:X\to\overline{X_{\alpha}}\) is a retraction. Since \(\overline{X_{\alpha}}\) embeds in \(\mathbb{R}^{2}\), for some \(\gamma\) we have \(r(X-X_{\gamma})=\{x\}\). Since \(r\) is the identity on \(\overline{X_{\alpha}}\), \(\gamma>\alpha\). The boundary of \(\overline{X_{\alpha+1}}\) is homeomorphic to the circle (it might not be true for \(\overline{X_{\alpha}}\) though). Fix any successor \(0<\beta\leq\alpha\). The boundary of \(\overline{X_{\beta}}\) is also homeomorphic to the circle. Take a homeomorphism \(\overline{X_{\gamma+1}}\to[0,3]\times\mathbb{S}^{1}\) that sends \(\overline{X_{\beta}}\) and \(\overline{X_{\gamma+1}}-X_{\gamma}\) respectively to \(S_{1}=[0,1]\times\mathbb{S}^{1}\) and \(S_{2}=[2,3]\times\mathbb{S}^{1}\). This yields a retraction of the cylinder \([0,3]\times\mathbb{S}^{1}\) into itself which is the identity on \(S_{1}\) and such that \(S_{2}\) is sent to a point. But this is impossible by Lemma 7.3. The fact that the longpipes are 'cylinders piled up' is important in this proposition. Each \(X_{\alpha+1}\) in a longpipe is homeomorphic to \([0,1)\times\mathbb{S}^{1}\); \(X\) is thus a surface whose manifold boundary is homeomorphic to \(\mathbb{S}^{1}\). By sewing a disc in this 'hole', we obtain what we call a _sealed longpipe_.
Such a sealed longpipe \(W\) has each \(\overline{W_{\alpha+1}}\) homeomorphic to the closed \(2\)-disc and its boundary in any \(W_{\beta}\) for higher \(\beta\) is homeomorphic to the circle. **Lemma 7.5**.: _Let \(X\) be a sealed longpipe, then \(X\) retracts on \(\overline{X_{\alpha+1}}\)\(\forall\alpha\in\omega_{1}\)._ In the remaining of this section, we let \(B(a)\) denote the closed disk of radius \(a\) centered at the origin in \(\mathbb{R}^{2}\). Proof.: There is a homeomorphism \(\overline{X_{\alpha+2}}\to B(2)\) that sends \(\overline{X_{\alpha+1}}\) to \(B(1)\). Take a retraction of \(B(2)\) on \(B(1)\) which sends all the boundary of \(B(2)\) to the origin. This yields a retraction \(r:\overline{X_{\alpha+2}}\to\overline{X_{\alpha+1}}\) such that the boundary of \(\overline{X_{\alpha+2}}\) is sent to a point \(x\in X_{\alpha+1}\). Extend it to the whole \(X\) by \(r(y)=x\) for all \(y\not\in\overline{X_{\alpha+2}}\). However, the fact that there are retractions 'as high as one wants' does not ensure that \(\operatorname{\mathsf{EC}}\) implies \(a\mathsf{S}\). Actually, quite the opposite is true as the main result of this section shows. **Theorem 7.6**.: _Let \(X\) be either a longpipe or a sealed longpipe. If \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds, then \(a\mathsf{S}(X,\mathbb{R})\) does not hold._ Before proving this theorem, let us show that there are concrete examples of such (sealed) longpipes. We already encountered one: Theorems 3.3 and 7.6 show that the longpipe built with \(\clubsuit_{C}\) in Example 3.16 does not satisfy \(\mathsf{S}(Y,\mathbb{R})\). But there are examples in \(\operatorname{\mathbf{ZFC}}\). **Example 7.7** (Nyikos, in effect).: _There are sealed longpipes \(X=\cup_{\alpha\in\omega_{1}}X_{\alpha}\) as in Theorem 7.6 and Lemma 7.5. That is, \(X\) retracts on each \(\overline{X_{\alpha+1}}\), \(\operatorname{\mathsf{EC}}(X,\mathbb{R})\) holds but \(a\mathsf{S}(X,\mathbb{R})\) does not._ Details.: This example is also due to Nyikos and is described in [24, p. 210]. It is actually a longpipe, but we may sew a disc in \(X_{1}\) to obtain a sealed one. As in Example 3.5 to which we refer for definitions, we first consider a tangent bundle \(T\mathbb{L}_{+}\) of \(\mathbb{L}_{+}\) given by some smoothing. Write \(\pi:T^{+}\to\mathbb{L}_{+}\) for the bundle projection. As written in Example 3.5, in principle, the details of how the smoothing is built are important, since \(T\mathbb{L}_{+}\) and \(T^{+}\) may have quite different topological properties depending on the particular construction. This is however irrelevant in our case, as we only use the fact that maps \(T^{+}\to\mathbb{R}\) are constant on the fibers above some some club \(C\subset\mathbb{L}_{+}\), which is true for any smoothing [24, Corollary 4.15]. Now consider the \(\mathbb{Z}\)-action on the fibers \(\langle x,i\rangle\mapsto 2^{i}\cdot x\) (see [24, p. 210]). Quotienting by this action, we obtain a longpipe \(X\) (each fiber is now a circle), and the quotient map \(q:T^{+}\to X\) is actually a covering. There is thus a unique \(\widetilde{\pi}:X\to\mathbb{L}_{+}\) such that the left part of the diagram below commutes. Notice that there is no unbounded map \(\mathbb{L}_{+}\to X\) (otherwise it could be lifted to \(T^{+}\), see e.g. [13, Theorem 5, Chapter 2, Section 4]). Now, given \(\widetilde{f}:X\to\mathbb{R}\), \(f=\widetilde{f}\circ q:T^{+}\to\mathbb{R}\) is constant on the fibers above a club \(C\subset\mathbb{L}_{+}\) and hence so is \(\widetilde{f}\). 
It follows that \(X\) satisfies \(\mathsf{l0}\). By Theorem 3.8 (c) \(\mathsf{EC}(X,\mathbb{R})\) holds. Hence by Theorem 7.6, \(a\mathsf{S}(X,\mathbb{R})\) does not hold. The proof of Theorem 7.6 is done by exhibiting a function \(f:X\to\mathbb{R}\) such that any \(r\) with \(f\circ r=f\) has "wrong" homotopy properties; that is, we may apply Lemma 7.3 (as in Proposition 7.4). For this purpose, we will define families of maps such that if \(f_{0},f_{1}\) are distinct members of one family, then \(f_{0}\circ r\neq f_{1}\) for any \(r\) with relevant domain and range. Our first proof used a kind of construction in three stages. At that time we found it convenient to try to be a bit systematic in our treatment, to use (commutative) diagrams and a general abstract lemma (with an almost trivial proof) enabling us to pass from a stage to the next. We then noticed that a much simpler construction was available in just two stages. We could have given the direct argument but chose to keep the first approach with the abstract lemma since we find that it separates the proof into more transparent steps. So, let us state and prove our abstract lemma. **Lemma 7.8**.: _Let \(Z_{0},Z_{1},X,Y\) be spaces and \(f_{0},f_{1},\widetilde{f}_{0},\widetilde{f}_{1},\psi_{0},\psi_{1},\varphi\) be as in the diagram below, with \(\widetilde{f}_{0}=f_{0}\circ\psi_{0}\), \(\widetilde{f}_{1}=f_{1}\circ\psi_{1}\) and \(\psi_{1}\circ\varphi=id_{X}\)._ _If there is no \(r:X\to X\) such that \(f_{0}\circ r=f_{1}\), then there is no \(\widetilde{r}:Z_{1}\to Z_{0}\) such that \(\widetilde{f}_{0}\circ\widetilde{r}=\widetilde{f}_{1}\)._ Proof.: If \(\widetilde{f}_{0}\circ\widetilde{r}=\widetilde{f}_{1}\) for some \(\widetilde{r}:Z_{1}\to Z_{0}\), set \(r=\psi_{0}\circ\widetilde{r}\circ\varphi\), then \(r:X\to X\) and \[f_{0}\circ r=f_{0}\circ\psi_{0}\circ\widetilde{r}\circ\varphi=\widetilde{f}_{0}\circ\widetilde{r}\circ\varphi=\widetilde{f}_{1}\circ\varphi=f_{1}\circ\psi_{1}\circ\varphi=f_{1},\] a contradiction. Our building block is a simple family of interval maps whose idea was given to us by D. Gauld. In the remainder of this section, we denote the closed unit interval \([0,1]\) by \(I\). Let \(u\in(0,\frac{1}{2})\). We define \(f_{u}:I\to I\) to be the map (depicted in Figure 3, left) that takes values \(0,\frac{1}{2}+u,\frac{1}{2}-u,1\) at \(0,\frac{1}{3},\frac{2}{3},1\), respectively, and is linear in between. **Lemma 7.9** (D. Gauld).: _Let \(0<u,v<\frac{1}{2}\) with \(u\neq v\). Then there is no continuous \(r:I\to I\) such that \(f_{u}\circ r=f_{v}\)._ Proof.: Suppose \(u>v\). Then \(r\) must be increasing up to \(1/3\) but \(r(1/3)<1/3\), then since \(f_{v}\) must decrease, \(r\) is decreasing until \(2/3\), then again increasing with \(r(1)=1\). But we run into problems when \(r(x)\) reaches \(1/3\) since then \(f_{u}\circ r\) starts decreasing while \(f_{v}\) is not. Suppose now that \(u<v\). Then we run into problems as soon as \(r(x)\) reaches \(1/3\). Let \(i:I\to B(1)\), \(j:B(1)\to I\) be defined as \(i(t)=\langle 1-t,0\rangle\) and \(j(x)=1-|x|\) (where \(|x|\) is the Euclidean norm). Define \(\widetilde{f}_{u}:B(1)\to I\) as \(\widetilde{f}_{u}=f_{u}\circ j\). **Corollary 7.10**.: \(\exists\widetilde{r}:B(1)\to B(1)\) _with \(\widetilde{f}_{u}\circ\widetilde{r}=\widetilde{f}_{v}\) iff \(u=v\)._ Proof.: The result follows immediately by the fact that \(j\circ i=id_{I}\), the diagram below and Lemmas 7.8 and 7.9. The family of maps \(\widetilde{f}_{u}\) enables us to prove the following.
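Before turning to that lemma, it may help to have the maps \(f_{u}\) written out explicitly; the piecewise formula below is nothing more than the definition just given (the prescribed values at \(0,\frac{1}{3},\frac{2}{3},1\) together with linear interpolation), recorded here for convenience:
\[f_{u}(t)=\begin{cases}\left(\tfrac{3}{2}+3u\right)t&\text{if }t\in[0,\tfrac{1}{3}],\\ \tfrac{1}{2}+u-6u\left(t-\tfrac{1}{3}\right)&\text{if }t\in[\tfrac{1}{3},\tfrac{2}{3}],\\ \tfrac{1}{2}-u+\left(\tfrac{3}{2}+3u\right)\left(t-\tfrac{2}{3}\right)&\text{if }t\in[\tfrac{2}{3},1].\end{cases}\]
The parameter \(u\) only changes the heights \(\frac{1}{2}\pm u\) attained at \(\frac{1}{3}\) and \(\frac{2}{3}\), which is what distinguishes the members of the family from one another.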
**Lemma 7.11**.: _Let \(C=[0,2]\times\mathbb{S}^{1}\) be the cylinder, and set \(B=B(2)\subset\mathbb{R}^{2}\). Then the following hold. (a) There is \(f:C\to I\), constant on \([1,2]\times\mathbb{S}^{1}\) such that if \(r:C\to C\) satisfies \(f\circ r=f\), then \(r\upharpoonright\{2\}\times\mathbb{S}^{1}\) is not (homotopic to) a constant map. (b) There is \(g:B\to I\), constant on \(\{x\in B\,:\,|x|\geq 1\}\), such that if \(s:B\to B\) satisfies \(g\circ s=g\), then \(s\) is not constant on \(\partial B=\{x\in B\,:\,|x|=2\}\)._ Proof.: (a) We see \(\mathbb{S}^{1}\) again as the interval \(I\) with endpoints identified. Let \(E\), \(D_{0}\), \(D_{1}\) and \(D_{2}\) be the closed regions (boundaries are included) of \(C\) depicted in Figure 4 (left). Fix any homeomorphisms \(\phi_{k}:D_{k}\to B(1)\), \(k=0,1,2\). Fix three distinct points \(u_{0},u_{1},u_{2}\in(0,\frac{1}{2})\). Define \(f:C\to I\) to be constant on \(0\) on \(E\) and equal to \(\widetilde{f}_{u_{k}}\circ\phi_{k}\) on \(D_{k}\), for \(k=0,1,2\). By definition, \(\widetilde{f}_{u_{k}}\circ\phi_{k}\) takes value \(0\) on the boundary of \(D_{k}\). It follows that \(f\) is continuous. Let \(r:C\to C\) be such that \(f\circ r=f\). Since \(f^{-1}(\{0\})\) is equal to \(E\) union the boundaries of \(D_{0}\) and \(D_{1}\), by connectedness \(r\) must send the interior of \(D_{0}\) to the interior of \(D_{0}\), \(D_{1}\) or \(D_{2}\). Hence, \(r\upharpoonright D_{0}\) has range included in \(D_{k}\) where \(k\) is \(0\), \(1\) or \(2\). But we are in the following situation:

Figure 4: The subsets for defining the functions \(f:C\to I\) and \(g:B\to I\).

By Lemma 7.8 and Corollary 7.10, we must have \(k=0\), and thus \(r\) sends \(D_{0}\) into itself. The same argument shows that \(r\) also sends \(D_{1}\) and \(D_{2}\) respectively into themselves. Let \(d:\mathbb{S}^{1}\to C\) be \(d(x)=\langle\frac{1}{2},x\rangle\) and \(\pi:C\to\mathbb{S}^{1}\) be the projection on the second coordinate. By definition \(\pi\circ r\circ d\) sends the intervals \([0,\frac{1}{3}],[\frac{1}{3},\frac{2}{3}],[\frac{2}{3},1]\subset\mathbb{S}^{1}\) respectively into themselves. By Lemma 7.3, \(r\) cannot be (homotopic to) a constant map on \(\{2\}\times\mathbb{S}^{1}\). (b) The proof is almost the same, but this time we define \(E\), \(D_{k}\) (\(k=0,1,2,3\)) as in the right-hand side of Figure 4. Similarly to (a), take four distinct points \(u_{0},u_{1},u_{2},u_{3}\in(0,\frac{1}{2})\), fix homeomorphisms \(\phi_{k}:D_{k}\to B(1)\), and define \(g\) exactly as \(f\). Then, argue as in (a) to show that if \(g\circ s=g\), then \(s\) sends \(D_{1}\cup D_{2}\cup D_{3}\) to itself. By restricting to this subset, we are exactly in the same situation as in (a), and we may conclude. We may now prove Theorem 7.6. Proof of Theorem 7.6.: Let \(C\) be the cylinder \([0,2]\times\mathbb{S}^{1}\) and \(f:C\to\mathbb{R}\) be given by Lemma 7.11 (a). If \(X=\cup_{\alpha<\omega_{1}}X_{\alpha}\) is a longpipe, fix a homeomorphism \(\Psi:\overline{X_{1}}\to C\) and let \(\widetilde{f}\) be defined as \(f\circ\Psi\) on \(\overline{X_{1}}\) and constant on \(0\) elsewhere. If there is \(r:X\to\overline{X_{\alpha}}\) such that \(\widetilde{f}\circ r=\widetilde{f}\), then since \(\overline{X_{\alpha}}\) embeds in \(\mathbb{R}^{2}\), by \(\mathsf{EC}(X,\mathbb{R})\), there is some \(\beta\) such that \(r\) is constant on \(X-X_{\beta}\). We may assume that \(\beta>\alpha\).
Then \(r\upharpoonright\overline{X_{\beta+1}}\) is a map sending a cylinder into itself and constant on the upper boundary. But this is impossible by Lemma 7.11 (a). If \(X\) is a sealed longpipe, the proof is the same, using \(g\) given by Lemma 7.11 (b). We end this section with a last example. The proofs of the claimed properties are left to the reader, as they are very similar to what we just did. **Example 7.12**.: _Let \(P^{0},P^{1}\) be longpipes such that \(\mathsf{EC}(P^{k},\mathbb{R})\) hold for \(k=0,1\). Let \(P\) be obtained by gluing \(P^{0}\) and \(P^{1}\) along their boundary (which is homeomorphic to a circle) in any way. Then \(w\mathsf{S}(P,\mathbb{R})\) holds but \(a\mathsf{S}(P,\mathbb{R})\) and \(\mathsf{EC}(P,\mathbb{R})\) do not._ \(w\mathsf{S}(M,\mathbb{R})\), normality, collectionwise Hausdorff and \(\omega_{1}\)-compactness for manifolds Recall that a collection of subsets of a space \(X\) is _discrete_ iff for each \(x\in X\) there is an open set containing it which intersects at most one member of the collection. In a discrete collection \(\{U_{\alpha}\,:\,\alpha\in\lambda\}\), \(\cup_{\alpha\in K}\overline{U_{\alpha}}\) is closed for any \(K\subset\lambda\). Space is _\(\kappa\)-[strongly] collectionwise Hausdorff (abbreviated \(\kappa\)-[\(s\)]cwH)_ iff any closed discrete subset \(\{x_{\alpha}\,:\,\alpha\in\lambda\}\subset X\) of size \(\lambda\leq\kappa\) can be expanded to a disjoint [resp. discrete] collection of open sets \(\{U_{\alpha}\,:\,\alpha\in\lambda\}\) with \(x_{\alpha}\in U_{\alpha}\). If \(X\) is \(\kappa\)-[s]cwH for each \(\kappa\), we say that \(X\) is [s]cwH. It happens that, in the world of manifolds, \(\mathsf{S}(X,\mathbb{R})\) and its weakenings \(a\mathsf{S}(X,\mathbb{R})\) and \(w\mathsf{S}(X,\mathbb{R})\) (see Definition 7.1) have an interesting interplay with normality, collectionwise Hausdorffness and \(\omega_{1}\)-compactness. Let us first give an example. **Example 8.1**.: _(Similar to [4, Ex. 5.5]) Let \(X=\mathbb{L}_{\geq 0}\times\mathbb{R}-\omega_{1}\times\{0\}\). Then \(X\) is a cwH non-normal and non-\(\omega_{1}\)-compact surface, and \(\mathsf{S}(X,\mathbb{R})\) holds._ Here, of course, \(\omega_{1}\) is seen as a subspace of \(\mathbb{L}_{\geq 0}\). All the properties are proved as in [4, Ex. 5.5] except \(\mathsf{S}(X,\mathbb{R})\) which is proved exactly as in Example 2.10. **Question 8.2**.: _If \(M\) is a normal manifold such that \(\mathsf{S}(M,\mathbb{R})\) (or \(a\mathsf{S}(M,\mathbb{R})\), or \(w\mathsf{S}(M,\mathbb{R})\)) hold, is then \(M\)\(\omega_{1}\)-compact?_ Notice that this question asks for a partial generalization of Theorem 3.4 (where we have \(\mathsf{EC}\) instead of \(\mathsf{S}\) or its weakenings). The next theorem is a partial answer for the \(a\mathsf{S}\)-version. **Theorem 8.3**.: _Let \(M\) be an \(\aleph_{1}\)-scwH manifold. If \(w\mathsf{S}(M,\mathbb{R})\) holds then \(M\) is \(\omega_{1}\)-compact._ Recall that a normal \(\kappa\)-cwH space is \(\kappa\)-scwH, so Theorem 8.3 answers Problem 8.2 if the following old open problem - which is probably more fundamental than anything done in the present paper and already appeared various times in print - has an affirmative answer: **Problem 8.4**.: _Is every normal manifold \(\aleph_{1}\)-scwH?_ As far as we know, there is no consistent counter-example to Problem 8.4, and the answer is affirmative under **V=L** or in any model obtained after forcing with a Suslin tree, see [32]. 
For the proof of Theorem 8.3 we will use the functions \(\widetilde{f}_{u}=f_{u}\circ j\), where \(f_{u}\) is defined in Figure 3 and \(j\) just before Corollary 7.10. The bulk of the argument for proving Theorem 8.3 is done in the next lemma. **Lemma 8.5**.: _Let \(M\) be a manifold and \(D\) be a discrete collection of open sets in \(M\). If \(D\) is uncountable, then \(w\mathsf{S}(M,\mathbb{R})\) does not hold._ Proof.: Let \(B\) be the closed unit ball in \(\mathbb{R}^{n}\), where \(n\) is the dimension of \(M\). We may assume that \(D=\{N_{\alpha}\,:\,\alpha\in\omega_{1}\}\) and that each \(N_{\alpha}\) is homeomorphic to \(B\). Fix a homeomorphism \(\psi_{\alpha}:B\to N_{\alpha}\). Fix a (non-continuous) 1-to-1 map \(\varphi:\omega_{1}\to(0,\frac{1}{2})\). Define \(h:M\to[0,1]\) as follows. On the complement of the union of the interiors of the \(N_{\alpha}\), \(h\) takes value \(0\). Then, define \(h\upharpoonright N_{\alpha}\) as \(\widetilde{f}_{\varphi(\alpha)}\circ\psi_{\alpha}^{-1}\). By construction \(h\) is \(0\) on the boundary of \(N_{\alpha}\); hence, \(h\) is continuous by discreteness of the \(N_{\alpha}\)'s. Suppose that there is some Lindelof \(Z\subset M\) and \(r:M\to Z\) such that \(h\circ r\upharpoonright(M-Z)=h\upharpoonright(M-Z)\). Then \(Z\) intersects at most countably many \(N_{\alpha}\) by Lindelofness. Fix \(\alpha\) such that \(Z\cap N_{\alpha}=\varnothing\). Then, by connectedness, the image under \(r\) of \(N_{\alpha}\) must be contained in some \(N_{\beta}\), and we have the following diagram. This yields a contradiction by Lemmas 7.8 and 7.9. Proof of Theorem 8.3.: Suppose there is an uncountable closed discrete subset \(D\); up to taking a subset, we may assume that \(|D|=\aleph_{1}\). Expand \(D\) to a discrete collection of closed neighborhoods \(\{N_{w}\,:\,w\in D\}\). Then apply Lemma 8.5. Note in passing that since any countable subset of a manifold is contained in an open set homeomorphic to \(\mathbb{R}^{n}\) (see [11, Cor. 3.4]), an \(\omega_{1}\)-compact manifold is cwH. But it may fail to be scwH (at least consistently), see Example 9.6 below, which is moreover non-normal. This space is eventually constant in \(\mathbb{R}\). Another non-normal manifold \(M\) such that \(\mathsf{EC}(M,\mathbb{R})\) holds, but Type I this time, is Example 3.5.

## 9 If 'small' means 'compact'

In this brief section, we look at the properties obtained if every instance of 'Lindelof' is replaced by 'compact' in Definition 1.2. We call the corresponding properties \(\mathsf{P}_{\mathsf{cpt}}\) for \(\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}\). Let us first gather some trivial facts in a lemma. **Lemma 9.1**.: _Let \(\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}\). Then the following hold._ _(a)_ \(\mathsf{BR}_{\mathsf{cpt}}\Longleftarrow\mathsf{EC}_{\mathsf{cpt}}\Longrightarrow\mathsf{L}_{\mathsf{cpt}}\Longleftarrow\mathsf{S}_{\mathsf{cpt}}\)_._ _(b)_ \(\mathsf{P}_{\mathsf{cpt}}\Longrightarrow\mathsf{P}_{\mathsf{d}}\)_._ _(c) If_ \(X\) _is countably compact and_ \(Y\) _any space then_ \(\mathsf{P}_{\mathsf{cl}}(X,Y)\Longleftrightarrow\mathsf{P}_{\mathsf{cpt}}(X,Y)\)_._ _(d) If_ \(\mathsf{L}_{\mathsf{cpt}}(X,\mathbb{R})\) _holds then_ \(X\) _is pseudocompact._ By (d) and Theorem 5.1, any Type I non pseudocompact manifold satisfies \(\mathsf{L}_{\mathsf{cl}}(X,\mathbb{R})\) but not \(\mathsf{L}_{\mathsf{cpt}}(X,\mathbb{R})\).
For instance, \(\mathsf{S}_{\mathsf{cl}}(\mathbb{L}_{+},\mathbb{R})\) and \(\mathsf{BR}_{\mathsf{d}}(\mathbb{L}_{+},\mathbb{R})\) hold, but their cpt-counterparts do not. We note also that the converse of point (d) does not hold for Hausdorff spaces: **Example 9.2** ([30], Ex. 3.1).: _Let \([0,1]_{s}\) be the space obtained by refining the Euclidean topology on \([0,1]\) letting \(\{1/n\,:\,n\in\omega\}\) be closed. Then \([0,1]_{s}\) is pseudocompact and non-compact, but \(\mathsf{L}_{\mathsf{cpt}}([0,1]_{s},\mathbb{R})\) does not hold, as shown by \(id:[0,1]_{s}\to[0,1]\)._ In view of points (b)-(d) above, it seems interesting to see whether \(\mathsf{P}_{\mathsf{cl}}(X,Y)\Rightarrow\mathsf{P}_{\mathsf{cpt}}(X,Y)\) when \(X\) is pseudocompact. Recall that a normal pseudocompact space is countably compact and a Lindelof regular space is normal [10, Thms 3.10.21 & 3.8.2]. A closed subset of a pseudocompact space may fail to be pseudocompact, but the following is well known (see e.g. [30, p. 447]: **Lemma 9.3**.: _Let \(U\) be an open subset of a Tychonoff pseudocompact space \(X\). Then \(\overline{U}\) is pseudocompact._ This lemma gives almost immediately the following theorems. We first show that the situation is quite simple for Type I spaces. **Theorem 9.4**.: _Let \(X=\cup_{\alpha\in\omega_{1}}X_{\alpha}\) be a regular Type I pseudocompact space and \(Y\) be any space. Then \(X\) is countably compact, and thus \(\mathsf{P}(X,Y)\Longleftrightarrow\mathsf{P}_{\mathsf{cpt}}(X,Y)\) for each \(\mathsf{P}\in\{\mathsf{EC},\mathsf{S},\mathsf{L},\mathsf{BR}\}\)._ Proof.: In a regular Type I space \(X\), each \(\overline{X_{\alpha}}\) is Lindelof regular and hence normal. It follows that \(X\) is Tychonoff. Hence, each \(\overline{X_{\alpha}}\) is pseudocompact by Lemma 9.3 and thus compact. If \(X\) is not countably compact, then there is a countable closed discrete subset \(D\) in \(X\). Then, for some \(\alpha\), \(D\) is contained in \(\overline{X_{\alpha}}\) which is a contradiction. Conclude with Lemmas 2.2 and 9.1 (c). When \(X\) is not of Type I, we can still say something for \(\mathsf{EC}\). **Theorem 9.5**.: _Let \(X\) be a Tychonoff pseudocompact space and \(Y\) be any space. Then_ \[\mathsf{EC}_{\mathsf{cl}}(X,Y)\Longleftrightarrow\mathsf{EC}_{\mathsf{cpt}}(X,Y).\] Proof.: The reverse implication is immediate. Suppose that \(\mathsf{EC}_{\mathsf{cl}}(X,Y)\) holds. Given \(f:X\to Y\), there is some \(y\in Y\) such that \(U=f^{-1}(Y-\{y\})\) is contained in a closed Lindelof subset \(Z\) of \(X\). Then \(Z\) is (regular Lindelof hence) normal, and \(\overline{U}\subset Z\) as well. By Lemma 9.3, \(\overline{U}\) is countably compact and hence compact. By definition, \(f\) is constant outside of \(\overline{U}\). This shows that \(\mathsf{EC}_{\mathsf{cpt}}(X,Y)\) holds. We note that the implication \(\mathsf{EC}(X,\mathbb{R})\Rightarrow\mathsf{EC}_{\mathsf{cpt}}(X,\mathbb{R})\) for pseudocompact \(X\) does not hold, as shown by Example 2.7. If \(\mathfrak{b}=\omega_{1}\), there is even a manifold counter-example. (This example was also alluded to in Section 2.) Recall that \(\mathfrak{b}\) is the smallest cardinality of an \(<^{*}\)-unbounded family of functions \(\omega\to\omega\), where \(f<^{*}g\) iff there is some \(n\in\omega\) such that \(f(m)<g(m)\) when \(m\geq n\). We can assume that such an unbounded family is well ordered by \(<^{*}\) (see e.g. [8, Theorem 3.3]). 
Recall in passing that \(\omega_{1}\leq\mathfrak{p}\leq\mathfrak{b}\leq 2^{\omega}\) and that each inequality may be strict depending on the model of set theory. **Example 9.6** (Nyikos [22, Ex. 6.3]).: _An \(\omega_{1}\)-compact surface \(S\) which is not countably compact, such that \(\mathsf{EC}(X,\mathbb{R})\) holds but \(\mathsf{EC_{\mathsf{cpt}}}(X,\mathbb{R})\) and \(\mathsf{EC_{\mathsf{cl}}}(X,\mathbb{R})\) do not. Moreover, \(\mathsf{BR}(X,\mathbb{R})\) and \(\mathsf{L}(X,\mathbb{R})\) hold. If \(\mathfrak{b}=\omega_{1}\), then \(S\) can be made pseudocompact and not scwH._ Idea of the construction.: We only give a sketch, as the general construction is detailed in [11, Example 1.29], and appeared for the first time in [22, Ex. 6.3]. The idea is quite similar to (a version) of Example 2.8. Start with an \(<^{*}\)-unbounded family of functions \(f_{\alpha}:\omega\to\omega\), \(\alpha\in\mathfrak{b}\). We might assume that each \(f_{\alpha}\) is strictly increasing and \(f_{\alpha}(0)=0\). Now consider (the graphs of) the strictly increasing maps \([0,1)\to[0,1)\) (with supremum \(1\)) given by first embedding \(\omega\times\omega\) in \([0,1)^{2}\), sending \(\langle n,m\rangle\) to \(\langle 1-\frac{1}{n+1},1-\frac{1}{m+1}\rangle\) and interpolating linearly in between to obtain maps \(\widehat{f}_{\alpha}:[0,1)\to[0,1)\). The surface can then be seen as the unit square \([0,1]^{2}\) with \(\langle 1,1\rangle\) removed, to which is attached a copy of \(\mathbb{L}_{\geq 0}\) in such a way that \(\lim_{x\to 1}\widehat{f}_{\alpha}(x)=\alpha\in\mathbb{L}_{\geq 0}\) when \(\alpha<\omega_{1}\). (The actual construction by Nyikos is actually slightly different, but only on a superficial level.) \(\mathsf{EC}(S,\mathbb{R})\) holds because any real valued map on \(\mathbb{L}_{\geq 0}\) is eventually constant and the remainder of the space is Lindelof. The construction is made such that the subset \([0,1)\times\{1\}\) is closed (hence \(S\) is not countably compact), and \(\{1\}\times[0,1)\) is "attached" at the start of the copy of \(\mathbb{L}_{\geq 0}\). Since any uncountable subset has a cluster point either in \([0,1]^{2}-\{\langle 1,1\rangle\}\) or in \(\mathbb{L}_{\geq 0}\), \(S\) is \(\omega_{1}\)-compact. The real valued map consisting of the projection on the second factor on \([0,1]^{2}-\{\langle 1,1\rangle\}\) and constant on \(1\) on \(\mathbb{L}_{\geq 0}\) contradicts \(\mathsf{EC_{\mathsf{cl}}}(X,\mathbb{R})\). Since \(S\) is a strict chain given by \(([0,1]^{2}-\{\langle 1,1\rangle\})\cup[0,\alpha)\) (where the interval is a subset of \(\mathbb{L}_{\geq 0}\)), \(\mathsf{BR}(S,\mathbb{R})\) holds by Proposition 5.7, and \(\mathsf{L}(S,\mathbb{R})\) holds by Theorem 5.1. If \(\mathfrak{b}>\omega_{1}\), then \(\lim_{x\to 1}f_{\alpha}(x)\) does not exist when \(\alpha\geq\omega_{1}\), but if on the contrary \(\mathfrak{b}=\omega_{1}\), Nyikos showed that the resulting surface is pseudocompact. Indeed, in this case, any countable sequence in \([0,1]\times[0,1)\) has a cluster point. Moreover, the closed discrete subspace \(\{\langle 1-1/n,1\rangle\,:\,n\in\omega\}\) cannot be expanded to a discrete open collection, because taking one point in each open set intersected with \([0,1]\times[0,1)\) yields a cluster point. Hence, \(S\) is not scwH. Notice that \(w\mathsf{S}(S,\mathbb{R})\) holds since \(\mathsf{EC}(S,\mathbb{R})\) holds. 
Whether \(a\mathsf{S}(S,\mathbb{R})\), \(\mathsf{S}(S,\mathbb{R})\), \(\mathsf{L_{\mathsf{cl}}}(S,\mathbb{R})\) or \(\mathsf{BR_{\mathsf{cl}}}(S,\mathbb{R})\) hold is less clear. ## 10 Summary and tables In this section, we summarize some of our results in a concise (albeit incomplete) form. First, and just for the pleasure of drawing an overly complicated diagram, the following implications hold by Lemmas 2.1, 7.2 and 9.1 (whose proofs are all immediate). Recall that by \(\mathsf{P}_{1}\Rightarrow\mathsf{P}_{2}\) we mean \(\forall X,Y\,\mathsf{P}_{1}(X,Y)\Rightarrow\mathsf{P}_{2}(X,Y)\). If the domain space is Type I, the vertical arrows between \(\mathsf{P}\) and \(\mathsf{P}_{\mathsf{d}}\) reverse (Lemma 2.2) and if the domain space is Type I and countably compact, the vertical arrows between \(\mathsf{P}_{\mathsf{cl}}\) and \(\mathsf{P}_{\mathsf{cpt}}\)reverse (Theorem 9.4). The thick red arrows are the only ones for which we do not know a counterexample to their converse, as shown by the tables below. These red arrows yield the following questions that an attentive reader had probably already formulated in their head, and maybe even solved, which is not our case. **Question 10.1**.: _Are there spaces \(X,Y\) such that \(a\mathsf{S}(X,Y)\) holds but not \(\mathsf{S}(X,Y)\)? Is there an example with \(Y=\mathbb{R}\) and/or \(X\) a manifold?_ **Question 10.2**.: _Are there spaces \(X,Y\) such that \(a\mathsf{S}(X,Y)\) holds but not \(a\mathsf{S}_{\mathsf{d}}(X,Y)\)? Is there an example with \(Y=\mathbb{R}\) and/or \(X\) a manifold?_ The tables below compile most of our (counter-)examples (and some positive results). We believe that our choice of notation is self explanatory. Table 1 summarizes quickly some of the results of section 3. In the same vein, the contents of section 4 and some of section 5 are summarized in the Tables 2-3. Finally, Tables 4-5 scan through the whole paper for (counter)-examples to \(\mathsf{P}(X,\mathbb{R})\). There, we let \(K_{0}\) and \(K_{1}\) be the spaces described in Examples 7.7 and 7.12. We denote the discrete space of cardinality \(\aleph_{1}\) by \(D_{\aleph_{1}}\). The question mark "?" means that we do not know whether the property in question holds (it might however not be difficult to find out). The fact that \(a\mathsf{S}(X,\mathbb{R})\) does not hold for Examples 2.7-2.8 (Table 4) follows easily with the same maps we used to show that \(\mathsf{L}_{\mathsf{d}}(X,\mathbb{R})\) does not hold. The fact that \(w\mathsf{S}(X,\mathbb{R})\) does not hold for Example 5.6 follows from Lemma 8.5 (Table 4). In Table 5, Lemma 9.1 (d) is responsible for the cpt-properties not holding, since the spaces in question are not pseudocompact. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & \multicolumn{5}{c|}{Property of \(Y\)} \\ \cline{3-8} Reference & Axiom & isocompact & \(G_{\delta}\) points & countably tight & loc. compact & EC(\(\omega_{1},Y\)) \\ \hline Ex. 3.13 & & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\ Ex. 
3.16 & \(\clubsuit_{C}\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Thm 3.15 & **PFA** & implied & & assumed & assumed & assumed \\ \hline \end{tabular} \end{table} Table 1: Section 3 – about \(\mathsf{EC}(\omega_{1},Y)\) \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Reference & Properties of \(Y\) & \(T\)\(\omega_{1}\)-cpct & \(\mathsf{S}(T,Y)\) & \(\mathsf{BR}(T,Y)\) \\ \hline Thm 4.2 \& Prop 4.5 & submetrizable & assumed & implied & implied \\ Prop. 4.5 & uncountable & & equivalent & equivalent \\ Ex. 4.6 & countable & \(\times\) & \(\times\) & \(\checkmark\) \\ Ex. 4.7 & her. separable & \(\checkmark\) & \(\times\) & \(\times\) \\ \hline \end{tabular} \end{table} Table 2: Section 4 – \(\mathsf{S}(T,Y)\) and \(\mathsf{BR}(T,Y)\) when \(T\) is an \(\omega_{1}\)-tree
2310.08571
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders
Machine Learning as a Service (MLaaS) APIs provide ready-to-use and high-utility encoders that generate vector representations for given inputs. Since these encoders are very costly to train, they become lucrative targets for model stealing attacks during which an adversary leverages query access to the API to replicate the encoder locally at a fraction of the original training costs. We propose Bucks for Buckets (B4B), the first active defense that prevents stealing while the attack is happening without degrading representation quality for legitimate API users. Our defense relies on the observation that the representations returned to adversaries who try to steal the encoder's functionality cover a significantly larger fraction of the embedding space than representations of legitimate users who utilize the encoder to solve a particular downstream task. B4B leverages this to adaptively adjust the utility of the returned representations according to a user's coverage of the embedding space. To prevent adaptive adversaries from eluding our defense by simply creating multiple user accounts (sybils), B4B also individually transforms each user's representations. This prevents the adversary from directly aggregating representations over multiple accounts to create their stolen encoder copy. Our active defense opens a new path towards securely sharing and democratizing encoders over public APIs.
Jan Dubiński, Stanisław Pawlak, Franziska Boenisch, Tomasz Trzciński, Adam Dziedzic
2023-10-12T17:56:53Z
http://arxiv.org/abs/2310.08571v2
# Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders ###### Abstract Machine Learning as a Service (MLaaS) APIs provide ready-to-use and high-utility encoders that generate vector representations for given inputs. Since these encoders are very costly to train, they become lucrative targets for model stealing attacks during which an adversary leverages query access to the API to replicate the encoder locally at a fraction of the original training costs. We propose _Bucks for Buckets (B4B)_, the first _active defense_ that prevents stealing while the attack is happening without degrading representation quality for legitimate API users. Our defense relies on the observation that the representations returned to adversaries who try to steal the encoder's functionality cover a significantly larger fraction of the embedding space than representations of legitimate users who utilize the encoder to solve a particular downstream task. B4B leverages this to adaptively adjust the utility of the returned representations according to a user's coverage of the embedding space. To prevent adaptive adversaries from eluding our defense by simply creating multiple user accounts (sybils), B4B also individually transforms each user's representations. This prevents the adversary from directly aggregating representations over multiple accounts to create their stolen encoder copy. Our active defense opens a new path towards securely sharing and democratizing encoders over public APIs. ## 1 Introduction In model stealing attacks, adversaries extract a machine learning model exposed via a public API by repeatedly querying it and updating their own stolen copy based on the obtained responses. Model stealing was shown to be one of the main threats to the security of machine learning models in practice [38]. Also in research, since the introduction of the first extraction attack against classifiers [40], a lot of work on improving stealing [27, 33, 40, 41], extending it to different model types [8, 37], and proposing adequate defenses [18, 25, 26, 31] has been put forward. With the recent shift in learning paradigms from supervised to self supervised learning (SSL), especially the need for new defenses becomes increasingly pressing. From an academic viewpoint, the urge arises because it was shown that SSL models (_encoders_) are even more vulnerable to model stealing [16, 29, 36] than their supervised counterparts. This is because whereas supervised models' output is low dimensional, _e.g.,_ per-class probabilities or pure labels, SSL encoders output high-dimensional representation vectors that encode a larger amount of information and thereby facilitate stealing. In addition, from a practical industry's viewpoint, defenses are required since many popular API providers, such as Coherence, OpenAI, or Clarify [1, 2, 3] already expose their high-value SSL encoders via APIs to a broad range of users. Most of the current defenses against encoder stealing are _reactive, i.e.,_ they do not actively prevent the stealing but rather aim at detecting it by adding watermarks to the encoder [14, 16] or performing dataset inference to identify stolen copies [17]. Since at the point of detection, the damage of stealing has already been inflicted, we argue that reactive defenses intervene too late and we advocate for _active_ defenses that prevent stealing while it is happening. 
Yet, active defenses are challenging to implement because they not only need to prevent stealing but also should preserve the utility of representations for legitimate users. The only existing active defense against encoder stealing [29] falls short on this latter aspect since it significantly degrades the quality of representations for all users. To close the gap between required and existing defenses, we propose _Bucks for Buckets (B4B)_, the first active defense against encoder stealing that does not harm utility for legitimate users. B4B leverages the observation that the representations returned to adversaries who try to steal the encoder's functionality cover a significantly larger fraction of the full embedding space than representations of legitimate users who utilize the encoder to solve a particular downstream task. To turn this observation into a practical defense, B4B is equipped with three modular building blocks: (1) The first building block is a tracking mechanism that continuously estimates the fraction of the embedding space covered by the representations returned to each user. The intuition why this is relevant is that by covering large fractions of the embedding space, the representations will suffice for an adversary to reproduce the encoder's functionality, _i.e.,_ to successfully steal it. (2) B4B's second building block consists of a cost function to translate the covered fraction of the embedding space into a concrete penalty. We require this cost function to significantly penalize adversaries trying to steal the model while having only a minimal effect on legitimate users. (3) The third building block contains transformations that can be applied to the representations on a per-user basis to prevent adaptive attackers from circumventing our defense by creating multiple user accounts (sybils) and distributing their queries over these accounts such that they minimize the overall cost. We present the different building blocks of B4B in Figure 1. While B4B's modularity enables different instantiations of the three building blocks, we propose a concrete end-to-end instantiation to showcase the practicability of our approach. To implement tracking of the covered embedding space, we employ _local sensitive hashing_ that maps any representation returned to a given user into a set of hash _buckets_. We base our cost function (_i.e.,_ the _"**bucks**"_) on utility and make B4B add noise to the representations with a magnitude that increases with the number of buckets occupied by the given user. While the scale of noise added to legitimate users' Figure 1: **Overview of B4B.** In the upper part, we present our B4B framework that consists of three modular building blocks: (1) A coverage estimation to track the fraction of embedding space covered by the representations returned to each user, (2) a cost function that serves to map the coverage to a concrete penalty to prevent stealing, and (3) per-user transformations that are applied to the returned representations to prevent sybil attacks. In the lower part, we present a concrete instantiation of B4B and the operation flow of our defense: The API calculates representations for the incoming queries. We instantiate the coverage estimation with local sensitive hashing and estimate the covered space as the fraction of _hash buckets_ occupied. We calibrate the costs by adding noise to the representations according to the coverage. We apply a set of transformations on a per-user basis. 
The noised and transformed representations are returned to the user. representations does not harm their downstream performance due to their small embedding space coverage, the representations returned to an adversary become increasingly noisy--significantly degrading the performance of their stolen encoder. Finally, we rely on a set of transformations (_e.g.,_ affine transformations, shuffling, padding) that preserve downstream utility [17]. While, as a consequence, legitimate users remain unaffected by these transformations, adversaries cannot directly combine the representations obtained through different sybil accounts anymore to train their stolen copy of the encoder. Instead, they first have to remap all representations into the same embedding space, which we show causes both query and computation overhead and still reduces the performance of the stolen encoder. In summary, we make the following contributions: 1. We present B4B, the first active defense against encoder stealing that does not harm legitimate users' downstream performance. B4B's three building blocks enable penalizing adversaries whose returned representations cover large fractions of the embedding space and prevent sybil attacks. 2. We propose a concrete instantiation of B4B that relies on local sensitive hashing and decreases the quality of representations returned to a user once their representations fill too many hash buckets. 3. We provide an end-to-end evaluation of our defense to highlight its effectiveness in offering high utility representations for legitimate users and degrading the performance of stolen encoders in both the single and the sybil-accounts setup. ## 2 Related Work **Model Extraction Attacks.** The goal of the model extraction attacks is to replicate the functionality of a victim model \(f_{v}\) trained on a dataset \(D_{v}\). An attacker has a black box access to the victim model and uses a stealing dataset \(D_{s}=\{q_{i},f_{v}(q_{i})\}_{i=1}^{n}\), consisting of queries \(q_{i}\) and the corresponding outputs \(f_{v}(q_{i})\) returned by the victim model, to train a stolen model \(f_{s}\). Model extraction attacks have been shown against various types of models including classifiers [24; 40] and encoders [16; 36]. **Self Supervised Learning and Encoders.** SSL is an increasingly popular machine learning paradigm. It trains encoder models to generate representations from complex inputs without relying on explicit labels. These representations encode useful features of a given input, enabling efficient learning for multiple downstream tasks. Many SSL frameworks have been proposed [9; 10; 12; 22; 23; 44]. In our work, we focus on the two popular SSL vision encoders, namely SimSiam [12] and DINO [9], which return high-quality representations that achieve state-of-the-art performance on downstream tasks when assessed by training a linear classifier directly on representations. SimSiam trains with two Siamese encoders with directly shared weights. A prediction MLP head is applied to one of the encoders \(f_{1}\), and the other encoder \(f_{2}\) has a stop-gradient, where both operations are used for avoiding collapsing solutions. In contrast, DINO shares only architecture (not weights) between a student \(f_{1}\) and a teacher model \(f_{2}\), also with the stop-gradient operation, but not the prediction head. While SimSiam uses convolutional neural networks (CNNs), DINO also employs vision transformers (ViTs). 
Both frameworks use a symmetrized loss of the form \(\frac{1}{2}g(f_{1}(x_{1}),f_{2}(x_{2}))+\frac{1}{2}g(f_{1}(x_{2}),f_{2}(x_{1}))\) in their optimization objectives, where \(g(\cdot,\cdot)\) is negative cosine similarity for SimSiam and cross-entropy for DINO. SimSiam and DINO's similarities and differences demonstrate our method's broad applicability across SSL frameworks. More details can be found in Appendix E. **Stealing Encoders.** The stealing of SSL encoders was shown to be extremely effective [16; 29; 36]. The goal of extracting encoders is to maximize the similarity of the outputs from the stolen local copy and the original representations output by the victim encoder. Therefore, while training the stolen copy, the adversary either imitates a self-supervised training using a contrastive loss function, _e.g._, InfoNCE [10] or SoftNN [21] or directly matches both models' representations via the Mean Squared Error (MSE) loss. To reduce the number of queries sent to the victim encoder, the attack proposed in [29] leverages the key observation that the victim encoder returns similar representations for any image and its augmented versions. Therefore, a given image can be sent to the victim while the stolen copy is trained using many augmentations of this image, where the representation of a given augmented image is approximated as the one of the original image produced by the victim encoder. **Defending Encoders.** Recently, watermarking [7; 25; 42] methods have been proposed to detect stolen encoders [14; 16; 43]. Many of these approaches use downstream tasks to check if a watermark embedded into a victim encoder is present in a suspect encoder. Dataset inference [30] is another type of encoder ownership resolution. It uses the victim's training dataset as a unique signature, leveraging the following observation: for a victim encoder trained on its private data as well as for its stolen copies, the distribution of the representations generated from the victim's training data differs from the distribution of the representations generated on the test data. In contrast, for an independently trained encoder, these two distributions cannot be distinguished, allowing the detection of stolen copies [17]. However, all the previous methods are _reactive_ and aim at detecting the stolen encoder instead of _actively_ preventing the attack. The only preliminary active defenses for encoders were proposed by [16; 29]. They either perturb or truncate the answers to poison the training objective of an attacker. These operations were shown to harm substantially the performance of legitimate users, which renders the defense impractical. In contrast, our B4B has negligible impact on the quality of representations returned to legitimate users. ## 3 Actively Defending against Model Stealing with B4B B4B aims at actively preventing model stealing while preserving high-utility representations for legitimate users. Before introducing the three main building blocks of B4B, namely (1) the estimation of embedding space coverage, (2) the cost function, and (3) the transformation of representations (see Figure 1), we detail our threat model and the observation on embedding space coverage that represents the intuition behind our approach. ### Threat Model and Intuition Our setup and the resulting threat model are inspired by public APIs, such as Coherence, OpenAI, or Clarify [1; 2; 3] that expose encoders to users through a pre-defined interface. 
These encoders are trained using SSL on large amounts of unlabeled data, often crawled from the internet, and therefore from diverse distributions. We notice that to provide rich representations to multiple users, the training dataset of the encoder needs to be significantly more diverse than the individual downstream tasks that the users query for representations. For instance, if the encoder behind the API is trained on the ImageNet dataset, then the legitimate users are expected to query the API for downstream tasks, such as CIFAR10 or SVHN. Similarly, if the encoder is trained on CIFAR10, the expected downstream tasks are MNIST or Fashion MNIST. Yet, in the design of our defense, we consider adversaries who can query the encoder with arbitrary inputs to obtain high-dimensional representation vectors from the encoder. Our defense is independent of the protected encoder's architecture and does not rely on any assumption about the adversary's data and query strategy. We argue that even in this restricted setup, our defense can distinguish between adversaries and legitimate users by analyzing the distribution of representations returned to them. In Figure 2, by using PCA to project representations for different datasets to a two-dimensional space, we visualize that representations for different downstream tasks cluster in _disjoint_ and _small sub-spaces_ of the full embedding space. The representations were obtained from a SimSiam encoder originally trained on ImageNet (we observe similar clustering for DINO shown in Appendix F). As a result, legitimate users can be characterized by their representations' small coverage of the embedding space. In contrast, the adversary does not aim at solving a particular downstream task. They instead would want to obtain representations that cover large fractions of the embedding space. This enables reproducing the overall functionality of the encoder (instead of only learning some local task-specific behavior). Indeed, it has been empirically shown by prior work, such as [16], that stealing with multiple distributions, _e.g.,_ by relying on the complex ImageNet dataset, yields higher performance of the stolen encoder on various downstream tasks than stealing with a downstream dataset, such as CIFAR10. As a result, intuitively, we can identify and penalize adversaries based on their coverage of the embedding space, which will be significantly larger than the coverage of legitimate users. We leverage this intuition to build our B4B defense and present our three main building blocks in the following sections. Figure 2: **Representations from Different Tasks Occupy Different Sub-Spaces of the Embedding Space. Presented for Fashion-MNIST, SVHN, CIFAR10, and STL10.** ### Building Block 1: Coverage Estimation of the Embedding Space The first building block of our B4B serves to estimate and continuously keep track of the fraction of the embedding space occupied by any given user. Let \(\mathcal{E}\) denote our embedding space of dimension \(s\), further, let \(U\) be a user with a query dataset \(D=q_{1},\ldots,q_{n}\in\mathcal{D}\) and let \(f_{v}:\mathcal{D}\rightarrow\mathcal{E}\) be our protected victim encoder that maps data points from the input to the embedding space. Assume user \(U\) has, so far, queried a subset of their data points \(q_{1},\ldots,q_{j}\) with \(j\leq n\) to the encoder and obtained the representations \(r_{1},\ldots,r_{j}\) with each \(r_{i}\in\mathbb{R}^{s}\). 
We aim to estimate the true fraction of the embedding space \(\mathcal{E}^{U}_{f}\) that is covered by all returned representations \(r_{1},\ldots,r_{j}\) to user \(U\) and denote our estimate by \(\tilde{\mathcal{E}}^{U}_{f}\). Local Sensitive Hashing.One of the methods to approximate the occupied space by representations returned to a given user is via Local Sensitive Hashing (LSH) [39]. We rely on this approach for the concrete instantiation of our B4B and use it to track per-user coverage of the embedding space. Standard (cryptographic) hash functions are characterized by high dispersion such that hash collisions are minimized. In contrast, LSH hashes similar data points into the same or proximal, so-called _hash buckets_. This functionality is desired when dealing with searches in high-dimensional spaces or with a large number of data points. Formally, an LSH function \(\mathcal{H}\) is defined for a metric space \(\mathcal{M}=(M,d)\), where \(d\) is a distance metric in space \(M\), with a given threshold \(T>0\), approximation factors \(f>1\), and probabilities \(P_{1}\) and \(P_{2}\), where \(P_{1}\gg P_{2}\). \(\mathcal{H}\) maps elements of the metric space to buckets \(b\in B\) and satisfies the following conditions for any two points \(q_{1},q_{2}\in M\): (1) If \(d(q_{1},q_{2})\leq T\), then \(\mathcal{H}(q_{1})=\mathcal{H}(q_{2})\) (_i.e.,_\(q_{1}\) and \(q_{2}\) collide in the same bucket \(b\)) with probability at least \(P_{1}\). (2) If \(d(q_{1},q_{2})\geq fT\), then \(\mathcal{H}(q_{1})=\mathcal{H}(q_{2})\) with probability at most \(P_{2}\). ### Building Block 2: Cost Function Design Once we can estimate the coverage of an embedding space for a given user \(U\) as \(\tilde{\mathcal{E}}^{U}_{f}\), we need to design a cost function \(\mathcal{C}:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\) that maps from the estimated coverage to a cost. The cost function needs to be designed such that it does not significantly penalize legitimate users while imposing a severe penalty on adversaries to effectively prevent the encoder from being stolen. The semantics of the cost function's range depend on the type of costs that the defender wants to enforce. We discuss a broad range of options in Appendix C. These include monetary cost functions to adaptively charge users on a batch-query basis depending on their current coverage, costs in the form of additional computation that users need to perform in order to obtain their representations, similar to the proof of work in [18], costs in the form of delay in the interaction with the encoder behind the API [4], or costs in form of disk space that needs to be reserved by the user (similar to a proof of space [19; 20]). Which type of cost function is most adequate depends on the defender's objective and setup. Exponential Cost Functions to Adjust Utility of Representations.In the concrete instantiation of B4B that we present in this work, we rely on costs in the form of the utility of the returned representations. We choose this concrete instantiation because it is intuitive, effective, and can be directly experimentally assessed. Moreover, it is even suitable for public APIs where, for example, no monetary costs are applicable. We adjust utility by adding Gaussian noise with different standard deviation \(\sigma\) to the returned representations. 
Since we do not want to penalize legitimate users with small coverage but make stealing for adversaries with growing coverage increasingly prohibitive, we instantiate an exponential cost function that maps from the fraction of hash buckets occupied by the user to a value for \(\sigma\). We choose the general form of this function as \[f_{\lambda,\alpha,\beta}(\tilde{\mathcal{E}}^{U}_{f})=\lambda\times(\exp^{ \ln\frac{\alpha}{\lambda}\times\tilde{\mathcal{E}}^{U}_{f}\times\beta^{-1}}-1) \tag{1}\] where \(\lambda<1\) compresses the curve of the function to obtain low function values for small fractions of occupied buckets, and then we set a target penalty \(\alpha\) for our cost function at a specified fraction of filled buckets \(\beta\). For instance, if we want to enforce a \(\sigma\) of \(5\) at \(90\%\) of filled buckets (_i.e.,_ for \(\tilde{\mathcal{E}}^{U}_{f}=0.9\)), we would need to set \(\alpha=5\) and \(\beta=0.9\). ### Building Block 3: Per-User Representation Transformations against Sybil Attacks Given that our defense discourages users from querying with a wide variety of data points from different distributions, an adversary could create multiple fake user accounts (sybils) and query different data subsets with more uniform representations from each of these accounts. By combining all the obtained representations and using them to train a stolen copy, the adversary could overcome the increased costs of stealing. To defend against such sybil attacks, we propose individually transforming the representations on a per-user level before returning them. As a result, the adversary would first have to map all the representations to one single unified space before being able to jointly leverage the representations from different accounts for their stolen copy. Formally, for a given query \(q_{i}\), the protected victim encoder produces a representation \(r_{i}=f_{v}(q_{i})\), which is transformed by a user-specific transformation \(T_{U}(r_{i})\) before being returned to the querying user \(U\). For a new user \(U\), the defense randomly selects the transformation \(T_{U}\) from all possible choices. Note that the randomness is also added on a per-transformation basis, instead of only on the level of selecting the transformations. For example, a permutation of the elements in the output representations should be different for each user. We formulate two concrete requirements for the transformations. First, they should preserve utility for legitimate users on their downstream tasks, and second, they should be costly to reverse for an adversary. Utility Preserving Transformations.As a concrete instantiation for our B4B, we propose a set of transformations that fulfill the above-mentioned two requirements: (1) _Affine_ where we apply affine transformations to representations, (2) _Pad_ where we pad representations with constant values, (3) _Add_ where we add constant values at random positions within representations, (4) _Shuffle_ where we shuffle the elements in the representation vectors, and (5) _Binary_ where the original representations are mapped to binary vectors relying on a random partitioning of the representation space. To preserve the full amount of information contained in the original representations, in our binary transformations, we tune the length of binary representations. We visualize the operation of each of these transformations in Appendix C. 
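To make these per-user transformations concrete, the following is a minimal sketch of how a user-specific transformation could be drawn and applied. The function name, the particular composition (affine map, padding, shuffle), and the random draws are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def make_user_transform(dim: int, pad: int = 64, seed: int = 0):
    """Draw a random per-user transformation: affine map + padding + shuffle.

    Illustrative sketch only; any subset of the transformations described
    above (Affine, Pad, Add, Shuffle, Binary) could be composed instead.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # random affine map
    b = rng.normal(size=dim)
    pad_values = rng.normal(size=pad)               # constants appended to every representation
    perm = rng.permutation(dim + pad)               # per-user shuffle of the output coordinates

    def transform(r: np.ndarray) -> np.ndarray:
        r = A @ r + b                               # Affine
        r = np.concatenate([r, pad_values])         # Pad
        return r[perm]                              # Shuffle
    return transform

# Example: every new account receives its own randomly drawn transform.
# t_user = make_user_transform(dim=2048, seed=42); returned = t_user(representation)
```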
All these transformations can additionally be combined with each other, which further increases the possible set of transformations applied per user. This renders it impossible for an adversary to correctly guess and reverse the applied representation. Instead, the adversary has to remap the representations over all accounts into a single embedding space in order to unify them and leverage them for training of their stolen encoder copy. We present an exhaustive list of strategies that adversaries can apply for the remapping in Appendix D. All the attack methods reduce to the minimum of remapping between representations of a pair of users, _i.e.,_ they are at least as complex as mapping between two separate accounts. In the next section, we show that our defense already impedes stealing for an adversary with two accounts. ## 4 Empirical Evaluation We first empirically evaluate our instantiation of B4B's three building blocks and show how to calibrate each of them for our defense. Finally, we provide an end-to-end evaluation that highlights B4B's effectiveness in preserving downstream utility for legitimate users while successfully preventing the stealing by adversaries. Experimental Setup.We conduct experiments on various kinds of downstream tasks and two popular SSL encoders. To test our defense, we use FashionMNIST, SVHN, STL10, and CIFAR10 as our downstream datasets, each with standard train and test splits. For stealing, we utilize training data from ImageNet and LAION-5B. We rely on encoder models from the SimSiam [12] and the DINO [9] SSL frameworks. As our victim encoders, we use the publicly available ResNet50 model from SimSiam trained for 100 epochs on ImageNet and the ViT Small DINO encoder trained for 800 epochs on ImageNet, each using batch size 256. The ViT architecture takes as input a grid of non-overlapping contiguous image patches of resolution \(N\)x\(N\). In this paper, we typically use \(N=16\). The Simsiam encoder has an output representation dimension of 2048, while DINO returns 1536 dimensional representations. We examine the utility of downstream classifiers using SimSiam's or DINO's representations obtained for the respective downstream datasets. To implement LSH, we rely on random projections [34] that we implement from scratch. For a detailed description of our stealing and downstream training parameters, we refer to Appendix F. ### Local Sensitive Hashing for Coverage Estimation We first observe that the choice of the total number of hash buckets in the LSH influences the effectiveness of our method. In the extreme, if we have a too large number of buckets, the number of buckets filled will correspond to the number of queries posed by a user which fails to capture that similar representations cover similar sub-spaces of the embedding space, and hence does not serve to approximate the total fraction of the embedding space covered. However, if we have too few buckets, even the representations for simple downstream tasks will fill large fractions of buckets, making it impossible to calibrate the cost function such that it only penalizes adversaries. We experimentally find that for our evaluated encoders, \(2^{12}\) buckets represent a good trade-off. In Appendix F.6, we present an ablation study on the effect of the number of total buckets. Our evaluation of the LSH to track coverage of the embedding space is presented in Figure 3. 
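Before turning to these results, the sketch below illustrates one way such coverage tracking with random-projection LSH could be implemented; the class name and interface are assumptions made for illustration and not the exact code used in our experiments.

```python
import numpy as np

class CoverageTracker:
    """Track the fraction of LSH buckets occupied by a user's representations.

    Uses random hyperplane (sign) projections: each representation is hashed
    to one of 2**n_bits buckets (n_bits=12 corresponds to the 2^12 buckets above).
    """
    def __init__(self, dim: int, n_bits: int = 12, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))  # one random hyperplane per bit
        self.n_buckets = 2 ** n_bits
        self.occupied = set()                         # bucket ids seen for this user

    def _bucket(self, r: np.ndarray) -> int:
        bits = (self.planes @ r > 0).astype(int)      # sign pattern -> n_bits binary code
        return int("".join(map(str, bits)), 2)

    def update(self, representations: np.ndarray) -> float:
        """Add a batch of returned representations; return the estimated coverage."""
        for r in representations:
            self.occupied.add(self._bucket(r))
        return len(self.occupied) / self.n_buckets
```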
We observe that representations returned for standard downstream tasks (FashionMNIST, SVHN, CIFAR10) occupy a significantly smaller fraction of the total number of buckets than complex data from multiple distributions (ImageNet, LAION-5B). We present additional experimental results on measuring the coverage of the representation space in Appendix F.5. Specifically, we show that our method of measuring the embedding space coverage has broad applicability across various encoders and datasets used for pretraining. We further observe that the fraction of buckets occupied by the representations saturates over time. These findings highlight that LSH is successful in capturing the differences between legitimate users and adversaries--even in a low-query regime. Finally, we note that our total number of buckets (\(2^{12}\)) is well calibrated since, over all datasets, it successfully maps multiple representations to the same hash bucket while still filling various fractions of the total number of buckets.

Figure 3: **Estimating Embedding Space Coverage through LSH on SimSiam Encoder.** We present the fraction of buckets occupied by representations of different datasets as a function of the number of queries posed to the encoder _(left)_. We observe that representations for the downstream datasets (FashionMNIST, SVHN, CIFAR10) occupy a smaller fraction of buckets than representations from the complex ImageNet dataset. Our evaluation of the number of queries whose representations are mapped to the same bucket _(right)_ indicates that our total number of buckets (\(2^{12}\)) is well calibrated for the estimation of covered representation space: over all datasets, we experience hash collisions, _i.e.,_ queries whose representations are mapped to the same buckets. This indicates that our LSH is capable of representing similarities in the representations.

### Calibrating the Cost Function

We experiment with different sets of hyperparameters to instantiate the cost function from Equation (1) in the previous section (3.3). As described there, we can calibrate the function (as shown in Figure 4) such that a desired penalty (in the form of a specific \(\sigma\)) will be assigned at a certain fraction of buckets occupied.

Figure 4: **Cost Function Calibration.**

For B4B, we aim at penalizing high embedding space coverage severely. Therefore, we need to identify and optimize for two components: 1) which value of \(\sigma\) leads to significant performance drops, and 2) for what fraction of coverage we want to impose this significant drop. We base both components on empirical observations. Our first observation is that for our four downstream tasks (FashionMNIST, SVHN, STL10, and CIFAR10), performance drops to 10% (_i.e.,_ random guessing) at roughly \(\sigma=0.5\). In Figure 3, we further see that with 50k queries, the downstream tasks occupy \(<30\%\) of the buckets. Ultimately, the settings of \(\alpha\) and \(\beta\) are design choices that an API provider needs to make in order to specify what type of query behavior they want to penalize. As very loose bounds (weak defense), based on our observation, we consider \(\sigma=1\) as a high penalty, which leads to \(\alpha=1\), and select \(\beta=0.8\). This \(\beta\) corresponds to considering 80% of buckets filled as a too-large coverage of the embedding space. We empirically observe that coverage of 80% of buckets occurs, for example, after around 100k ImageNet queries.
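As a minimal sketch (not the exact implementation), the calibrated cost function from Equation (1) and the corresponding noise addition could look as follows; the default values mirror the calibration discussed in this section (\(\alpha=1\), \(\beta=0.8\)) together with a small \(\lambda\) such as \(10^{-6}\).

```python
import numpy as np

def noise_scale(coverage: float, alpha: float = 1.0, beta: float = 0.8,
                lam: float = 1e-6) -> float:
    """Exponential cost function of Equation (1): maps bucket coverage to sigma."""
    return lam * (np.exp(np.log(alpha / lam) * coverage / beta) - 1.0)

def release(representations: np.ndarray, coverage: float,
            rng: np.random.Generator) -> np.ndarray:
    """Add Gaussian noise whose scale grows with the user's current coverage."""
    sigma = noise_scale(coverage)
    return representations + rng.normal(scale=sigma, size=representations.shape)

# With these values, a user at ~30% coverage receives sigma of roughly 2e-4
# (negligible), while a user at 80% coverage receives sigma close to alpha = 1.
```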
By choosing our target \(\beta\) so loose, _i.e.,_ significantly larger than the observed \(30\%\) for downstream tasks, we offer flexibility for the API to also provide good representations for more complex downstream tasks. Finally, to obtain a flat cost curve close to the origin--which serves to map small fractions of covered buckets to small costs--we find that we can set \(\lambda=10^{-6}\). In the Appendix, we evaluate our defense end-to-end with differently parameterized cost functions.

### Assessing the Effect of Transformations

**Transformations Do Not Harm Utility for Legitimate Users.** We evaluate the downstream accuracy for transformed representations based on training a linear classifier on top of them. To separate the effect of the noise added by our defense from the effect of the transformations, we perform the experiments in this subsection without adding noise to the returned representations. For example, on the CIFAR10 dataset and a SimSiam encoder pre-trained on ImageNet, without any transformations applied, we obtain a downstream accuracy of 90.41% (\(\pm\) 0.02), while, with transformations, we obtain 90.24% (\(\pm\) 0.11) for Affine, 90.40% (\(\pm\) 0.05) for Pad+Shuffle, 90.18% (\(\pm\) 0.06) for Affine+Pad+Shuffle, and 88.78% (\(\pm\) 0.2) for Binary. This highlights that the transformations preserve utility for legitimate users. This holds over all datasets we evaluate, as we show in Appendix F.

**Adversaries Cannot Perfectly Remap Representations over Multiple Sybil Accounts.** To understand the impact of our per-user account transformations on sybil-attack-based encoder stealing, we evaluate the difficulty of remapping representations between different sybil accounts. For simplicity, and since we established in Section 3.4 that multi-account attacks reduce to a two-account setup, we assume an adversary who queries from two sybil accounts and aims at learning to map the transformed representations from account #2 to the representation space of account #1. Using more accounts causes the adversary a larger query overhead and potentially more performance loss from remapping. Our evaluation here, hence, represents a lower bound on the overhead caused to the adversary through our transformations. We learn the mapping between different accounts' representations by training a linear model on overlapping representations between the accounts. We assess the fidelity of remapped representations as a function of the number of overlapping queries between the accounts. As a fidelity metric for our remapping, we use the cosine distance between representations \(a\) and \(b\), defined as \(1-\frac{a^{\top}b}{\|a\|_{2}\|b\|_{2}}\). Once the remapping is trained, we evaluate by querying 10k data points from the test dataset through account #1 and then again through account #2. Then, we apply the learned remapping to the latter one and compute the pairwise cosine distances between the representations from account #1 and their remapped counterparts from account #2. Our results are depicted in Figure 5. We show that the largest cosine distance is achieved with the binary transformations, making them the most protective against the adversary since they best prevent perfect remapping, even with an overlap of as many as 10k queries between both accounts. However, these binary transformations also incur the highest drop in accuracy for legitimate users.
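For illustration, the remapping experiment can be sketched as follows: a linear map is fit on the representations of the queries that overlap between the two accounts, and its fidelity is then assessed via the cosine distance on held-out queries. The least-squares fit and the function names below are simplifying assumptions, not the exact training procedure used in the evaluation.

```python
import numpy as np

def fit_remapping(reps_acc2: np.ndarray, reps_acc1: np.ndarray) -> np.ndarray:
    """Least-squares linear map from account #2's space to account #1's space,
    fit on the representations of the overlapping queries."""
    W, *_ = np.linalg.lstsq(reps_acc2, reps_acc1, rcond=None)
    return W

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """1 - cosine similarity, computed row-wise for paired representations."""
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return 1.0 - num / den

# Evaluation sketch: query the same held-out points through both accounts,
# remap account #2's representations, and average the pairwise distances:
# fidelity = cosine_distance(test_acc1, test_acc2 @ W).mean()
```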
The defender has the possibility of selecting their preferred types of transformations between representations taking into account the trade-offs between the effectiveness of the defense and the negative impact on legitimate users. Figure 5: **Quality of Remappings.** ### End-to-End Stealing of an Encoder under our Defense We perform an end-to-end study to showcase how our B4B defense affects legitimate users vs adversaries. The hyperparameters for B4B are chosen according to the empirical evaluation of the previous sections with \(2^{12}\) as the number of buckets, \(\alpha=1,\beta=0.8,\lambda=10^{-6}\) as the hyperparameter of the cost function, and different random affine transformations per-user account. Our main results are presented in Table 1. We observe that instantiating our framework with B4B has a negligible impact on legitimate users while substantially lowering the performance of stolen encoders in the case of single-user and sybil attackers. **Legitimate Users.** We compare the accuracy of downstream classifiers trained on top of unprotected vs defended encoders. The victim encoder achieves high accuracy on the downstream tasks when no defense is employed. With B4B in place, we observe that across all the downstream tasks, the drop in performance is below 1%. For example, there is only a slight decrease in the accuracy of CIFAR10 from 90.41\(\pm 0.02\)% to 90.24\(\pm 0.11\)%. B4B's small effect on legitimate users stems from the fact that their downstream representations cover a relatively small part of the representations space. This results in a very low amount of noise added to their representations which preserves performance. **Adversaries.** For adversaries who create a stolen copy of the victim encoder, we make two main observations. The most crucial one is that when our B4B is in place, the performance of the stolen copies over all downstream tasks significantly drops in comparison to when the victim encoder is unprotected (grey rows in Table 1). This highlights that our B4B effectively prevents stealing. Our next key observation concerns the number of stealing queries used by the adversary: When no defense is applied, the more queries are issued against the API (_e.g.,_ 100k instead of 50k), the higher performance of the stolen encoder on downstream tasks (_e.g.,_ CIFAR10 or FashionMNIST). In contrast, with B4B implemented as a defense, the performance decreases when using more stealing queries from a single account. This is because with more queries issued, the coverage of embedding space grows which renders the returned representations increasingly noisy and harms stealing performance. 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline User & Defense & \# Queries & Dataset & Type & CIFAR10 & STL10 & SVHN & F-MNIST \\ \hline Legit & None & All & Task & Query & 90.41 \(\pm 0.02\) & 95.08\(\pm 0.13\) & 75.47\(\pm 0.04\) & 91.22\(\pm 0.11\) \\ Legit & B4B & All & Task & Query & 90.24\(\pm 0.11\) & 95.05\(\pm 0.1\) & 74.96\(\pm 0.13\) & 91.71\(\pm 0.01\) \\ Attack & None & 50K & Imgnet & Steal & 65.2\(\pm 0.03\) & 64.9\(\pm 0.01\) & 63.1\(\pm 0.01\) & 88.5 \(\pm 0.01\) \\ Attack & B4B & 50K & Imgnet & Steal & **35.72\(\pm 0.04\)** & **31.54\(\pm 0.02\)** & **19.74\(\pm 0.02\)** & **70.01\(\pm 0.01\)** \\ Attack & None & 100K & Imgnet & Steal & 68.1 \(\pm 0.03\) & 63.1 \(\pm 0.01\) & 61.5 \(\pm 0.01\) & 89.0 \(\pm 0.07\) \\ Attack & B4B & 100K & Imgnet & Steal & 12.01\(\pm 0.07\) & 13.94\(\pm 0.05\) & 19.96\(\pm 0.03\) & **69.63\(\pm 0.07\)** \\ Attack & None & 100K & LAION & Steal & 64.92\(\pm 0.03\) & 62.51\(\pm 0.03\) & 59.02\(\pm 0.02\) & 84.54\(\pm 0.01\) \\ Attack & B4B & 100K & LAION & Steal & 40.96\(\pm 0.03\) & **40.69\(\pm 0.05\)** & **34.43\(\pm 0.01\)** & **72.92\(\pm 0.01\)** \\ \hline Sybil & B4B & 2x50K & Imgnet & Steal & 39.56\(\pm 0.06\) & 38.50\(\pm 0.04\) & 23.41\(\pm 0.02\) & 77.01\(\pm 0.08\) \\ Sybil & B4B & 3x33.3k & Imgnet & Steal & 33.87\(\pm 0.05\) & 38.57\(\pm 0.06\) & 21.16\(\pm 0.01\) & 72.95\(\pm 0.05\) \\ Sybil & B4B & 4x25k & Imgnet & Steal & 33.98\(\pm 0.04\) & 34.52\(\pm 0.08\) & 21.21\(\pm 0.02\) & 70.71\(\pm 0.05\) \\ Sybil & B4B & 5x20K & Imgnet & Steal & 32.65\(\pm 0.05\) & 32.45\(\pm 0.05\) & 29.63\(\pm 0.01\) & 70.12\(\pm 0.08\) \\ Sybil & B4B & 6x16.7k & Imgnet & Steal & 26.62\(\pm 0.04\) & 26.85\(\pm 0.05\) & 24.32\(\pm 0.02\) & 70.51\(\pm 0.04\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Stealing and Using Encoders With and Without our Defense.** The _USER_ column represents the type of the APIs’ user, where LEGIT denotes a legitimate user, ATTACKER stands for a standard single-account adversary, and SYBIL represents an adversary using two sybil accounts. We use InfoNCE loss for encoder extraction. # Queries stands for the number of queries used for stealing with ALL denoting that the entire downstream dataset was used. The _TYPE_ column expresses how the dataset is used. We follow the stealing setup from [17]. In the first row, we present the undefended victim encoder’s performance as the accuracy for downstream tasks trained on the encoder’s returned representations. In the following row, we show downstream utility for legitimate users when the victim encoder is defended by our B4B. Finally, (in the remaining rows) we assess the performance of stolen encoders on the downstream tasks. Our results highlight that while the performance of the encoder for legitimate users stays high, our B4B renders stealing inefficient with the stolen encoders obtaining significantly worse performance on downstream tasks. Moreover, we show that B4B can also prevent model stealing attacks with data from a different distribution than the victim encoder's training set. We highlight this in Table 1 where we also use the LAION-5B dataset to steal an ImageNet pre-trained encoder. Our results highlight first that without any defense in place, the LAION dataset is highly effective to extract the ImageNet pre-trained encoder. Second, B4B effectively defends against such attacks, and yields a significant drop in downstream accuracy (on average above 20%) of the stolen encoder. 
We also show that this performance drop cannot be prevented by sybil attacks. To this end, we first consider an adversary who queries from two sybil accounts, with 50k queries issued per account and the first 10k queries of both accounts used to learn the remapping of representations between them. When the adversary trains their stolen encoder copy on all the remapped representations, they increase downstream performance over querying from a single account. Yet, their performance is still significantly lower than the performance of the victim encoder for legitimate users, or of the encoder stolen from an undefended victim. Moreover, using more than two sybil accounts further reduces the attack performance as remapping complications accumulate. With ten sybils, remapping leaves no more usable data for training the stolen encoder. This demonstrates our method's advantage: increasing the number of sybil accounts makes encoder extraction impractical due to the growing remapping overhead. Overall, the results highlight that our B4B also successfully prevents sybil attacks.

### Baseline Comparison

Finally, we compare our B4B against the current state-of-the-art baseline defense, namely a static addition of noise to all the returned representations (as proposed in [16] (Section A.4) and [29; 36]). For direct comparability, we evaluate this defense using our end-to-end experimental setup from the previous section. We present our results in Table 6 in Appendix F.4. Confirming the findings from [16], our results also show that defenses relying on static noise have the major disadvantage of harming legitimate users and attackers equally. When adding noise with a small standard deviation of \(\sigma=0.1\), we observe a negligible (<1%) performance drop for both attackers and legitimate users. When adding noise with a large standard deviation of, for example, \(\sigma=10\), we observe that the performance of both legitimate users and attackers drops by between 15% and more than 40%. In summary, these defenses can either effectively defend against stealing (but harm legitimate users) or keep utility for legitimate users high (but not defend well against stealing). In contrast, our B4B is able to provide high performance for legitimate users while effectively defending the encoder against stealing attacks.

## 5 Conclusions

We design B4B, a new and modular active defense framework against stealing of SSL encoders. All previous approaches were either reactive, acting after the attack happened to detect stolen encoders, or lowered the quality of outputs substantially also for legitimate users, which rendered such mechanisms impractical. We show that B4B successfully distinguishes between legitimate users and adversaries by tracking the embedding space coverage of users' obtained representations. B4B then leverages this tracking to apply a cost function that penalizes users based on their current space coverage, for instance, by lowering the quality of their outputs. Finally, B4B prevents sybil attacks by implementing per-user transformations for the returned representations. Through our experimental evaluation, we show that our defense indeed renders encoder stealing inefficient while preserving downstream utility for legitimate users. Our B4B is therefore a valuable contribution to a safer sharing and democratization of high-utility encoders over public APIs.
## Acknowledgments and Disclosure of Funding This research was supported by Warsaw University of Technology within the Excellence Initiative Research University (IDUB) programme, National Science Centre, Poland grant no 2020/39/O/ST6/01478, grant no 2020/39/B/ST6/01511, grant no 2022/45/B/ST6/02817, and in part by PL-Grid Infrastructure grant nr PLG/2023/016361. The authors applied for a CC BY license to any Author Accepted Manuscript (AAM) version arising from this submission, in accordance with the grants' open access conditions.
2301.11761
A Strongly Polynomial-Time Algorithm for Weighted General Factors with Three Feasible Degrees
General factors are a generalization of matchings. Given a graph $G$ with a set $\pi(v)$ of feasible degrees, called a degree constraint, for each vertex $v$ of $G$, the general factor problem is to find a (spanning) subgraph $F$ of $G$ such that $\text{deg}_F(v) \in \pi(v)$ for every $v$ of $G$. When all degree constraints are symmetric $\Delta$-matroids, the problem is solvable in polynomial time. The weighted general factor problem is to find a general factor of the maximum total weight in an edge-weighted graph. In this paper, we present the first strongly polynomial-time algorithm for a type of weighted general factor problems with real-valued edge weights that is provably not reducible to the weighted matching problem by gadget constructions.
Shuai Shao, Stanislav Živný
2023-01-27T14:59:21Z
http://arxiv.org/abs/2301.11761v3
# A Strongly Polynomial-Time Algorithm for ###### Abstract General factors are a generalization of matchings. Given a graph \(G\) with a set \(\pi(v)\) of feasible degrees, called a degree constraint, for each vertex \(v\) of \(G\), the general factor problem is to find a (spanning) subgraph \(F\) of \(G\) such that \(\deg_{F}(x)\in\pi(v)\) for every \(v\) of \(G\). When all degree constraints are symmetric \(\Delta\)-matroids, the problem is solvable in polynomial time. The weighted general factor problem is to find a general factor of the maximum total weight in an edge-weighted graph. Strongly polynomial-time algorithms are only known for weighted general factor problems that are reducible to the weighted matching problem by gadget constructions. In this paper, we present the first strongly polynomial-time algorithm for a type of weighted general factor problems with real-valued edge weights that is provably not reducible to the weighted matching problem by gadget constructions. ## 1 Introduction A matching in an undirected graph is a subset of the edges that have no vertices in common, and it is perfect if its edges cover all vertices of the graph. Graph matching is one of the most studied problems both in graph theory and combinatorial optimization, with beautiful structural results and efficient algorithms described, e.g., in the monograph of Lovasz and Plummer [10] and in relevant chapters of standard textbooks [14, 15]. In particular, the _weighted (perfect) matching problem_ is to find a (perfect) matching of the maximum total weight for a given graph of which each edge is assigned a weight. This problem can be solved in polynomial time by the celebrated Edmonds' blossom algorithm [1, 2]. Since then, a number of more efficient algorithms have been developed [1, 13, 14, 15, 16, 17, 18, 19, 20, 21]. Table III of [10] gives a detailed review of these algorithms. The _\(f\)-factor problem_ is a generalization of the perfect matching problem in which one is given a non-negative integer \(f(v)\) for each vertex \(v\in V\) of \(G=(V,E)\). The task is to find a (spanning) subgraph \(F=(V_{F},E_{F})\) of \(G\) such that \(\deg_{F}(v)=f(v)\) for every \(v\in V\).1 The case \(f(v)=1\) for every \(v\in V\) is the perfect matching problem. This problem, as well as the weighted version, can be solved efficiently by a gadget reduction to the perfect matching problem [1]. In addition, Tutte gave a characterization of graphs having an \(f\)-factor [14], which generalizes his characterization theorem for perfect matchings [14]. Subsequently, the study of graph factors has attracted much attention with many variants of graph factors, e.g., \(b\)-matchings, \([a,b]\)-factors, \((g,f)\)-factors, parity \((g,f)\)-factors, and anti-factors introduced, and various types of characterization theorems proved for the existence of such factors. We refer the reader to the book [1] and the survey [15] for a comprehensive treatment of the developments on the topic of graph factors. In the early 1970s, Lovasz introduced a generalization of the above factor problems [13, 14], for which we will need a few definitions. For any nonnegative integer \(n\), let \([n]\) denote \(\{0,1,\ldots,n\}\). A degree constraint \(D\) of arity \(n\) is a subset of \([n]\).2 We say that a degree constraint \(D\) has a gap of length \(k\) if there exists \(p\in D\) such that \(p+1,\ldots,p+k\notin D\) and \(p+k+1\in D\). 
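As a small illustration of this definition (not part of the paper's algorithms), the longest gap of a degree constraint can be computed as follows; for instance, \(\{0,3\}\) has a gap of length \(2\), while \(\{1,2,4\}\) has gaps of length at most \(1\).

```python
def max_gap(D: set[int]) -> int:
    """Length of the longest gap of a degree constraint D (0 if D has no gap)."""
    elems = sorted(D)
    return max((b - a - 1 for a, b in zip(elems, elems[1:])), default=0)

assert max_gap({0, 3}) == 2      # gap of length 2: the NP-hard constraint of Lovasz
assert max_gap({1, 2, 4}) == 1   # gaps of length at most 1
assert max_gap({0, 1, 2}) == 0   # an interval has no gap
```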
An instance of the _general factor problem_ (GFP) [13, 14] is given by a graph \(G=(V,E)\) and a mapping \(\pi\) that maps every vertex \(v\in V\) to a degree constraint \(\pi(v)\subseteq[\deg_{G}(v)]\) of arity \(\deg_{G}(v)\). The task is to find a subgraph, if one exists, \(F\) of \(G\) such that \(\deg_{F}(v)\in\pi(v)\) for every \(v\in V\). The case \(\pi(v)=\{0,1\}\) for every \(v\in V\) is the matching problem, and the case \(\pi(v)=\{1\}\) for every \(v\in V\) is the perfect matching problem. Lovasz showed that the GFP is NP-complete when the degree constraint \(\{0,3\}\) of arity \(3\) (and gap \(2\)) occurs [13]. Later, answering a question of Lovasz, Cornuejols showed that the GFP is solvable in polynomial time if each degree constraint has gaps of length at most \(1\)[13]. Footnote 2: We always associate a degree constraint with an arity. Two degree constraints are different if they have different arities although they may be the same set of integers. In this paper, we consider the _weighted general factor problem_ (WGFP) where each edge is assigned a real-valued weight and the task is to find a general factor of the maximum total weight. Since the unweighted version is already hard when a degree constraint with a gap of length more than \(1\) occurs [13], we only need to consider the WGFP where each degree constraint has gaps of length at most \(1\). Some cases of the WGFP are reducible to the weighted matching or perfect matching problem by gadget constructions, and hence are polynomial-time solvable. **Definition 1.1** (Matching Gadget).: _A gadget using a set \(\mathscr{D}\) of degree constraints consists of a graph \(G=(U\cup V,E)\) where \(\deg_{G}(u)=1\) for every \(u\in U\) and there are no edges between vertices in \(U\), and a mapping \(\pi:V\to\mathscr{D}\). A matching gadget is a gadget where \(\mathscr{D}=\{\{0,1\},\{1\}\}\)._ _A degree constraint \(D\) of arity \(n\) is matching realizable if there exists a matching gadget \((G=(U\cup V,E),\pi:V\to\{\{0,1\},\{1\}\})\) such that \(|U|=n\) and for every \(k\in[n]\), \(k\in D\) if and only if for every \(W\subseteq U\) with \(|W|=k\), there exists a matching \(F=(V_{F},E_{F})\) of \(G\) such that \(V_{F}\cap U=W\) and for every \(v\in V\) where \(\pi(v)=\{1\}\), \(v\in V_{F}\)._ The degree constraint \(D=[b]\) (of arbitrary arity), where \(b>0\), for \(b\)-matchings is realizable by a gadget using only the degree constraint \(\{0,1\}\)[14]. Thus, the weighted \(b\)-matching problem is reducible to the weighted matching problem. The weighted \(b\)-matching problem is interesting in its own right in combinatorial optimization and has been well studied with many elaborate algorithms developed [15, 16, 17, 18, 19]. Besides \(b\)-matchings, Cornuejols showed that the _parity interval_ constraint \(D=\{g,g+2,\ldots,f\}\) (of arbitrary arity), where \(f\geq g\geq 0\) and \(f\equiv g\mod 2\), for parity \((g,f)\)-factors is realizable by a gadget using only the degree constraint \(\{1\}\)[14]. Later, Szabo showed that the _interval_ constraint \(D=\{g,g+1,\ldots,f\}\) (of arbitrary arity), where \(f\geq g\geq 0\), for \((g,f)\)-factors is realizable by a gadget involving edges and factor-critical subgraphs [15], which is indeed realizable by a gadget using both \(\{0,1\}\) and \(\{1\}\). 
Thus, the WGFP where each degree constraint is an interval or a parity interval is reducible to the weighted matching problem (with some vertices required to have degree exactly \(1\)) and hence solvable in polynomial-time by Edmonds' algorithm, although Szabo gave a different algorithm for this problem [11]. By reducing the WGFP with interval and parity interval constraints to the weighted \((g,f)\)-factor problem, a faster algorithm was obtained in [13] based on Gabow's algorithm [1]. In [11], Szabo further conjectured that the WGFP is solvable in polynomial time without requiring each degree constraint being an interval or a parity interval, as long as each degree constraint has gaps of length at most \(1\). To prove the conjecture, a natural question is then the following: _Are there other WGFPs that are polynomial-time solvable by a gadget reduction to weighted matchings?_ In other words, _are there other degree constraints that are matching realizable?_ In this paper, we show that the answer is _no_. **Theorem 1.2**.: _A degree constraint with gaps of length at most \(1\) is matching realizable if and only if it is an interval or a parity interval._ This condition is also a sufficient and necessary condition for a degree constraint to be realized by a gadget involving edges and factor-critical subgraphs [11]. With the answer for the above question being negative, new algorithms need to be devised for the WGFP with degree constraints that are not intervals or parity intervals. Unlike the weighted matching problem and the weighted \(b\)-matching problem for which various types of algorithms have been developed, only one algorithm has been presented for the more general and challenging WGFP: For the cardinality version of WGFP, i.e., the WGFP where each edge is assigned weight \(1\), Dudycz and Paluch introduced a polynomial-time algorithm for this problem with degree constrains having gaps of length at most \(1\), which leads to a pseudo-polynomial-time algorithm for the WGFP with non-negative integral edge weights [13]. Later, in an updated version [13], the algorithm was improved to be weakly polynomial-time with a running time \(O(\log Wmn^{6})\), where \(W\) is the largest edge weight, \(m\) is the number of edges and \(n\) is the number of vertices. Independently of [13], in this paper, we make the first step towards a strongly polynomial-time algorithm for the WGFP. Let \(p\geq 0\) be an arbitrary integer. Consider the following two types of degree constraints \(\{p,p+1,p+3\}\) and \(\{p,p+2,p+3\}\) (of arbitrary arity). We will call them _type-1_ and _type-2_ respectively. These are the "smallest" degree constraints that are not matching realizable. **Theorem 1.3** (Main).: _There is a strongly polynomial-time algorithm for the WGFP with real-valued edge weights where each degree constraint is an interval, a parity interval, a type-1, or a type-2 (of arbitrary arities). The algorithm runs in time \(O(n^{6})\) for a given graph with \(n\) vertices._ In particular, this gives a tractability result for the WGFP with degree constraints that are provably not matching realizable, thus going beyond existing algorithms. The algorithm is a recursive algorithm that uses as a black-box the GFP with constraints having gaps of length at most \(1\) and the WGFP with interval and parity interval constraints. 
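For illustration only, the four families of degree constraints appearing in Theorem 1.3 can be recognized directly from their definitions, as in the following sketch (the function name is ours and the code is not part of the algorithm):

```python
def classify(D: set[int]) -> str:
    """Classify a degree constraint as interval, parity interval, type-1, type-2, or other."""
    elems = sorted(D)
    lo, hi = elems[0], elems[-1]
    if elems == list(range(lo, hi + 1)):
        return "interval"                         # {g, g+1, ..., f}
    if elems == list(range(lo, hi + 1, 2)):
        return "parity interval"                  # {g, g+2, ..., f}
    if elems == [lo, lo + 1, lo + 3]:
        return "type-1"                           # {p, p+1, p+3}
    if elems == [lo, lo + 2, lo + 3]:
        return "type-2"                           # {p, p+2, p+3}
    return "other"

assert classify({2, 3, 5}) == "type-1"
assert classify({0, 2, 3}) == "type-2"
assert classify({1, 3, 5}) == "parity interval"
```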
For the WGFP with interval, parity interval, type-1 and type-2 degree constraints, we present a delicate structural result, which is more refined than the structural result in [13], though the result in [13] holds for the more general WGFP with all degree constraints having gaps of length at most \(1\). Equipped with this result, we are able to bound the number of recursive calls of our algorithm by the number of vertices of the graph, instead of the edge weight. In addition, as a by-product, we give a simple proof of the result of [13] for the special case of WGFP with interval, parity interval, type-1 and type-2 degree constraints by reducing the problem to WGFP on subcubic graphs and utilizing the equivalence between \(2\)-vertex connectivity and \(2\)-edge connectivity of subcubic graphs. Let \(D\) be a degree constraint of arity at most \(3\). If \(D\neq\{0,3\}\) then \(D\) is an interval, a parity interval, a type-1, or a type-2. Combining with the above-mentioned NP-hardness of the decision case [10], we obtain a complexity dichotomy for the WGFP on subcubic graphs. **Theorem 1.4**.: _The WGFP on subcubic graphs is strongly polynomial-time solvable if the degree constraint \(\{0,3\}\) of arity three does not occur. Otherwise, it is NP-hard._ Related workThe edge constraint satisfaction problem (CSP) is a type of CSPs in which every variable appears in exactly two constraints [14, 15]. The counting version of the edge-CSP is known as the Holant problem [11]. For the edge-CSP on the Boolean domain, Feder showed that the problem is NP-complete if a constraint that is not a \(\Delta\)-matroid occurs, except for those that are tractable by Schaefer's dichotomy theorem [13]. In a subsequent line of work [12, 11, 13, 14], tractability of the Boolean edge-CSP has been established for special classes of \(\Delta\)-matroids, most recently for even \(\Delta\)-matroids [15]. A complete complexity classification for the Boolean edge-CSP is still open with the conjecture that all \(\Delta\)-matroids are tractable. The graph factor problem is a special case of the Boolean edge-CSP where every constraint is symmetric (i.e, the value of the constraint only depends on the Hamming weight of its input). For a degree constraint (or a symmetric constraint), it is a \(\Delta\)-matroid if and only if it has gaps of length at most \(1\). Thus, the above conjecture holds for the symmetric Boolean edge-CSP by Cornuejols' result on the general factor problem [16]. A complexity classification for the weighted Boolean edge-CSP is certainly a more challenging goal. Our result in Theorem 1.3 gives a tractability result for the weighted Boolean edge-CSP with certain symmetric \(\Delta\)-matroids as constraints, and our result in Theorem 1.4 establishes a complexity dichotomy for the weighted Boolean edge-CSP with symmetric constraints of arity no more than \(3\). We note that the weighted Boolean edge-CSP with even \(\Delta\)-matroids as constraints is still open (although [15] solved not only the decision case but also a certain optimization variant of the problem, which is different though from the natural weighted version considered in this paper). OrganizationIn Section 2, we present basic definitions and notation. In Section 3, we describe our algorithm and give a structural result for the WGFP which ensures the correctness and the polynomial-time running time of our algorithm. 
In Section 4, we introduce basic augmenting subgraphs as an analogue of augmenting paths for weighted matchings and give a proof of the structural result. The proof is based on a result regarding the existence of certain basic factors for subcubic graphs, which is proved in Section 5. Finally, we discuss matching realizability and its relation with \(\Delta\)-matroids in Appendix A.

## 2 Preliminaries

Let \(\mathscr{D}\) be a (possibly infinite) set of degree constraints. **Definition 2.1**.: _The weighted general factor problem parameterized by \(\mathscr{D}\), denoted by \(\operatorname{WGFP}(\mathscr{D})\), is the following computational problem. An instance is a triple \(\Omega=(G,\pi,\omega)\), where \(G=(V,E)\) is a graph, \(\pi:V\to\mathscr{D}\) assigns to every \(v\in V\) a degree constraint \(D_{v}\in\mathscr{D}\) of arity \(\deg_{G}(v)\), and \(\omega:E\to\mathbb{R}\) assigns to every \(e\in E\) a real-valued weight \(\omega(e)\in\mathbb{R}\). The task is to find, if one exists, a general factor \(F\) of \(G\) such that the total weight of edges in \(F\) is maximized._ _The general factor problem \(\operatorname{GFP}(\mathscr{D})\) is the decision version of \(\operatorname{WGFP}(\mathscr{D})\); i.e., deciding whether a general factor exists or not._ Suppose that \(\Omega=(G,\pi,\omega)\) is a WGFP instance. If \(F\) is a general factor of \(G\) under \(\pi\), then we say that \(F\) is a factor of \(\Omega\), denoted by \(F\in\Omega\). In terms of this inclusion relation, \(\Omega\) can be viewed as a set of subgraphs of \(G\). We extend the edge weight function \(\omega\) to subgraphs of \(G\). For a subgraph \(H\) of \(G\), its weight \(\omega(H)\) is \(\sum_{e\in E(H)}\omega(e)\) (\(\omega(H)=0\) if \(H\) is the empty graph). If \(H\) contains an isolated vertex \(v\), then \(\omega(H)=\omega(H^{\prime})\), where \(H^{\prime}\) is the graph obtained from \(H\) by removing \(v\). Moreover, \(H\in\Omega\) if and only if \(H^{\prime}\in\Omega\). In the following, unless otherwise specified, we always assume that a factor does not contain any isolated vertices. The optimal value of \(\Omega\), denoted by \(\operatorname{Opt}(\Omega)\), is \(\max_{F\in\Omega}\omega(F)\). We define \(\operatorname{Opt}(\Omega)=-\infty\) if \(\Omega\) has no factor. A factor \(F\) of \(\Omega\) is _optimal_ in \(\Omega\) if \(\omega(F)=\operatorname{Opt}(\Omega)\). For a WGFP instance \(\Omega^{\prime}=(G^{\prime},\pi^{\prime},\omega^{\prime})\), where \(G^{\prime}\subseteq G\) and \(\omega^{\prime}\) is the restriction of \(\omega\) to the edges of \(G^{\prime}\), we say \(\Omega^{\prime}\) is a _sub-instance_ of \(\Omega\), denoted by \(\Omega^{\prime}\subseteq\Omega\), if \(F\in\Omega\) for every \(F\in\Omega^{\prime}\). In particular, \(\Omega^{\prime}\) is a subset of \(\Omega\) by viewing them as two sets of subgraphs of \(G\). If \(\Omega^{\prime}\subseteq\Omega\), then \(\operatorname{Opt}(\Omega^{\prime})\leq\operatorname{Opt}(\Omega)\). For two WGFP instances \(\Omega_{1}=(G,\pi_{1},\omega)\) and \(\Omega_{2}=(G,\pi_{2},\omega)\), we use \(\Omega_{1}\cup\Omega_{2}\) to denote the union of factors of these two instances, i.e., \(\Omega_{1}\cup\Omega_{2}=\{F\subseteq G\mid F\in\Omega_{1}\text{ or }F\in\Omega_{2}\}\), and \(\Omega_{1}\cap\Omega_{2}\) to denote the intersection, i.e., \(\Omega_{1}\cap\Omega_{2}=\{F\subseteq G\mid F\in\Omega_{1}\text{ and }F\in\Omega_{2}\}\).
Note that \(\Omega_{1}\cup\Omega_{2}\) and \(\Omega_{1}\cap\Omega_{2}\) are just sets of subgraphs of \(G\) and may not define WGFP instances on \(G\). We use \(\mathscr{G}_{1}\) and \(\mathscr{G}_{2}\) to denote the set of degree constraints that are intervals and parity intervals, respectively, and \(\mathscr{T}_{1}\) and \(\mathscr{T}_{2}\) to denote the set of degree constraints that are type-1 and type-2, respectively. Let \(\mathscr{G}=\mathscr{G}_{1}\cup\mathscr{G}_{2}\) and \(\mathscr{T}=\mathscr{T}_{1}\cup\mathscr{T}_{2}\). In this paper, we study the problem WGFP\((\mathscr{G}\cup\mathscr{T})\). Let \(H_{1}=(V_{1},E_{1})\) and \(H_{2}=(V_{2},E_{2})\) be two subgraphs of \(G\). The symmetric difference graph \(H_{1}\Delta H_{2}\) is the induced subgraph of \(G\) induced by the edge set \(E_{1}\Delta E_{2}\). Note that there are no isolated vertices in a symmetric difference graph. When \(E_{1}\cap E_{2}=\emptyset\), we may write \(H_{1}\Delta H_{2}\) as \(H_{1}\cup H_{2}\). When \(E_{2}\subseteq E_{1}\), we may write \(H_{1}\Delta H_{2}\) as \(H_{1}\backslash H_{2}\). A _subcubic_ graph is defined to be a graph where every vertex has degree \(1,2\) or \(3\). Unless stated otherwise, we use \(V_{G}\) and \(E_{G}\) to denote the vertex set and the edge set of a graph \(G\), respectively. **Definition 2.2** (2-vertex-connectivity).: _A connected graph \(G\) is 2-vertex-connected (or \(2\)-connected) if it has more than \(2\) vertices and remains connected by removing any vertex._ Menger's Theorem gives an equivalent definition of 2-connectivity, cf. [10] for a proof. **Theorem 2.3** (Menger's Theorem).: _A connected graph \(G\) is \(2\)-connected if and only if for any two vertices of \(G\), there exists two vertex disjoint paths connecting them (i.e., there is a cycle containing these two vertices)._ **Definition 2.4** (Bridge and 2-edge-connectivity).: _A bridge of a connected graph is an edge whose deletion makes the graph disconnected. A connected graph is 2-edge-connected if it has no bridge._ The following theorem is the edge version of Menger's Theorem. **Theorem 2.5**.: _A connected graph \(G\) is \(2\)-edge-connected if and only if for any two vertices of \(G\), there exists two edge disjoint paths connecting them._ If two paths connecting a pair of vertices are vertex-disjoint, then they are also edge-disjoint. Thus, 2-vertex-connectivity implies 2-edge-connectivity. For subcubic graphs, one can check that two edge-disjoint paths are also vertex-disjoint. Thus, for subcubic graphs, 2-vertex-connectivity is equivalent to 2-edge-connectivity. In particular, we have the following result. **Lemma 2.6**.: _If a connected subcubic graph is not 2-connected, then it contains a bridge._ The following fact regarding 2-connected graphs will also be used. **Lemma 2.7**.: _Let \(G=(V_{G},E_{G})\) be a 2-connected graph, \(H=(V_{H},E_{H})\subseteq G\), and \(u\in V_{H}\). If \(\deg_{H}(u)=2<\deg_{G}(u)=3\), then there exists a path \(p_{uw}=(V_{p_{uw}},E_{p_{uw}})\subseteq G\) with endpoints \(u\) and \(w\) for some \(w\in V_{H}\) such that \(E_{p_{uw}}\cap E_{H}=\emptyset\)._ Proof.: Since \(\deg_{H}(u)=2<\deg_{G}(u)=3\), there is an edge \(e_{vu}=(v,u)\in E_{G}\) incident to \(u\) such that \(e_{vu}\notin E_{H}\). If \(v\in V_{H}\), then the edge \(e_{vu}\) is the desired path. Thus, we may assume that \(v\notin V_{H}\). Since \(G\) is 2-connected, there is a path \(p_{vu}\) with endpoints \(v\) and \(u\) such that \(e_{vu}\notin E_{p_{vu}}\). 
Since \(u\in V_{H}\), \(V_{p_{vu}}\cap V_{H}\neq\emptyset\). Let \(w\) be the first vertex in the path \(p_{vu}\) (within the order of traversing the path from \(v\) to \(u\)) belonging to \(V_{H}\). Then, \(w\neq u\) since \(e_{vu}\notin E_{p_{vu}}\) and \(\deg_{G}(u)=3\). Also, \(w\neq v\) since \(v\notin V_{H}\). Let \(p_{vw}\subsetneq p_{vu}\) be the segment with endpoints \(v\) and \(w\). Then, \(E_{p_{vw}}\cap E_{H}=\emptyset\). Let \(p_{uw}\) be the path consisting of \(e_{vu}\) and \(p_{vw}\). It has endpoints \(u,w\in V_{H}\), and \(E_{p_{uw}}\cap E_{H}=\emptyset\). ## 3 Algorithm We give a recursive algorithm for the problem \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\), using the problems \(\operatorname{WGFP}(\mathscr{G})\) and the decision problem \(\operatorname{GFP}(\mathscr{G}\cup\mathscr{T})\) as oracles. Given an instance \(\Omega=(G,\pi,\omega)\) of \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\), we define the following sub-instances of \(\Omega=(G,\pi,\omega)\) that will be used in the recursion. Recall that \(V_{G}\) denotes the vertex set of the underlying graph \(G\). Let \(T_{\Omega}\) denote the set \(\{v\in V_{G}\mid\pi(v)\in\mathscr{T}\}\). (We may omit the subscript \(\Omega\) of \(T_{\Omega}\) when it is clear from the context.) For every vertex \(v\in T_{\Omega}\), we split the instance \(\Omega\) in two by splitting the degree constraint \(\pi(v)\) in two parity intervals. More precisely, we define \[D_{v}^{0}=\{p_{v}+1,p_{v}+3\}\ \ \text{and}\ \ D_{v}^{1}=\{p_{v}\} \text{if}\ \ \ \ \pi(v)=\{p_{v},p_{v}+1,p_{v}+3\}\in\mathscr{T}_{1};\] \[D_{v}^{0}=\{p_{v},p_{v}+2\}\ \ \text{and}\ \ D_{v}^{1}=\{p_{v}+3\} \text{if}\ \ \ \ \pi(v)=\{p_{v},p_{v}+2,p_{v}+3\}\in\mathscr{T}_{2}.\] We have \(D_{v}^{0},D_{v}^{1}\in\mathscr{G}_{2}\). For \(i\in\{0,1\}\) and \(v\in T_{\Omega}\), we define \(\Omega_{v}^{i}=(G,\pi_{v}^{i},\omega)\) to be the sub-instance of \(\Omega\) where \(\pi_{v}^{i}(x)=\pi(x)\) for every \(x\in V_{G}\backslash\{v\}\) and \(\pi_{v}^{i}(v)=D_{v}^{i}\). Then, for every \(v\in T_{\Omega}\), we have \(\Omega_{v}^{0}\cap\Omega_{v}^{1}=\emptyset\) and \(\Omega_{v}^{0}\cup\Omega_{v}^{1}=\Omega\). Moreover, \(T_{\Omega_{v}^{0}}=T_{\Omega_{v}^{1}}=T_{\Omega}\backslash\{v\}\). Let \(F\) be a factor of \(\Omega\). Similarly to above, one can partition \(\Omega\) into \(2^{|T_{\Omega}|}\) many sub-instances according to \(F\) such that each one is an instance of \(\operatorname{WGFP}(\mathscr{G})\) - for each \(v\in T_{\Omega}\), we choose one of the two splits of \(\pi(v)\) as above. (We note that the algorithm will not consider all exponentially many sub-instances.) In detail, for every vertex \(v\in T_{\Omega}\), we define \(D_{v}^{F}=D_{v}^{i}\) where \(\deg_{F}(v)\in D_{v}^{i}\) as follows: \[D_{v}^{F}=\{p_{v}\} \text{if}\ \ \ \pi(v)=\{p_{v},p_{v}+1,p_{v}+3\}\in\mathscr{T}_{1}\ \ \text{and}\ \ \deg_{F}(v)=p_{v},\] \[D_{v}^{F}=\{p_{v}+1,p_{v}+3\} \text{if}\ \ \ \pi(v)=\{p_{v},p_{v}+1,p_{v}+3\}\in\mathscr{T}_{1}\ \ \text{and}\ \ \deg_{F}(v)\neq p_{v};\] \[D_{v}^{F}=\{p_{v}+3\} \text{if}\ \ \ \ \pi(v)=\{p_{v},p_{v}+2,p_{v}+3\}\in\mathscr{T}_{2}\ \ \text{and}\ \ \deg_{F}(v)=p_{v}+3,\] \[D_{v}^{F}=\{p_{v},p_{v}+2\} \text{if}\ \ \ \pi(v)=\{p_{v},p_{v}+2,p_{v}+3\}\in\mathscr{T}_{2}\ \ \text{and}\ \ \deg_{F}(v)\neq p_{v}+3.\] By definition, \(\deg_{F}(v)\in D_{v}^{F}\subseteq\pi(v)\) and \(D_{v}^{F}\in\mathscr{G}_{2}\). In fact, \(D_{v}^{F}\) is the maximal set such that \(\deg_{F}(v)\in D_{v}^{F}\subseteq\pi(v)\) and \(D_{v}^{F}\in\mathscr{G}_{2}\). 
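The two splits used by the recursion follow directly from the case distinction above; the following sketch (with hypothetical function names, not taken from the paper) restates it for type-1 and type-2 constraints:

```python
def split(pi_v: set[int]) -> tuple[set[int], set[int]]:
    """Split a type-1 or type-2 constraint pi(v) into (D_v^0, D_v^1), both parity intervals."""
    p = min(pi_v)
    if pi_v == {p, p + 1, p + 3}:        # type-1
        return {p + 1, p + 3}, {p}
    if pi_v == {p, p + 2, p + 3}:        # type-2
        return {p, p + 2}, {p + 3}
    raise ValueError("pi(v) must be type-1 or type-2")

def split_by_factor(pi_v: set[int], deg_F_v: int) -> set[int]:
    """D_v^F: the maximal parity interval with deg_F(v) in D_v^F, contained in pi(v)."""
    D0, D1 = split(pi_v)
    return D0 if deg_F_v in D0 else D1
```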
One can also check that for every \(v\in T\), \(\pi(v)\backslash D_{v}^{F}\in\mathscr{G}_{2}\), and moreover for every \(p\in D_{v}^{F}\) and \(q\in\pi(v)\backslash D_{v}^{F}\), \(p\not\equiv q\mod 2\). For every \(W\subseteq T_{\Omega}\), we define \(\Omega_{W}^{F}=(G,\pi_{W}^{F},\omega)\) to be the sub-instance of \(\Omega\) where

\[\pi_{W}^{F}(v)=\pi(v)\backslash D_{v}^{F}\quad\text{for }v\in W,\tag{1}\]
\[\pi_{W}^{F}(v)=D_{v}^{F}\quad\text{for }v\in T_{\Omega}\backslash W,\]
\[\pi_{W}^{F}(v)=\pi(v)\quad\text{for }v\in V_{G}\backslash T_{\Omega}.\]

By definition, for every \(W\), \(\Omega_{W}^{F}\) is an instance of \(\mathrm{WGFP}(\mathscr{G})\). Moreover, we have \(\cup_{W\subseteq T}\Omega_{W}^{F}=\Omega\) and \(\Omega_{W_{1}}^{F}\cap\Omega_{W_{2}}^{F}=\emptyset\) for every \(W_{1}\neq W_{2}\). Thus, \(\{\Omega_{W}^{F}\}_{W\subseteq T_{\Omega}}\) is a partition of \(\Omega\) (viewed as a set of subgraphs of \(G\)). When \(W=\emptyset\), we write \(\Omega_{W}^{F}\) as \(\Omega^{F}\), and when \(W=\{s\}\) or \(W=\{s,t\}\), we write \(\Omega_{W}^{F}\) as \(\Omega_{s}^{F}\) or \(\Omega_{s,t}^{F}\) respectively for simplicity. Our algorithm is given in Algorithm 1.

```
 1  Function Decision:
      Input : an instance Ω = (G, π, ω) of WGFP(𝒢 ∪ 𝒯).
      Output: a factor of Ω, or "No" if Ω has no factor.
 2  Function Optimization:
      Input : an instance Ω = (G, π, ω) of WGFP(𝒢).
      Output: an optimal factor of Ω, or "No" if Ω has no factor.
 3  Function Main:
      Input : an instance Ω = (G, π, ω) of WGFP(𝒢 ∪ 𝒯).
      Output: an optimal factor F ∈ Ω, or "No" if Ω has no factor.
 4    T ← {v ∈ V_G | π(v) ∈ 𝒯};
 5    if T = ∅ then
 6        return Optimization(Ω);
 7    else
 8        arbitrarily pick u ∈ T;
 9        if Decision(Ω_u^0) returns "No" then
10            return Main(Ω_u^1);
11        else
12            F^opt ← Main(Ω_u^0);
13            foreach v ∈ T do        // elements of T can be traversed in an arbitrary order
14                W ← {u} ∪ {v};
15                if Optimization(Ω_W^{F^opt}) ≠ "No" then F' ← Optimization(Ω_W^{F^opt});
16                if ω(F') > ω(F^opt) then F^opt ← F';
17            end foreach
18            return F^opt;
19        end if
20    end if
```
**Algorithm 1** Finding an optimal factor for an instance of \(\mathrm{WGFP}(\mathscr{G}\cup\mathscr{T})\)

The following structural result for the WGFP can be used to find an optimal factor recursively. It says that given an optimal factor \(F\) of \(\Omega_{u}^{0}\) for some \(u\in T_{\Omega}\), either \(F\) is already optimal in \(\Omega\), or we can find an optimal factor of \(\Omega\) by searching at most \(n\) sub-instances of \(\Omega\) which are in \(\mathrm{WGFP}(\mathscr{G})\).

**Theorem 3.1**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is an instance of \(\mathrm{WGFP}(\mathscr{G}\cup\mathscr{T})\), \(F\) is a factor of \(\Omega\) and \(F\) is optimal in \(\Omega_{u}^{0}\) for some \(u\in T_{\Omega}\).
Then a factor \(F^{\prime}\) is optimal in \(\Omega\) if and only if \(\omega(F^{\prime})\geq\omega(F)\) and \(\omega(F^{\prime})\geq\mathrm{Opt}(\Omega_{W}^{F})\) for every \(W\) where \(u\in W\subseteq T_{\Omega}\) and \(|W|=1\) or \(|W|=2\)._ _In other words, if \(F\) is not optimal in \(\Omega\), then there is an optimal factor of \(\Omega\) which belongs to \(\Omega_{W}^{F}\) for some \(W\) where \(u\in W\subseteq T_{\Omega}\) and \(|W|=1\) or \(|W|=2\)._ **Remark 3.2**.: _Theorem 3.1 does not hold if the condition "\(F\) is optimal in \(\Omega_{u}^{\textbf{0}}\)" is changed to "\(F\) is optimal in \(\Omega_{u}^{\textbf{1}}\)" for some \(u\in T_{\Omega}\). Consider the following example as shown in Figure 1._ _In this instance, \(\pi(u)=\pi(v)=\pi(t)=\{0,1,3\}\) (denoted by hollow nodes) and \(\pi(s)=\{0,2,3\}\) (denoted by the solid node), and \(\omega(C_{1})=\omega(p_{vs})=\omega(p_{su})=\omega(p^{\prime}_{su})=\omega(p_{ut })=\omega(C_{2})=1\). Inside the cycles \(C_{1}\) and \(C_{2}\), and the paths \(p_{vs}\), \(p_{su}\), \(p_{ut}\), and \(p^{\prime}_{su}\), there are other vertices of degree \(2\) with the degree constraint \(\{0,2\}\) so that the graph \(G\) is simple. We omit these vertices of degree \(2\) in Figure 1._ _In this case, \(T_{\Omega}=\{u,v,s,t\}\). Consider the sub-instance \(\Omega^{1}_{u}=(G,\pi^{1}_{u},\omega)\). We have \(\pi^{1}_{u}(u)=D^{1}_{u}=\{0\}\) since \(\pi(u)=\{0,1,3\}\). One can check that the only factor \(F\) of \(\Omega^{1}_{u}\) is the empty graph (assuming there are no isolated vertices in factors), and \(F\) is not optimal in \(\Omega\). In fact, the only optimal factor of \(\Omega\) is the graph \(G\) and \(G\in\Omega^{F}_{T_{\Omega}}\) where \(|T_{\Omega}|=4\). Thus, Theorem 3.1 does not hold in the case that \(F\) is optimal in \(\Omega^{1}_{u}\)._ Using Theorem 3.1, we now prove that Algorithm 1 is correct. **Lemma 3.3**.: _Given an instance \(\Omega=(G,\pi,\omega)\) of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\), Algorithm 1 returns either an optimal factor of \(\Omega\), or "No" if \(\Omega\) has no factor._ Proof.: Recall that for an instance \(\Omega=(G,\pi,\omega)\), we define \(T_{\Omega}=\{v\in V_{G}\mid\pi(v)\in\mathscr{T}\}\) where \(V_{G}\) is the vertex set of \(G\). We prove the correctness by induction on the \(|T_{\Omega}|\). If \(|T_{\Omega}|=0\), \(\Omega\) is an instance of \(\operatorname{WGFP}(\mathscr{G})\). Algorithm 1 simply returns Optimization (\(\Omega\)). By the definition of the function Optimization, the output is correct. Suppose that Algorithm 1 returns correct results for all instances \(\Omega^{\prime}\) of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) where \(|T_{\Omega^{\prime}}|=k\). We consider an instance \(\Omega\) of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) where \(|T_{\Omega}|=k+1\). Algorithm 1 first calls the function Decision (\(\Omega^{0}_{u}\)) for some arbitrary \(u\in T\). We first consider the case that Decision (\(\Omega^{0}_{u}\)) returns "No". By the definition, \(\Omega^{0}_{u}\) has no factor. Moreover, since \(\Omega=\Omega^{0}_{u}\cup\Omega^{1}_{u}\), we have \(F\in\Omega\) if and only if \(F\in\Omega^{1}_{u}\). Then, a factor \(F\in\Omega^{1}_{u}\) is optimal in \(\Omega\) if and only if it is optimal in \(\Omega^{1}_{u}\). Note that \(\Omega^{1}_{u}\) is an instance of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) where \(|T_{\Omega^{1}_{u}}|=k\). 
By the induction hypothesis, Algorithm 1 returns a correct result Main (\(\Omega^{1}_{u}\)) for the instance \(\Omega^{1}_{u}\), which is also a correct result for the instance \(\Omega\). Now, we consider the case that Decision (\(\Omega^{0}_{u}\)) returns a factor of \(\Omega^{0}_{u}\). Then, Main (\(\Omega^{0}_{u}\)) returns an optimal factor \(F\) of \(\Omega^{0}_{u}\). After the foreach loop (lines 13 to 17) in Algorithm 1, we get a factor \(F^{\text{opt}}\) of \(\Omega\) such that \(\omega(F^{\text{opt}})\geq\operatorname{Opt}(\Omega^{F}_{W})\) for every \(u\in W\subseteq T_{\Omega}\) where \(|W|=1\) (when \(u=v\)) or \(|W|=2\) (when \(u\neq v\)), and \(\omega(F^{\text{opt}})\geq\omega(F)\). By Theorem 3.1, \(F^{\text{opt}}\) is an optimal factor of \(\Omega\). Thus, Algorithm 1 returns a correct result.

Now, we consider the time complexity of Algorithm 1. The size of an instance is defined to be the number of vertices of the underlying graph of the instance.

**Lemma 3.4**.: _Run Algorithm 1 on an instance \(\Omega=(G,\pi,\omega)\) of size \(n\). Then,_
* _the algorithm will stop the recursion after at most_ \(n\) _recursive steps;_
* _the algorithm will call_ Decision _at most_ \(n\) _many times, call_ Optimization _at most_ \(\frac{n(n+1)}{2}+1\) _many times, and perform at most_ \(\frac{n(n+1)}{2}\) _many comparisons;_
* _the algorithm runs in time_ \(O(n^{6})\)_._

Figure 1: An example that violates Theorem 3.1 when \(F\) is optimal in \(\Omega^{1}_{u}\) instead of \(\Omega^{0}_{u}\)

Proof.: Let \(\Omega^{k}=(G,\pi^{k},\omega)\) be the instance after \(k\) many recursive steps. Here \(\Omega^{0}=\Omega\). Recall that \(T_{\Omega^{k}}=\{v\in V_{G}\mid\pi^{k}(v)\in\mathscr{T}\}\). For an instance \(\Omega^{k}\) with \(|T_{\Omega^{k}}|>0\), the recursive step will then go to the instance \((\Omega^{k})^{0}_{u}\) or \((\Omega^{k})^{1}_{u}\) for some \(u\in T_{\Omega^{k}}\). Thus, \(\Omega^{k+1}=(\Omega^{k})^{0}_{u}\) or \((\Omega^{k})^{1}_{u}\). In both cases, \(T_{\Omega^{k+1}}=T_{\Omega^{k}}\backslash\{u\}\) and hence \(|T_{\Omega^{k+1}}|=|T_{\Omega^{k}}|-1\). By design, the algorithm will stop the recursion and return Optimization (\(\Omega^{m}\)) when it reaches an instance \(\Omega^{m}\) with \(|T_{\Omega^{m}}|=0\). Thus, \#recursive steps \(=m=|T_{\Omega}|-|T_{\Omega^{m}}|=|T_{\Omega}|\leq|V_{G}|=n\).

To prove the second item, we consider the number of operations inside the recursive step for the instance \(\Omega^{k}=(G,\pi^{k},\omega)\). Note that \(k\leq n\) and \(|T_{\Omega^{k}}|=|T_{\Omega}|-k\leq n-k\). If \(|T_{\Omega^{k}}|=0\), then the algorithm will simply call Optimization once. If \(|T_{\Omega^{k}}|>0\), then inside the recursive step, the algorithm will call Decision once, and call Optimization once or \(|T_{\Omega^{k}}|\) many times depending on the answer of Decision. Moreover, in the latter case, the algorithm will also perform \(|T_{\Omega^{k}}|\) many comparisons. Thus,

\[\text{\#calls of Decision} =\sum_{|T_{\Omega^{k}}|>0}1=\sum_{i=1}^{|T_{\Omega}|}1=|T_{\Omega}|\leq n.\]
\[\text{\#calls of Optimization} \leq 1+\sum_{|T_{\Omega^{k}}|>0}|T_{\Omega^{k}}|=1+\sum_{i=1}^{|T_{\Omega}|}i\leq\frac{n(n+1)}{2}+1.\]
\[\text{\#comparisons} \leq\sum_{|T_{\Omega^{k}}|>0}|T_{\Omega^{k}}|\leq\frac{n(n+1)}{2}.\]

Let \(t_{\texttt{Main}}(n)\) denote the running time of Algorithm 1 on an instance of size \(n\), and \(t_{\texttt{Dec}}(n)\) and \(t_{\texttt{Opt}}(n)\) denote the running time of algorithms for the functions Decision and Optimization, respectively.
Then, \(t_{\texttt{Dec}}(n)=O(n^{4})\) by the algorithm in [10] and \(t_{\texttt{Opt}}(n)=O(n^{4})\) by the algorithm in [10]. Thus, \(t_{\texttt{Main}}(n)\leq nt_{\texttt{Dec}}(n)+\frac{n(n+1)+2}{2}t_{\texttt{Opt }}(n)+\frac{n(n+1)}{2}=O(n^{6})\). ## 4 Proof of Theorem 3.1 In this section, we give a proof of Theorem 3.1. The general strategy is that starting with a non-optimal factor \(F\) of an instance \(\Omega=(G,\omega,\pi)\), we want to find a subgraph \(H\) of \(G\) such that by taking the symmetric difference \(F\Delta H\), we get another factor of \(\Omega\) with larger weight. The existence of such subgraphs is trivial (Lemma 4.2). However, the challenge is how to find one efficiently. As an analogy of augmenting paths in the weighted matching problem, we introduce basic augmenting subgraphs (Definition 4.3) for the weighted graph factor problem, which can be found efficiently. We will show that given a non-optimal factor \(F\), a basic augmenting subgraph always exists (Lemma 4.4, property 1). Then, we can efficiently improve the factor \(F\) to another factor with larger weight. As shown in [10], this already gave a weakly-polynomial time algorithm. However, the existence of basic augmenting subgraphs is not enough to get a strongly polynomial-time algorithm, which requires the number of improvement steps being independent of edge weights. Thus, in order to prove Theorem 3.1, which leads to a strongly polynomial-time algorithm, we further establish that there exists a basic augmenting subgraph that satisfies certain stronger properties under suitable assumptions (Lemma 4.4, property 2). This result will imply Theorem 3.1. **Definition 4.1** (\(F\)-augmenting subgraphs).: _Suppose that \(F\) is a factor of an instance \(\Omega=(G,\pi,\omega)\). A subgraph \(H\) of \(G\) is \(F\)-augmenting if \(F\Delta H\in\Omega\) and \(\omega(F\Delta H)-\omega(F)>0\)._ **Lemma 4.2**.: _Suppose that \(F\) is a factor of an instance \(\Omega\). If \(F\) is not optimal in \(\Omega\), then there exists an \(F\)-augmenting subgraph._ Proof.: Since \(F\) is not optimal, there is some \(F^{\prime}\in\Omega\) such that \(\omega(F^{\prime})>\omega(F)\). Let \(H=F\Delta F^{\prime}\). We have \(F\Delta H=F^{\prime}\in\Omega\) and \(\omega(H)=\omega(F^{\prime})-\omega(F)>0\). Thus, \(H\) is \(F\)-augmenting. Recall that for an instance \(\Omega=(G,\pi,\omega)\) of \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\), \(T_{\Omega}\) is the set \(\{v\in V_{G}\mid\pi(v)\in\mathscr{T}\}\). For two factors \(F,F^{*}\in\Omega\), we define \(T_{\Omega}^{F\Delta F^{*}}=\{v\in T_{\Omega}\mid\deg_{F\Delta F^{*}}(v)\equiv 1 \mod 2\}=\{v\in T_{\Omega}\mid\deg_{F}(v)\not\equiv\deg_{F^{*}}(v)\mod 2\}\). **Definition 4.3** (Basic augmenting subgraphs).: _Suppose that \(\Omega=(G,\pi,\omega)\) is an instance of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\), and \(F\) and \(F^{*}\) are factors of \(\Omega\) with \(\omega(F)<\omega(F^{*})\). An \(F\)-augmenting subgraph \(H=(V_{H},E_{H})\) is \((F,F^{*})\)-basic if \(H\subseteq F\Delta F^{*}\), \(|V_{H}^{\operatorname{odd}}|\leq 2\), and \(V_{H}^{\operatorname{odd}}\cap T_{\Omega}\subseteq T_{\Omega}^{F\Delta F^{*}}\) where \(V_{H}^{\operatorname{odd}}=\{v\in V_{H}\mid\deg_{H}(v)\equiv 1\mod 2\}\)._ **Lemma 4.4**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is an instance of \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\), and \(F\) and \(F^{*}\) are two factors of \(\Omega\)._ 1. 
_If_ \(\omega(F^{*})>\omega(F)\)_, then there exists an_ \((F,F^{*})\)_-basic subgraph._ 2. _If_ \(\omega(F^{*})>\operatorname{Opt}(\Omega_{W}^{F})\) _for every_ \(W\subseteq T_{\Omega}^{F\Delta F^{*}}\) _with_ \(|W|\leq 2\)_, and_ \(T_{\Omega}^{F\Delta F^{*}}\) _contains a vertex_ \(u\) _such that_ \(F\in\Omega_{u}^{0}\) _(i.e.,_ \(\deg_{F}(u)\in D_{u}^{0}\)_), then there exists an_ \((F,F^{*})\)_-basic subgraph_ \(H\) _where_ \(\deg_{H}(u)\equiv 0\mod 2\)_._ **Remark 4.5**.: _The first property of Lemma 4.4 implies the following: a factor \(F\in\Omega\) is optimal if and only if \(\omega(F)\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}\) with \(|W|\leq 2\). This is a special case of the main result (Theorem 2) of [10] where the authors consider the \(\operatorname{WGFP}\) for all constraints with gaps of length at most 1. The second property of Lemma 4.4 is more refined than the first property and it implies our main result (Theorem 3.1). In this paper, as a by-product of the proof of property 2, we give a simple proof of Theorem 2 of [10] for the special case \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\) based on certain properties of cubic graphs._ Using the second property of Lemma 4.4, we can prove Theorem 3.1. **Theorem** (Theorem 3.1).: _Suppose that \(\Omega=(G,\pi,\omega)\) is an instance of \(\operatorname{WGFP}(\mathscr{G}\cup\mathscr{T})\), \(F\) is a factor of \(\Omega\) and \(F\) is optimal in \(\Omega_{u}^{0}\) for some \(u\in T_{\Omega}\). Then a factor \(F^{\prime}\) is optimal in \(\Omega\) if and only if \(\omega(F^{\prime})\geq\omega(F)\) and \(\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\) where \(u\in W\subseteq T_{\Omega}\) and \(|W|=1\) or \(2\)._ Proof.: If \(F^{\prime}\) is optimal in \(\Omega\), then clearly \(\omega(F^{\prime})\geq\omega(F)\) and \(\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\) where \(u\in W\subseteq T_{\Omega}\) and \(|W|=1\) or \(2\). Thus, to prove the theorem, it suffices to prove the other direction. Since \(\omega(F^{\prime})\geq\omega(F)\) and \(F\) is optimal in \(\Omega_{u}^{0}\), we have \(\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}\) where \(u\notin W\) and \(|W|\leq 2\). Also, since \(\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\) where \(u\in W\subseteq T_{\Omega}\) and \(|W|=1\) or \(2\), we have \(\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}\) where \(|W|\leq 2\). For a contradiction, suppose that \(F^{\prime}\) is not optimal in \(\Omega\). Let \(F^{*}\) be an optimal factor of \(\Omega\). Then, \(\omega(F^{*})>\omega(F^{\prime})\). Thus, \(\omega(F^{*})>\omega(F^{\prime})\geq\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}\) where \(|W|\leq 2\). Also, \(\omega(F^{*})\notin\Omega_{u}^{0}\) since \(\omega(F^{*})>\omega(F)\) and \(F\) is optimal in \(\Omega_{u}^{0}\). Thus, \(\deg_{F^{*}}(u)\not\equiv\deg_{F}(u)\mod 2\). Then, \(T_{\Omega}^{F\Delta F^{*}}\) contains the vertex \(u\) such that \(F\in\Omega_{u}^{0}\). By Lemma 4.4, there exists an \((F,F^{*})\)-basic subgraph \(H\) where \(\deg_{H}(u)\equiv 0\mod 2\). Let \(F^{\prime\prime}=F\Delta H\). Then \(F^{\prime\prime}\in\Omega\) and \(\omega(F^{\prime\prime})>\omega(F)\). Also, \(F^{\prime\prime}\in\Omega_{u}^{0}\) since \(\deg_{F^{\prime\prime}}(u)\equiv\deg_{F}(u)\mod 2\). 
This is a contradiction with \(F\) being optimal in \(\Omega_{u}^{0}\). Now it suffices to prove Lemma 4.4. By a type of normalization maneuver, we can transfer any instance of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) to an instance of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) defined on subcubic graphs, called a key instance (Definition 4.6). Recall that a subcubic graph is a graph where every vertex has degree 1, 2 or 3. For key instances, there are five possible forms of basic augmenting subgraphs, called basic factors (Definition 4.7). Then, the crux of the proof of Lemma 4.4 is to establish the existence of certain basic factors of key instances (Theorem 4.8). **Definition 4.6** (Key instance).: _A key instance\(\Omega=(G,\pi,\omega)\) is an instance of \(\operatorname{WGFP}(\mathscr{G},\mathscr{T})\) where \(G\) is a subcubic graph, and for every \(v\in V_{G}\), \(\pi(v)=\{0,1\}\) if \(\deg_{G}(v)=1\), \(\pi(v)=\{0,2\}\) if \(\deg_{G}(v)=2\), and \(\pi(v)=\{0,1,3\}\) (i.e., type-1) or \(\{0,2,3\}\) (i.e., type-2) if \(\deg_{G}(v)=3\). We say a vertex \(v\in V_{G}\) of degree \(3\) is of type-1 or type-2 if \(\pi(v)\) is type-1 or type-2 respectively. We say a vertex \(v\in V_{G}\) of any degree is 1-feasible or 2-feasible if \(1\in\pi(v)\) or \(2\in\pi(v)\) respectively._ **Definition 4.7** (Basic factor).: _Let \(\Omega\) be a key instance. A factor of \(\Omega\) is a basic factor if it is in one of the following five forms._ 1. _A path, i.e., a tree with two vertices of degree 1 (called endpoints) and all other vertices, if there exists any, of degree 2._ 2. _A cycle, i.e., a graph consisting of two vertex disjoint paths with the same two endpoints._ 3. _A tadpole graph, i.e., a graph consisting of a cycle and a path such that they intersect at one endpoint of the path._ 4. _A dumbbell graph, i.e., a graph consisting of two vertex disjoint cycles and a path such that the path intersects with each cycle at one of its endpoints._ 5. _A theta graph (i.e., a graph consisting of three vertex disjoint paths with the same two endpoints) where one vertex of degree 3 is of type-1, and the other vertex of degree 3 is of type-2._ **Theorem 4.8**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance._ 1. _If_ \(\omega(G)>0\)_, then there is a basic factor_ \(F\) _of_ \(\Omega\) _such that_ \(\omega(F)>0\)_._ 2. _If_ \(\omega(G)>0\)_,_ \(\omega(G)>\omega(F)\) _for every basic factor_ \(F\) _of_ \(\Omega\)_, and_ \(G\) _contains a vertex_ \(u\) _with_ \(\deg_{G}(u)=1\) _or_ \(\deg_{G}(u)=3\) _and_ \(\pi(u)=\{0,2,3\}\)_, then there is a basic factor_ \(F^{*}\) _of_ \(\Omega\) _such that_ \(\omega(F^{*})>0\) _and_ \(\deg_{F^{*}}(u)\equiv 0\mod 2\)_. (Recall that_ \(\deg_{F^{*}}(u)=0\) _if_ \(u\notin V_{F^{*}}\)_)._ **Remark 4.9**.: _For the second property of Theorem 4.8, the requirement of \(\pi(u)=\{0,2,3\}\) when \(\deg_{G}(u)=3\) is crucial. Consider the instance \(\Omega=(G,\pi,\omega)\) as shown in Figure 1. It is easy to that \(\Omega\) is a key instance. In this case, it can be checked that \(\omega(G)=6>0\) and \(\omega(G)>\omega(F)\) for every basic factor \(F\) of \(\Omega\). However, there is no basic factor \(F^{*}\) of \(\Omega\) such that \(\omega(F^{*})>0\) and \(\deg_{F^{*}}(u)\equiv 0\bmod 2\). Thus, the second property does not hold for a vertex \(u\) where \(\deg_{G}(u)=3\) and \(\pi(u)=\{0,1,3\}\)._ We will now describe the normalization maneuver, and use Theorem 4.8 to prove Lemma 4.4. 
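Before turning to the proof, the following rough Python sketch (our own illustration, based on Definition 4.7 and written with the networkx library; the function name and the bridge-based test are our own choices, not the paper's) shows how the five basic-factor shapes can be recognized from degree counts, with a bridge check separating dumbbell graphs from theta graphs. Whether such a subgraph is actually a factor — in particular, whether the two degree-3 vertices of a theta graph are of type-1 and type-2, and whether every degree lies in the corresponding constraint — would still have to be checked against \(\pi\) separately.

```python
# A rough illustrative classifier (not from the paper) for the five
# basic-factor shapes of Definition 4.7, using the networkx library.
import networkx as nx

def basic_factor_shape(H):
    """Return 'path', 'cycle', 'tadpole', 'dumbbell', 'theta', or None."""
    if H.number_of_edges() == 0 or not nx.is_connected(H):
        return None
    degs = [d for _, d in H.degree()]
    if any(d not in (1, 2, 3) for d in degs):
        return None                          # not subcubic without isolated vertices
    ones, threes = degs.count(1), degs.count(3)
    if ones == 2 and threes == 0:
        return "path"
    if ones == 0 and threes == 0:
        return "cycle"                       # every vertex has degree 2
    if ones == 1 and threes == 1:
        return "tadpole"
    if ones == 0 and threes == 2:
        # Both remaining shapes have exactly two degree-3 vertices; a dumbbell
        # contains a bridge (on its connecting path), a theta graph does not.
        has_bridge = next(nx.bridges(H), None) is not None
        return "dumbbell" if has_bridge else "theta"
    return None

# Example: a theta graph on {0, 1, 2, 3} whose degree-3 vertices are 0 and 3.
H = nx.Graph([(0, 1), (1, 3), (0, 2), (2, 3), (0, 3)])
print(basic_factor_shape(H))  # theta
```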
Proof of Lemma 4.4.: Recall that \(F\) and \(F^{*}\) are two factors of the instance \(\Omega=(G,\pi,\omega)\) (not necessarily a key instance). Consider the subgraph \(G_{\Delta}=F\Delta F^{*}\) of \(G\). \(G_{\Delta}=(V_{G_{\Delta}},E_{G_{\Delta}})\) is not necessarily a subcubic graph. In order to invoke Theorem 4.8, we modify \(G_{\Delta}\) to a subcubic graph \(G^{s}\), and construct a key instance \(\Omega^{s}=(G^{s},\pi^{s},\omega^{s})\) on it. For every \(v\in V_{G_{\Delta}}\), we consider the set of edges incident to \(v\) in \(G_{\Delta}\), denoted by \(E_{v}\). Since \(G_{\Delta}=F\Delta F^{*}\), we have \(E_{v}\subseteq E_{G_{\Delta}}=E_{F}\Delta E_{F^{*}}\), where \(E_{F}\) and \(E_{F^{*}}\) are the edge sets of the factors \(F\) and \(F^{*}\) respectively. If there is a pair of edges \(e,e^{*}\in E_{v}\) such that \(e\in E_{F}\) and \(e^{*}\in E_{F^{*}}\), then we perform the following _separation_ operation for this pair of edges. Suppose that \(e=(v,u)\) and \(e^{*}=(v,u^{*})\); we add a new vertex \(v^{1}\) to the graph, and replace the edges \(e\) and \(e^{*}\) by \((v^{1},u)\) and \((v^{1},u^{*})\) respectively. We label the vertex \(v^{1}\) (of degree 2) by \(\pi^{s}(v^{1})=\{0,2\}\). With a slight abuse of notation, we may still use \(e\) and \(e^{*}\) to denote these two new edges, and also use \(E_{G_{\Delta}}\) to denote the set of all edges of the new graph. For each \(E_{v}\), keep doing the separation operations for pairs of edges of which one is in \(E_{F}\) and the other is in \(E_{F^{*}}\) until all the remaining edges in \(E_{v}\) are in \(E_{F}\) or in \(E_{F^{*}}\) We use \(E_{v}^{\mathtt{r}}\) to denote the set of remaining edges. It is possible that \(E_{v}^{\mathtt{r}}\) is empty. Let \(P^{1}_{v},\ldots,P^{k}_{v}\) be the pairs of edges that have been separated, and \(v^{1},\ldots,v^{k}\) be the added vertices (\(k\) can be zero). Note that all these new vertices are of degree 2, and are labeled by \(\{0,2\}\). Now, we have the partition \(E_{v}=P^{1}_{v}\cup\cdots\cup P^{k}_{v}\cup E_{v}^{\mathtt{r}}\). Let \(r=|E_{v}^{\mathtt{r}}|\). Then \(r=|\deg_{F}(v)-\deg_{F^{*}}(v)|\). Note that \(r\) is even if \(\pi(v)\in\mathscr{G}_{2}\), and \(r\leq 3\) if \(\pi(v)\in\mathscr{T}\). We deal with edges in \(E_{v}^{\mathtt{r}}\) according to \(r\) and \(\pi(v)\). * If \(r=0\), then \(v\) is an isolated vertex in the current graph, and we simply remove it. Consider an arbitrary subgraph \(H\) of the original \(G_{\Delta}\) induced by a union of some pairs of edges in \(P^{1}_{v},\ldots,P^{k}_{v}\). Then, for the subgraph \(F\Delta H\) of \(G_{\Delta}\), we have \[\deg_{F\Delta H}(v)=\deg_{F}(v)\in\pi(v).\] * If \(r\neq 0\) and \(\pi(v)\in\mathscr{G}_{1}\), then we replace the vertex \(v\) with \(r\) many new vertices, and replace the \(r\) many edges incident to \(v\) by \(r\) many edges incident to these new vertices such that each vertex has degree 1. We label every new vertex by \(\{0,1\}\). Suppose that \(L=\min\{\deg_{F}(v),\deg_{F^{*}}(v)\}\) and \(U=\max\{\deg_{F}(v),\deg_{F^{*}}(v)\}\). Since \(\pi(v)\in\mathscr{G}_{1}\), \(\{L,L+1,\ldots,U\}\subseteq\pi(v)\). Consider an arbitrary subgraph \(H\subseteq G_{\Delta}\) induced by a union of some pairs of edges in \(P^{1}_{v},\ldots,P^{k}_{v}\) and a subset of \(E_{v}^{\mathtt{r}}\). 
Then, for the subgraph \(F\Delta H\) of \(G_{\Delta}\), we have \[\deg_{F\Delta H}(v)\in\{L,L+1,\ldots,U\}\in\pi(v).\] * If \(r\neq 0\) and \(\pi(v)\in\mathscr{G}_{2}\backslash\mathscr{G}_{1}\), then we replace the vertex \(v\) with \(r/2\) many vertices, and replace the \(r\) many edges incident to \(v\) by \(r\) many edges incident to these new vertices such that each vertex has degree 2. (We can partition these \(r\) many edges into arbitrary pairs.) We label every new vertex by \(\{0,2\}\). Suppose that \(L=\min\{\deg_{F}(v),\deg_{F^{*}}(v)\}\) and \(U=\max\{\deg_{F}(v),\deg_{F^{*}}(v)\}\). Since \(\pi(v)\in\mathscr{G}_{2}\), \(\{L,L+2,\ldots,U\}\subseteq\pi(v)\). Consider an arbitrary subgraph \(H\subseteq G_{\Delta}\) induced by a union of some pairs of edges in \(P^{1}_{v},\ldots,P^{k}_{v}\) and an even-size subset of \(E_{v}^{\mathtt{r}}\). Then, for the subgraph \(F\Delta H\) of \(G_{\Delta}\), we have \[\deg_{F\Delta H}(v)\in\{L,L+2,\ldots,U\}\in\pi(v).\] * If \(r\neq 0\) and \(\pi(v)\in\mathscr{T}\), then there are three subcases. If \(r=1\), then \(v\) has degree 1 in the current graph. We label it by \(\pi^{s}(v)=\{0,1\}\). If \(r=2\), then \(v\) has degree 2 in the current graph. We label it by \(\pi^{s}(v)=\{0,2\}\). If \(r=3\), then \(v\) has degree 3 in the current graph. We label it by \(\pi^{s}(v)=\{0,1,3\}\) if \(\deg_{F}(v)\in D^{1}_{v}\), and \(\pi^{s}(v)=\{0,2,3\}\) if \(\deg_{F}(v)\in D^{0}_{v}\). Consider an arbitrary subgraph \(H\subseteq G_{\Delta}\) induced by a union of some pairs of edges in \(P^{1}_{v},\ldots,P^{k}_{v}\) and a subset \(I\) of \(E_{v}^{\mathtt{r}}\) where \(|I|\subseteq\pi^{s}(v)\). Then, for the subgraph \(F\Delta H\) of \(G_{\Delta}\), we have \[\deg_{F\Delta H}(v)\in\pi(v).\] Now, we get a subcubic graph \(G^{s}=(V_{G^{s}},E_{G^{s}})\) from \(G_{\Delta}\). Each vertex \(v\) in \(G_{\Delta}\) is replaced by a set of new vertices in \(G^{s}\), denoted by \(S(v)\). * If \(\pi(v)\in\mathscr{G}_{1}\), then \(S(v)\) consists of vertices of degree 2 or 1. * If \(\pi(v)\in\mathscr{G}_{2}\), then \(S(v)\) consists of vertices of degree 2. * If \(\pi(v)\in\mathscr{T}\), then \(S(v)\) consists of vertices of degree \(2\) and possibly a vertex of degree \(r\) where \(r=|\deg_{F}(v)-\deg_{F^{*}}(v)|\leq 3\) (there is no such a vertex if \(r=0\)). In particular, if \(\deg_{F}(v)-\deg_{F^{*}}(v)\equiv 0\mod 2\), then \(S(v)\) consists of vertices of degree \(2\). In all cases, we have \(\deg_{G_{\Delta}}(v)=\sum_{x\in S(v)}\deg_{G^{s}}(x)\). Each edge \((u,v)\) in \(G_{\Delta}\) is replaced by an edge \((u^{s},v^{s})\in G_{\Delta}\) where \(u^{s}\in S(u)\) and \(v^{s}\in S(v)\). Once we get \(G^{s}\) from \(G_{\Delta}\), it is clear that there is a natural one-to-one correspondence between edges in \(G^{s}\) and edges in \(G_{\Delta}\). Without causing ambiguity, when we say an edge or an edge set in \(G^{s}\), we may also refer it to the corresponding edge or edge set in \(G_{\Delta}\). As we constructed \(G^{s}\), we have already defined the mapping \(\pi^{s}\) which labels each vertex in \(G^{s}\) with a degree constraint. For \(x\in V_{G^{s}}\), we have \(\pi^{s}(x)=\{0,1\}\) if \(\deg_{G^{s}}(x)=1\), \(\pi^{s}(x)=\{0,2\}\) if \(\deg_{G^{s}}(x)=2\), and \(\pi^{s}(x)=\{0,1,3\}\) or \(\{0,2,3\}\) if \(\deg_{G^{s}}(x)=3\). 
Moreover, as we have discussed above, for a vertex \(v\in V_{G_{\Delta}}\) and a subgraph \(H\subseteq G_{\Delta}\) induced be a set \(E\) of edges incident to \(v\) in \(G_{\Delta}\), we have \(\deg_{F\Delta H}(v)\in\pi(v)\) if \(\deg_{H^{s}}(x)\in\pi^{s}(x)\) for every \(x\in S(v)\) where \(H^{s}\) is the subgraph of \(G^{s}\) induced by the edge set \(E\) (viewed as edges in \(G^{s}\)). Now, we define the function \(\omega^{s}\) for edges in \(G^{s}\) as follow. Recall that for every edge in \(G^{s}\), its corresponding edge in \(G_{\Delta}\) is either in the factor \(F\) or the factor \(F^{*}\) but not in both since \(G_{\Delta}=F\Delta F^{*}\). For \(e\in E_{G^{s}}\), we define \(\omega^{s}(e)=\omega(e)\) if \(e\in E_{F^{*}}\) and \(\omega^{s}(e)=-\omega(e)\) if \(e\in E_{F}\). We can extend \(\omega^{s}\) to any subgraph of \(G^{s}\) by defining its weight to be the total weight of all its edges. Then, for any subgraph \(H^{s}\subseteq G^{s}\), \(\omega^{s}(H^{s})=\omega(F\Delta H)-\omega(F)\) where \(H\) is the subgraph of \(G_{\Delta}\) corresponding to \(H^{s}\). In particular, \(\omega^{s}(G^{s})=\omega(F^{*})-\omega(F)>0\). Thus, we get a key instance \(\Omega^{s}=(G^{s},\pi^{s},\omega^{s})\) where \(\omega^{s}(G^{s})>0\). Suppose that \(F^{s}\) is a factor of \(G^{s}\) with \(\omega^{s}(F^{s})>0\). We consider the subgraph \(H\) of \(G_{\Delta}\) induced by the edge set \(E_{F^{s}}\) (viewed as edges in \(G_{\Delta}\)). We show that \(H\) is an \((F,F^{*})\)-basic subgraph of \(G\). We have \(H\subseteq G_{\Delta}=F\Delta F^{*}\). As we have discussed above, for every vertex \(v\in V_{F\Delta H}\), \(\deg_{F\Delta H}(v)\in\pi(v)\). Thus, \(F\Delta H\in\Omega\). Also, \(\omega^{s}(F^{s})=\omega(F\Delta H)-\omega(F)>0\). Then, \(H\) is an \(F\)-augmenting subgraph. For every \(v\in V_{H}\), \(\deg_{H}(v)=\sum_{x\in S(v)}\deg_{F^{s}}(x)\). Then, \(\deg_{H}(v)\) is odd only if there is a vertex \(x\in S(v)\) such that \(\deg_{F^{s}}(x)\) is odd. Thus, the number of odd vertices in \(H\) is no more than the number of odd vertices in \(F^{s}\). Since \(F^{s}\) is a basic factor, it has at most \(2\) vertices of odd degree. Thus, \(H\) has at most \(2\) vertices of odd degree. Moreover, for a vertex \(v\in V_{H}\cap T_{\Omega}\), if \(\deg_{F}(v)\equiv\deg_{F^{*}}(v)\mod 2\), then \(S(v)\) consists of vertices of degree \(2\). Thus, \(\deg_{F^{s}}(x)\in\{0,2\}\) for every \(x\in S(v)\). Then, \(\deg_{H}(v)=\sum_{x\in S(v)}\deg_{F^{s}}(x)\) is even. Thus, for a vertex \(v\in V_{H}\cap T_{\Omega}\), \(\deg_{H}(v)\) is odd only if \(\deg_{F}(v)\not\equiv\deg_{F^{*}}(v)\mod 2\). Then, \(V_{H}^{\rm odd}\cap T_{\Omega}\subseteq T_{\Omega}^{F\Delta F^{*}}\) where \(V_{H}^{\rm odd}=\{v\in V_{H}\mid\deg_{H}(v)\equiv 1\mod 2\}\). Thus, \(H\) is an \((F,F^{*})\)-basic subgraph of \(G\). By the first part of Theorem 4.8, there exists a basic factor \(F^{s}\in\Omega^{s}\) with \(\omega^{s}(F^{s})>0\). Thus, there exists an \((F,F^{*})\)-basic subgraph \(H\subseteq G\) induced by the edge set \(E_{F^{s}}\). The first part is done. Now, we prove the second part. Suppose that \(\omega(F^{*})>\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}^{F\Delta F^{*}}\) where \(|W|\leq 2\), and \(T_{\Omega}^{F\Delta F^{*}}\) contains a vertex \(u\) where \(\deg_{F}(u)\in D_{u}^{0}\). Consider the instance \(\Omega^{s}\). First, we prove that \(\omega^{s}(G^{s})>\omega(F^{s})\) for every basic factor \(F^{s}\) of \(\Omega^{s}\). 
For a contradiction, suppose that there is some basic factor \(F^{s}\in\Omega^{s}\) such that \(\omega^{s}(G^{s})\leq\omega^{s}(F^{s})\). Still consider the subgraph \(H\) of \(G_{\Delta}\) induced by \(E_{F^{s}}\). We know that \(H\) is an \((F,F^{*})\)-basic subgraph of \(G\) and \(\omega^{s}(F^{s})=\omega(F\Delta H)-\omega(F)\). Let \(W=V_{H}^{\rm odd}\cap T_{\Omega}\). Then, \(W\subseteq T_{\Omega}^{F\Delta F^{*}}\) and \(|W|\leq 2\). For every \(x\in W\), since \(\deg_{H}(x)\) is odd, we have \(\deg_{F\Delta H}(x)\not\equiv\deg_{F}(x)\mod 2\), and then \(\deg_{F\Delta H}(x)\in\pi(x)\setminus D_{x}^{F}\). For every \(x\in T_{\Omega}\backslash W\), since \(\deg_{H}(x)\) is even, we have \(\deg_{F\Delta H}(x)\equiv\deg_{F}(x)\mod 2\) and then \(\deg_{F\Delta H}(x)\in D_{x}^{F}\). Consider the sub-instance \(\Omega_{W}^{F}=(G,\pi_{W}^{F},\omega)\) of \(\Omega\) (see Equation (1) for the definition of \(\Omega_{W}^{F}\)). Then, \(F\Delta H\in\Omega_{W}^{F}\). Thus, \(\omega(F\Delta H)\leq\operatorname{Opt}(\Omega_{W}^{F})\). Since

\[\omega^{s}(G^{s})=\omega(F^{*})-\omega(F)\leq\omega^{s}(F^{s})=\omega(F\Delta H)-\omega(F),\]

we have \(\omega(F^{*})\leq\omega(F\Delta H)\). Then, \(\omega(F^{*})\leq\operatorname{Opt}(\Omega_{W}^{F})\), a contradiction with the assumption that \(\omega(F^{*})>\operatorname{Opt}(\Omega_{W}^{F})\) for every \(W\subseteq T_{\Omega}^{F\Delta F^{*}}\) where \(|W|\leq 2\). Thus, \(\omega^{s}(G^{s})>\omega^{s}(F^{s})\) for every basic factor \(F^{s}\) of \(\Omega^{s}\).

Recall that \(T_{\Omega}^{F\Delta F^{*}}\) contains a vertex \(u\) where \(\deg_{F}(u)\in D_{u}^{0}\). Consider the vertex set \(S(u)\) in \(G^{s}\) that corresponds to \(u\). Since \(u\in T_{\Omega}^{F\Delta F^{*}}\), \(\deg_{F}(u)\not\equiv\deg_{F^{*}}(u)\mod 2\). Thus, \(S(u)\) consists of vertices of degree 2 and a vertex \(u^{s}\) of degree \(\deg_{G^{s}}(u^{s})=|\deg_{F}(u)-\deg_{F^{*}}(u)|\) which is 1 or 3. If \(|\deg_{F}(u)-\deg_{F^{*}}(u)|=3\), then \(\pi^{s}(u^{s})=\{0,2,3\}\) since \(\deg_{F}(u)\in D_{u}^{0}\). Thus, \(G^{s}\) contains a vertex \(u^{s}\) where \(\deg_{G^{s}}(u^{s})=1\), or \(\deg_{G^{s}}(u^{s})=3\) and \(\pi^{s}(u^{s})=\{0,2,3\}\). Then, by the second part of Theorem 4.8, there is a basic factor \(F^{s}\in\Omega^{s}\) such that \(\omega^{s}(F^{s})>0\) and \(\deg_{F^{s}}(u^{s})\equiv 0\mod 2\). Again, consider the subgraph \(H\) of \(G_{\Delta}\) induced by \(E_{F^{s}}\). We have proved that \(H\) is an \((F,F^{*})\)-basic subgraph of \(G\). Also,

\[\deg_{H}(u)=\sum_{x\in S(u)\setminus\{u^{s}\}}\deg_{F^{s}}(x)+\deg_{F^{s}}(u^{s})\equiv 0\mod 2\]

since \(\deg_{F^{s}}(x)\in\pi^{s}(x)=\{0,2\}\) for every \(x\in S(u)\setminus\{u^{s}\}\), and \(\deg_{F^{s}}(u^{s})\equiv 0\mod 2\). Thus, there is an \((F,F^{*})\)-basic subgraph \(H\) of \(G\) such that \(\deg_{H}(u)\equiv 0\mod 2\).

## 5 Proof of Theorem 4.8

We first prove the first property (restated in Lemma 5.6), and then prove the second property (restated in Lemma 5.7) using the first property. In this section, for two points \(x\) and \(y\), we use \(p_{xy}\), \(p^{\prime}_{xy}\) or \(p^{\prime\prime}_{xy}\) to denote a path with endpoints \(x\) and \(y\). Recall that \(V_{p_{xy}}\) and \(E_{p_{xy}}\) denote the vertex set and the edge set of \(p_{xy}\) respectively.

### Proof of the first property

**Lemma 5.1**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance with \(\omega(G)>0\).
If \(G\) is not connected, then there is a factor \(F\in\Omega\) such that \(\omega(F)>0\) and \(|E_{F}|<|E_{G}|\)._ Proof.: Suppose that \(G_{1}\) is a connected component of \(G\), and \(G_{2}=G\Delta G_{1}\) is the rest of the graph. Note that \(G_{1}\) and \(G_{2}\) are both factors of \(G\). By the definition of subcubic graphs, there are no isolated vertices in \(G\). Thus, neither \(G_{1}\) nor \(G_{2}\) is a single vertex. Then, \(|E_{G_{1}}|,|E_{G_{2}}|\geq 1\). Since \(E_{G}\) is the disjoint union of \(E_{G_{1}}\) and \(E_{G_{2}}\), \(|E_{G_{1}}|,|E_{G_{2}}|<|E_{G}|\), and \(\omega(G)=\omega(G_{1})+\omega(G_{2})\). Since \(\omega(G)>0\), among \(\omega(G_{1})\) and \(\omega(G_{2})\), one is positive. Thus, we are done. **Lemma 5.2**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance with \(\omega(G)>0\). Then, there is a factor \(F\in\Omega\) such that \(\omega(F)>0\) and \(|E_{F}|<|E_{G}|\) if one of the following conditions holds:_ 1. _There is a path_ \(p_{uv}\subseteq G\) _with endpoints_ \(u\) _and_ \(v\) _where_ \(u\) _and_ \(v\) _are the only two vertices in_ \(p_{uv}\) _of type-2 (i.e.,_ \(\deg_{G}(u)=\deg_{G}(v)=3\) _and_ \(\pi(u)=\pi(v)=\{0,2,3\}\)_) and_ \(\omega(p_{uv})\leq 0\)_._ 2. _There is a cycle_ \(C\subseteq G\) _where no vertex is of type-2 and_ \(\omega(C)\leq 0\)_._ Proof.: Suppose that the first condition holds. Consider the subgraph \(F=G\backslash p_{uv}\) of \(F\). Then, \(|E_{F}|=|E_{G}|-|E_{p_{uv}}|<|E_{G}|\), and \(\omega(F)=\omega(G)-\omega(p_{uv})\geq\omega(G)>0\). Now we only need to show that \(F\) is a factor of \(\Omega\). The vertex set \(V_{F}\) consists of three parts: \[V_{1}=V_{G}\backslash V_{p_{uv}},\quad V_{2}=\{x\in V_{p_{uv}}\backslash\{u,v\} \mid\deg_{G}(x)=3\},\quad\text{ and }\quad V_{3}=\{u,v\}.\] Since \(u\) and \(v\) are the only two vertices of type-2 in \(p_{uv}\), for every \(x\in V_{2}\), \(x\) is of type-1 (i.e., \(\pi(x)=\{0,1,3\}\)). Then, for every \(x\in V_{F}\), we have \(\deg_{F}(x)=\deg_{G}(x)\in\pi(x)\) if \(x\in V_{1}\) \(\deg_{F}(x)=1\in\pi(x)\) if \(x\in V_{2}\), and \(\deg_{F}(x)=2\in\pi(x)\) if \(x\in V_{3}\). Thus, \(F\) is a factor of \(G\). We are done. Suppose that the second condition holds. Consider the subgraph \(F=G\backslash C\). Then \(|E_{F}|<|E_{G}|\) and \(\omega(F)>0\). Similar to the above proof, one can check that \(F\) is a factor \(\Omega\). We are done. **Lemma 5.3**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance with \(\omega(G)>0\), \(G\) is not a basic factor of \(\Omega\) and \(C\subseteq G\) is a cycle. Let \(k\) be the number of type-1 vertices and \(\ell\) be the number of type-2 vertices in \(C\). If \(k\neq 1\) and \(\ell\neq 1\), then there is a factor \(F\in\Omega\) such that \(\omega(F)>0\) and \(|E_{F}|<|E_{G}|\)._ Proof.: We prove this lemma in two cases depending on whether \(\omega(C)>0\) or \(\omega(C)\leq 0\). We first consider the case that \(\omega(C)>0\). If \(k=0\), then all vertices in \(C\) are 2-feasible (see Definition 4.6). Thus, \(C\) is a factor of \(\Omega\). Since \(G\) is not a basic factor of \(\Omega\), we have \(G\neq C\). Also, since \(G\) has no isolated vertices, \(C\subsetneq G\) implies that \(|E_{C}|<|E_{G}|\). We are done. Thus, we may assume that \(k\geq 2\). Suppose that \(\{u_{1},u_{2},\ldots,u_{k}\}\) are the type-1 vertices in \(C\). We list them in the order of traversing the cycle starting from \(u_{1}\) in an arbitrary direction. 
Then, these \(k\) many vertices split the cycle into \(k\) many paths \(p_{u_{1}u_{2}},\ldots,p_{u_{k}u_{k+1}}\) (\(u_{k+1}=u_{1}\)). For each path, all its vertices are 2-feasible except for its two endpoints which are 1-feasible. Thus, each path is a basic factor of \(\Omega\). We have \(|E_{p_{u_{i}u_{i+1}}}|<|E_{G}|\) for every \(i\in[k]\). Since

\[\omega(C)=\sum_{i=1}^{k}\omega(p_{u_{i}u_{i+1}})>0,\]

there is a path \(p_{u_{i}u_{i+1}}\) such that \(\omega(p_{u_{i}u_{i+1}})>0\). Thus, we are done.

Then we consider the case that \(\omega(C)\leq 0\). If \(\ell=0\), then \(C\subseteq G\) is a cycle with no type-2 vertices. By Lemma 5.2, we are done. Thus, we may assume that \(\ell\geq 2\). Suppose that \(\{v_{1},v_{2},\ldots,v_{\ell}\}\) are the type-2 vertices in \(C\). We list them in the order of traversing the cycle starting from \(v_{1}\) in an arbitrary direction. Then, these \(\ell\) many vertices split the cycle into \(\ell\) many paths \(p_{v_{1}v_{2}},\ldots,p_{v_{\ell}v_{\ell+1}}\) (\(v_{\ell+1}=v_{1}\)). For each path, it has no vertex of type-2 except for its two endpoints which are of type-2. Since

\[\omega(C)=\sum_{i=1}^{\ell}\omega(p_{v_{i}v_{i+1}})\leq 0,\]

there is a path \(p_{v_{i}v_{i+1}}\) such that \(\omega(p_{v_{i}v_{i+1}})\leq 0\). Thus, there is a path \(p_{v_{i}v_{i+1}}\subseteq G\) where \(v_{i}\) and \(v_{i+1}\) are the only two vertices of type-2 in \(p_{v_{i}v_{i+1}}\) and \(\omega(p_{v_{i}v_{i+1}})\leq 0\). Then, by Lemma 5.2, we are done.

**Lemma 5.4**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance with \(\omega(G)>0\), and \(G\) is not a basic factor of \(\Omega\). If \(G\) is \(2\)-connected, then there is a factor \(F\in\Omega\) such that \(\omega(F)>0\) and \(|E_{F}|<|E_{G}|\)._

Proof.: Since \(G\) is 2-connected, it contains at least three vertices and it contains no vertex of degree 1. Consider the number of type-1 vertices in \(G\). There are three cases.
* \(G\) has no type-1 vertex. Since \(G\) is 2-connected, there is a cycle \(C\subseteq G\). Clearly, \(C\) has no type-1 vertex. If \(C\) has exactly one type-2 vertex, denoted by \(v\), then \(v\) is the only vertex in \(C\) such that \(\deg_{G}(v)=3\). Then, there is an edge \(e\in E_{G}\) incident to \(v\) such that \(e\notin E_{C}\). It is easy to see that \(e\) is a bridge of \(G\), a contradiction with \(G\) being 2-connected. Thus, \(C\) has no type-2 vertex, or it has at least two type-2 vertices. Then, by Lemma 5.3, we are done.
* \(G\) has exactly one type-1 vertex. Let \(u\) be the type-1 vertex of \(G\). Since \(G\) is 2-connected, there is a cycle \(C\subseteq G\) containing the vertex \(u\). Since \(\deg_{C}(u)=2<\deg_{G}(u)=3\), by Lemma 2.7, there is a path \(p_{uw}\subseteq G\) with endpoints \(u,w\in V_{C}\) such that \(E_{p_{uw}}\cap E_{C}=\emptyset\). Consider the subgraph \(H=p_{uw}\cup C\) of \(G\). \(H\) is a theta graph where \(\deg_{H}(u)=\deg_{H}(w)=3\). All vertices of \(H\) are 2-feasible except for \(u\), which is 1-feasible. Note that \(H\) is a basic factor of \(\Omega\). Since \(G\) is not a basic factor of \(\Omega\), \(H\neq G\). Also, since \(G\) is connected, there exists an edge \(e_{ts}=(t,s)\) incident to a vertex \(s\in V_{H}\) such that \(e_{ts}\notin E_{H}\). Clearly, \(s\) is a vertex of type-2, \(\deg_{G}(s)=3\) and \(\deg_{H}(s)=2\). Then, by Lemma 2.7, there is a path \(p_{sr}\) with endpoints \(s,r\in V_{H}\) such that \(E_{p_{sr}}\cap E_{H}=\emptyset\). Clearly, \(\deg_{G}(r)=3\) and \(r\) is a vertex of type-2.
Since \(s,r\in V_{H}\) and \(H\) is a theta graph which is 2-connected, we can find a path \(p^{\prime}_{sr}\subseteq H\) with endpoints \(s\) and \(r\) such that the only type-1 vertex \(u\) in \(H\) is not in \(p^{\prime}_{sr}\). Consider the cycle \(C^{\prime}=p_{sr}\cup p^{\prime}_{sr}\). It has no vertex of type-1, and it has at least two vertices \(s\) and \(r\) of type-2. By Lemma 5.3, we are done.
* \(G\) has at least two type-1 vertices. Since \(G\) is 2-connected and it contains at least two type-1 vertices, we can find a cycle \(C\subseteq G\) that contains at least two type-1 vertices. Consider the number of type-2 vertices in \(C\). If the number is not 1, then by Lemma 5.3, we are done. Thus, we may assume that \(C\) contains exactly one vertex of type-2, denoted by \(v\). Since \(G\) is 2-connected and \(\deg_{G}(v)=3>\deg_{C}(v)=2\), we can find a path \(p_{vu}\) for some \(u\in V_{C}\) such that \(E_{p_{vu}}\cap E_{C}=\emptyset\). We have \(\deg_{G}(u)=3\). Since \(v\) is the only vertex of type-2 in \(C\), \(u\) is a vertex of type-1. Vertices \(v\) and \(u\) split \(C\) into two paths \(p^{\prime}_{vu}\) and \(p^{\prime\prime}_{vu}\). Since \(C\) contains at least two type-1 vertices, there exists some \(w\in V_{C}\) where \(w\neq u\) such that \(w\) is of type-1. Also, \(w\neq v\) since \(v\) is of type-2. Since \(w\in V_{C}=V_{p^{\prime}_{vu}}\cup V_{p^{\prime\prime}_{vu}}\) and \(V_{p^{\prime}_{vu}}\cap V_{p^{\prime\prime}_{vu}}=\{u,v\}\), without loss of generality, we may assume that \(w\in V_{p^{\prime}_{vu}}\). Consider the path \(p_{vu}\). If \(p_{vu}\) contains at least two vertices of type-2, then the cycle \(C^{\prime}=p_{vu}\cup p^{\prime}_{vu}\) contains at least two vertices of type-2 and at least two vertices \(u\) and \(w\) of type-1. Then, by Lemma 5.3, we are done. Thus, we may assume that \(v\) is the only vertex of type-2 in \(p_{vu}\). Consider the theta graph \(H=p_{vu}\cup C\). Then \(v\) is the only vertex of type-2 in \(H\). Note that \(w\in V_{H}\) and \(\deg_{H}(w)=2<\deg_{G}(w)=3\). Since \(G\) is 2-connected, by Lemma 2.7, we can find a path \(p_{ws}\) for some \(s\in V_{H}\) such that \(E_{p_{ws}}\cap E_{H}=\emptyset\). Clearly \(s\neq v\). Then, \(s\) is of type-1 since \(v\) is the only vertex of type-2 in \(H\). Consider the number of type-2 vertices in \(p_{ws}\). Suppose that there is no vertex of type-2 in \(p_{ws}\). Since \(H\) is 2-connected and \(H\) contains only one vertex \(v\) of type-2, we can find a path \(p^{\prime}_{ws}\subseteq H\) such that \(p^{\prime}_{ws}\) does not contain the vertex \(v\) of type-2. Then, the cycle \(p_{ws}\cup p^{\prime}_{ws}\) has no vertex of type-2 and at least two vertices \(w\) and \(s\) of type-1. By Lemma 5.3, we are done. Otherwise, there is at least one vertex of type-2 in \(p_{ws}\). Since \(H\) is 2-connected, we can find a path \(p^{\prime\prime}_{ws}\subseteq H\) such that \(p^{\prime\prime}_{ws}\) contains the vertex \(v\) of type-2. Then, the cycle \(p_{ws}\cup p^{\prime\prime}_{ws}\) has at least two vertices of type-2 and at least two vertices \(w\) and \(s\) of type-1. By Lemma 5.3, we are done.
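The proofs of Lemmas 5.3 and 5.4 repeatedly use the same averaging step: a cycle is cut at its marked (type-1 or type-2) vertices into arcs whose weights sum to \(\omega(C)\), so at least one arc inherits the desired sign. The following small Python sketch (our own illustration; the input format and the function name are assumptions made for this example) makes that step explicit.

```python
# A small sketch (not code from the paper) of the averaging step: a weighted
# cycle is cut at a set of marked vertices into arcs; the arc weights sum to
# w(C), so some arc is positive when w(C) > 0 and some arc is nonpositive
# when w(C) <= 0.

def split_cycle_at(cycle, weights, marked):
    """cycle: vertices in cyclic order; weights[i]: weight of the edge from
    cycle[i] to cycle[(i + 1) % n]; marked: the vertices to cut at.
    Returns a list of (arc_vertices, arc_weight), one arc per pair of
    consecutive marked vertices."""
    n = len(cycle)
    marks = [i for i, v in enumerate(cycle) if v in marked]
    assert len(marks) >= 2, "need at least two marked vertices to split"
    arcs = []
    for a, b in zip(marks, marks[1:] + [marks[0] + n]):
        verts = [cycle[i % n] for i in range(a, b + 1)]
        arc_weight = sum(weights[i % n] for i in range(a, b))
        arcs.append((verts, arc_weight))
    return arcs

# Example: a 6-cycle whose marked (say, type-1) vertices sit at positions 0 and 3.
cycle = ["u1", "x", "y", "u2", "z", "w"]
weights = [2, -1, 3, -2, 1, 1]                 # w(C) = 4 > 0
for verts, wt in split_cycle_at(cycle, weights, {"u1", "u2"}):
    print(verts, wt)   # ['u1','x','y','u2'] 4   and   ['u2','z','w','u1'] 0
```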
**Definition 5.5** (Induced sub-instance).: _For a key instance \(\Omega=(G,\pi,\omega)\), and a factor \(F\in\Omega\), the sub-instance of \(\Omega\) induced by \(F\), denoted by \(\Omega_{F}\), is a key instance \((F,\pi_{F},\omega_{F})\) defined on the subgraph \(F\) of \(G\) where \(\pi_{F}(x)=\pi(x)\cap[\deg_{F}(x)]\subseteq\pi(x)\) for every \(x\in V_{F}\) and \(\omega_{F}\) is the restriction of \(\omega\) on \(E_{F}\) (we may write \(\omega_{F}\) as \(\omega\) for simplicity)._ We are now ready to prove the first property of Theorem 4.8 as restated in the next lemma. **Lemma 5.6**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance. If \(\omega(G)>0\), then there is a basic factor \(F\) of \(\Omega\) such that \(\omega(F)>0\)._ Proof.: We prove this lemma by induction on the number of edges in \(G\). If \(|E_{G}|=1\), then \(G\) is a single edge. Thus, \(G\) is a basic factor of \(\Omega\), and \(\omega(G)>0\). We are done. We assume that the lemma holds for all key instances where the underlying graph has no more than \(n\) many edges. We consider a key instance \(\Omega=(G,\pi,\omega)\) where \(|E_{G}|=n+1\). If \(G\) is a basic factor of \(\Omega\), then clearly we are done. Thus, we may assume that \(G\) is not a basic factor of \(\Omega\). Suppose that we can find a factor \(F\in\Omega\) such that \(\omega(F)>0\) and \(|E_{F}|<|E_{G}|=n+1\). Then, consider the sub-instance \(\Omega_{F}\) of \(\Omega\) induced by \(F\). Since \(|E_{F}|<n+1\) and \(\omega(F)>0\), by the induction hypothesis, there is basic factor \(F^{\prime}\in\Omega_{F}\) such that \(\omega(F^{\prime})>0\). Since \(\Omega_{F}\subseteq\Omega\), \(F^{\prime}\in\Omega\). Then, we are done. Thus, in order to establish the inductive step, it suffices to prove that there is a factor \(F\in\Omega\) such that \(|E_{F}|<|E_{G}|\) and \(\omega(F)>0\). By Lemmas 5.1 and 5.4, if \(G\) is not connected or \(G\) is \(2\)-connected, then we are done. Thus, we may assume that \(G\) is a connected graph but not \(2\)-connected. By Lemma 2.6, \(G\) contains at least a bridge. Fix such a bridge of \(G\). Let \(p_{uv}\) be the path containing the bridge such that for every vertex \(x\in V_{p_{uv}}\backslash\{u,v\}\), \(\deg_{G}(x)=2\) and \(\deg_{G}(u),\deg_{G}(v)\neq 2\); observe that such a path exists and it is unique. In fact, the whole path can be viewed as a "long bridge" of the graph \(G\). Then, \(G\backslash p_{uv}\) is not connected and it has two connected components. Let \(G_{u}\subseteq G\backslash p_{uv}\) be the part that contains \(u\) and \(G_{v}\subseteq G\backslash p_{uv}\) be the part that contains \(v\). If both \(G_{u}\) and \(G_{v}\) are single vertices, then the graph \(G\) is a path. If both \(G_{u}\) and \(G_{v}\) are cycles, then \(G\) is a dumbbell graph. If one of \(G_{u}\) and \(G_{v}\) is a single vertex and the other one is a cycle, then \(G\) is a tadpole graph. In all these cases, \(G\) is a basic factor of \(\Omega\). A contradiction with our assumption. Thus, among \(G_{u}\) and \(G_{v}\), at least one is neither a cycle nor a single vertex. Without loss of generality, we may assume that \(G_{u}\) is neither a cycle nor a single vertex. Since \(G_{u}\) is not a single vertex, \(\deg_{G}(u)\neq 1\). By assumption, \(\deg_{G}(u)\neq 2\). Then \(\deg_{G}(u)=3\), and hence \(\deg_{G_{u}}(u)=2\). Let \(e_{1}=(u,w_{1})\) and \(e_{2}=(u,w_{2})\) be the two edges incident to \(u\) in \(G_{u}\). We slightly modify \(G_{u}\) to get a new graph. 
We replace the vertex \(u\) in \(G_{u}\) by two vertices \(u_{1}\) and \(u_{2}\), and replace the edges \((u,w_{1})\) and \((u,w_{1})\) in \(G_{u}\) by two new edges \((u_{1},w_{1})\) and \((u_{2},w_{2})\) respectively. We denote the new graph by \(G^{\prime}\). With a slight abuse of notations, we still use \(e_{1}\) and \(e_{2}\) to denote the edges \((u_{1},w_{1})\) and \((u_{2},w_{2})\) in \(G^{\prime}\) respectively, and we say \(E_{G_{u}}=E_{G^{\prime}}\). Then, the edge weight function \(\omega\) can be adapted to \(E_{G^{\prime}}\). We define the following instance \(\Omega^{\prime}=(G^{\prime},\pi^{\prime},\omega^{\prime})\) where \(\pi^{\prime}(u_{1})=\pi^{\prime}(u_{2})=\{0,1\}\) and \(\pi^{\prime}(x)=\pi(x)\) for every \(x\in V_{G^{\prime}}\backslash\{u_{1},u_{2}\}\), and \(\omega^{\prime}(e_{1})=\omega(e_{1})+\omega(G\backslash G_{u})\), and \(\omega^{\prime}(e)=\omega(e)\) for every \(e\in E_{G^{\prime}}\backslash\{e_{1}\}\). In other words, we add the total weight of the subgraph \(G\backslash G_{u}\) to the edge \(e_{1}\). Then, \(\omega^{\prime}(G^{\prime})=\omega(G)>0\) and \(|E_{G^{\prime}}|=|E_{G_{u}}|<|E_{G}|\). By the induction hypothesis, there is a basic factor \(F\in\Omega^{\prime}\) such that \(\omega^{\prime}(F)>0\). We will recover a factor of \(\Omega\) from \(F\) such that it has positive weight and fewer edges than \(G\). This will finish the proof of the inductive step. There are four cases depending on the presence of \(e_{1}\) and \(e_{2}\) in \(F\). * \(e_{1},e_{2}\notin E_{F}\). Then, \(u_{1},u_{2}\notin V_{F}\). For every \(x\in V_{F}\), \(\deg_{F}(x)\in\pi^{\prime}(x)=\pi(x)\). Thus, \(F\) is a basic factor of \(\Omega\). Clearly, \(\omega(F)=\omega^{\prime}(F)>0\) and \(|E_{F}|=|E_{F^{\prime}}|<|E_{G}|\). We are done. * \(e_{1}\in E_{F}\) and \(e_{2}\notin E_{F}\). We can view \(F\) as a subgraph of \(G_{u}\) by changing the edge \((u_{1},w_{1})\) in \(G^{\prime}\) back to the edge \((u,w_{1})\) in \(G_{u}\). Then, the edge \((u,w_{2})\notin E_{F}\). Consider the subgraph \(H=F\cup(G\backslash G_{u})\) of \(G\). Since \((u,w_{2})\notin E_{F}\), we have \((u,w_{2})\notin E_{H}\). Then, \(|E_{H}|<|E_{G}|\). Also, we have \[\omega(H)=\omega(F)+\omega(G\backslash G_{u})=\omega^{\prime}(F)>0.\] The vertex set \(V_{H}\) consists of three parts \(V_{1}=V_{F}\backslash\{u\}\), \(V_{2}=\{u\}\), and \(V_{3}=V_{G\backslash G_{u}}\backslash\{u\}\). For every \(x\in V_{1}\), \(\deg_{H}(x)=\deg_{F}(x)\in\pi(x)\). For every \(x\in V_{3}\), \(\deg_{H}(x)=\deg_{G\backslash G_{u}}(x)=\deg_{G}(x)\in\pi(x)\). Now, we consider the vertex \(u\). * If \(u\) is \(2\)-feasible, then \(\deg_{H}(u)=2\in\pi(x)\). Thus, \(H\) is a factor of \(\Omega\) where \(\omega(H)>0\) and \(|E_{H}|<|E_{G}|\). * If \(u\) is \(1\)-feasible, then \(F\) and \(G\backslash G_{u}\) both are factors of \(\Omega\) since \(\deg_{F}(u)=\deg_{G\backslash G_{u}}(u)=1\in\pi(u)\). Since \(\omega(H)=\omega(F)+\omega(G\backslash G_{u})>0\), among \(\omega(F)\) and \(\omega(G\backslash G_{u})\), at least one is positive. Also, \(|E_{F}|,|E_{G\backslash G_{u}}|<|E_{H}|<|E_{G}|\). We are done. * \(e_{2}\in E_{F}\) and \(e_{1}\notin E_{F}\). Again, we can view \(F\) as a subgraph of \(G_{u}\) where \((u,w_{2})\in E_{F}\) and \((u,w_{1})\notin E_{F}\). Then, we have \(|E_{F}|<|E_{G_{u}}|<|E_{G}|\), and \(\omega(F)=\omega^{\prime}(F)>0\). * If \(u\) is \(1\)-feasible, then \(F\) is a factor of \(G\) where \(|E_{F}|<|E_{G}|\) and \(\omega(F)>0\). We are done. 
* If \(u\) is \(2\)-feasible, then \(G_{u}\) is a factor of \(\Omega\) since \(\deg_{G_{u}}(u)=2\). If \(\omega(G_{u})>0\), then we are done. Thus, we may assume that \(\omega(G_{u})\leq 0\). Then, \(\omega(G\backslash G_{u})=\omega(G)-\omega(G_{u})\geq\omega(G)>0\). Still consider the subgraph \(H=F\cup(G\backslash G_{u})\). Then, \(H\) is a factor of \(\Omega\) since \(\deg_{H}(u)=2\in\pi(u)\). Also, \(\omega(H)=\omega(F)+\omega(G\backslash G_{u})>0\) and \(|E_{H}|<|E_{G}|\). We are done.
* \(e_{1},e_{2}\in E_{F}\). Then, \(F\) (as a subgraph of \(G^{\prime}\)) contains two vertices \(u_{1}\) and \(u_{2}\) of degree \(1\). Since \(F\) is a basic factor, it is a path. Still we can view \(F\) as a subgraph of \(G_{u}\) by changing edges \((u_{1},w_{1})\) and \((u_{2},w_{2})\) in \(G^{\prime}\) to edges \((u,w_{1})\) and \((u,w_{2})\) in \(G\). Then, \(F\) is a cycle in \(G_{u}\). Since \(G_{u}\) is not a cycle and it has no isolated vertices, \(|E_{F}|<|E_{G_{u}}|\). Consider the subgraph \(H=F\cup(G\backslash G_{u})\) of \(G\). We have \(|E_{H}|<|E_{G}|\) and \(\omega(H)=\omega(F)+\omega(G\backslash G_{u})=\omega^{\prime}(F)>0\). Also, one can check that \(H\) is a factor of \(\Omega\) no matter whether \(u\) is \(1\)-feasible or \(2\)-feasible since \(\deg_{H}(u)=3\in\pi(u)\). We are done.

### Proof of the second property

Now we prove the second property of Theorem 4.8 using the first property (Lemma 5.6).

**Lemma 5.7**.: _Suppose that \(\Omega=(G,\pi,\omega)\) is a key instance, and \(u\) is a vertex of \(G\) where \(\deg_{G}(u)=1\), or \(\deg_{G}(u)=3\) and \(\pi(u)=\{0,2,3\}\). If \(\omega(G)>0\) and \(\omega(G)>\omega(F)\) for every basic factor \(F\) of \(\Omega\), then there is a basic factor \(F^{*}\) of \(\Omega\) such that \(\omega(F^{*})>0\) and \(\deg_{F^{*}}(u)\equiv 0\mod 2\). (Recall that we agree \(\deg_{F^{*}}(u)=0\) if \(u\notin V_{F^{*}}\).)_

Proof.: By Lemma 5.6, there exists at least one basic factor of \(\Omega\) whose weight is positive. Among all such basic factors, we pick an \(F\) such that \(\omega(F)\) is the largest. We have \(0<\omega(F)<\omega(G)\). If \(\deg_{F}(u)\) is even, then we are done. Thus, we may assume that \(\deg_{F}(u)\) is odd. Since \(F\) is a basic factor and it contains a vertex \(u\) of odd degree, \(F\) is not a cycle. By the definition of basic factors, \(F\) contains exactly one more vertex \(v\) of odd degree. Since \(F\) is a factor of \(\Omega\), \(\deg_{F}(u)\in\pi(u)\). Recall that \(\deg_{G}(u)=1\) or \(3\). If \(\deg_{G}(u)=1\), then \(\pi(u)=\{0,1\}\), and hence \(\deg_{F}(u)=1\). If \(\deg_{G}(u)=3\), then \(\pi(u)=\{0,2,3\}\), and hence \(\deg_{F}(u)=3\). Thus, \(\deg_{F}(u)\) always equals \(\deg_{G}(u)\). Consider the graph \(G^{\prime}=G\backslash F\), i.e., the subgraph of \(G\) induced by the edge set \(E_{G}\backslash E_{F}\). Consider the instance \(\Omega^{\prime}=(G^{\prime},\pi^{\prime},\omega^{\prime})\) where for every \(x\in V_{G^{\prime}}\), \(\pi^{\prime}(x)=\{0,1\}\) if \(\deg_{G^{\prime}}(x)=1\), \(\pi^{\prime}(x)=\{0,2\}\) if \(\deg_{G^{\prime}}(x)=2\) and \(\pi^{\prime}(x)=\pi(x)\) if \(\deg_{G^{\prime}}(x)=3\), and \(\omega^{\prime}\) is the weight function \(\omega\) restricted to \(G^{\prime}\). Note that \(\Omega^{\prime}\) is also a key instance, but it is not necessarily a sub-instance of \(\Omega\). Since \(\omega(G)>\omega(F)\), we have \(\omega^{\prime}(G^{\prime})=\omega(G^{\prime})=\omega(G)-\omega(F)>0\).
Without causing ambiguity, we may simply write \(\omega^{\prime}\) as \(\omega\) in the instance \(\Omega^{\prime}\). By Lemma 5.6, there exists a basic factor \(F^{\prime}\) of \(\Omega^{\prime}\) such that \(\omega(F^{\prime})>0\). Since \(E_{F^{\prime}}\subseteq E_{G}\backslash E_{F}\), \(F\) and \(F^{\prime}\) are edge-disjoint. Let \(H=F\cup F^{\prime}\), which is the subgraph of \(G\) induced by the edge set \(E_{F}\cup E_{F^{\prime}}\). We show that \(H\) is a factor of \(\Omega\). Let \(V_{\cap}=V_{F}\cap V_{F^{\prime}}\). First we show that for every \(x\in V_{H}\backslash V_{\cap}\), \(\deg_{H}(x)\in\pi(x)\). If \(x\in V_{F}\backslash V_{\cap}\), then \(\deg_{H}(x)=\deg_{F}(x)\). Since \(F\in\Omega\), \(\deg_{F}(x)\in\pi(x)\). Then, \(\deg_{H}(x)\in\pi(x)\). If \(x\in V_{F^{\prime}}\backslash V_{\cap}\), then \(\deg_{H}(x)=\deg_{F^{\prime}}(x)\). Since \(x\notin V_{F}\) and \(G^{\prime}=G\backslash F\), \(\deg_{G^{\prime}}(x)=\deg_{G}(x)\). Then, by the definition of \(\Omega^{\prime}\), we have \(\pi^{\prime}(x)=\pi(x)\). Since \(F^{\prime}\) is a factor of \(\Omega^{\prime}\), \(\deg_{F^{\prime}}(x)\in\pi^{\prime}(x)\). Thus, \(\deg_{H}(x)\in\pi(x)\). Now, we consider vertices in \(V_{\cap}\). Since \(F\) and \(F^{\prime}\) are edge disjoint, for every \(x\in V_{\cap}\) we have \(\deg_{H}(x)=\deg_{F}(x)+\deg_{F^{\prime}}(x)\leq\deg_{G}(x)\leq 3\). Also, \(\deg_{F}(x),\deg_{F^{\prime}}(x)\geq 1\) since \(F\) and \(F^{\prime}\) are subcubic graphs which have no isolated vertices. * If \(\deg_{F}(x)=1\), then \(1\in\pi(x)\). The vertex \(x\) is \(1\)-feasible. Thus, \(\deg_{G}(x)\neq 2\). Since \(\deg_{G}(x)>\deg_{F}(x)=1\), \(\deg_{G}(x)=3\). Then, \(\deg_{G^{\prime}}(x)=\deg_{G}(x)-\deg_{F}(x)=2\), \(\pi^{\prime}(x)=\{0,2\}\) and \(\deg_{F^{\prime}}(x)=2\). * If \(\deg_{F}(x)=2\), then \(\deg_{G}(x)=3\) since \(\deg_{G}(x)>\deg_{F}(x)\). Then, \(\deg_{G^{\prime}}(x)=\deg_{G}(x)-\deg_{F}(x)=1\), \(\pi^{\prime}(x)=\{0,1\}\) and \(\deg_{F^{\prime}}(x)=1\). Thus, for every \(x\in V_{\cap}\), \(\deg_{H}(x)=\deg_{F}(x)+\deg_{F^{\prime}}(x)=3\in\pi(x)\). Thus, \(H\) is a factor of \(\Omega\). Consider the sub-instance \(\Omega_{H}=(H,\pi_{H},\omega_{H})\) of \(\Omega\) induced by \(H\) (we will write \(\omega_{H}\) as \(\omega\) for simplicity). We will show that we can find a a basic factor \(F^{*}\) of \(\Omega_{H}\) such that \(\omega(F^{*})>0\) and \(\deg_{F^{*}}(u)\equiv 0\mod 2\). Clearly, \(F^{*}\) is also a factor of \(\Omega\). Consider the set \(V_{\cap}\) of intersection points. If \(V_{\cap}=\emptyset\), then for every \(x\in V_{F^{\prime}}\), \(\deg_{F^{\prime}}(x)=\deg_{H}(x)\in\pi(x)\). Thus, \(F^{\prime}\) is a basic factor of \(\Omega\) where \(\omega(F^{\prime})>0\) and \(\deg_{F^{\prime}}(u)=0\). That is, \(F^{\prime}\) is the desired \(F^{*}\). We are done. Thus, we may assume that \(V_{\cap}\) is non-empty. For every \(x\in V_{\cap}\), \(\deg_{F}(x)=1\) and \(\deg_{F^{\prime}}(x)=2\), or \(\deg_{F}(x)=2\) and \(\deg_{F^{\prime}}(x)=1\). Recall that \(F\) is a basic factor containing two vertices \(u,v\) of odd degree, and \(\deg_{F}(u)=\deg_{G}(u)\). Clearly, \(u\notin V_{\cap}\). We consider the possible forms of \(F\) and \(F^{\prime}\). Recall that \(F\) is not a cycle. We show that \(F^{\prime}\) is also not a cycle. For a contradiction, suppose that \(F^{\prime}\) is a cycle. Then, all vertices of \(F^{\prime}\) have degree \(2\). Thus, the only possible vertex in \(V_{\cap}\) is \(v\). Since \(V_{\cap}\) is non-empty, \(V_{\cap}=\{v\}\). 
Then, \(\deg_{F}(v)=1\) and \(\deg_{F^{\prime}}(v)=2\). If \(\deg_{F}(u)=1\), then \(F\) is a path. The graph \(H\) is a tadpole graph where \(v\) is the only vertex of degree \(3\). If \(\deg_{F}(u)=3\), then \(F\) is a tadpole graph. The graph \(H\) is a dumbbell graph where \(v\) and \(u\) are the two vertices of degree \(3\). In both cases, \(H\) is a basic factor of \(\Omega\). Since \(\omega(F^{\prime})>0\), we have \(\omega(H)=\omega(F)+\omega(F^{\prime})>\omega(F)\), which leads to a contradiction with \(F\) being a basic factor with the largest weight.

Thus, \(F^{\prime}\) is a basic factor which is not a cycle. Then, it contains exactly two vertices \(s,t\) of odd degree. Then, \(V_{\cap}\subseteq\{v,s,t\}\). We consider the graph \(H\) depending on the forms of \(F\) and \(F^{\prime}\), and the vertices in \(V_{\cap}\). There are \(5\) main cases.

1. \(F\) is a path.
2. \(F\) is a tadpole graph and \(\deg_{F}(u)=3\).
3. \(F\) is a tadpole graph and \(\deg_{F}(u)=1\).
4. \(F\) is a dumbbell graph.
5. \(F\) is a theta graph.

Recall that for two points \(x\) and \(y\), we use \(p_{xy}\), \(p^{\prime}_{xy}\) or \(p^{\prime\prime}_{xy}\) to denote a path with endpoints \(x\) and \(y\). We also use \(q_{xy^{3}}\) or \(q^{\prime}_{xy^{3}}\) to denote a tadpole graph where \(x\) is the vertex of degree \(1\) and \(y\) is the vertex of degree \(3\), and \(\theta_{xy}\) to denote a theta graph where \(x\) and \(y\) are the two points of degree \(3\). In the following Figures 2 to 12, we use hollow nodes to denote \(1\)-feasible vertices, solid nodes to denote \(2\)-feasible vertices, semisolid nodes to denote vertices that are possibly \(1\)-feasible or \(2\)-feasible, red-colored lines to denote paths in \(F\), and blue-colored lines to denote paths in \(F^{\prime}\).

**Case I:** \(F\) is a path. There are 4 subcases depending on the form of \(F^{\prime}\).

1. \(F\) and \(F^{\prime}\) are both paths. Then, \(V_{\cap}\subseteq\{v,s,t\}\). There are 5 subcases: \(V_{\cap}=\{v\}\), \(V_{\cap}=\{s\}\) or \(\{t\}\), \(V_{\cap}=\{v,s\}\) or \(\{v,t\}\), \(V_{\cap}=\{s,t\}\), and \(V_{\cap}=\{v,s,t\}\).

(a) \(V_{\cap}=\{v\}\). In this case, \(\deg_{H}(u)=\deg_{H}(s)=\deg_{H}(t)=1\), \(\deg_{H}(v)=3\), and \(\pi(v)=\{0,1,3\}\). The vertex \(v\) splits \(F^{\prime}\) into two paths \(p_{sv}\) and \(p_{vt}\). One can check that both \(p_{sv}\) and \(p_{vt}\) are basic factors of \(\Omega\), and neither of them contains \(u\). Since \(\omega(p_{sv})+\omega(p_{vt})=\omega(F^{\prime})>0\), at least one of them has positive weight. We are done.

(b) \(V_{\cap}=\{s\}\) or \(\{t\}\). These two cases are symmetric. We only consider the case that \(V_{\cap}=\{s\}\).
In this case, \(\deg_{H}(u)=\deg_{H}(v)=\deg_{H}(t)=1\), \(\deg_{H}(s)=3\), and \(\pi(s)=\{0,2,3\}\). The vertex \(s\) splits \(F\) into two paths \(p_{us}\) and \(p_{sv}\), and \(F^{\prime}=p_{st}\) is a path with endpoints \(s\) and \(t\). Consider the path \(p_{ut}=p_{us}\cup p_{st}\). Note that \(\deg_{p_{ut}}(s)=2\in\pi(s)\). Thus, \(p_{ut}\) is a basic factor of \(\Omega\). Then, \(\omega(F)\geq\omega(p_{ut})\) since \(F\) is a basic factor of \(\Omega\) with the largest weight \(\omega(F)\). Then, \[\omega(F)=\omega(p_{us})+\omega(p_{sv})\geq\omega(p_{us})+\omega(p_{st})=\omega(p_{ut}).\] Thus, \(\omega(p_{sv})\geq\omega(p_{st})>0\). Let \(p_{vt}=p_{sv}\cup p_{st}\) be the path with endpoints \(v\) and \(t\). Then, \[\omega(p_{vt})=\omega(p_{sv})+\omega(p_{st})>0.\] Since \(u\) is not in \(p_{vt}\), \(\deg_{p_{vt}}(u)=0\). Similar to the proof of \(p_{ut}\in\Omega\), we have \(p_{vt}\in\Omega\). Thus, the path \(p_{vt}\) is a basic factor of \(\Omega\) where \(\omega(p_{vt})>0\) and \(\deg_{p_{vt}}(u)=0\). We are done.

(c) \(V_{\cap}=\{v,s\}\) or \(\{v,t\}\). These two cases are symmetric. We only consider the case that \(V_{\cap}=\{v,s\}\).

(d) \(V_{\cap}=\{s,t\}\). In this case, \(\deg_{H}(u)=\deg_{H}(v)=1\), \(\deg_{H}(s)=\deg_{H}(t)=3\), and \(\pi(s)=\pi(t)=\{0,2,3\}\). The points \(s\) and \(t\) split \(F\) into three paths. Without loss of generality, we may assume that \(s\) is closer to \(u\) and \(t\) is closer to \(v\). Then, the three paths are \(p_{us}\), \(p_{st}\), and \(p_{tv}\), and \(F=p_{us}\cup p_{st}\cup p_{tv}\). Also, \(F^{\prime}\) is a path with endpoints \(s\) and \(t\), which is disjoint with \(p_{st}\). (See Figure 5.)
Consider the path \(p^{\prime}_{uv}=p_{us}\cup F^{\prime}\cup p_{tv}\). One can check that \(p^{\prime}_{uv}\) is a basic factor of \(\Omega\). Then, \(\omega(F)\geq\omega(p^{\prime}_{uv})\). Thus, \(\omega(p_{st})\geq\omega(F^{\prime})>0\). Consider the cycle \(F^{*}=F^{\prime}\cup p_{st}\). Also, one can check that \(F^{*}\) is a basic factor of \(\Omega\). Moreover, \(\omega(F^{*})=\omega(F^{\prime})+\omega(p_{st})>0\) and \(\deg_{F^{*}}(u)=0\). We are done.

(e) \(V_{\cap}=\{v,s,t\}\).

2. \(F\) is a path and \(F^{\prime}\) is a tadpole graph. Let \(C\) be the cycle part of \(F^{\prime}\). Without loss of generality, we may assume that \(s\) is the vertex of degree \(1\) and \(t\) is the vertex of degree \(3\) in \(F^{\prime}\).

(a) \(V_{\cap}=\{v\}\). In this case, \(\deg_{H}(u)=\deg_{H}(s)=1\), \(\deg_{H}(v)=\deg_{H}(t)=3\), \(\pi(v)=\{0,1,3\}\), and \(\pi(t)=\{0,1,3\}\) or \(\{0,2,3\}\). There are two subcases depending on whether the intersection point \(v\) appears in the path part or the cycle part of \(F^{\prime}\).

1. \(v\) appears in the path part. Note that for every \(x\in V_{C}\backslash\{t\}\), \(\deg_{H}(x)=2\), and \(\deg_{H}(t)=3\). We say such a cycle with exactly one vertex of degree \(3\) in \(H\) is a _dangling_ cycle in \(H\). Let \(e_{t}\) be the edge incident to \(t\) where \(e_{t}\notin E_{C}\). We call the vertex \(t\) the _connecting point_ of \(C\), and the edge \(e_{t}\) the _connecting bridge_ of \(C\). Consider the graph \(H^{\prime}=H\backslash C\). Notice that \(\deg_{H^{\prime}}(x)=\deg_{H}(x)\) for every \(x\in V_{H^{\prime}}\backslash\{t\}\) and \(\deg_{H^{\prime}}(t)=1\). Consider the instance \(\Omega_{H^{\prime}}=(H^{\prime},\pi_{H^{\prime}},\omega_{H^{\prime}})\) where \(\pi_{H^{\prime}}(x)=\pi_{H}(x)\) for every \(x\in V_{H^{\prime}}\backslash\{t\}\) and \(\pi_{H^{\prime}}(t)=\{0,1\}\), and \(\omega_{H^{\prime}}(e)=\omega(e)\) for every \(e\in E_{H^{\prime}}\backslash\{e_{t}\}\) and \(\omega_{H^{\prime}}(e_{t})=\omega(e_{t})+\omega(C)\). In other words, the instance \(\Omega_{H^{\prime}}\) is obtained from \(\Omega_{H}\) by contracting the dangling cycle \(C\) to its connecting point \(t\) and adding the total weight of \(C\) to its connecting bridge \(e_{t}\). Clearly, \(\Omega_{H^{\prime}}\) is a key instance and \(\omega_{H^{\prime}}(H^{\prime})=\omega(H^{\prime})+\omega(C)=\omega(H)>0\). For every factor \(K^{\prime}\in\Omega_{H^{\prime}}\), we can recover a factor \(K\in\Omega_{H}\) from \(K^{\prime}\) as follows: \(K=K^{\prime}\) if \(e_{t}\notin E_{K^{\prime}}\) and \(K=K^{\prime}\cup C\) if \(e_{t}\in E_{K^{\prime}}\). One can check that \(K\) is a factor of \(\Omega_{H}\), and \(\omega(K)=\omega_{H^{\prime}}(K^{\prime})\). If \(e_{t}\notin K\), then \(K^{\prime}=K\). Clearly, \(K^{\prime}\) is a basic factor of \(\Omega_{H^{\prime}}\) if and only if \(K\) is a basic factor of \(\Omega\). Now, suppose that \(e_{t}\in K\). Remember that \(\deg_{H^{\prime}}(t)=1\). Then, \(K^{\prime}\) is a path with \(t\) as an endpoint if and only if \(K=K^{\prime}\cup C\) is a tadpole graph with \(t\) as the vertex of degree \(3\), and \(K^{\prime}\) is a tadpole with \(t\) as the vertex of degree \(1\) if and only if \(K=K^{\prime}\cup C\) is a dumbbell graph. Thus, \(K^{\prime}\) is a basic factor of \(\Omega_{H^{\prime}}\) if and only if \(K\) is a basic factor of \(\Omega_{H}\). Notice that the instance \(\Omega_{H^{\prime}}\) has a similar structure to the instance \(\Omega_{H}\) in Case I.1.(a). By replacing the vertex \(t\) in Case I.1.(a) by the cycle \(C\) (and re-arranging the weights between the cycle \(C\) and its connecting bridge), one can check that the proof of Case I.1.(a) works here.
Note that after this replacement, the path \(p_{vt}\) in Case I.1.(a) becomes a tadpole graph \(q_{vt^{3}}\) which is still a basic factor.

2. \(v\) appears in the cycle part. Together with the point \(t\), the point \(v\) splits the cycle in \(F^{\prime}\) into two paths \(p_{vt}\) and \(p_{vt}^{\prime}\). Let \(p_{ts}\) denote the path in \(F^{\prime}\) with endpoints \(t\) and \(s\). Then, \(F^{\prime}=p_{vt}\cup p_{vt}^{\prime}\cup p_{ts}\). (See Figure 7.)

* If \(\pi(t)=\{0,1,3\}\), then the paths \(p_{vt}\), \(p_{vt}^{\prime}\) and \(p_{ts}\) are all basic factors of \(\Omega\). Moreover, the vertex \(u\) does not appear in any of these paths. Also, since \(\omega(F^{\prime})=\omega(p_{vt})+\omega(p_{vt}^{\prime})+\omega(p_{ts})>0\), there is at least one path with positive weight. We are done.

Figure 7: The graph \(H\) in Case I.2.(a)

* If \(\pi(t)=\{0,2,3\}\), then the tadpole graph \(q_{uv^{3}}=F\cup p_{vt}\cup p^{\prime}_{vt}\) is a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(q_{uv^{3}})\), we have \(\omega(p_{vt})+\omega(p^{\prime}_{vt})\leq 0.\) Without loss of generality, we assume that \(\omega(p^{\prime}_{vt})\leq 0\). Consider the path \(F^{*}=p_{vt}\cup p_{ts}\). We have \(\deg_{F^{*}}(u)=0\), and \(F^{*}\) is a basic factor of \(\Omega\). Since \[\omega(F^{\prime})=\omega(p_{vt})+\omega(p^{\prime}_{vt})+\omega(p_{ts})=\omega(F^{*})+\omega(p^{\prime}_{vt})>0\] and \(\omega(p^{\prime}_{vt})\leq 0\), we have \(\omega(F^{*})>0\). We are done.

(b) \(V_{\cap}=\{s\}\). Still, the cycle \(C\) in \(F^{\prime}\) is a dangling cycle with the connecting point \(t\). We can contract \(C\) to \(t\) and add the weight \(\omega(C)\) to its connecting bridge. Then, this case is similar to Case I.1.(b). By replacing the vertex \(t\) in Case I.1.(b) by the cycle \(C\), one can check that the proof of Case I.1.(b) works here. Note that the path \(p_{st}\) in Case I.1.(b) is replaced by a tadpole graph \(q_{st^{3}}\) which is still a basic factor.

(c) \(V_{\cap}=\{v,s\}\). In this case, \(\deg_{H}(u)=1\), \(\deg_{H}(v)=\deg_{H}(s)=\deg_{H}(t)=3\), \(\pi(v)=\{0,1,3\}\), \(\pi(s)=\{0,2,3\}\), and \(\pi(t)=\{0,1,3\}\) or \(\{0,2,3\}\). There are two subcases depending on whether the intersection point \(v\) appears in the path part or the cycle part of the tadpole graph \(F^{\prime}\). Note that there is only one way for the intersection point \(s\) to appear in the path \(F\), and \(s\) always splits \(F\) into two paths \(p_{us}\) and \(p_{sv}\).

* \(v\) appears in the path part. Still, the cycle \(C\) is a dangling cycle with the connecting point \(t\). This case is similar to Case I.1.(c). By replacing the vertex \(t\) in Case I.1.(c) by the cycle \(C\), one can check that the proof of Case I.1.(c) works here. Note that the path \(p_{vt}\) and the tadpole graph \(F^{*}=q_{tv^{3}}\) in Case I.1.(c) are replaced by a tadpole graph and a dumbbell graph respectively. Both are still basic factors.

* \(v\) appears in the cycle part.

* If \(\pi(t)=\{0,1,3\}\), then consider the path \(p_{vt}^{\prime\prime}=p_{sv}\cup p_{st}\). It is a basic factor of \(\Omega\) and \(\deg_{p_{vt}^{\prime\prime}}(u)=0\). Also, \(\omega(p_{vt}^{\prime\prime})=\omega(p_{sv})+\omega(p_{st})>0\). We are done.

* If \(\pi(t)=\{0,2,3\}\), then the tadpole graph \(q_{uv^{3}}=F\cup p_{vt}\cup p_{vt}^{\prime}\) is a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(q_{uv^{3}})\), we have \(\omega(p_{vt})+\omega(p_{vt}^{\prime})\leq 0\). Since \(\omega(F^{\prime})>0\), we have \(\omega(p_{st})>0\).
Consider the path \(p_{uv}^{\prime}=p_{us}\cup p_{st}\cup p_{vt}\). It is a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(p_{uv}^{\prime})\), we have \[\omega(p_{sv}) \geq\omega(p_{st})+\omega(p_{vt}).\] Similarly, consider the path \(p_{uv}^{\prime\prime}=p_{us}\cup p_{st}\cup p_{vt}^{\prime}\). We have \[\omega(p_{sv}) \geq\omega(p_{st})+\omega(p_{vt}^{\prime}).\] Sum up the above two inequalities, we have \[2\omega(p_{sv})\geq 2\omega(p_{st})+\omega(p_{vt})+\omega(p_{vt}^{\prime}) \geq 2\omega(p_{st})>0.\] Consider the theta graph \(F^{*}=p_{sv}\cup F^{\prime}\). Note that \(F^{*}\) is a basic factor of \(\Omega\) and \(\deg_{F^{*}}(u)=0\). Also, \(\omega(F^{*})=\omega(p_{sv})+\omega(F^{\prime})>0\). We are done. We are done with Case I.2 where \(F\) is a path and \(F^{\prime}\) is a tadpole graph. 1. \(F\) is a path and \(F^{\prime}\) is a dumbbell graph. Then, \(V_{\cap}=\{v\}\). Let \(C_{s}\) and \(C_{t}\) be the two cycles in \(F^{\prime}\) that contain vertices \(s\) and \(t\) respectively. Clearly, among \(V_{C_{s}}\) and \(V_{C_{t}}\), there exists at least one such that it does not contain the intersection point \(v\). Notice that vertices \(s\) and \(t\) are symmetric in this case. Without loss of generality, we may assume that \(v\notin V_{C_{s}}\). Then, \(V_{C_{s}}\cap V_{F}=\emptyset\). Thus, \(C_{s}\) is a dangling cycle with the connecting point \(s\). Then, this case is similar to Case I.2.(a). By replacing the vertex \(s\) in Case I.2.(a) by the cycle \(C_{s}\), one can check that the proof of Case I.2.(a) works here. 2. \(F\) is a path and \(F^{\prime}\) is a theta graph. Then, \(V_{\cap}=\{v\}\). In this case, \(\deg_{H}(u)=1\), \(\deg_{H}(v)=\deg_{H}(s)=\deg_{H}(t)=3\), and \(\pi(v)=\{0,1,3\}\). Since \(F^{\prime}\) is a theta graph and \(\deg_{F^{\prime}}(s)=\deg_{F^{\prime}}(t)=3\), without loss of generality, we may assume that \(\pi(s)=\{0,1,3\}\) and \(\pi(t)=\{0,2,3\}\). \(F^{\prime}\) consists of three paths \(p_{st}\), \(p_{st}^{\prime}\) and \(p_{st}^{\prime\prime}\). Without loss of generality, we may assume that \(v\) appears in the path \(p_{st}\) and it splits the path into two paths \(p_{sv}\) and \(p_{vt}\). (see Figure 9.) Consider the paths \(p_{sv}^{\prime}=p_{st}^{\prime}\cup p_{vt}\) and \(p_{sv}^{\prime\prime}=p_{st}^{\prime\prime}\cup p_{vt}\), and the tadpole graph \(q_{vs^{3}}=p_{sv}\cup p_{st}^{\prime}\cup p_{st}^{\prime\prime}\). It can be checked that \(p_{sv}^{\prime}\), \(p_{sv}^{\prime\prime}\) and \(q_{vs^{3}}\) are all basic factors of \(H\). The vertex \(u\) does not appear in any of them. Also, \[\omega(p_{sv})+\omega(p_{sv}^{\prime})+\omega(p_{sv}^{\prime\prime})+\omega(q _{vs^{3}})=2\omega(F^{\prime})>0.\] Then, among them at least one is positive. Thus, we can find a basic factor of \(\Omega\) satisfying the requirements. Figure 9: The graph \(H\) in Case I.4 We are done with Case I where \(F\) is a path. **Case II:**\(F\) is a tadpole graph and \(\deg_{F}(u)=3\). By assumption, \(\pi(u)=\{0,2,3\}\). Also, since \(\deg_{F}(v)=1\in\pi(v)\), \(v\) is \(1\)-feasible. Let \(C\) be the cycle part of \(F\). Consider \(\{s,t\}\cap V_{C}\). Here, we discuss possible cases depending on intersection vertices belonging to \(V_{C}\) instead of the entire set \(V_{\cap}\) of vertices points as in Case I. There are three subcases. 1. \(\{s,t\}\cap V_{C}=\emptyset\). In this case, \(\deg_{H}(x)=2\) for every \(x\in V_{C}\backslash\{u\}\). Thus, \(C\) is a dangling cycle with in connecting point \(u\) in \(H\). 
Then, the case is similar to Case I. By replacing the vertex \(u\) in Case I by the cycle \(C\), one can check that the proof of Case I works here. Note that after the above replacement, a path containing \(u\) as an endpoint in Case I becomes a tadpole graph containing the cycle \(C\), and a tadpole graph containing \(u\) as the vertex of degree \(1\) in Case I becomes a dumbbell graph. 2. \(\{s,t\}\cap V_{C}=\{s\}\) or \(\{t\}\). Without loss of generality, we may assume that \(s\in V_{C}\). Then, \(\deg_{H}(u)=\deg_{H}(s)=3\) and \(\pi(u)=\pi(s)=\{0,2,3\}\). If \(\omega(C)>0\), then we are done since \(C\) is a basic factor of \(\Omega\) and \(\deg_{C}(u)=2\). Thus, we may assume that \(\omega(C)\leq 0\). Vertices \(s\) and \(u\) split \(C\) into two paths \(p_{us}\) and \(p^{\prime}_{us}\). Since \(\omega(C)=\omega(p_{us})+\omega(p^{\prime}_{us})\leq 0\), among them at least one is non-positive. Without loss of generality, we assume that \(\omega(p_{us})\leq 0\). Consider the graph \(H^{\prime}=H\backslash p_{us}\). Note that \(V_{H^{\prime}}=(V_{H}\backslash V_{p_{us}})\cup\{u,s\}\). For every \(x\in V_{H^{\prime}}\backslash\{u,s\}\), we have \(\deg_{H^{\prime}}(x)=\deg_{H}(x)\in\pi(x)\) since \(H\) is a factor of \(\Omega\). Also, \(\deg_{H^{\prime}}(u)=2\in\pi(u)\) and \(\deg_{H^{\prime}}(s)=2\in\pi(s)\). Thus, \(H^{\prime}\) is a factor of \(\Omega\). Also, \(\omega(H^{\prime})=\omega(H)-\omega(p_{us})>0\). However, it is not clear whether \(H^{\prime}\) is a _basic_ factor of \(\Omega\). Consider the substance \(\Omega^{\prime}_{H}=(H^{\prime},\pi_{H^{\prime}},\omega)\) of \(\Omega\) induced by the factor \(H^{\prime}\). Since \(\omega(H^{\prime})>0\), by Lemma 5.6, there is a basic factor \(F^{*}\in\Omega_{H^{\prime}}\) such that \(\omega(F^{*})>0\). Then, \(\deg_{F^{*}}(u)\in\pi_{H^{\prime}}(u)=\{0,2\}\). Clearly, \(F^{*}\) is also a basic factor of \(\Omega\). We are done. Note that this proof works no matter whether \(F^{\prime}\) is a path or a tadpole graph, and whether \(v\in V_{\cap}\) or \(t\in V_{\cap}\). In fact, this proof also works when \(F\) is a dumbbell graph as long as \(s\) (or symmetrically \(t\)) is the only vertex in \(V_{F^{\prime}}\) appearing in the cycle \(C\) of \(F\) that contains the vertex \(u\). 3. \(\{s,t\}\subseteq V_{C}\). In this case, \(\deg_{H}(u)=\deg_{H}(s)=\deg_{H}(t)=3\) and \(\pi(u)=\pi(s)=\pi(t)=\{0,2,3\}\). Also, \(\deg_{F^{\prime}}(s)=\deg_{F^{\prime}}(t)=1\). Thus, \(F^{\prime}\) is a path with endpoints \(s\) and \(t\). Note that in this case, it is possible that \(v\in V_{F^{\prime}}\). If \(v\in V_{F^{\prime}}\), then \(\deg_{H}(v)=3\) and \(\pi(v)=\{0,1,3\}\); otherwise, \(\deg_{H}(v)=1\) and \(\pi(v)=\{0,1\}\). The points \(u\), \(s\), and \(t\) split \(C\) into three paths, \(p_{us}\), \(p_{st}\), \(p_{tu}\). Then, \(C=p_{us}\cup p_{st}\cup p_{tu}\). (See Figure 10.) If \(\omega(C)>0\), then we are done. Thus, we may assume that \(\omega(C)\leq 0\). Figure 10: The two possible forms of graph \(H\) in Case II.3. Consider the graph \(H_{1}=H\backslash p_{st}=(F\backslash p_{st})\cup F^{\prime}\). Similar to the above Case II.2, one can check that \(H_{1}\) is a factor of \(\Omega\). Also, \(H_{1}\) is a tadpole graph if \(\deg_{H}(v)=1\) or a theta graph if \(\deg_{H}(v)=3\). Thus, in both cases, \(H_{1}\) is a _basic_ factor of \(\Omega\). 
Since \(F\) is a basic factor of \(\Omega\) with the largest weight, we have \[\omega(F)\geq\omega(H_{1})=\omega(F)-\omega(p_{st})+\omega(F^{\prime}).\] Thus, \(\omega(p_{st})\geq\omega(F^{\prime})>0\). Since \(\omega(C)=\omega(p_{st})+\omega(p_{us})+\omega(p_{tu})\leq 0\), we have \(\omega(p_{us})+\omega(p_{tu})<0\). Without loss of generality, we may assume that \(\omega(p_{us})<0\). Then, consider the graph \(H_{2}=H\backslash p_{us}\). Still, one can check that \(H_{2}\) is a factor of \(\Omega\), and \(\deg_{H_{2}}(u)=2\). Also, \(H_{2}\) is a tadpole graph if \(\deg_{H}(v)=1\), or a theta graph if \(\deg_{H}(v)=3\). Thus, \(H_{2}\) is a basic factor of \(\Omega\). Moreover, \(\omega(H_{2})=\omega(H)-\omega(p_{us})>0\). We are done.

**Case III:** \(F\) is a tadpole graph and \(\deg_{F}(v)=3\).

In this case, \(\deg_{F}(u)=1\), \(\pi(u)=\{0,1\}\), \(\deg_{F}(v)=3\), and \(\pi(v)=\{0,1,3\}\) or \(\{0,2,3\}\). Recall that \(\deg_{H}(u)=\deg_{F}(u)=1\), and \(u\notin V_{\cap}\). Let \(C\) be the cycle part of \(F\). Still consider \(\{s,t\}\cap V_{C}\). There are three subcases.

1. \(\{s,t\}\cap V_{C}=\emptyset\). In this case, \(\deg_{H}(x)=2\) for every \(x\in V_{C}\backslash\{v\}\). Thus, in the graph \(H\), the cycle \(C\) is a dangling cycle with the connecting point \(v\). Then, the case is similar to Case I. For a graph \(H\) in Case I where \(\deg_{H}(v)=1\) (i.e., \(v\notin V_{\cap}\)), by replacing the vertex \(v\) by the cycle \(C\), one can check that the proof of Case I works here.

2. \(\{s,t\}\cap V_{C}=\{s\}\) or \(\{t\}\). Without loss of generality, we may assume that \(s\in V_{C}\). Then \(\deg_{H}(s)=3\) and \(\pi(s)=\{0,2,3\}\). Vertices \(s\) and \(v\) split \(C\) into two paths \(p_{vs}\) and \(p^{\prime}_{vs}\). Let \(p_{uv}\) be the path part in the tadpole graph \(F\). There are two subcases depending on whether \(t\in V_{F}\). Since \(t\notin V_{C}\), \(t\in V_{F}\) implies \(t\in V_{p_{uv}}\).

(a) \(t\notin V_{p_{uv}}\).

* If \(\pi(v)=\{0,2,3\}\), then the cycle \(C\) is a basic factor of \(\Omega\). Consider \(H_{1}=H\backslash p_{vs}\). Note that \(\deg_{H_{1}}(u)=1\in\pi(u)\), \(\deg_{H_{1}}(v)=2\in\pi(v)\), \(\deg_{H_{1}}(s)=2\in\pi(s)\), and \(\deg_{H_{1}}(t)=\deg_{H}(t)\in\pi(t)\). One can check that \(H_{1}\) is a factor of \(\Omega\). Also, \(H_{1}\) is either a path with endpoints \(u\) and \(s\) if \(F^{\prime}\) is a path, or a tadpole graph with \(u\) being the vertex of degree \(1\) and \(t\) being the vertex of degree \(3\) if \(F^{\prime}\) is a tadpole graph. Thus, \(H_{1}\) is a basic factor of \(\Omega\). Since \(F\) is a basic factor with the largest weight, \[\omega(F)\geq\omega(H_{1})=\omega(F)-\omega(p_{vs})+\omega(F^{\prime}).\] Thus, \(\omega(p_{vs})\geq\omega(F^{\prime})>0\). Similarly, by considering \(H_{2}=H\backslash p^{\prime}_{vs}\), we have \(\omega(p^{\prime}_{vs})>0\). Then, \(\omega(C)=\omega(p_{vs})+\omega(p^{\prime}_{vs})>0\). Thus, \(C\) is a basic factor of \(\Omega\) with positive weight and \(\deg_{C}(u)=0\).

(b) \(t\in V_{p_{uv}}\). In this case, \(\deg_{H}(t)=3\) and \(\pi(t)=\{0,2,3\}\). \(F^{\prime}\) is a path with endpoints \(s\) and \(t\). The vertex \(t\) splits \(p_{uv}\) into two parts \(p_{ut}\) and \(p_{tv}\) (see Figure 12).

* If \(\pi(v)=\{0,1,3\}\), then \(p_{uv}\) is a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(p_{uv})\), we have \(\omega(C)\geq 0\).
Consider the path \(p^{\prime}_{uv}=p_{ut}\cup F^{\prime}\cup p_{vs}\). It is also a basic factor of \(\Omega\). Still, since \(\omega(F)\geq\omega(p^{\prime}_{uv})\), we have \[\omega(p_{tv})\geq\omega(F^{\prime})+\omega(p_{vs}).\] Similarly, by considering the path \(p^{\prime\prime}_{uv}=p_{ut}\cup F^{\prime}\cup p^{\prime}_{vs}\), we have \[\omega(p_{tv})\geq\omega(F^{\prime})+\omega(p^{\prime}_{vs}).\] Thus, \(2\omega(p_{tv})\geq 2\omega(F^{\prime})+\omega(p_{vs})+\omega(p^{\prime}_{vs})=2\omega(F^{\prime})+\omega(C)>0\). Consider the theta graph \(F^{*}=p_{tv}\cup C\cup F^{\prime}\). Clearly, \(\omega(F^{*})>0\). Then, \(F^{*}\) is a basic factor of \(\Omega\) with \(\deg_{F^{*}}(u)=0\). We are done.

* If \(\pi(v)=\{0,2,3\}\), then \(C\) is a basic factor of \(\Omega\). Consider \(H_{1}=H\backslash p_{vs}\). It is a tadpole graph with the vertex \(u\) of degree \(1\) and the vertex \(t\) of degree \(3\). Note that \(\deg_{H_{1}}(u)=1\in\pi(u)\), \(\deg_{H_{1}}(v)=2\in\pi(v)\), \(\deg_{H_{1}}(s)=2\in\pi(s)\), and \(\deg_{H_{1}}(t)=3\in\pi(t)\). One can check that \(H_{1}\) is a basic factor of \(\Omega\). Since \(F\) is a basic factor with the largest weight, \[\omega(F)\geq\omega(H_{1})=\omega(F)-\omega(p_{vs})+\omega(F^{\prime}).\] Thus, \(\omega(p_{vs})\geq\omega(F^{\prime})>0\). Similarly, by considering \(H_{2}=H\backslash p^{\prime}_{vs}\), we have \(\omega(p^{\prime}_{vs})>0\). Then, \(\omega(C)=\omega(p_{vs})+\omega(p^{\prime}_{vs})>0\). Thus, \(C\) is a basic factor of \(\Omega\) with positive weight and \(\deg_{C}(u)=0\).

3. \(\{s,t\}\subseteq V_{C}\).
In this case, \(\deg_{H}(u)=1\), \(\pi(u)=\{0,1\}\), \(\deg_{H}(v)=\deg_{H}(s)=\deg_{H}(t)=3\), and \(\pi(s)=\pi(t)=\{0,2,3\}\). Also, \(\deg_{F^{\prime}}(s)=\deg_{F^{\prime}}(t)=1\). Thus, \(F^{\prime}\) is a path with endpoints \(s\) and \(t\). Let \(p_{st}\subseteq C\) be the path with endpoints \(t\) and \(s\) such that \(v\notin V_{p_{st}}\). Consider the tadpole graph \(q_{uv^{3}}=(F\backslash p_{st})\cup F^{\prime}\). In other words, \(q_{uv^{3}}\) is the tadpole graph obtained from \(F\) by replacing the path \(p_{st}\) by \(F^{\prime}\). One can check that \(q_{uv^{3}}\) is also a basic factor of \(\Omega\). Since \(F\) is a basic factor of \(\Omega\) with the largest weight, \[\omega(F)\geq\omega(q_{uv^{3}})=\omega(F)-\omega(p_{st})+\omega(F^{\prime}).\] Thus, \(\omega(p_{st})\geq\omega(F^{\prime})>0\). Consider the cycle \(C^{\prime}=p_{st}\cup F^{\prime}\). Note that it is a basic factor of \(\Omega\). Also, \(\deg_{C^{\prime}}(u)=0\) and \(\omega(C^{\prime})=\omega(p_{st})+\omega(F^{\prime})>0\). We are done. **Case IV:**\(F\) is a dumbbell graph. Let \(C_{u}\) and \(C_{v}\) be the two cycles of \(F\) containing vertices \(u\) and \(v\) respectively. If \(\{s,t\}\cap C_{v}=\emptyset\), then \(C_{v}\) is a dangling cycle in \(H\) with the connecting point \(v\). This case is similar to Case II. For a graph \(H\) in Case II where \(\deg_{H}(v)=1\) (i.e., \(v\notin V_{\cap}\)), by replacing the vertex \(v\) by the cycle \(C_{v}\), one can check that the proof of Case II works here. If \(\{s,t\}\cap C_{u}=\emptyset\), then \(C_{u}\) is a dangling cycle in \(H\) with the connecting point \(u\). This case is similar to Case III. By replacing the vertex \(u\) in Case III by the cycle \(C_{u}\), one can check that the proof of Case III works here. If \(\{s,t\}\cap C_{u}\) and \(\{s,t\}\cap C_{v}\) are both non-empty, then without loss of generality, we may assume that \(s\in C_{u}\) and \(t\in C_{v}\). Thus, \(F^{\prime}\) is a path with endpoints \(s\) and \(t\). As we have mentioned in Case II.2, one can check that the proof of Case II.2 works here. **Case V:**\(F\) is a theta graph. In this case, \(\deg_{H}(u)=\deg_{H}(v)=3\). By assumption, \(\pi(u)=\{0,2,3\}\). Also, by the definition of theta graphs, \(\pi(v)=\{0,1,3\}\). Then, \(V_{\cap}\subseteq\{s,t\}\). There are two subcases. 1. \(V_{\cap}=\{s\}\) or \(\{t\}\). Without loss of generality, we assume that \(V_{\cap}=\{s\}\). Then, \(\deg_{H}(s)=3\) and \(\pi(s)=\{0,2,3\}\). The theta graph \(F\) consists of three paths \(p_{uv}\), \(p^{\prime}_{uv}\) and \(p^{\prime\prime}_{uv}\). Without loss of generality, we may assume that \(s\) appears in the path \(p_{uv}\) and it splits \(p_{uv}\) into two paths \(p_{us}\) and \(p_{sv}\). Consider the paths \(p_{sv}\), \(p^{\prime}_{sv}=p^{\prime}_{uv}\cup p_{su}\) and \(p^{\prime\prime}_{sv}=p^{\prime\prime}_{uv}\cup p_{su}\), and the tadpole graph \(q_{sv^{3}}=p_{sv}\cup p^{\prime}_{uv}\cup p^{\prime\prime}_{uv}\). They are not factors of \(H\) since the degree of \(s\) is \(1\) in all these four graphs. However, by taking the union of \(F^{\prime}\) with any one of them, we can get a basic factor of \(H\) and the degree of \(u\) in it is even. Since \[\omega(p_{sv})+\omega(p^{\prime}_{sv})+\omega(p^{\prime\prime}_{sv})+\omega(q_ {sv^{3}})=2\omega(F^{\prime})>0,\] among them at least one is positive. Also, \(\omega(F^{\prime})>0\). Then, by taking the union of it with \(F^{\prime}\), we can find a basic factor of \(\Omega\) satisfying the requirements. 2. \(V_{\cap}=\{s,t\}\). 
In this case, \(F^{\prime}\) is a path with endpoints \(s\) and \(t\). Since \(F\) is a theta graph which is \(2\)-connected, we can find a path \(p_{st}\subseteq F\) such that \(v\notin V_{p_{st}}\). If \(u\notin V_{p_{st}}\), then one can check that the theta graph \(H^{\prime}=(F\backslash p_{st})\cup F^{\prime}\) is also a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(H^{\prime})\), we have \(\omega(p_{st})\geq\omega(F^{\prime})>0\). Then, the cycle \(C=p_{st}\cup F^{\prime}\) is a basic factor of \(\Omega\) where \(\omega(C)=\omega(p_{st})+\omega(F^{\prime})>0\) and \(\deg_{C}(u)=0\). We are done. Otherwise, \(u\in V_{p_{st}}\). The vertex \(u\) splits \(p_{st}\) into two paths \(p_{us}\) and \(p_{ut}\). Consider the theta graph \(H^{\prime}=(F\backslash p_{us})\cup F^{\prime}\), where \(\deg_{H^{\prime}}(v)=\deg_{H^{\prime}}(t)=3\), \(\pi(v)=\{0,1,3\}\), and \(\pi(t)=\{0,2,3\}\). One can check that \(H^{\prime}\) is a basic factor of \(\Omega\). Since \(\omega(F)\geq\omega(H^{\prime})=\omega(F)-\omega(p_{us})+\omega(F^{\prime})\), we have \(\omega(p_{us})\geq\omega(F^{\prime})>0\). Similarly, by considering the theta graph \(H^{\prime\prime}=(F\backslash p_{ut})\cup F^{\prime}\), we have \(\omega(p_{ut})\geq\omega(F^{\prime})>0\). Then the cycle \(C=p_{st}\cup F^{\prime}=p_{us}\cup p_{ut}\cup F^{\prime}\) is a basic factor of \(\Omega\) where \(\omega(C)=\omega(p_{us})+\omega(p_{ut})+\omega(F^{\prime})>0\) and \(\deg_{C}(u)=2\). We are done. We have taken care of all possible cases and finished the proof. Combining Lemmas 5.6 and 5.7, we finished the proof of Theorem 4.8. ## Appendix A \(\Delta\)-Matroids and Matching Realizability A \(\Delta\)-matroid is a family of sets obeying an axiom generalizing the matroid exchange axiom. Formally, a pair \(M=(U,\mathscr{F})\) is a \(\Delta\)-matroid if \(U\) is a finite set and \(\mathscr{F}\) is a collection of subsets of \(U\) satisfying the following: for any \(X,Y\in\mathscr{F}\) and any \(u\in X\Delta Y\) in the symmetric difference of \(X\) and \(Y\), there exits a \(v\in X\Delta Y\) such that \(X\Delta\{u,v\}\) belongs to \(\mathscr{F}\)[1]. A \(\Delta\)-matroid is _symmetric_ if, for every pair of \(X,Y\subseteq U\) with \(|X|=|Y|\), we have \(X\in\mathscr{F}\) if and only if \(Y\in\mathscr{F}\). A \(\Delta\)-matroid is _even_ if for every pair of \(X,Y\subseteq U\), \(|X|\equiv|Y|\mod 2\). Suppose that \(U=\{u_{1},u_{2},\ldots,u_{n}\}\). A subset \(V\subseteq U\) can be encoded by a binary string \(\alpha_{V}\) of \(n\)-bits where the \(i\)-th bit of \(\alpha_{V}\) is \(1\) if \(u_{i}\in V\) and \(0\) if \(u_{i}\notin V\). Then, a \(\Delta\)-matroid \(M=(U,\mathscr{F})\) can be represented by a relation \(R_{M}\) of arity \(|U|\) which consists of binary strings that encode all subsets in \(\mathscr{F}\). Such a representation is unique up to a permutation of variables of the relation. A degree constraint \(D\) of arity \(n\) can be viewed as an \(n\)-ary symmetric relation which consists of binary strings with the Hamming weight \(d\) for every \(d\in D\). By the definition of \(\Delta\)-matroids, it is easy to check that a degree constraint \(D\) (as a symmetric relation) represents a \(\Delta\)-matroid if and only if \(D\) has all gaps of length at most \(1\). The definition of matching realizability (Definition 1.1) can be extended to a relation \(R\) of arity \(n\) by requiring the set \(U\) of \(n\) vertices in a matching gadget to represent the \(n\) variables of \(R\). 
If \(R\) is realizable by a matching gadget \(G=(U\cup V,E)\), then for every \(\alpha\in\{0,1\}^{n}\), \(\alpha\in R\) if and only if there is a matching \(F=(V_{F},E_{F})\) of \(G\) such that \(V_{F}\cap U\) is exactly the subset of \(U\) encoded by \(\alpha\) (i.e., for every \(u_{i}\in U\), \(u_{i}\in V_{F}\) if and only if \(\alpha_{i}=1\)), and for every \(v\in V\) where \(\pi(v)=\{1\}\), \(v\in V_{F}\). Note that the matching realizability of a relation is invariant under a permutation of its variables. We say that a \(\Delta\)-matroid is matching realizable if the relation representing it is matching realizable.4 Footnote 4: This definition of matching realizability for \(\Delta\)-matroids is different with the one that is usually used for even \(\Delta\)-matroids [1, 1, 1], in which the gadget is only allowed to use the constraint \(\{1\}\) for perfect matchings, and hence the resulting \(\Delta\)-matroid must be even. **Lemma A.1**.: _If a \(\Delta\)-matroid \(M=(U,\mathscr{F})\) is matching realizable, then there is a graph \(G=(U\cup W\cup X,E)\) where \(\deg(v)=1\) for every \(v\in U\cup X\) and there are no edges between vertices in \(U\cup X\), such that for every \(V\subseteq U\), \(V\in\mathscr{F}\) if and only if there exists \(X_{1}\subseteq X\) such that the induced subgraph of \(G\) induced by the vertex set \(V\cup W\cup X_{1}\) (denoted by \(G(V\cup W\cup X_{1})\)) has a perfect matching._ _With a slight abuse of notation, we also say the graph \(G=(U\cup W\cup X,E)\) realizes \(M\)._ Proof.: Let \(G=(U\cup W,E)\) be the matching gadget realizing \(M=(U,\mathscr{F})\). We construct the following graph \(G^{\prime}\) from \(G\). For every \(x\in W\) with \(\pi(x)=\{0,1\}\), we add a new edge incident to it. As the edge is added, a new vertex of degree of \(1\) is also added to the graph. We denote these new vertices by \(X\) and these new edges by \(E_{X}\). Then, one can check that the graph \(G^{\prime}=(U\cup W\cup X,E\cup E_{X})\) satisfies the requirements. The following result generalizes Lemma A.1 of [13]. **Lemma A.2**.: _Suppose that \(M=(U,\mathscr{F})\) is a matching realizable \(\Delta\)-matroid, and \(V_{1},V_{2}\in\mathscr{F}\). Then, \(V_{1}\Delta V_{2}\) can be partitioned into single variables \(S_{1},\ldots,S_{k}\) and pairs of variables \(P_{1},\ldots,P_{\ell}\) such that for every \(P=S_{i_{1}}\cup\cdots\cup S_{i_{r}}\cup P_{j_{1}}\cup\cdots\cup P_{j_{t}}\)\((\{i_{1},\ldots,i_{r}\}\subseteq[k],\{j_{1},\ldots,j_{t}\}\subseteq[\ell])\), \(V_{1}\Delta P\in\mathscr{F}\) and \(V_{2}\Delta P\in\mathscr{F}\)._ Proof.: By Lemma A.1, there is a graph \(G=(U\cup W\cup X,E)\) realizing \(M\). Since \(V_{1},V_{2}\in\mathscr{F}\), there exists \(X_{1}\subseteq X\) and \(X_{2}\subseteq X\) such that the induced subgraph \(G(V_{1}\cup W\cup X_{1})\) has a perfect matching \(M_{1}\), and \(G(V_{2}\cup W\cup X_{2})\) has a perfect matching \(M_{2}\). Let \(E_{1}\) and \(E_{2}\) be the edge sets of \(M_{1}\) and \(M_{2}\) respectively. Consider the graph \(G^{\prime}=(U\cup W\cup X,E_{1}\Delta E_{2})\). Since \(E_{1}\) covers each vertex in \(V_{1}\cup W\cup X_{1}\) exactly once, and \(E_{2}\) covers each vertex in \(V_{2}\cup W\cup X_{2}\) exactly once, for every \(v\in(V_{1}\cap V_{2})\cup W\cup(X_{1}\cap X_{2})\) in \(G^{\prime}\), \(\deg(v)=0\) or \(2\), and for every \(v\in(V_{1}\Delta V_{2})\cup(X_{1}\Delta X_{2})\) in \(G^{\prime}\), \(\deg(v)=1\). 
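The observation above, that a degree constraint (viewed as a symmetric relation) represents a \(\Delta\)-matroid exactly when all of its gaps have length at most \(1\), is easy to verify by brute force for small arities. The following short Python sketch is ours, added purely for illustration (the function name `delta_matroid` is not from the paper); it enumerates the induced set family and checks the exchange axiom directly, which is only feasible for very small \(n\).

```python
from itertools import combinations

def delta_matroid(n, D):
    """Check the Delta-matroid exchange axiom for the symmetric family
    F = { S subset of {0,...,n-1} : |S| in D } induced by a degree constraint D."""
    F = [frozenset(S) for k in D for S in combinations(range(n), k)]
    Fset = set(F)
    for X in F:
        for Y in F:
            for u in X ^ Y:
                # need some v in the symmetric difference with X xor {u,v} feasible
                if not any((X ^ {u, v}) in Fset for v in X ^ Y):
                    return False
    return True

print(delta_matroid(4, {1, 2, 3}))   # gaps of length 0  -> True
print(delta_matroid(4, {0, 2, 4}))   # gaps of length 1  -> True
print(delta_matroid(4, {0, 3}))      # a gap of length 2 -> False
```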
Thus, \(G^{\prime}\) is a union of induced cycles and paths, where each path connects two vertices in \((V_{1}\Delta V_{2})\cup(X_{1}\Delta X_{2})\). For every vertex \(u\in V_{1}\Delta V_{2}\), if it is connected to another vertex \(v\in V_{1}\Delta V_{2}\) by a path in \(G^{\prime}\), then we make \(\{u,v\}\) a pair. Otherwise (i.e., \(u\) is connected to a vertex in \(X_{1}\Delta X_{2}\) by a path in \(G^{\prime}\)), we make \(u\) a single variable. Then, \(V_{1}\Delta V_{2}\) can be partitioned into single variables \(S_{1},\ldots,S_{k}\) and pairs \(P_{1},\ldots,P_{\ell}\) according to the paths in \(G^{\prime}\). Moreover, each path in \(G^{\prime}\) is an alternating path with respect to both matchings \(M_{1}\) and \(M_{2}\). Pick a union of such paths (note that they are edge-disjoint). Suppose that there are \(r\) many paths that connect single variables in \(S_{i_{1}},\ldots,S_{i_{r}}\) with variables in \(X\), and \(t\) many paths that connect pairs \(P_{j_{1}},\ldots,P_{j_{t}}\). Let \(P=S_{i_{1}}\cup\cdots\cup S_{i_{r}}\cup P_{j_{1}}\cup\cdots\cup P_{j_{t}}\). After altering the matchings \(M_{1}\) and \(M_{2}\) according to these \(t\) many alternating paths, we obtain two new matchings that cover exactly \((V_{1}\Delta P)\cup W\cup X_{1}^{\prime}\) for some \(X_{1}^{\prime}\subseteq X\) and \((V_{2}\Delta P)\cup W\cup X_{2}^{\prime}\) for some \(X_{2}^{\prime}\subseteq X\) respectively. Thus, \(V_{1}\Delta P\in\mathscr{F}\) and \(V_{2}\Delta P\in\mathscr{F}\). **Theorem A.3**.: _A degree constraint \(D\) of gaps of length at most \(1\) is matching realizable if and only if all its gaps are of the same length \(0\) or \(1\)._ Proof.: By the gadget constructed in the proof of [11, Theorem 2], if a degree constraint has all gaps of length \(1\) then it is matching realizable.5 We give the following gadget (Figure 13) to realize a degree constraint \(D\) with all gaps of length \(0\), which generalizes the gadget in [10]. Suppose that \(D=\{p,p+1,\ldots,p+r\}\) of arity \(n\) where \(n\geq p+r\geq p\geq 0\). Consider the following graph \(G=(U\cup V,E)\): \(U\) consists of \(n\) vertices of degree \(1\), and \(V\) consists of two parts \(V_{1}\) with \(|V_{1}|=n\) and \(V_{2}\) with \(|V_{2}|=n-p\); the induced subgraph \(G(V)\) of \(G\) induced by \(V\) is a complete bipartite graph between \(V_{1}\) and \(V_{2}\), and the induced subgraph \(G(U\cup V_{1})\) of \(G\) induced by \(U\cup V_{1}\) is a bipartite perfect matching between \(U\) and \(V_{1}\). Every vertex in \(V_{1}\) is labeled by the constraint \(\{1\}\). There are \(r\) vertices in \(V_{2}\) labeled by \(\{0,1\}\) and the other \(n-p-r\) vertices in \(V_{2}\) labeled by \(\{1\}\). One can check that this gadget realizes \(D\). Footnote 5: We remark that [11] includes gadgets for other types of degree constraints, including type-1 and type-2, but only under a more general notion of gadget constructions that involve edges and triangles. The gadget that only involves edges is a matching gadget defined in this paper. For the other direction, without loss of generality, we may assume that \(\{p,p+1,p+3\}\subseteq D\) and \(p+2\notin D\). Since \(D\) has gaps of length at most \(1\), it can be associated with a symmetric \(\Delta\)-matroid \(M=(U,\mathscr{F})\). Then, there is \(V_{1}\in\mathscr{F}\) with \(|V_{1}|=p\) and \(V_{2}\in\mathscr{F}\) with \(|V_{2}|=p+3\). Since \(M\) is symmetric, we may pick \(V_{2}=V_{1}\cup\{v_{1},v_{2},v_{3}\}\) for some \(\{v_{1},v_{2},v_{3}\}\cap V_{1}=\emptyset\). 
Let \(S=V_{1}\Delta V_{2}=\{v_{1},v_{2},v_{3}\}\). By Lemma A.2, \(S\) can be partitioned into single variables and/or pairs of variables such that for any union \(P\) of them, \(V_{2}\backslash P\in\mathscr{F}\). Since \(|S|=3\), there exists at least a single variable \(x_{i}\) in the partition of \(S\) such that \(V_{2}\backslash\{v_{i}\}\in\mathscr{F}\). Note that \(|V_{2}\backslash\{v_{i}\}|=p+2\). Thus, \(p+2\in D\). A contradiction.
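As an illustration of the gadget used in the first direction of the proof of Theorem A.3, the following brute-force sketch (ours, for illustration; `realizable_sizes` is a hypothetical helper, and enumerating all edge subsets is only practical for very small \(n\)) constructs the graph \(G=(U\cup V_{1}\cup V_{2},E)\) described above and recovers the set of Hamming weights realizable by matchings covering every vertex labelled \(\{1\}\); this set should coincide with \(D=\{p,\ldots,p+r\}\).

```python
from itertools import combinations

def realizable_sizes(n, p, r):
    # Gadget from the proof of Theorem A.3 for D = {p, ..., p+r} of arity n.
    U  = [("u", i) for i in range(n)]
    V1 = [("v1", i) for i in range(n)]          # all labelled {1}
    V2 = [("v2", i) for i in range(n - p)]      # first r labelled {0,1}, the rest {1}
    forced = set(V1) | set(V2[r:])              # vertices every matching must cover
    edges = [(U[i], V1[i]) for i in range(n)] + [(a, b) for a in V1 for b in V2]
    sizes = set()
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            covered = [x for e in M for x in e]
            if len(covered) != len(set(covered)):   # two edges share a vertex: not a matching
                continue
            covered = set(covered)
            if not forced <= covered:               # a {1}-labelled vertex is left unmatched
                continue
            sizes.add(len(covered & set(U)))        # Hamming weight realized on U
    return sorted(sizes)

# Example: n = 4, p = 1, r = 2 should recover D = {1, 2, 3}.
print(realizable_sizes(4, 1, 2))
```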
2310.07642
Cosmology from LOFAR Two-metre Sky Survey Data Release 2: Cross-correlation with the cosmic microwave background
We combine the LOw-Frequency ARray (LOFAR) Two-metre Sky Survey (LoTSS) second data release (DR2) catalogue with gravitational lensing maps from the Cosmic Microwave Background (CMB) to place constraints on the bias evolution of LoTSS radio galaxies, and on the amplitude of matter perturbations. We construct a flux-limited catalogue, and analyse its harmonic-space cross-correlation with CMB lensing maps from Planck, $C_\ell^{g\kappa}$, as well as its auto-correlation, $C_\ell^{gg}$. We explore the models describing the redshift evolution of the large-scale radio galaxy bias, discriminating between them through the combination of both $C_\ell^{g\kappa}$ and $C_\ell^{gg}$. Fixing the bias evolution, we then use these data to place constraints on the amplitude of large scale density fluctuations. We report the significance of the $C_\ell^{g\kappa}$ signal at a level of $26.6\sigma$. We determine that a linear bias evolution of the form $b_g(z) = b_{g,D} / D(z)$, where $D(z)$ is the growth rate, is able to provide a good description of the data, and measure $b_{g,D} = 1.41 \pm 0.06$ for a sample flux-limited at $1.5\,{\rm mJy}$, for scales $\ell < 250$ for $C_\ell^{gg}$, and $\ell < 500$ for $C_\ell^{g\kappa}$. At the sample's median redshift, we obtain $b(z = 0.82) = 2.34 \pm 0.10$. Using $\sigma_8$ as a free parameter, while keeping other cosmological parameters fixed to the Planck values, we find fluctuations of $\sigma_8 = 0.75^{+0.05}_{-0.04}$. The result is in agreement with weak lensing surveys, and at $1\sigma$ difference with Planck CMB constraints. We also attempt to detect the late-time integrated Sachs-Wolfe effect with LOFAR, but with the current sky coverage, the cross-correlation with CMB temperature maps is consistent with zero. Our results are an important step towards constraining cosmology with radio continuum surveys from LOFAR and other future large radio surveys.
S. J. Nakoneczny, D. Alonso, M. Bilicki, D. J. Schwarz, C. L. Hale, A. Pollo, C. Heneka, P. Tiwari, J. Zheng, M. Brüggen, M. J. Jarvis, T. W. Shimwell
2023-10-11T16:38:23Z
http://arxiv.org/abs/2310.07642v3
Cosmology from LOFAR Two-metre Sky Survey Data Release 2: Cross-correlation with Cosmic Microwave Background

###### Abstract

Context:

Aims: We combine the LOw-Frequency ARray (LOFAR) Two-metre Sky Survey (LoTSS) second data release (DR2) catalogue with gravitational lensing maps from the Cosmic Microwave Background (CMB) to place constraints on the bias evolution of LoTSS-detected radio galaxies, and on the amplitude of matter perturbations.

Methods: We construct a flux-limited catalogue from LoTSS DR2, and analyse its harmonic-space cross-correlation with CMB lensing maps from Planck, \(C_{\ell}^{g\kappa}\), as well as its auto-correlation, \(C_{\ell}^{gg}\). We explore the models describing the redshift evolution of the large-scale radio galaxy bias, discriminating between them through the combination of both \(C_{\ell}^{g\kappa}\) and \(C_{\ell}^{gg}\). Fixing the bias evolution, we then use these data to place constraints on the amplitude of large scale density fluctuations, parametrised by \(\sigma_{8}\).

Results: We report the significance of the \(C_{\ell}^{g\kappa}\) signal at a level of 26.6\(\sigma\). We determine that a linear bias evolution of the form \(b_{g}(z)=b_{g,D}/D(z)\), where \(D(z)\) is the growth rate, is able to provide a good description of the data, and measure \(b_{g,D}=1.41\pm 0.06\) for a sample flux-limited at 1.5 mJy, for scales \(\ell<250\) for \(C_{\ell}^{gg}\), and \(\ell<500\) for \(C_{\ell}^{g\kappa}\). At the sample's median redshift, we obtain \(b(z=0.82)=2.34\pm 0.10\). Using \(\sigma_{8}\) as a free parameter, while keeping other cosmological parameters fixed to the Planck values, we find fluctuations of \(\sigma_{8}=0.75^{+0.05}_{-0.04}\). The result is in agreement with weak lensing surveys, and at 1\(\sigma\) difference with Planck CMB constraints. We also attempt to detect the late-time integrated Sachs-Wolfe effect with LOFAR data, but with the current sky coverage, the cross-correlation with CMB temperature maps is consistent with zero. Our results are an important step towards constraining cosmology with radio continuum surveys from LOFAR and other future large radio surveys.

Conclusions:

## 1 Introduction

One of the main current goals of observational cosmology is constraining the history of structure growth as a way to pin down the different components that dominate the background expansion of the Universe at late times (Huterer, 2022). To do so, one must investigate probes of structure that, ideally, satisfy a number of criteria. Among these:

1. They should cover a large enough patch of the Universe to reconstruct the density field on cosmological scales.
2. They should also be easy to connect with fundamental quantities, such as the matter overdensity.
3. They should contain redshift information, allowing for an accurate reconstruction of the structure growth history.

Unfortunately, virtually no cosmological probe, taken alone, is able to fulfill these requirements. Although weak gravitational lensing, measured from its effect on the shapes of background galaxies, or on the cosmic microwave background (CMB) fluctuations (Bartelmann & Schneider, 2001) is an unbiased tracer of the total matter fluctuations, it has significantly lower raw statistical power and poorer ability to trace redshift evolution than measurements of galaxy clustering. The latter, in turn, is hampered by the problem of galaxy bias: the complicated relation between galaxy and matter overdensities.
However, if reasonably accurate redshifts are available (be they spectroscopic or photometric), the growth of structure can be reconstructed through redshift-space distortions (Guzzo et al., 2008; Blake et al., 2011; de la Torre et al., 2013; Blake et al., 2013; Howlett et al., 2015; Okumura et al., 2016; Pezzotta et al., 2017; Alam et al., 2021), or by combining galaxy clustering and weak lensing (Hu, 2002; de la Torre et al., 2017; Peacock and Bilicki, 2018; Wilson and White, 2019; Krolewski et al., 2020; Heymans et al., 2021; White et al., 2022; Garcia-Garcia et al., 2021; Alonso et al., 2023). The combination of different tracers of the large-scale structure is thus able to overcome their individual shortcomings and fulfill the requirements listed above. It is for this reason that multi-tracer large-scale structure analyses have now become one of the staples of late-Universe cosmology.

In this context, radio continuum surveys are an interesting and promising probe. Due to the large instantaneous field of view of modern low frequency radio interferometers, such surveys cover wide areas of the sky. Unencumbered by dust extinction, they are able to cover large swathes of the Universe. Their ability to recover clustering information on gigaparsec scales thus makes them potentially valuable for specific cosmological science cases, such as the search for primordial non-Gaussianity (Ferramacho et al., 2014; Alonso and Ferreira, 2015; Gomes et al., 2020). Furthermore, radio continuum samples are dominated by Active Galactic Nuclei (AGNs) and Star-Forming Galaxies (SFGs), and thus their study can shed light on key processes in the formation and evolution of galaxies. In this sense, the clustering of radio sources has been used to place constraints on the properties of the different radio populations, making use of a variety of datasets, such as SUMSS (Blake et al., 2004), NVSS (Blake and Wall, 2002; Overzier et al., 2003; Negrello et al., 2006; Nusser and Tiwari, 2015; Chen and Schwarz, 2016), FIRST (Lindsay et al., 2014), COSMOS 3GHz (Hale et al., 2018), and TGSS (Dolfi et al., 2019; Rana and Bagla, 2019). In this work, we will study the clustering of radio galaxies in the second data release of the LOw-Frequency ARray Two-metre Sky Survey (LoTSS DR2 Shimwell et al., 2022), extending the analyses carried out making use of the first data release (DR1, Siewert et al., 2020; Alonso et al., 2021; Tiwari et al., 2022).
CMB lensing, sourced at \(z\sim 1100\), receives contributions from density inhomogeneities covering a wide range of redshifts, peaking at \(z\sim 2\). As such, it is an interesting tracer to cross-correlate with radio data, one of the few probes able to cover comparable volumes. In fact, the cross-correlation with radio data from the NRAO VLA Sky Survey (NVSS, Condon et al., 1998) was used to make the first detection of the CMB lensing signal (Smith et al., 2007). Since then, this cross-correlation has been used for the benefit of both probes, e.g. as a way to measure the radio galaxy bias (Allison et al., 2015; Piccirilli et al., 2023), and in the context of delensing (Namikawa et al., 2016). As the sensitivity of radio and CMB experiments increases, and the statistical uncertainties of this cross-correlation decrease, the joint analysis of radio continuum and CMB lensing data becomes a more powerful tool for cosmological studies, able to not only constrain amplitude-like parameters, but also to discriminate between more nuanced details of the underlying astrophysical model. For example, the analysis of the first LoTSS data release in combination with CMB lensing data (Alonso et al., 2021, A21 hereafter) showed that the galaxy auto-correlation and its cross-correlation with CMB lensing respond differently to changes in the galaxy bias and to the width of the redshift distribution, two effects that would otherwise be highly degenerate. The inclusion of CMB lensing data can therefore shed light on the main systematic uncertainties affecting continuum surveys as described above. In addition, it provides a way to measure the global amplitude of matter fluctuations. In order to constrain the redshift distribution of the LoTSS DR2 radio sources, we make use of the cross-identifications of radio sources with multiwavelength data in three LoTSS deep fields (Tasse et al., 2021; Sabater et al., 2021; Kondapally et al., 2021). These allow for the measurement of photometric redshifts for over 90 per cent of all deep field sources at flux densities above 1.5 mJy, of which about a quarter have spectroscopic redshift as well (Duncan et al., 2021; Bhardwaj et al., 2023). In this paper, we will address two main science questions: what is the galaxy bias within flux-limited radio samples and its redshift evolution? And what is the amplitude of density fluctuations as probed by radio data? To do so, we will carry out a joint harmonic-space analysis of two radio samples extracted from the LoTSS DR2, defined by different flux and signal-to-noise cuts, together with maps of the CMB lensing convergence provided by the Planck collaboration. The analysis of the LoTSS sample will follow closely the treatment described in the companion paper (Hale et al., 2023, H23 hereafter), focused on constraints from the real-space galaxy auto-correlation. Taking advantage of the tools developed for that analysis, we will also apply them to the cross-correlation between LoTSS and CMB primary anisotropies, in order to place constraints on the integrated Sachs-Wolfe effect (ISW, Sachs and Wolfe, 1967). This paper is structured as follows: Section 2 lays out the theoretical background behind the auto- and cross-correlations we will use here. Sections 3 and 4 present the datasets used in this work, as well as the methods used to analyse them. The measurements of the radio galaxy bias and the tentative constraints on \(\sigma_{8}\) are presented in Section 5. We summarise our results and conclude in Section 7. 
## 2 Theory

Our main observable is a sky map of the galaxy surface density \[\delta_{g}(\mathbf{\hat{n}})=\frac{N_{g}(\mathbf{\hat{n}})-\bar{N}_{g}}{\bar{N}_{g}}, \tag{1}\] where \(\mathbf{\hat{n}}\) is a unit vector pointing along a line of sight, \(N_{g}(\mathbf{\hat{n}})\) is the number of galaxies along \(\mathbf{\hat{n}}\) per unit solid angle, and \(\bar{N}_{g}\) is the mean number of galaxies per unit solid angle. The projected overdensity is related to the three-dimensional galaxy overdensity \(\Delta_{g}\) through (Peebles, 1980) \[\delta_{g}(\mathbf{\hat{n}})=\int_{0}^{\infty}dz\;p(z)\;\Delta_{g}(\chi(z)\mathbf{\hat{n}},z), \tag{2}\] where \(\chi(z)\) is the comoving radial distance, and \(p(z)\) is the redshift distribution of the galaxy sample, normalised to 1 when integrated over \(z\).

In addition to \(\delta_{g}\), we will study maps of the CMB lensing convergence \(\kappa(\mathbf{\hat{n}})\), which quantifies the distortion in the trajectories of the CMB photons caused by the gravitational potential of the intervening matter structures (Lewis and Challinor, 2006), and is proportional to the divergence of the deflection in the photon arrival angle \(\boldsymbol{\alpha}\): \(\kappa=-\nabla_{\mathbf{\hat{n}}}\cdot\boldsymbol{\alpha}/2\). As such, \(\kappa\) is an unbiased tracer of the matter density fluctuations \(\Delta_{m}(\boldsymbol{x},z)\), and is related to them through: \[\kappa(\mathbf{\hat{n}})=\int_{0}^{\chi_{LSS}}d\chi\frac{3H_{0}^{2}\Omega_{m}}{2a(\chi)}\chi\frac{\chi_{LSS}-\chi}{\chi_{LSS}}\Delta_{m}(\chi\mathbf{\hat{n}},z(\chi)), \tag{3}\] where \(\Omega_{m}\) is the fractional matter density, \(a=1/(1+z)\) is the scale factor, \(H_{0}\) is the Hubble constant today, and \(\chi_{LSS}\) is the comoving distance to the surface of last scattering.

Finally, we will also consider the cross-correlation with CMB temperature anisotropies, which receives a contribution from the so-called Integrated Sachs-Wolfe effect (Sachs and Wolfe, 1967). Caused by time-varying gravitational potentials at late times, the ISW leads to an additional temperature fluctuation of the form \[\left.\frac{\Delta T}{T}\right|_{\mathrm{ISW}}(\mathbf{\hat{n}})=2\int_{0}^{\chi_{LSS}}d\chi\;a\;\dot{\phi}, \tag{4}\] where \(\dot{\phi}\) is the derivative of the Newtonian potential with respect to cosmic time. In Fourier space, \(\dot{\phi}\) can be related to the matter overdensity, assuming linear growth, via \[\dot{\phi}(\mathbf{k},t)=-\frac{3H_{0}^{2}\Omega_{m}}{2a}\frac{H}{k^{2}}(f-1)\;\Delta_{m}(\mathbf{k},t), \tag{5}\] where \(f(a)\equiv d\log\Delta_{m}/d\log a\) is the "growth rate", which is scale-independent in the linear regime.

Consider a generic three dimensional field (\(U\)) projected onto a sphere with a kernel \(W_{u}\) \[u(\mathbf{\hat{n}})=\int d\chi W_{u}(\chi)U(\chi\mathbf{\hat{n}},z(\chi)). \tag{6}\] Any such projected quantity can be decomposed in terms of its spherical harmonic coefficients \(u_{\ell m}\), the covariance of which with another field \(V\) is the so-called angular power spectrum (\(C_{\ell}^{uv}\)). The angular power spectrum can be related to the power spectrum of the 3D fields \(P_{UV}(k,z)\) through \[C_{\ell}^{uv}=\int\frac{d\chi}{\chi^{2}}W_{u}(\chi)W_{v}(\chi)P_{UV}\left(k_{\ell}(\chi),z(\chi)\right), \tag{7}\] where \(P_{UV}(k,z)\) is the covariance of the Fourier coefficients of \(U\) and \(V\), and \(k_{\ell}(\chi)\equiv(\ell+1/2)/\chi\).
In this formalism, for the three fields under consideration (galaxy overdensity, CMB lensing convergence, and ISW), the radial kernels are given by \[W_{g}(\chi) =\frac{H(z)}{c}p(z), \tag{8}\] \[W_{\kappa}(\chi) =f_{\ell}\frac{3H_{0}^{2}\Omega_{m}}{2a}\chi\frac{\chi_{LSS}- \chi}{\chi_{LSS}}\Theta(\chi_{LSS}-\chi),\] (9) \[W_{\mathrm{ISW}}(\chi) =\frac{3\;H_{0}^{2}\Omega_{m}}{k_{\ell}^{2}}H(z)(1-f), \tag{10}\] where \(H(z)\) is the expansion rate, \(\Theta(x)\) is the Heaviside function, \(c\) is the speed of light, and \(f_{\ell}\) is the scale-dependent prefactor given by \[f_{\ell}=\frac{\ell(\ell+1)}{(\ell+1/2)^{2}}=1-\frac{1}{(2\ell+1)^{2}}, \tag{11}\] which significantly differs from unity only for \(\ell\lesssim 10\), and accounts for the fact that \(\kappa\) is related to \(\Delta_{m}\) through the angular Laplacian of the gravitational potential \(\phi\). Eq. (7) is only valid in the Limber approximation (Limber, 1953), which holds when the extent of the radial kernels is much broader than the correlation scale of the matter fluctuations (which is the case for \(W_{g}\), \(W_{\kappa}\) and \(W_{\mathrm{ISW}}\) in this work). In order to use the galaxy distribution as a probe of structure, we need to define its relation to the matter overdensities. This implies developing models for the 3D power spectra \(P_{gg}(k,z)\) and \(P_{gm}(k,z)\), as well as the matter power spectrum \(P_{mm}(k,z)\). In this work, we will assume a simple parametrisation, for which \[P_{gg}(k,z)=b_{g}^{2}(z)P_{mm}(k,z),\;\;\;\;P_{gm}(k,z)=b_{g}(z)P_{mm}(k,z), \tag{12}\] where \(b_{g}(z)\) is the linear bias function. The models used to describe the redshift distribution of our sample, and the redshift evolution of the bias, are described in Section 4. To compute the linear matter power spectrum, we use the CAMB Boltzmann solver (Lewis et al., 2000), and estimate the non-linear power spectrum from it using HALOFIT (Smith et al., 2003; Takahashi et al., 2012). All other theoretical calculations (e.g. Limber integrals) were carried out using the Core Cosmological Library (CCL, Chisari et al., 2019). Unless stated otherwise, we fix all cosmological parameters to the best-fit \(\Lambda\)CDM values found by Planck Collaboration et al. (2020): \(\Omega_{c}=0.26503\), \(\Omega_{b}=0.04939\), \(h=0.6732\), \(\sigma_{8}=0.8111\), \(n_{s}=0.96605\). ## 3 Data ### LoTSS DR2 The LOw-Frequency ARray (LOFAR) Two-metre Sky Survey (LoTSS) second data release (DR2) (Shimwell et al., 2022) covers 27% of the northern sky at 120-168 MHz. It consists of 841 pointings, split into two regions separated by the Galactic plane, spanning 4178 and 1457 square degrees, respectively, and shown in Figure 1 (top-left). Data reduction was performed using both direction-dependent and direction-independent calibration pipelines, and the source catalogue was created with the source finder PyBDSF (Mohan and Rafferty, 2015). The catalogue derived from the total intensity (Stokes I) maps contains 4,396,228 radio sources. The completeness and spatial homogeneity of the sample have a complex dependence on flux density, morphology, and signal-to-noise ratio (SNR), the latter defined as the ratio of the peak flux density per beam to the root mean square noise per beam. The fiducial sample used in this work comprises 1,136,219 galaxies with a flux density brighter than 1.5 mJy, detected with \(\mathrm{SNR}\geq 7.5\), which H23 find to provide the best balance between the number of sources and the variation in the data compared to random samples. 
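The selection just described can be reproduced with a few lines of code. The snippet below is only a sketch: the catalogue file name and the column names (total flux density in mJy, peak flux density per beam, and the local rms noise) are assumptions and may differ from the released LoTSS DR2 catalogue.

```python
# Illustrative construction of the fiducial sample (flux density >= 1.5 mJy,
# SNR = peak flux per beam / rms noise >= 7.5); column names are hypothetical.
from astropy.table import Table

cat = Table.read("lotss_dr2_catalogue.fits")         # placeholder file name
snr = cat["Peak_flux"] / cat["Isl_rms"]              # signal-to-noise ratio per source
fiducial = cat[(cat["Total_flux"] >= 1.5) & (snr >= 7.5)]
```

The alternative sample discussed next (2.0 mJy, SNR ≥ 5) is obtained analogously by changing the two thresholds.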
To test the robustness of the cosmological constraints depending on the choice of the sample, we will also present results for an alternative selection corresponding to galaxies above 2.0 mJy with \(\mathrm{SNR}\geq 5\) (958,438 objects). These cuts are similar to those used in the analysis of the two-point correlation function (Siewert et al., 2020) and the cross correlation with the CMB lensing of LoTSS DR1 (Alonso et al., 2021). As described in H23, a spatial mask was created by removing regions with tiles that have not been mosaiced together, or have a large number of gaps due to problems in the reduction process, which would lead to strong spatial variations in the flux scale. These regions are mostly located by the outer edges. The resulting unmasked footprint covers 4357 deg\({}^{2}\), corresponding to a sky fraction \(f_{\rm sky}=0.11\). After data cuts and masking, 896,637 objects remain in our fiducial catalogue (1.5 mJy flux density cut, SNR \(\geq 7.5\)), and 742,692 in the 2.0 mJy, SNR \(\geq 5.0\) sample. We note that because we use pixel coordinates to mask the resulting maps, the final number of objects is slightly different than in H23 which uses the object coordinates to mask the catalogue. ### Planck We use CMB data released as part of the 2018 Planck analysis (Planck Collaboration et al., 2020). We use the "minimum variance" (MV) CMB lensing convergence harmonic coefficients released in Planck Collaboration et al. (2020), together with the associated sky mask. The harmonic coefficients are transformed into a HEALPix (Gorski et al., 2005) map with resolution parameter \(N_{\rm side}\), after truncating them to \((\ell,m)<3\,N_{\rm side}\). In our analysis, we will use a common resolution \(N_{\rm side}=512\), corresponding to a pixel size of \(\sim 6.9\) arcmin. We repeated our analysis using \(N_{\rm side}=256\), finding compatible results. The lensing map covers a sky fraction \(f_{\rm sky}\simeq 0.67\), and overlaps with LoTSS over the whole footprint of the latter. For the ISW analysis described in Section 5.5, we use the foreground-cleaned temperature fluctuation map produced through the SMICA component separation method, described in Planck Collaboration et al. (2020), as well as its associated sky mask. Both mask and map were downgraded to the common resolution \(N_{\rm side}=512\). ## 4 Methodology ### Maps Figure 1 shows the maps used in this analysis. In the figure, we smooth them for visualisation purposes with a Gaussian filter with a full width at half-maximum of \(1^{\circ}\). The LOFAR maps are limited to the region corresponding to the LOFAR mask, while Planck maps are shown at the intersection between the LOFAR mask and the corresponding CMB mask. Lower and higher resolution falls outside of the scales of interest for this analysis, \(50<\ell<800\) (see Section 5.2). We use catalogues of randomly-generated sources to account for the spatially-varying survey depth. The object positions in the random catalogue should be uncorrelated, while tracing the detection rate, which is not uniform across the footprint. The simulated sources are based on a modified SKA Design Studies Simulated Skies (SKADS, Wilman et al., 2008, 2010), to account for an underestimated number of SFGs at the faintest flux densities (e.g. Hale et al., 2023). The catalogue provides multiple observable properties for simulated sources, which were used in combination with simulations from Shimwell et al. 
(2022) to account for the effects of smearing, variation of sensitivity due to elevation or declination, location within the mosaic, and proximity to bright sources or to the edge of the observed field, where the number of mosaiced pointings is smaller. As we do not split the sources by type (AGN/SFG), redshift, or luminosity, it is the input flux density distribution of the randoms which is the most important ingredient, and the modified SKADS represents it well for 144 MHz sources, down to below the source detection limit of the survey. The process of generating the random catalogues is described in detail in H23. It provides us with simulated "output" number count maps, where "output" refers to the galaxies as they would be detected by LOFAR in an idealised case. We use this "output" map to correct for depth fluctuations, and as a weight for the resulting galaxy overdensity map when computing power spectra (i.e., \(w_{g}(\mathbf{\hat{n}})\) constitutes the mask of the galaxy overdensity map). The overdensity \(\delta_{g}\) is computed as \[\delta_{g}(\mathbf{\hat{n}})=\frac{N_{g}(\mathbf{\hat{n}})}{\bar{N}_{g}w_{g}( \mathbf{\hat{n}})}-1, \tag{13}\] where \(N_{g}(\mathbf{\hat{n}})\) is the number of galaxies in the pixel lying in the direction of \(\mathbf{\hat{n}}\), and \(\bar{N}_{g}\) is the mean number of objects per pixel, estimated as \(\bar{N}_{g}=\langle N_{g}(\mathbf{\hat{n}})\rangle_{\mathbf{\hat{n}}}/\langle w_{g} (\mathbf{\hat{n}})\rangle_{\mathbf{\hat{n}}}\), where \(\langle\cdots\rangle_{\mathbf{\hat{n}}}\) represents a mean over all pixels within the mask. ### Redshift distribution An important ingredient of the analysis is the redshift distribution, \(p(z)\), of the LoTSS sources, necessary to recover the three-dimensional clustering parameters, which can then be compared with theoretical predictions (eq. 8). For radio continuum objects, the individual redshifts are not known and cannot be estimated from radio fluxes. At present, we do not have optical identifications and photometric redshift estimates for most of the LoTSS DR2 sources. Therefore, we need to model the underlying \(p(z)\) in a more indirect way. Extragalactic radio sources consist mostly of SFGs and AGNs, although their fractions vary with both redshift and flux density (see Best et al., 2023). However, limitations in the multi-wavelength coverage of the sample may lead to some uncertainty in the redshifts and classification of sources. In order to calibrate the redshift distribution of our sample, we make use of the LOFAR deep fields observations (Tasse et al., 2021; Sabater et al., 2021). The Deep Field data consist of three fields: Bootes, ELAIS and Lockman Hole. For each field, a smaller region was defined for which there exists deep multi-wavelength information, of an area equal to 8.6 deg\({}^{2}\) in the Bootes field, 6.7 deg\({}^{2}\) in ELAIS and 10.3 deg\({}^{2}\) in the Lockman Hole field (Kondapally et al., 2021; Duncan et al., 2021). A redshift and its probability density function were associated with each source using a hybrid method that combined template fitting and machine learning (further details can be found in Duncan et al., 2021). The photometric redshift quality is characterised by a normalised median absolute deviation (\(\sigma_{\rm NMAD}\)) ranging from 1.6 to 2% for galaxies and 6.4 to 7% for AGNs, while the outlier fraction (defined as \(|z_{\rm phot}-z_{\rm spec}|/(1+z_{\rm spec})>0.15\)) equals around 2% for galaxies and 20% for AGNs. 
It is worth noting that \(\sim 5\%\) of the sources satisfying our sample cuts in the deep fields do not have an optical cross-match. We estimate the redshift distribution for each flux density cut catalogue using a technique based on sampling redshift values from the probability distributions of photometric redshifts, using spectroscopic redshifts where available. Given the full probability distribution over a redshift range for each photometric redshift measurement, we sample a single redshift value from this distribution for each object, and build a histogram of the resulting values, with bins of \(\Delta z=0.05\). For objects with spectroscopic redshifts available, we always take the reported value (i.e., equivalent to zero photo-\(z\) uncertainty). We repeat this procedure of histogram creation for each deep field separately, and the number of histograms created for each field is proportional to the number of objects in that field, which makes fields with more observations more significant in the final estimate. We find that the final results do not change after sampling at least 200 histograms in total. The final distribution and its statistical uncertainty are given by the mean and standard deviation calculated over all histogram realisations, and then normalised to a unit integral over the redshift range \(0<z<6\). This approach to estimating the redshift distribution is also described in H23. The method is able to combine both photometric and spectroscopic redshifts, and ensures a reasonable estimate of the final uncertainty in the redshift distribution. The uncertainty estimated with this method accounts both for errors in every single measurement of the photometric redshift, and for differences in redshift distributions between the three deep fields. We found that the errors estimated with this method are significantly larger in comparison to bootstrap sampling over the probability distributions of photometric redshifts, where single redshift distributions are calculated as a sum of probability distributions within the bootstrap samples, and the uncertainty is taken as the standard deviation among those. We model the resulting redshift distribution using a functional form \[p(z)\propto\frac{z^{2}}{1+z}\left(\exp\left(\frac{-z}{z_{0}}\right)+\frac{r^ {2}}{(1+z)^{a}}\right), \tag{14}\] normalised to a unit integral over the redshift range \(0<z<6\), with \(\left[z_{0},r,a\right]\) being free parameters. This form is motivated by the fact that the LoTSS radio sources contain two main populations of objects, AGNs and SFGs. At low redshifts, we expect their numbers to grow proportionally to the volume for both populations, which motivates the factor of \(z^{2}\); this would be exact at any redshift in a de Sitter model. The factor \(1/(1+z)\) provides a simple correction for a \(\Lambda\)CDM model, and gives a good approximation up to a redshift of \(\sim 0.2\). For higher redshifts, the flux density limitation of the sample becomes the dominant effect, and the form of the luminosity function for each population starts to be important. The AGN radio luminosity function is typically approximated by a double power law, which motivates the power-law term, while the SFG radio luminosity function is typically modelled as a Schechter function (Bonato et al., 2017), which exhibits an exponential cut-off, and motivates the first term. The relative fraction of both contributions is controlled by the parameter \(r\). 
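The functional form of Eq. (14) is straightforward to implement. The short sketch below evaluates and normalises it using the best-fit parameters for the 1.5 mJy sample from Table 1 (purely illustrative values; the normalisation follows the \(0<z<6\) convention used in the text).

```python
# Redshift-distribution model of Eq. (14), normalised to unit integral over 0 < z < 6.
import numpy as np

def pz_model(z, z0, r, a):
    return z**2 / (1.0 + z) * (np.exp(-z / z0) + r**2 / (1.0 + z)**a)

z = np.linspace(0.0, 6.0, 1201)
p = pz_model(z, z0=0.05, r=0.20, a=4.9)    # 1.5 mJy best-fit values from Table 1
p /= np.trapz(p, z)                        # unit integral over 0 < z < 6
z_med = z[np.searchsorted(np.cumsum(p) * (z[1] - z[0]), 0.5)]   # rough median redshift
```

The rough median computed this way is close to the \(z_{\rm med}\simeq 0.82\) quoted later for the fiducial sample.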
We verified that this three-parameter model provides a good semi-empirical fit and is superior to other simple parameterisations that have been tested. Table 1 shows the constrained parameters, based on the uncertainties mentioned above, for the 1.5 mJy and 2.0 mJy samples, while figure 2 shows the resulting redshift distributions. The blue and orange bands show the \(1\sigma\) constraints measured from the deep fields for the 1.5 mJy \begin{table} \begin{tabular}{l c c c} \hline \hline Sample & \(z_{0}\) & \(r\) & \(a\) \\ \hline 1.5 mJy & \(0.05\pm 0.01\) & \(0.20\pm 0.03\) & \(4.9\pm 0.1\) \\ 2.0 mJy & \(0.04\pm 0.01\) & \(0.17\pm 0.03\) & \(5.0\pm 0.1\) \\ \hline \end{tabular} \end{table} Table 1: Constraints on the parameters of the redshift distribution, as given in the equation 14, for the 1.5 mJy and 2.0 mJy samples. Figure 1: Maps used in this work, smoothed over a scale of \(1^{\circ}\), strictly for a visualization purpose. _Top left_: LoTSS DR2 overdensity, _top right_: LoTSS DR2 completeness based on randoms, _bottom left_: Planck CMB lensing convergence, _bottom right_: Planck CMB temperature. The overdensity and completeness maps include the LoTSS DR2 mask (Hale et al., 2023), while CMB maps include this and an appropriate mask from the Planck survey. Figure 2: Redshift distribution based on the three deep fields located within the LoTSS DR2 footprint, for 2 mJy and 1.5 mJy flux cuts. The thick lines show the models fitted with Eq. 14, and the shaded areas are a \(1\sigma\) region from the deep fields measurements. The redshift distribution is limited to \(z<6\). and 2 mJy cuts, respectively, with the corresponding solid lines showing the best-fit model of Eq. (14) in each case. ### Bias models Given the wide range of redshifts covered by the samples studied, the evolution of the linear galaxy bias over that range must be taken into account. This is non-trivial, as the sample includes several types of extragalactic sources (SFGs and AGNs, in the simplest description), and their relative abundances and intrinsic galaxy biases evolve with \(z\). To assess the impact of our assumptions regarding the evolution of the effective bias of the sample, we will consider three different models (Nusser and Tiwari, 2015; Alonso et al., 2021): * A constant bias model \(b_{g}(z)=b_{g}\) represents the simplest case. Although likely an unrealistic model, the corresponding value of \(b_{g}\) can be interpreted as the effective bias of the sample accounting for redshift evolution. * A constant amplitude model, in which the bias evolves inversely with the linear growth factor \(D(z)\) \[b_{g}(z)=b_{g,D}/D(z).\] (15) This model has the advantage of reproducing the expected rise in \(b_{g}(z)\) at high \(z\) for a flux-limited sample (assuming a monotonic mass-luminosity relation), while preserving the simplicity of the constant-bias model, with only a single free parameter. In this model, the amplitude of \(\Delta_{g}\) does not change over time at linear order (since \(\Delta_{m}\propto D(z)\)). This would correspond to a galaxy distribution that is fixed at some early time and preserves its large-scale properties unchanged (Bardeen et al., 1986; Mo and White, 1996; Tegmark and Peebles, 1998; Coil et al., 2004). * The two previous models fix the redshift evolution of \(b_{g}(z)\), allowing only its overall amplitude to vary. As a more flexible alternative, we will also use a quadratic bias model, in which \[b_{g}(z)=b_{0}+b_{1}z+b_{2}z^{2},\] (16) with \(\{b_{0},b_{1},b_{2}\}\) free parameters. 
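The three bias parametrisations above can be written as simple callables; the sketch below uses the linear growth factor from pyccl (normalised to \(D(z=0)=1\)) and placeholder parameter values, and is meant only to make the models concrete.

```python
# The three bias-evolution models of Sect. 4.3 (illustrative parameter values).
import numpy as np
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.26503, Omega_b=0.04939, h=0.6732,
                      sigma8=0.8111, n_s=0.96605)

def bias_constant(z, b_g=2.0):
    return b_g * np.ones_like(np.atleast_1d(z), dtype=float)

def bias_constant_amplitude(z, b_gD=1.41):
    a = 1.0 / (1.0 + np.atleast_1d(z))
    return b_gD / ccl.growth_factor(cosmo, a)      # b(z) = b_{g,D} / D(z)

def bias_quadratic(z, b0=1.5, b1=0.6, b2=0.1):
    z = np.atleast_1d(z)
    return b0 + b1 * z + b2 * z**2
```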
### Power spectra We use NaMaster (Alonso et al., 2019) to compute the angular power spectra of fields defined on a limited region of the sphere using the pseudo-\(C_{\ell}\) estimator (Peebles, 1973; Hivon et al., 2002). We calculate the shot-noise contribution to the galaxy auto-correlation before inverting the pseudo-\(C_{\ell}\) mode-coupling matrix, as (Nicola et al., 2020) \[\tilde{N}_{\ell}^{gg}=\frac{\langle w_{g}\rangle}{\bar{N}_{\Omega}}, \tag{17}\] where \(\bar{N}_{\Omega}\) is the mean angular number density of galaxies (in units of sr\({}^{-1}\)), and \(\langle w_{g}\rangle\) is the value of the mask averaged across the sky. There are reasons to expect departures from a purely Poisson shot noise contribution to \(N_{\ell}^{gg}\). Prominently, in the case of radio surveys, a fraction of sources may have multi-component detections, which effectively leads to a higher shot-noise amplitude than predicted by Poisson statistics (Blake et al., 2004; Tiwari et al., 2022). Additionally, stochastic and non-local effects in galaxy formation, as well as effects such as halo exclusion, can lead to similar departures from Poissonian shot noise (Baldauf et al., 2016; Kokron et al., 2022). To account for these effects, we marginalise over a free shot-noise amplitude \(A_{\rm sn}\), which in practice makes the pipeline sensitive only to non-flat contributions to the galaxy auto-correlation. To calculate the statistical uncertainties of our measurements of \(C_{\ell}^{xy}\), we use a jackknife resampling procedure (Norberg et al., 2009). We divide the LoTSS DR2 footprint into 54 similarly-sized rectangular areas. We find this number of regions to provide a good balance between the small-scale and large-scale errors. Then, removing one of these areas at a time, we calculate the power spectra in the resulting footprint. The power spectrum covariance is then calculated as \[\mathsf{Cov}(C_{\ell}^{x},C_{\ell}^{y})=\frac{N_{\rm JK}-1}{N_{\rm JK}}\sum_{i= 1}^{N_{\rm JK}}(C_{\ell}^{x,i}-\bar{C}_{\ell}^{x})(C_{\ell}^{y,i}-\bar{C}_{ \ell}^{y}), \tag{18}\] where \(x\) and \(y\) stand for \((gg,g\kappa,gT)\), \(N_{\rm JK}\) is the number of jackknife samples, \(C_{\ell}^{x,i}\) is the power spectrum measured in the \(i\)-th sample, and \(\bar{C}_{\ell}^{x}\equiv\sum_{i}C_{\ell}^{x,i}/N_{\rm JK}\) is the average over jackknife samples. To validate this estimate of the covariance matrix, we compared it with the analytical prediction assuming all fields studied can be described by Gaussian statistics (e.g. Garcia-Garcia et al., 2019). Both estimates were found to be in good agreement. We also report a correlation matrix \[\mathsf{r}_{ij}=\mathsf{Cov}_{ij}/\sqrt{\mathsf{Cov}_{ii}\mathsf{Cov}_{jj}}, \tag{19}\] where \(\mathsf{r}\) is the correlation coefficient, and \(i\) and \(j\) are the corresponding indices of the covariance matrix. 
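A minimal sketch of the pseudo-\(C_{\ell}\) estimation described above is given below, assuming the HEALPix overdensity map `delta_g` and the randoms-based weight map `mask_g` from Sect. 4.1, and a mean count per pixel `nbar_pix`; these names, and the direct subtraction of the Poisson shot-noise level of Eq. (17), are illustrative only (in the analysis the shot-noise amplitude is marginalised over instead).

```python
# Pseudo-C_ell sketch with NaMaster for the galaxy auto-spectrum (illustrative).
import numpy as np
import healpy as hp
import pymaster as nmt

nside = 512
bins = nmt.NmtBin.from_nside_linear(nside, nlb=50)       # Delta_ell = 50 bandpowers
f_g = nmt.NmtField(mask_g, [delta_g])                    # spin-0 galaxy overdensity field

wsp = nmt.NmtWorkspace()
wsp.compute_coupling_matrix(f_g, f_g, bins)              # mode-coupling matrix for this mask
cl_coupled = nmt.compute_coupled_cell(f_g, f_g)          # mask-coupled pseudo-C_ell

# Poisson shot-noise level of Eq. (17): <w_g> over the mean angular number density.
nbar_omega = nbar_pix / hp.nside2pixarea(nside)          # galaxies per steradian
nl_coupled = np.mean(mask_g) / nbar_omega * np.ones_like(cl_coupled)

cl_gg = wsp.decouple_cell(cl_coupled - nl_coupled)[0]    # decoupled bandpowers
```

The cross-spectrum with the CMB lensing convergence is obtained in the same way, replacing the second field by an `NmtField` built from the Planck \(\kappa\) map and its mask.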
### Likelihood inference We assume that the power spectrum data follow a Gaussian distribution, and we estimate the log-likelihood as \[\chi^{2}\equiv-2\log p(\mathbf{d}|\mathbf{q})=[\mathbf{d}-\mathbf{t}(\mathbf{ q})]^{T}\mathsf{Cov}^{-1}[\mathbf{d}-\mathbf{t}(\mathbf{q})], \tag{20}\] where \(\mathbf{d}\) denotes the data vector, consisting of combinations of \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\), as well as the deep field measurements of the redshift distribution described in Section 4.2. We do not use \(C_{\ell}^{gT}\) as part of the inference scheme, as we only report its significance. \(\mathbf{t}(\mathbf{q})\) is the theoretical prediction for \(\mathbf{d}\) given a set of parameters \(\mathbf{q}\), describing both the power spectra and the redshift distribution (as parameterised in Eq. 14). The covariance matrix \(\mathsf{Cov}\) incorporates the correlated uncertainties of the different elements of \(\mathbf{d}\). We assume that the power spectrum and redshift distribution measurements are uncorrelated, while retaining all potential correlations between different power spectra (at different scales and for different fields). In other words, the covariance matrix elements for \((p(z),C_{\ell}^{xy})\) are set to zero. The covariance of the measured redshift distribution was assumed to be diagonal, with errors estimated via sampling as described in Section 4.2. We report the significance of the \(C_{\ell}^{g\kappa}\) power spectrum and of the ISW signal as the square root of the difference in \(\chi^{2}\) between a null hypothesis, defined as \(C_{\ell}=0\), and the best-fit model (\(\sqrt{\Delta\chi^{2}}\)). We calculate the reduced chi-squared as \[\chi_{\nu}^{2}=\frac{\chi^{2}}{\nu}, \tag{21}\] where \(\nu\) stands for the number of degrees of freedom, equal to the number of observations minus the number of fitted parameters. The number of observations includes the data points from \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\), while the number of fitted parameters includes the bias parameters, the amplitude of the shot noise, and \(\sigma_{8}\). Data points from the deep fields and parameters from the redshift distribution modelling are not included when reporting these statistics. Finally, we report the "probability to exceed" (PTE), calculated in terms of the \(\chi^{2}\) as: \[{\rm PTE}(\chi^{2},\nu)=1-F(\chi^{2},\nu), \tag{22}\] where \(F\) denotes the \(\chi^{2}\) cumulative distribution function. To explore the posterior distribution we use Markov chain Monte Carlo (MCMC) sampling as implemented in the public emcee code (Foreman-Mackey et al. 2013). Table 2 lists the parameters of interest explored in Section 5 together with their priors. We free the value of \(\sigma_{8}\) only in Section 5.4. The MCMC chains were generated using 32 walkers and a convergence condition ensuring that the number of samples is equal to or higher than 40 times the mean auto-correlation length of the chains for all the inferred parameters. ## 5 Results ### Power spectra Figure 3 shows the measurements of the LoTSS DR2 auto-spectrum, and its cross-correlation with the Planck lensing map (left and right panels, respectively). The solid gray line in the left panel shows the expected contribution from shot noise. As expected, given the broad redshift range covered by the sample, the auto-correlation has a featureless, roughly power-law-like behaviour, which is detected at relatively high significance over all the scales explored. The cross-correlation is also clearly detected to scales \(\ell\sim 800\). Quantifying the significance of this detection as described in Section 4.5 (from the \(\chi^{2}\) difference between the best-fit model and the null hypothesis), including these scales, we obtain a signal-to-noise of: \[\left(\frac{S}{N}\right)_{\ell\leq 800}=26.6. \tag{23}\] This is one of the most significant detections of the cross-correlation between radio galaxies and CMB lensing so far, comparable to the significance of the correlation with the NVSS sample over a much larger area (Ade et al. 2014). 
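The detection-significance and goodness-of-fit statistics defined in Sect. 4.5 can be computed in a few lines. The sketch below assumes a data vector `d`, a best-fit prediction `t_best`, the jackknife covariance `cov`, and the number of fitted parameters `n_params`; all of these are placeholders for quantities built elsewhere in the pipeline.

```python
# chi^2, detection significance sqrt(Delta chi^2), and PTE (Eqs. 20-22), illustrative.
import numpy as np
from scipy.stats import chi2

icov = np.linalg.inv(cov)
r = d - t_best
chi2_best = r @ icov @ r                   # chi^2 of the best-fit model
chi2_null = d @ icov @ d                   # null hypothesis: C_ell = 0
snr = np.sqrt(chi2_null - chi2_best)       # signal-to-noise of the detection

nu = len(d) - n_params                     # degrees of freedom
pte = chi2.sf(chi2_best, nu)               # probability to exceed, Eq. (22)
```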
Considering only the fiducial scales \(\ell\leq 500\) that we will include in the analysis, the significance is \[\left(\frac{S}{N}\right)_{\ell\leq 500}=23.1. \tag{24}\] This is a factor \(\sim 3.6\) higher than the detection in A21 using LoTSS DR1, in good agreement with the expectation given the relative increase in area between both releases (assuming that \(S/N\) scales as \(\sqrt{f_{\rm sky}}\)). For the auto-correlation, we obtain a signal-to-noise ratio of 34.6 at \(\ell\leq 500\), and 17.9 at \(\ell\leq 250\). Figure 3 also shows the best-fit predictions for both power spectra using the constant-amplitude bias model with the linear (green line) and HALOFIT (orange line) matter power spectra, as well as the corresponding models obtained using the same best-fit parameters but changing only the matter power spectrum (dashed lines). Comparing solid and dashed lines of the same colour, the linear and HALOFIT predictions begin to differ from one another by more than \(2\sigma\) of the statistical uncertainties in our measurements of \(C_{\ell}^{\rm gg}\) at \(\ell=250\), and by about \(1\sigma\) of the error on \(C_{\ell}^{g\kappa}\) at \(\ell=500\) in the case of the cross-correlation. Using the approximation \(\theta\simeq 180^{\circ}/\ell\), those scales translate to \(\sim 0.72^{\circ}\) and \(\sim 0.36^{\circ}\). At the median redshift of the sample, \(z_{\rm med}\simeq 0.82\), based on the fitted distribution, these angular scales correspond to wave numbers \(k=0.13\,h\,{\rm Mpc}^{-1}\) and \(0.26\,h\,{\rm Mpc}^{-1}\), respectively. We can thus use this to define conservative scale cuts within which the linear bias model can be considered reliable. Our fiducial scale cuts are therefore \(\ell<(250,500)\) for \((C_{\ell}^{gg},C_{\ell}^{g\kappa})\). To eliminate any residual systematics associated with large-scale survey depth variations, we will also remove the first bandpower in the galaxy auto-correlation. It is worth noting, however, that using a linear bias model applied to the non-linear matter power spectrum has been empirically found to extend the validity of the model to mildly non-linear scales (Pandey et al. 2020; Sugiyama et al. 2022; Porredon et al. 2022). In some cases, we will therefore also report results for less conservative scale cuts \(\ell<(500,800)\) (corresponding to \(k_{\rm max}=0.26\,h\,{\rm Mpc}^{-1}\) and \(0.42\,h\,{\rm Mpc}^{-1}\), respectively). We stress, however, that these results should be interpreted with care, since they rely on the validity of the linear bias model over mildly non-linear scales. This could be quantified via numerical simulations including a physics-based model for the galaxy-halo relation for SFGs and AGNs, but this lies beyond the scope of this work. The correlation matrix of the joint \((C_{\ell}^{\rm gg},C_{\ell}^{g\kappa})\) data vector after imposing these scale cuts is shown in Fig. 4. As evidenced by this plot, the uncertainties between different bandpowers are largely uncorrelated. 
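The conversion between multipole cuts and comoving wavenumbers quoted above can be checked with a short calculation, \(k\simeq(\ell+1/2)/\chi(z_{\rm med})\). The sketch below uses pyccl for the comoving distance with the Planck parameters adopted in Sect. 2; small differences with respect to the quoted values can arise from the assumed cosmology.

```python
# Converting multipole cuts to wavenumbers at the sample median redshift (illustrative).
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.26503, Omega_b=0.04939, h=0.6732,
                      sigma8=0.8111, n_s=0.96605)
h, z_med = 0.6732, 0.82
chi = ccl.comoving_radial_distance(cosmo, 1.0 / (1.0 + z_med))   # in Mpc
for ell in (250, 500, 800):
    k = (ell + 0.5) / chi                       # in Mpc^-1
    print(ell, round(k / h, 2), "h/Mpc")        # compare with 0.13, 0.26 and 0.42 h/Mpc
```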
### Constraining bias We now use the measurements presented in the previous section to constrain the bias of radio sources in the LoTSS DR2 sample. For now, we will fix all cosmological parameters to the best-fit Planck cosmology (Planck Collaboration et al. 2020), and will only vary bias and \(p(z)\) parameters. We begin by comparing our two 1-parameter bias models, the constant-bias and constant-amplitude (or \(1/D(z)\)) parametrisations, when constrained by different combinations of power spectra, but in all cases using our fiducial scale cuts \(\ell<(250,500)\) and the HALOFIT matter power spectrum. \begin{table} \begin{tabular}{l l r r} \hline \hline & & prior & middle point \\ \hline constant bias & \(b_{g}\) & positive & 2.0 \\ const. amplitude bias & \(b_{g,D}\) & positive & 1.5 \\ quadratic bias & \(b_{0}\) & positive & 1.5 \\ & \(b_{1}\) & positive & 1.0 \\ & \(b_{2}\) & none & 0.1 \\ redshift distribution & \(z_{0}\) & positive & 0.05 \\ & \(a\) & none & 5.0 \\ & \(r\) & positive & 0.2 \\ shot noise amplitude & \(A_{\rm sn}\) & [0.8, 1.4] & 1.1 \\ matter fluctuations & \(\sigma_{8}\) & positive & 0.81 \\ \hline \hline \end{tabular} \end{table} Table 2: List of all parameters used in the inference with the corresponding priors and initial values. We require \(b_{0}\) and \(b_{1}\) to be positive, so that the bias is positive and increasing at low redshifts. We allow \(b_{2}\) to be negative, as it can both increase and decrease the bias evolution. The initial values are drawn uniformly from a range centred at the ‘middle point’ value, plus/minus 20%. Table 3 shows the constraints on the bias and the shot-noise amplitude \(A_{\rm sn}\) obtained using our fiducial scale cuts when including only the galaxy auto-correlation (first row), the cross-correlation (second row), and both (third row). We find that, while both models are able to provide a good fit to the auto- and cross-correlations separately, the \(1/D(z)\) model provides a better representation of both signals simultaneously. The combined constraint \(b_{g,D}=1.41\pm 0.06\) is compatible with the individual constraints from \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\), which are also in agreement with each other. The constant-bias model, in turn, finds broadly incompatible best-fit values for \(b_{g}\), and the model is a worse fit for the combined data vector than the \(1/D(z)\) model. It is also interesting to note that, with the conservative scale cuts applied here, the bias is better constrained with \(C_{\ell}^{g\kappa}\) than with the galaxy auto-spectrum. When including the CMB lensing cross-correlation, there is then significant evidence that the effective bias of the sample grows with redshift, as would be expected for most flux-limited samples. Having established that the constant bias model is mildly disfavoured, we use the constant-amplitude and quadratic bias models to further compare the bias estimates between the linear and HALOFIT models of the power spectrum at different scales. The results are shown in Table 4. The linear and HALOFIT models are in broad agreement within \(\sim 2\sigma\) at the fiducial scale cuts \(\ell<(250,500)\). At the less conservative scale cuts \(\ell<(500,800)\), the agreement is significantly poorer, and neither model is able to provide a good fit to the data (with PTEs at around 3% and 13% for the linear and HALOFIT models, respectively). This shows that the linear bias assumption employed here is not a reliable representation of the data on these mildly non-linear scales. It is worth noting that, for either choice of scale cuts, the linear power spectrum model achieves a consistently poorer \(\chi^{2}\) than HALOFIT (\(\Delta\chi^{2}\simeq 6\) for \(\ell<(500,800)\), with a significantly worse PTE). The linear power spectrum model also generally prefers a 10% to 20% higher value of the shot-noise amplitude \(A_{\rm sn}\), to compensate for the lower small-scale power in comparison with HALOFIT. The combination of \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\) allows us to successfully constrain the quadratic bias model. 
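As an illustration of the inference setup of Sect. 4.5, the sketch below shows how the two parameters of the \(1/D(z)\) model, \((b_{g,D},A_{\rm sn})\), could be sampled with emcee. The function `theory_cls`, the data vector `d` and its inverse covariance `icov` are placeholders for quantities built elsewhere in the pipeline; the priors follow Table 2.

```python
# Minimal emcee sketch for a two-parameter fit (b_{g,D}, A_sn); illustrative only.
import numpy as np
import emcee

def log_prob(params, d, icov):
    b_gD, A_sn = params
    if b_gD <= 0 or not (0.8 <= A_sn <= 1.4):        # priors from Table 2
        return -np.inf
    r = d - theory_cls(b_gD, A_sn)                    # assumed: stacked (C_gg, C_gkappa) prediction
    return -0.5 * r @ icov @ r

nwalkers, ndim = 32, 2
p0 = np.column_stack([np.random.uniform(1.2, 1.8, nwalkers),   # b_{g,D} around 1.5
                      np.random.uniform(0.9, 1.3, nwalkers)])  # A_sn around 1.1
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(d, icov))
sampler.run_mcmc(p0, 5000, progress=True)
chain = sampler.get_chain(discard=1000, flat=True)
```

In the actual analysis the redshift-distribution parameters are sampled jointly with the bias, with the deep-field \(p(z)\) measurements included in the data vector.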
The constraints on the bias evolution obtained from the joint data vector, analysed under the HALOFIT model, for the three bias evolution models explored here, are shown in Fig. 5. The figure also shows other existing estimates of the bias of various radio galaxy samples (Nusser and Tiwari, 2015; Hale et al., 2018; Chakraborty et al., 2020; Mazumder et al., 2022), which are of different depth and different ratios of AGNs and SFGs, as well as the results obtained in H23 using the galaxy correlation function. The figure also shows, as a vertical shaded band, the median and 68 percentiles of the redshift distribution estimated from the deep fields. Using the median survey redshift, the \(1/D(z)\) model predicts a value of the bias \(b(z=0.82)=2.34\pm 0.10\). This is in good agreement with the prediction from the other two bias models at the same redshift. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{\(b_{g}(z)=b_{g}\)} & \multicolumn{4}{c}{\(b_{g}(z)=b_{g,D}/D(z)\)} \\ & \(b_{g}\) & \(A_{\rm sn}\) & \(\chi^{2}_{\nu}\) & PTE & \(b_{g,D}\) & \(A_{\rm sn}\) & \(\chi^{2}_{\nu}\) & PTE \\ \hline \(C_{\ell}^{gg}\) & \(1.86^{+0.14}_{-0.14}\) & \(0.93^{+0.10}_{-0.08}\) & 1.7 & 19\% & \(1.53^{+0.09}_{-0.11}\) & \(0.93^{+0.11}_{-0.08}\) & 1.7 & 19\% \\ \(C_{\ell}^{g\kappa}\) & \(2.16^{+0.10}_{-0.09}\) & & 1.2 & 30\% & \(1.39^{+0.06}_{-0.06}\) & & 1.2 & 32\% \\ \(C_{\ell}^{gg}\) \& \(C_{\ell}^{g\kappa}\) & \(2.08^{+0.09}_{-0.09}\) & \(0.89^{+0.08}_{-0.06}\) & 1.4 & 18\% & \(1.41^{+0.06}_{-0.05}\) & \(1.01^{+0.08}_{-0.09}\) & 1.2 & 25\% \\ \hline \end{tabular} \end{table} Table 3: Comparison of bias estimates for the constant bias and constant amplitude models, using different power spectrum measurements at the fiducial \(\ell<(250,500)\) scale cuts, together with the HALOFIT matter power spectrum. Figure 3: Comparison of the linear and HALOFIT matter power spectrum for the auto- and cross-correlation (_left_ and _right_, respectively). We note that the shot noise is reported for each multipole separately, while the correlation signal is calculated in bins of 50 multipoles. The solid lines show the best-fit results with different 3D power spectrum models, while dashed lines show the models with the same best-fit parameters as obtained for the solid lines, but with only the matter power spectrum changed to the other model. Hence, the difference between the corresponding solid and dashed lines stems only from the difference between the linear and HALOFIT models. The vertical dashed lines mark the multipole ranges used in this analysis: the fiducial \(50\leq\ell\leq 250\) and \(50\leq\ell\leq 500\), as well as the larger \(50\leq\ell\leq 500\) and \(50\leq\ell\leq 800\), for \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\) respectively. The fits shown here were made on the fiducial multipole range, where differences between the linear and HALOFIT models are within \(1\sigma-2\sigma\) of the errors on the data measurements. Future releases of LoTSS data may allow us to better constrain the values of the quadratic bias model, which imposes fewer assumptions on bias evolution. With the current data, the predictions of the quadratic bias model are in good agreement with those of the \(1/D(z)\) model. 
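The value \(b(z=0.82)=2.34\pm 0.10\) quoted above follows directly from the \(1/D(z)\) model; a short consistency check (illustrative, and mildly dependent on the assumed cosmology) is:

```python
# Evaluate b(z) = b_{g,D} / D(z) at the median redshift of the sample.
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.26503, Omega_b=0.04939, h=0.6732,
                      sigma8=0.8111, n_s=0.96605)
b_gD, z_med = 1.41, 0.82
D = ccl.growth_factor(cosmo, 1.0 / (1.0 + z_med))   # linear growth, D(z=0) = 1
print(b_gD / D)                                     # ~2.3, cf. 2.34 +/- 0.10 in the text
```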
From these results, we are led to conclude that the more reliable setup for carrying out cosmological analyses with the LoTSS data is to adopt the conservative scale cuts \(\ell<(250,500)\), paired with the HALOFIT matter power spectrum and the \(1/D(z)\) bias model, as the simplest parametrisation able to describe all the data used in the analysis. The different results from the literature shown in Fig. 5 are in rough agreement with our measurements. It is worth noting, however, that each of these works was carried out on samples of radio galaxies with different depths and different ratios of AGNs and SFGs, also using different bias parametrisations, and hence a direct comparison is not possible. The most direct comparisons can be made with the measurements of A21, using a sample with flux density above 2 mJy and SNR above 5 defined on the LoTSS DR1 catalogue, and with the estimate of H23, based on the angular auto-correlation function of LoTSS DR2 for a flux limit of 1.5 mJy and an SNR cut of 7.5. A21 found the joint constraints on galaxy bias to depend strongly on the assumed redshift distribution of the sample, although the cross-correlation alone was extremely robust against this systematic. In this case, for the \(1/D(z)\) model, A21 finds \(b_{g,D}=1.46\pm 0.28\), in agreement with our findings (\(b_{g,D}=1.40\pm 0.07\) for the 2.0 mJy, \(\mathrm{SNR}\geq 5\) sample). H23 used a setup that is much more similar to ours, although only studying the LoTSS auto-correlation. Using the linear matter power spectrum and a scale range \(36<\ell<360\) (assuming the correspondence \(\ell=180^{\circ}/\theta\) between real- and harmonic-space scales), H23 found \(b_{g,D}=1.79^{+0.15}_{-0.14}\), which differs by \(1.3\sigma\) from the corresponding setup in our analysis, yielding \(b_{g,D}=1.61\pm 0.11\) when using only the auto-correlation at \(50<\ell<250\) and the linear matter power spectrum. Our bias estimate becomes lower, \(b_{g,D}=1.54\pm 0.06\), and the difference becomes larger, at a level of \(1.8\sigma\), if we add the cross-correlation, as shown in the first row of Table 4. The difference becomes even larger, at a level of \(2.3\sigma\), if we assume the HALOFIT matter power spectrum, which gives better fits in our case and yields \(b_{g,D}=1.41\pm 0.06\), in comparison to \(b_{g,D}=1.75^{+0.16}_{-0.15}\) from H23 when also using HALOFIT. We follow up on this in Sect. 6. 
\begin{table} \begin{tabular}{l l c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{\(b_{g}(z)=b_{g,D}/D(z)\)} & \multicolumn{4}{c}{\(b_{g}(z)=b_{0}+b_{1}z+b_{2}z^{2}\)} \\ & & \(b_{g,D}\) & \(A_{sn}\) & \(\chi^{2}_{r}\) & PTE & \(b_{0}\) & \(b_{1}\) & \(b_{2}\) & \(A_{sn}\) & \(\chi^{2}_{r}\) & PTE \\ \hline \(\ell<(250,500)\) & linear & \(1.54^{+0.06}_{-0.06}\) & \(1.19^{+0.06}_{-0.06}\) & 1.4 & 16\% & \(1.54^{+0.21}_{-0.24}\) & \(0.67^{+0.67}_{-0.46}\) & \(0.19^{+0.25}_{-0.28}\) & \(1.21^{+0.06}_{-0.07}\) & 1.8 & 6.8\% \\ & HALOFIT & \(1.41^{+0.06}_{-0.05}\) & \(1.01^{+0.08}_{-0.09}\) & 1.2 & 25\% & \(1.56^{+0.19}_{-0.21}\) & \(0.57^{+0.50}_{-0.39}\) & \(0.06^{+0.20}_{-0.17}\) & \(0.98^{+0.10}_{-0.09}\) & 1.4 & 17\% \\ \(\ell<(500,800)\) & linear & \(1.65^{+0.04}_{-0.04}\) & \(1.14^{+0.02}_{-0.02}\) & 1.8 & 1.6 & \(1.60^{+0.19}_{-0.26}\) & \(0.83^{+0.77}_{-0.59}\) & \(0.23^{+0.28}_{-0.34}\) & \(1.15^{+0.02}_{-0.02}\) & 1.9 & 0.9\% \\ & HALOFIT & \(1.44^{+0.04}_{-0.04}\) & \(1.04^{+0.02}_{-0.02}\) & 1.5 & 7.8\% & \(1.51^{+0.16}_{-0.18}\) & \(0.65^{+0.53}_{-0.45}\) & \(0.11^{+0.23}_{-0.22}\) & \(1.04^{+0.02}_{-0.02}\) & 1.6 & 3.9\% \\ \hline \end{tabular} \end{table} Table 4: Comparison of bias estimates for constant amplitude and quadratic models, using different multipole ranges and modelling of matter power spectrum, obtained with both \(C_{\ell}^{gs}\) and \(C_{\ell}^{gs}\). The last column shows the number of data points. Figure 4: Correlation matrix (\(t_{ij}\), see eq. 19) for \(C_{\ell}^{gs}\), \(C_{\ell}^{gs}\) power spectra. Multipole ranges \(50\leq\ell\leq 500\) and \(50\leq\ell\leq 800\) are shown, which are the largest ranges used in this work, in bins of \(\Delta\ell=50\). Figure 5: Bias constraints for three different models based on \(C_{\ell}^{gs}\) and \(C_{\ell}^{gs}\) at the fiducial \(\ell<(250,500)\), and for the HALOFIT matter power spectrum. The blue vertical line shows the LoTSS DR2 median redshift and 68 percentiles based on the deep fields \(N(z)\). The markers show bias estimates for radio galaxies known from the literature (Nusser & Tiwari 2015; Hale et al. 2018; Alonso et al. 2021; Chakraborty et al. 2020; Mazumder et al. 2022; Hale et al. 2023). Markers with white fillings and grey/black borders stand for AGNs/SFGs respectively, while markers with grey fillings and black borders denote mixed populations. ### Bias evolution and clustering redshifts Since all the bias models explored here assume some form of bias evolution, we carry out one additional test of their validity, making use of the clustering redshifts technique (Newman, 2008; Menard et al., 2013; Scottez et al., 2016). Clustering-based redshift estimation uses a set of angular cross-correlations between a sample for which the redshift distribution is unknown and a reference spectroscopic sample with known redshifts, to infer the unknown redshift distribution. Due to the degeneracy between the redshift distribution and the galaxy bias of the target sample in setting the amplitude of the cross-correlations, the technique is in fact only able to constrain the combination \(b_{g}(z)\,p(z)\). Although this degeneracy with bias evolution is one of the drawbacks of the clustering redshifts technique, we can use it to our advantage in order to validate our assumptions regarding bias evolution. 
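Since the clustering-redshift technique constrains the product \(b_{g}(z)\,p(z)\), the quantity we compare against is simply the fitted redshift distribution of Eq. (14) multiplied by the best-fit \(1/D(z)\) bias model. A minimal sketch, using the 1.5 mJy parameters from Table 1 and \(b_{g,D}=1.41\) (illustrative values):

```python
# Product b_g(z) p(z) predicted by our fiducial model, for comparison with clustering redshifts.
import numpy as np
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.26503, Omega_b=0.04939, h=0.6732,
                      sigma8=0.8111, n_s=0.96605)

z = np.linspace(0.01, 6.0, 600)
pz = z**2 / (1 + z) * (np.exp(-z / 0.05) + 0.20**2 / (1 + z)**4.9)
pz /= np.trapz(pz, z)                                   # normalised p(z), Eq. (14)
bz = 1.41 / ccl.growth_factor(cosmo, 1.0 / (1.0 + z))   # b(z) = b_{g,D}/D(z)
bpz = bz * pz                                           # quantity constrained by clustering-z
```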
We estimate \(b_{g}(z)\,p(z)\) for our sample using the public tool Tomographer (http://tomographer.org; Chiang and Menard, 2019), which uses about 2 million spectroscopic objects covering about 10,000 square degrees, based on samples of galaxies and quasars from the Sloan Digital Sky Survey (SDSS, Strauss et al., 2002; Blanton et al., 2005; Schneider et al., 2010; Reid et al., 2016; Paris et al., 2017; Ata et al., 2018; Bautista et al., 2018). We compare the result from Tomographer with the product of the redshift distribution obtained from the deep fields and the best-fit \(1/D(z)\) bias model. Figure 6 shows the results, with Tomographer measurements shown as points with error bars, and the 68% confidence interval of our estimated \(b_{g}(z)\,p(z)\) shown as an orange band. Both results are in broad agreement, but there is some potential evidence of a higher bias at \(z\gtrsim 1.5\), which could be confirmed with future LoTSS data releases, or with a dedicated cross-correlation analysis involving a dense optical galaxy sample at those redshifts (e.g. Meisner et al., 2018; Storey-Fisher et al., 2023). ### Constraining \(\sigma_{8}\) We put constraints on the \(\sigma_{8}\) parameter by using \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\) at the fiducial \(\ell<(250,500)\) scale cuts, together with the deep fields \(p(z)\), the HALOFIT matter power spectrum, and the \(1/D(z)\) bias model, as justified in previous sections. Our data are not yet powerful enough to break the degeneracy between different cosmological parameters, and therefore we only vary the amplitude of matter fluctuations, parametrised by \(\sigma_{8}\). Our measurement thus corresponds to an independent constraint on the growth of structure at low redshifts, assuming that CMB data can reliably constrain all background evolution parameters (\(\Omega_{\rm c}\), \(\Omega_{\rm b}\), \(H_{0}\), etc.). Combinations with other datasets (e.g. BAO measurements) may allow us to break these degeneracies independently from the CMB, but we leave this analysis for future work. The resulting 68% CL constraints on \(\sigma_{8}\) are \[\sigma_{8}=0.75^{+0.05}_{-0.04}. \tag{25}\] The full marginalised distribution is shown in Figure 7, together with the constraints from Planck (Planck Collaboration et al., 2020), as well as the Kilo-Degree Survey (KiDS, Heymans et al., 2021), and the Dark Energy Survey (DES, Abbott et al., 2022). The fiducial measurement (first from the top in Fig. 7) is in agreement with the weak lensing surveys, and differs by \(1.2\sigma\) from the CMB constraints by Planck. The measurement using the linear matter power spectrum (second from the top in Figure 7) is in agreement with the fiducial setup, which uses the HALOFIT modelling, but lies closer to the Planck measurement than the HALOFIT result does. To test the robustness of this result to the choice of galaxy sample, we repeat the analysis for the fiducial flux and SNR cuts of A21 (2.0 mJy, 5.0 respectively, third and fourth from the top in Figure 7). We obtain \(\sigma_{8}=0.82^{+0.08}_{-0.07}\), in agreement with the result found for the fiducial sample, but we note a higher uncertainty in the estimates obtained from this additional sample. These results are summarised in Table 5. The full posterior distribution of all model parameters is shown in Figure A.1. Figure 6: Comparison of our fit to the redshift distribution and bias, from our fiducial approach with the \(b_{g,D}/D(z)\) bias modelling, against the results from Tomographer. 
Figure 7: Constraints on \(\sigma_{8}\) using \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\) at the fiducial \(\ell<(250,500)\) scale cuts, the \(b_{g,D}/D(z)\) bias modelling, the HALOFIT matter power spectrum, and the Planck cosmology assumed for parameters other than \(\sigma_{8}\). The top bar shows constraints from LoTSS DR1 (Alonso et al., 2021), and the three bottom bars present Planck (Planck Collaboration et al., 2020), KiDS (Heymans et al., 2021), and DES (Abbott et al., 2022). ### Integrated Sachs-Wolfe effect The ISW signal is especially useful for cosmology, as it is sensitive to the dark energy equation of state. However, a sufficiently large sky coverage of the galaxy sample is needed for it to be detected, which is not yet the case for LoTSS. Figure 8 shows the measured cross-correlation between our LoTSS DR2 sample limited at \(1.5\,\mathrm{mJy}\) and the CMB temperature anisotropies measured by Planck. This measurement was carried out using thinner \(\ell\) bins (\(\Delta\ell=16\)) to concentrate on the largest scales, where the ISW signal is most significant. The orange lines in the same plot show the theoretical prediction for values of the galaxy bias selected by the MCMC chains run in Section 5.2 for the \(1/D(z)\) model using HALOFIT. Fixing the galaxy bias to the best-fit value found in Section 5.2, and comparing the \(\chi^{2}\) value of the measured \(C_{\ell}^{gT}\) with respect to the null hypothesis and the best-fit model, we determine that the ISW signal is not significantly detected. A higher-significance measurement of this signal can be expected with future releases of LoTSS covering the full northern sky. ## 6 Discussion We have shown that our results are in reasonable agreement with previous measurements of the galaxy bias for various radio samples, including previous CMB lensing cross-correlation analyses. The comparison with the real-space analysis of LoTSS DR2 carried out in H23 shows a \(1.3\sigma\) difference if we use only the auto-correlation and the linear matter power spectrum, a \(1.8\sigma\) difference if we add the cross-correlation, and a \(2.3\sigma\) difference if we assume the HALOFIT modelling for both approaches, which provides better fits in our case. As shown in Hamana et al. (2022), the difference in \(S_{8}=\sigma_{8}(\Omega_{m}/0.3)^{1/2}\) estimates between real and harmonic space can be even larger than \(1\sigma\). In our case, there are several possible reasons for the resulting difference. * Pure sample variance, due to the fact that the two analyses do not use exactly the same modes. We only expect harmonic-space and real-space methods to agree for full sky coverage, or for isotropic sampling of a statistically isotropic universe. The latter condition is violated in practice, and we attempted to correct for this by means of the weight mask and the data cuts that we applied. However, the remaining difference between the real- and harmonic-space analyses may indicate that we did not correct for all the large-scale systematics. * The two approaches treat multi-component sources in different ways. In our case, we marginalise over the amplitude of the shot noise, whereas H23 select scales that avoid the effects of multi-component sources. * The angular two-point correlation function can be affected by contamination from a dipole. 
Chen & Schwarz (2016) show that an excess of two-point correlation at the degree scale in the NVSS data set can be removed by properly removing the NVSS dipole before analysing the two-point correlation. A study of this issue in the scope of the LoTSS survey will be published in Bohme et al. (2023). ## 7 Conclusion We combined the LoTSS DR2 wide field and the LoTSS DR1 deep fields, supplemented by multiwavelength data, with gravitational lensing of the Cosmic Microwave Background (CMB) measured by Planck, to place constraints on the bias of radio galaxies and its evolution, and on the amplitude of matter perturbations. Our main results can be summarised as follows: * We obtain one of the most significant detections of the cross-correlation between radio and CMB lensing data, with a signal-to-noise ratio of 26.6. * We show that the inclusion of CMB lensing information leads to a clear preference for an evolving galaxy bias, growing towards higher redshifts, as expected from linear theory. We determine that a linear bias evolution of the form \(b_{g}(z)=b_{g,D}/D(z)\), where \(D(z)\) is the linear growth factor, is able to consistently provide a good description of the different sectors of the data. This allows us to measure \(b_{g,D}=1.41\pm 0.06\), which evaluates to \(b(z=0.82)=2.34\pm 0.10\) at the median survey redshift, for a sample flux-limited at \(1.5\,\mathrm{mJy}\). These results are also in good agreement with more flexible bias parametrisations (e.g. a quadratic polynomial in redshift), which lead to similar constraints. * Combining the galaxy auto-correlation and its cross-correlation with CMB lensing, we constrain the amplitude of matter fluctuations to \(\sigma_{8}=0.75^{+0.05}_{-0.04}\), compatible with weak lensing measurements from KiDS (Asgari et al., 2021; Heymans et al., 2021) and DES (Abbott et al., 2022), as well as CMB data from Planck. * We attempt a first measurement of the ISW signal with LOFAR data, but find that the signal is compatible with zero. Figure 8: Cross-correlation with the CMB temperature, based on the fiducial bias constraints from Section 5.2, using scales \(2<\ell<50\). The signal is consistent with zero. \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Sample & Matter power spectrum & \(\sigma_{8}\) & \(b_{g,D}\) & \(A_{\rm sn}\) & \(\chi^{2}_{\nu}\) & PTE \\ \hline (\(1.5\,\mathrm{mJy}\), \(\mathrm{SNR}>7.5\)) & HALOFIT & \(0.75^{+0.05}_{-0.04}\) & \(1.62^{+0.21}_{-0.19}\) & \(0.97^{+0.09}_{-0.09}\) & 1.2 & 26\% \\ & linear & \(0.79^{+0.05}_{-0.05}\) & \(1.62^{+0.20}_{-0.18}\) & \(1.17^{+0.08}_{-0.08}\) & 1.5 & 12\% \\ (\(2.0\,\mathrm{mJy}\), \(\mathrm{SNR}>5.0\)) & HALOFIT & \(0.82^{+0.08}_{-0.07}\) & \(1.38^{+0.25}_{-0.22}\) & \(1.17^{+0.10}_{-0.09}\) & 1.4 & 20\% \\ & linear & \(0.82^{+0.06}_{-0.06}\) & \(1.49^{+0.20}_{-0.18}\) & \(1.30^{+0.06}_{-0.08}\) & 1.8 & 5.7\% \\ \hline \end{tabular} \end{table} Table 5: Comparison of \(\sigma_{8}\) estimates for two different choices of data cuts, using both \(C_{\ell}^{gg}\) and \(C_{\ell}^{g\kappa}\) at the fiducial scale cuts \(\ell<(250,500)\). The number of data points in the power spectra is 13. Throughout this analysis, we used conservative scale cuts, \(\ell<250\) for \(C_{\ell}^{gg}\) and \(\ell<500\) for \(C_{\ell}^{g\kappa}\), and showed that for more permissive cuts (\(\ell<(500,800)\)), including mildly non-linear scales, the simple linear bias models used here are not able to fit the data adequately. More work is needed in order to provide a robust model for the bias of radio galaxies that extends to non-linear scales. 
This could be done by making use of perturbative bias expansions (Matsubara, 2008; Desjacques et al., 2018), or phenomenological halo-based models (Peacock and Smith, 2000; Berlind and Weinberg, 2002). However, additional information, in the form of cross correlations with other tracers (e.g. optical redshift surveys, tomographic cosmic shear data) will be necessary in order to disentangle non-linear bias from evolutionary effects. The strength of our approach comes from the ability of high-resolution radio surveys to detect galaxies at high redshifts, and from the combination of both auto- and cross-correlations with CMB lensing, which allows us to break degeneracies between the amplitude of matter perturbations and galaxy bias, and to potentially constrain the redshift evolution of the latter. However, by far the largest source of systematic uncertainty in our results is the lack of redshift information for radio continuum samples. We have modelled and calibrated the redshift distribution of our sample using three deep fields within the LoTSS DR2 footprint. The resulting redshift distribution is still subject to caveats, due to the use of photometric redshifts, the small area covered by the deep fields, and the uncertainty due to radio sources with no optical cross-matches (around 5% of our sample). In our analysis, however, we have propagated the \(p(z)\) calibration uncertainties, by making the redshift distribution measurements part of the data vector, modelled together with the galaxy and CMB lensing power spectra. Future LoTSS data releases will include one additional deep field and even deeper observations of those fields, which will likely help reduce this source of uncertainty. The current LoTSS catalog does not allow us to make a significant detection of the ISW effect. However, future data releases covering the majority of northern sky, should allow us to improve this result. If included in the cosmological analysis, this measurement could help improve cosmological constraints, particularly in the context of dark energy, which is an important source of the ISW signal. In this work, we have demonstrated the significant improvement on cosmological and astrophysical constraints from radio continuum data enabled by the inclusion of CMB lensing cross-correlations. This will allow future LOFAR data releases to start providing meaningful constraints on cosmological parameters, on par with (and in combination with) other probes of the large-scale structure. ###### Acknowledgements. SIN is supported by the US National Science Foundation (NSF) through grant AST-2108402, and the Polish National Science Centre through grant UMO-2013/13/NSF/03975. DA acknowledges support from the Beecroft Trust, and from the Science and Technology Facilities Council through an Emesi Rutherford Fellowship, grant reference ST/P004474. MBI is supported by the Polish National Science Center through grants no. 2020/38/E/ST9/00395, 2018/30/E/ST9/00698, 2018/31/G/ST9/03388 and 2020/39/B/ST9/03494. DIS acknowledges support from the Bundesministerium fur Bildung und Forschung (BMBF) eIA-PhrO-PG 05A2019 and Ministerium fur Kulther und Wissenschaft des Landes Nordrhein-Westfalen Profilsublizd 2020 grant B3D. CLH is ausponskut fur Experimentphysat in the Leverhulme Trust through an Early Career Research Fellowship. 
AP is supported by the Polish National Science Centre grant UMO-2018/30/MST9/00757, and by COST (European Cooperation in Science and Technology) through grant COST Action CA21136 - "Addressing observational tensions" in cosmology with systematics and fundamental physics (Cos-moVrese)." AP and MBi acknowledge support from the Polish Ministry of Science and Higher Education through grant DIR/WK/2018/12. CSH's work is funded by the Volksawegren Foundation. CSH acknowledges additional support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 - "Quantum Universe" - 39083330 and EXC 2181/1 - 390909048 the Heidelberg STRUCTURE Universe - Excellence Clustery. CLH acknowledges support from the Leverhulme Trust through an Early Career Research Fellowship. PT acknowledges the support of the RFIS grant (No. 1225104322) by the National Natural Science Foundation of China (NSFC). ZK is supported by the project "NRW-Cluster for data intensive radio astronomy: Big Bang to Big Data (B3D)/funded through the programme "Profilsublizd 2020", an initiative of the Ministry of Culture and Science of the State of North Rhine-Westphalia. MNRG acknowledges support from the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 - "Quantum Universe" - 390833306 and from the BMBF EU-Pro grant 05A2023. MIJ acknowledges support of the STFC consolidated grant [ST/S000488/1] and [ST/V000903/1] and from a UKRI Frontiers Research Grant [EP/K02659/11]. MIJ also acknowledges support from the Oxford Hintze Centre for Astrophysical Surveys which is funded through generous support from the Hintze Family Charitable Foundation. LOFAR is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various par- ties (each with their own funding sources), and which are collectively operated by the I1T foundation under a joint scientific policy. The I1T resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Univer- sitie of Orleans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFD), Department of Business, Enwerprise and Innovation (DED), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research was carried out using Python 3 (Van Rosam de Drake, 2009) and a number of software packages from which we enumerate the most significant ones for our analysis: leapy (Zonca et al., 2019), HEALPix (Gorski et al., 2005), Astropy (Astropy Collaboration et al., 2013, 2018, 2022), ppynster (Aalonso et al., 2019), yunc (Chisari et al., 2019), emece (Foreman-Mackey et al., 2013), getdist (Lewis, 2019), Numly (Harris et al., 2020), SciPy (Virtanen et al., 2020), (Python (Perez and Granger, 2007), Pandas (Wes McKinney, 2010), Marplotlib (Hunter, 2007), seaborn (Wakon, 2021), tqdn (da Costa-Luis, 2019).
2305.03692
Suppression of dark-state polariton collapses in cold-atom quantum memory
We observe dark-state polariton collapses and revivals in a quantum memory based on electromagnetically induced transparency on a cloud of cold cesium atoms in a magnetic field. Using $\sigma^+$ polarized signal and control beams in the direction of the magnetic field, we suppress the dark-state polariton collapses by polarizing the atoms towards one of the stretched Zeeman states and optimizing the frequency detuning of the control beam. In this way, we demonstrate a quantum memory with only partial dark-state polariton collapses, making the memory usable at any storage time, not only at discretized times of revivals. We obtain storage time of more than 400 $\rm{\mu}$s, which is ten times longer than what we can achieve by trying to annul the magnetic field.
Katja Gosar, Vesna Pirc Jevšenak, Tadej Mežnaršič, Samo Beguš, Tomasz Krehlik, Dušan Ponikvar, Erik Zupanič, Peter Jeglič
2023-05-05T17:13:51Z
http://arxiv.org/abs/2305.03692v2
# Suppression of dark-state polariton collapses in cold-atom quantum memory ###### Abstract We observe dark-state polariton collapses and revivals in a quantum memory based on electromagnetically induced transparency on a cloud of cold cesium atoms in a magnetic field. Using \(\sigma^{+}\) polarized signal and control beams in the direction of the magnetic field, we suppress the dark-state polariton collapses by polarizing the atoms towards one of the stretched Zeeman states and optimizing the frequency detuning of the control beam. In this way, we demonstrate a quantum memory with only partial dark-state polariton collapses, making the memory usable at any storage time, not only at discretized times of revivals. We obtain storage time of more than \(400\,\mathrm{\SIUnitSymbolMicro s}\), which is ten times longer than what we can achieve by trying to annul the magnetic field. ## I Introduction The ability to coherently store light and to recall it at a later time is essential for quantum communication [1; 2]. Such quantum memories have been subject to a lot of research in recent years [3; 4], and the use of electromagnetically induced transparency (EIT) on hot and cold atoms has proven to be a very promising method [5; 6]. EIT occurs when two laser beams that form a \(\Lambda\)-type system are shone onto a dense cloud of atoms. These beams drive a two-photon transition from the ground state to the storage state via an excited state. The signal beam couples the ground state to an excited state, while the control beam couples the storage state and excited state. The signal beam is much weaker than the control beam. Under these conditions, the absorption of the signal beam is greatly reduced, and the refractive index undergoes a steep variation at the resonance frequency. This leads to a strong reduction of the group velocity of the signal beam, causing a phenomenon called slow light [7; 8; 9]. A pulse of the signal beam, slowed down by EIT, can be described as a quasi-particle called a dark-state polariton (DSP) [10; 11; 12]. It has an electromagnetic component and an atomic component. While the signal pulse is slowly propagating through the atoms, the stronger beam, called the control beam, can be adiabatically turned off. This causes the electromagnetic part of the DSP, along with the group velocity of the signal pulse, to be reduced to zero. The information of the signal pulse is thus stored in the spin coherence between the ground state and the storage state. This coherence is called a spin-wave and evolves temporally with a frequency \(\omega_{sw}=\omega_{s}-\omega_{g}\), where \(\hbar\omega_{s}\) and \(\hbar\omega_{g}\) are the energies of the storage and ground states. After a desired time, adiabatically turning the control beam back on transfers the information from the atomic component of the DSP back into the electromagnetic component and the signal pulse is restored [13; 14; 15]. For the ideal quantum memory, a high efficiency and a long time of storage are desired. The latter is limited by dephasing of the atomic coherence due to the atomic motion [16; 17] and any magnetic field gradients that might be present [18]. While annulling stray fields is hard, especially in cold atom experiments, deliberately turning on a strong perpendicular homogeneous magnetic field has been shown to actually improve the lifetime of cold atom-based quantum memories [19]. 
Moreover, even a weak residual magnetic field may cause the atomic states to undergo Zeeman splitting and the \(\Lambda\)-systems are no longer degenerate. Consequently, once we turn off the control beam to store the signal pulse, many spin-waves with different energies \(\hbar\omega_{sw}\) are formed. Because these spin-waves evolve with different frequencies, they interfere with each other. Depending on when we turn the control beam back on, this causes collapses and revivals of the amplitude of the retrieved light pulse as a function of storage time. If the magnetic field is weak, as residual fields in cold atom experiments usually are, the time between revivals is large [13; 20]. This may result in only the initial collapse being visible, as the consequent revivals are further than the intrinsic lifetime of the memory allows. Therefore, the effective lifetime of the quantum memory is much shorter than if there was no magnetic field present. If, however, we turn on a stronger magnetic field, the time between revivals decreases and much longer lifetimes are achievable, limited now mostly by just the atomic motion. There remains one major challenge. Due to these collapses, the quantum memory is, in a way, discretized. The question, therefore, is how to reduce these collapses so that the quantum memory can be used for all storage times. In this article, we first show how the effective lifetime of the quantum memory can be improved by applying a homogeneous magnetic field on unpolarized cesium cold atoms. Then we show how polarizing the atoms in an even stronger magnetic field suppresses the storage collapses. Lastly, we demonstrate the use of frequency selectivity to decrease the collapses even further and overall increase the quantum memory lifetime tenfold. ## II Experiment After 7 s of loading, we prepare a cloud of \(5\times 10^{7}\) cesium atoms in a magneto-optical trap at \(\sim\)70 \(\mathrm{\SIUnitSymbolMicro K}\). Then we use the compressed MOT and molasses technique to further cool the atoms to \(\sim\)13 \(\mathrm{\SIUnitSymbolMicro K}\) and transfer them into \(F=3\). The details of this procedure can be found in Ref. [21]. We start the memory measurement 4 ms after we turn off the MOT to ensure the quadrupole coil has completely turned off. Right before the memory measurement, we can shine a strong polarizing beam with \(\sigma^{+}\) polarization to transfer approximately 80% of the atoms into \(|F=3,m_{F}=3\rangle\). The quantization axis is defined by the magnetic field \(B\) parallel to the signal beam. To store light, we shine on the atoms with the control beam and a 0.5 \(\mathrm{\SIUnitSymbolMicro s}\) pulse of the signal beam. Simultaneously with the end of the pulse, we turn off the control beam. After a selected storage time, we turn on the control beam again. We detect the stored signal light with a fast photodiode (Thorlabs PDA8A2, 50 MHz). We shine the beams at a small angle of \(\sim\)0.5\({}^{\circ}\) that allows us to spatially filter the beams by blocking the control beam on an iris. In Section II.1, we describe experiments where the signal beam is \(\sigma^{+}\) polarized and the control is \(\sigma^{-}\) polarized. In this case, the beams are additionally separated by a polarizing beam splitter. In experiments described in Section II.2, both control and signal beam have \(\sigma^{+}\) polarization, therefore we cannot use polarization filtering of the two beams. The setup is illustrated in Fig. 1a. 
### Unpolarized atoms First, we demonstrate the occurrence of quantum memory revivals in a system of completely unpolarized atoms. In this experiment, we do not use the polarizing beam and the magnetic field is perpendicular to the direction of the probe beam. The control beam is, in this case, \(\sigma^{-}\) polarized. All of this ensures that the atoms are distributed across all \(m_{F}\) states. The amplitude of retrieved light pulses can be ex Figure 1: (a) Experimental setup for Section II.2. We combine the control and signal beam on a polarizing beam splitter (PBS) and set both polarizations to \(\sigma^{+}\) with a polarizer (POL) and a quarter-wave plate. The two beams travel at an angle of \(\sim\)0.5\({}^{\circ}\) and intersect at the position of the atomic cloud in the ultra-high vacuum chamber. On the other side of the chamber, we block the control light with an iris and measure the intensity of the signal beam with a photodiode. (b) Energy levels of Cs D2 transition used for EIT. The control beam drives the transition \(|F=4\rangle\rightarrow|F^{\prime}=4\rangle\) and the signal beam is on the \(|F=3\rangle\rightarrow|F^{\prime}=4\rangle\) transition. In the presence of a magnetic field, the \(m_{F}\) levels are not degenerate because of Zeeman splitting, as shown in the figure. In this case, seven different \(\Lambda\) systems (three-level systems exhibiting EIT) contribute to the quantum memory, each with a slightly different energy difference. The energies of the created spin-waves \(\hbar\omega_{sw}=\hbar\omega_{s}-\hbar\omega_{g}\) are shown in (c). Figure 2: Retrieval efficiency with non-polarized atoms in magnetic fields of different strengths. Spin waves from different \(m_{F}\) states interfere and create a periodic pattern in the light retrieval from the quantum memory as a function of storage time. The frequency of the occurrence of the revivals is proportional to the magnetic field strength. Around \(t_{\mathrm{storage}}=53\,\mathrm{\SIUnitSymbolMicro s}\) we achieve a higher retrieval efficiency for 161 \(\mathrm{mG}\) than for the lowest achievable magnetic field. The lines are a guide to the eye. pressed as \[A(t)=A(0)\left|\sum_{n=-3}^{3}\sum_{m=-4}^{4}\!P_{n,m}\mathrm{e}^{i\left(\omega_{0 }+(n+m)\frac{g\mu_{B}B}{\hbar}\right)t}\right|^{2}f(t,\tau) \tag{1}\] where \(A(0)\) is the initial amplitude of the stored light (adapted from Ref. [15]) and \(f(t,\tau)\) is a function describing the decay of efficency due to dephasing with a characteristic lifetime \(\tau\). The sums go over all 7 and 9 magnetic sublevels of the ground \(F=3\) state and the \(F=4\) states respectively. A combination of \(n\) and \(m\) corresponds to different coherences and their amplitudes are described by \(P_{n,m}\). Because the coherences are formed by a two photon transition, \(P_{n,m}\) is zero if \(|n-m|>2\), since the absorption or emission of a photon changes the magnetic number by at most one. \(\omega_{0}\) is the frequency of the \(F=3\) to \(F=4\) clock transition in cesium, which is \(9.193\,\mathrm{GHz}\). \(\omega_{L}\) is the Larmor frequency \(\omega_{L}=g\mu_{B}B/\hbar\), where \(g\approx 0.35\,\mathrm{MHz/G}\) is the Lande \(g\)-factor of \(F=3\) ground state. In equation (1) it is already taken into account that the Lande \(g\)-factor is of equal magnitude and opposite sign for \(F=4\) ground state [22]. \(\mu_{B}\) is the Bohr magneton and \(B\) is the applied magnetic field. The specific form of \(f(t,\tau)\) depends on the dephasing mechanisms involved. 
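To make the interference picture of Eq. (1) concrete, the following minimal Python sketch evaluates the sum numerically. The coherence amplitudes \(P_{n,m}\) are random placeholders and the Gaussian decay envelope (anticipating the form adopted just below) uses a hypothetical lifetime; the sketch is illustrative only, not a fit to the data.

```python
import numpy as np

# Illustrative evaluation of Eq. (1); the common exp(i*omega_0*t) factor is a
# global phase and is dropped. Amplitudes and lifetime are hypothetical.
omega_L = 2 * np.pi * 0.35e6 * 0.161     # Larmor angular frequency for B = 161 mG (0.35 MHz/G from the text)
tau = 44e-6                              # Gaussian dephasing time in seconds (hypothetical)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 200e-6, 4000)
signal = np.zeros_like(t, dtype=complex)
for n in range(-3, 4):                   # m_F sublevels of the F = 3 ground state
    for m in range(-4, 5):               # m_F sublevels of the F = 4 storage state
        if abs(n - m) <= 2:              # two-photon selection rule from the text
            signal += rng.random() * np.exp(1j * (n + m) * omega_L * t)

A = np.abs(signal) ** 2 * np.exp(-(t / tau) ** 2)   # |...|^2 times f(t, tau)
A /= A.max()
# A(t) collapses and revives every Larmor period 2*pi/omega_L (about 17.7 us here),
# as in Fig. 2; for sigma+/sigma+ beams (Eq. (2)) the revivals come twice as often.
```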
Our results are best described by a Gaussian function \(f(t,\tau)=\exp\left(-t^{2}/\tau^{2}\right)\). We measure efficiency of the quantum memory retrieval as a function of the storage time for several different amplitudes of the magnetic field. The results are shown in Fig. 2. We observe that peaks in the retrieval efficiency occur every Larmor period \(2\pi/\omega_{L}\). Interestingly, for longer storage times, the retrieval efficiency is higher at the peak of the memory revival at the highest shown magnetic field (\(161\,\mathrm{mG}\), blue) than what we measure at the lowest achievable magnetic field (violet). From that, we conclude that even at the smallest magnetic field we can achieve, the magnetic field is not completely compensated and the observed lifetime \(\tau=\)\(44\,\mathrm{\SIUnitSymbolMicro s}\) is limited by magnetic dephasing rather than other mechanisms. The measurement at the higher magnetic field proves that the intrinsic lifetime of the memory is longer than the lifetime measured at the lowest achievable magnetic field. Since the widths of these peaks are inversely proportional to the magnitude of the magnetic field [20], we were able to approximate that there is \(\sim 3\,\mathrm{mG}\) of stray magnetic field present for the measurement shown in violet. ### Polarized atoms We try to suppress the effect of the DSP collapses by polarizing the atomic cloud. Here we use the polarizing beam and set the polarization of the control and signal beam to both be \(\sigma^{+}\), with the magnetic field in the direction of the beams. This way we excite fewer distinct spin waves, that are non-degenerate due to the Zeeman splitting. To describe this situation, we rewrite Eq. (1) by taking into account that we use \(\sigma^{+}\)-polarized signal and control beams. In this case, the only allowed coherences are ones with \(m=n\). This results in \[A(t)=A(0)\left|\sum_{m=-3}^{3}p_{m}\mathrm{e}^{i(\omega_{0}+2m\omega_{L})t} \right|^{2}f(t,\tau) \tag{2}\] where we denoted \(P_{m,m}=p_{m}\). We see that, in this case, the revivals occur every half Larmor period and not only every Larmor period as in the previous section. Fig. 3a compares measurements in a similar magnetic field with and without the polarizing beam. We see that by using the polarizing beam we achieve that the retrieval efficiency no longer falls to zero between revivals. Fig. 3b shows a measurement of the intrinsic lifetime of the memory using polarized atoms in a magnetic field of \(1.0\,\mathrm{G}\). The lifetime is \(\tau=440\,\mathrm{\SIUnitSymbolMicro s}\). Insets show the oscillations of the retrieval efficiency for two different ranges of storage time. We measure the amplitude of the oscillations as a function of the strength of the magnetic field. In Fig. 4a we plot the maximal and the minimal retrieval efficiency and the relative amplitude of oscillations. The oscillations are clearly suppressed at higher magnetic fields. We see that in the experiment, time evolution of the retrieved light has a simple one-frequency cosine function shape. This is because only two coherences are formed - \(p_{3}\) and \(p_{2}\), as the atoms are almost perfectly polarized Figure 3: The effect of polarizing the atoms and the lifetime of revivals. (a) Stored light as a function of storage time in a similar magnetic field with and without using a polarizing beam to polarize atoms. 
(b) With polarized atoms in an applied magnetic field we measured a lifetime of the quantum memory \(\tau=\)\(440\,\mathrm{\SIUnitSymbolMicro s}\) and without retrieval efficiency falling to zero for any storage time. to \(m_{F}=3\), with a small percentage of atoms still left in \(m_{F}=2\). In this case, equation (2) simplifies to \[A(t)=A(0)\left[p_{3}^{2}+p_{2}^{2}+2p_{2}p_{3}\cos(2\omega_{L}t)\right]f(t,\tau) \tag{3}\] The maximal retrieved signal (without decay) is therefore \(A_{max}=A(0)(p_{3}+p_{2})^{2}\) and the peak-to-peak amplitude of the oscillations is \(A_{osc}=A(0)4p_{2}p_{3}\). The ratio of these two, denoted by \(R\), represents the relative amplitude of the oscillations, and it is equal to \[R=A_{osc}/A_{max}=4p_{2}(1-p_{2}), \tag{4}\] where we already took into account that \(p_{2}+p_{3}=1\). In measurements shown to this point, the frequencies of the signal and control beam were set to the value that resulted in the highest peak efficiencies. Here, we present another way we can make the memory more selective for one \(m_{F}\) component, that is, by increasing the frequency difference of the control and signal beam. In zero magnetic field, the frequency difference for this transition is \(\omega_{0}\). We describe the frequency detuning with the \(\Delta=\omega_{\mathrm{sig}}-\omega_{\mathrm{con}}-\omega_{0}\), where \(\omega_{\mathrm{sig}}\) and \(\omega_{\mathrm{con}}\) are the frequencies of the signal beam and the control beam respectively. Fig. 1c shows how the frequency of the spin wave \(\omega_{sw}\) depends on \(m_{F}\) and the magnetic field, and we expect that the frequency difference of the beams should follow \(\omega_{sw}\). We measure the highest and the lowest point of the oscillations as a function of \(\Delta\). We select the frequency of the signal beam that results in the highest retrieval efficiency and scan the \(\Delta\) by only changing the frequency of the control beam. The results are shown in Fig. 4(b-d) for 0.2 G, 1.0 G and 2.6 G. In the first case, where \(B=0.2\) G, the relative amplitude of the oscillations is independent on the frequency and the peak of the maximal retrieval efficiency is centered on \(\Delta=0\). However, for 1 G, the peak in the maximal retrieval efficiency is at \(\Delta=2.2\) MHz and the relative amplitude significantly decreases for higher \(\Delta\). Examples of the oscillations of retrieval efficiency at 1 G for three different detunings are shown in Fig. 4(e). We see a similar effect for 2.6 G, leading to even lower relative amplitudes that plateau for higher detunings. The position of the peak of the maximal retrieval efficiency aligns well with the shift due to the Zeeman splitting \(\Delta_{sw}=\omega_{sw}-\omega_{0}=2g_{F}\mu_{B}Bm_{F}\) of \(m_{F}=3\), which is 2.1 MHz for 1.0 G and 5.5 MHz for 2.6 G. The smallest relative amplitude we observed is \(R=0.25\), measured at \(\Delta=6.5\) MHz at 2.6 G. Using Equation (4) we can calculate that this corresponds to \(p_{2}=0.07\), meaning that 93% of light is stored in the coherence with \(m_{F}=3\). ## III Discussion and Conclusions It is very challenging to completely annul the magnetic field in a cold atom experiment where the use of magnetic shields is not possible. This means that the effective lifetime of a quantum memory is often limited by the residual magnetic field and not the intrinsic lifetime of the memory. We show that it is beneficial to instead add a strong magnetic field and polarize the atoms into predominantly one \(m_{F}\) state. 
This way we were able to show the intrinsic lifetime in our system is \(\tau=440\) us, even though it is otherwise limited to 44 us by the stray magnetic field. From the shape of the decay of efficiency \(f(t,\tau)\), which is Gaussian, we conclude that the dominant decoherence mechanism in our system are magnetic gradients. \(\tau=440\) us corresponds to a gradient of \(\sim 7\) mG/cm, which agrees with our previous assessment of magnetic field gradients in our system [23]. From the temperature of the atoms and the angle between the beams, we estimate that if we could eliminate the effect of these gradients, the lifetime would be limited to \(\sim\)700 us by atomic motion. The main requirement for this experiment is that the magnetic field is in the direction of the beams and that Figure 4: Dependence of the variation of the retrieval efficiency of the quantum memory as a function of the applied magnetic field and the detuning of the control beam. Upwards pointing blue triangles show the maximal retrieval efficiency of storage and downwards pointing blue triangles show the minimal retrieval efficiency. The red circles show the relative amplitude of the oscillations. (a) The dependence on the magnetic field where the data is measured at such detuning that peak retrieval efficiency is maximal. The ratio clearly shows suppression of the DSP collapses for larger magnetic fields. (b-d) show the dependence of retrieval efficiency of the quantum memory on the detuning \(\Delta\) for three different magnetic fields. As the frequency difference is increased, the process becomes selective for the transitions that are most changed by the magnetic field (highest \(m_{F}\)). This decreases the amplitude of DSP collapses. (e) Shows the oscillations of the retrieval efficiency for the first 17 us of storage time for three different detunings. We show normalized retrieval efficiency, which is the retrieval efficiency divided by the maximum. both the signal and the control beam are \(\sigma^{+}\) polarized. This way, the coherence is formed from states \(|g\rangle\) and \(|s\rangle\) with the same \(m_{F}\) (as shown in Fig. 1). The number of different coherences is therefore much lower than when the magnetic field is perpendicular to the direction of the beams and the coherences form between states with \(m_{F}\) and \(m_{F}\pm 1\) as well. We decrease the number of populated coherences by polarizing the atoms towards the stretched \(m_{F}\) state with a pulse of the polarizing beam before performing the quantum memory experiment. Additionally, we show that we can decrease the influence of \(m_{F}=2\) by detuning the control beam towards higher frequencies, as it then becomes selective for the highest \(m_{F}\) state. In this paper, we show stored light and the retrieval efficiency in arbitrary units, because the input power of the control and signal beams varies between experimental runs. However, it should be noted that in our system with optical depth of \(\sim\)10, the efficiencies of storage reach up to 7%. This could be improved by using an elongated and denser MOT, leading to higher optical depth and therefore higher efficiency [24; 25]. Even though the collapses and revivals of dark-state polaritons present a challenge when trying to achieve a continuous quantum memory, their presence could be useful for certain types of storage. For example, we could exploit the revivals for time-multiplexing of the quantum memory [26]. 
In principle, one could send two signal pulses into the same atomic cloud and read the signals at the time of the corresponding revival for each input pulse separately. Here, the complete collapse of the dark-state polariton would ensure that the output pulse would consist of purely the corresponding input, since the other input is completely suppressed. Additionally, one can imagine that the presence of different possible coherences would allow for multiplexing in the frequency of the signal and control beam. The writing process requires the difference between the frequency of the control and the signal beam to correspond to the atomic transition and the width of this process is only in the MHz range, much narrower than the individual atomic transition. In a high enough magnetic field, the EIT resonances for each Zeeman sublevel would be separated by more than the width of the EIT and we could excite the coherences separately. ###### Acknowledgements. We thank Rok Zitko, Jure Pirman, Ticijana Ban and Damir Aumiler for their comments and discussions. This work was supported by the Slovenian Research Agency (Research Core Fundings No. P1-0125, No. P1-0099 and No. P1-0416, and Research Project No. J2-2514). K.G. and V.P.J. contributed equally to this work.
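As a quick numerical check of the estimate quoted in Sec. II.2, one can invert Eq. (4), \(R=4p_{2}(1-p_{2})\), for the smaller root; the snippet below is purely illustrative.

```python
import numpy as np

# Invert Eq. (4), R = 4*p2*(1 - p2), taking the smaller root for p2.
R = 0.25                                  # smallest observed relative amplitude
p2 = (1 - np.sqrt(1 - R)) / 2
print(round(p2, 3), round(1 - p2, 3))     # ~0.067 and ~0.933, i.e. about 93% of the
                                          # light stored in the m_F = 3 coherence
```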
2306.14866
Enriching the NArabizi Treebank: A Multifaceted Approach to Supporting an Under-Resourced Language
In this paper we address the scarcity of annotated data for NArabizi, a Romanized form of North African Arabic used mostly on social media, which poses challenges for Natural Language Processing (NLP). We introduce an enriched version of NArabizi Treebank (Seddah et al., 2020) with three main contributions: the addition of two novel annotation layers (named entity recognition and offensive language detection) and a re-annotation of the tokenization, morpho-syntactic and syntactic layers that ensure annotation consistency. Our experimental results, using different tokenization schemes, showcase the value of our contributions and highlight the impact of working with non-gold tokenization for NER and dependency parsing. To facilitate future research, we make these annotations publicly available. Our enhanced NArabizi Treebank paves the way for creating sophisticated language models and NLP tools for this under-represented language.
Riabi Arij, Mahamdi Menel, Seddah Djamé
2023-06-26T17:27:31Z
http://arxiv.org/abs/2306.14866v1
# Enriching the NArabizi Treebank: A Multifaceted Approach to Supporting an Under-Resourced Language ###### Abstract In this paper we address the scarcity of annotated data for NArabizi, a Romanized form of North African Arabic used mostly on social media, which poses challenges for Natural Language Processing (NLP). We introduce an enriched version of NArabizi Treebank Seddah et al. (2020) with three main contributions: the addition of two novel annotation layers (named entity recognition and offensive language detection) and a re-annotation of the tokenization, morpho-syntactic and syntactic layers that ensure annotation consistency. Our experimental results, using different tokenization schemes, showcase the value of our contributions and highlight the impact of working with non-gold tokenization for NER and dependency parsing. To facilitate future research, we make these annotations publicly available. Our enhanced NArabizi Treebank paves the way for creating sophisticated language models and NLP tools for this under-represented language. ## 1 Introduction Despite the abundance of rich and diverse dialects worldwide, each possessing distinctive features and characteristics, many of these dialects still lack the necessary resources and support to enable their speakers to access modern technologies in their own language Joshi et al. (2020). Therefore, it is imperative to undertake endeavors aimed at creating annotated corpora, developing language models, and establishing dictionaries and grammars for low-resource dialects. These efforts are crucial for the preservation and advancement of these dynamic languages, which encapsulate unique cultures, histories, and experiences within their respective communities. One notable example of such an effort is the Masakhane community, which is dedicated to enhancing natural language processing (NLP) research for African languages through significant initiatives such as MasakhaNER Adelani et al. (2021). Similar efforts are ongoing for Indonesian languages Cahyawijaya et al. (2022). In addition, a long-standing and somewhat unrelated initiative known as the Universal Dependencies project Nivre et al. (2020) originally aimed to provide a standardized set of syntactic guidelines for a limited number of languages turned out to become the recipient of numerous treebank initiatives for low-resource languages. These initiatives not only adopted the initial guidelines but also expanded upon them to accommodate the unique idiosyncrasies of each language. In this work, we aim to enhance a pre-existing multi-view treebank devoted to a very low-resource language, namely the North-African Arabic dialect written in Latin script, collected from Algerian sources and denoted as the Narabizi treebank, the first available for this dialect, where Arabizi refers to both the practice of writing Arabic using the Latin alphabet and \(N\) for the North African dialect Seddah et al. (2020). Made of noisy user-generated content that exhibits a high level of language variability, its annotations faced many challenges as described by the authors and contained remaining errors Touileb and Barnes (2021). Our work builds on previous efforts to annotate and standardize treebank annotations for low-resource languages to enhance the quality and consistency of linguistic resources Schluter and van Genabith (2007); Sade et al. (2018); Turk et al. (2019); Zariquiqiey et al. (2022). Following previous research, we consider the impact of refining annotation schemes on downstream tasks. 
Mille et al. (2012) examine how much a treebank's performance relies on its annotation scheme and whether employing a more linguistically rich scheme would decrease performance. Their findings indicate that using a fine-grained annotation for training a parser does not necessarily improve performance when parsing with a coarse-grained tagset. This observation is relevant to our study as we expect refining the treebank could enhance the parsing performance even though the inherent variability of this language, which, tied to its small size treebank, could bring a negative impact on such enhancements. On the other hand, the experiments conducted by Schluter and van Genabith (2007) demonstrate that using a cleaner and more coherent treebank yields superior results compared to a treebank with a training set five times larger. This observation highlights the significance of high-quality dataset annotations, particularly for smaller datasets. This understanding primarily drives the goal of improving the NArabizi treebank's annotations. In this context, we propose a heavily revised version of NArabizi treebank Seddah et al. (2020) that includes two novel annotation layers for Named Entity Recognition (NER) and offensive language detection. One of the goals of this work is also to study the impact of non-gold tokenization on NER, a scenario almost never investigated by the community Bareket and Tsarfaty (2021). Our primary contributions are as follows: * Using error mining tools, we release a new corrected version of the treebank, which leads to improved downstream task performance. * We show that corrections made to a small size treebank of a highly variable language favorably impacts the performance of NLP models trained on it. * We augment the treebank by adding NER annotations and offensive language detection, expanding its applicability in various NLP tasks. * We homogenize tokenization across the dataset, analyze the impact of proper tokenization on UD tasks and NER and conduct a realistic evaluation on predicted tokenization, including NER evaluation. The enhanced version of the Narabizi Treebank is freely available.1 Footnote 1: [https://gitlab.inria.fr/ariabi/release-narabizi-treebank](https://gitlab.inria.fr/ariabi/release-narabizi-treebank) ## 2 Related work NArabiziThe Arabic language exhibits diglossia, where Modern Standard Arabic (MSA) is employed in formal contexts, while dialectal forms are used informally Habash (2010). Dialectal forms, which display significant variability across regions and predominantly exist in spoken form, lack standardized spelling when written. Many Arabic speakers employ the Latin script for transcribing their dialects online, using digits and symbols for phonemes not easily mapped to Latin letters Seddah et al. (2020). This written form, known as Arabizi and its North African variant, NArabizi, often showcases code-switching with French and Amazigh Amazouz et al. (2017). Textual resources for Arabizi primarily consist of noisy, user-generated content Foster (2010); Seddah et al. (2012); Eisenstein (2013), complicating the creation of supervised models or collection of extensive pre-training datasets. The original NArabizi treebank Seddah et al. (2020), contains about 1500 sentences. The sentences are randomly sampled from the romanized Algerian dialectal Arabic corpus of Cotterell et al. (2014) and from a small corpus of lyrics from Algerian dialectal Arabic songs popular among the younger generation. 
This treebank is manually annotated with morpho-syntactic information (parts-of-speech and morphological features), together with glosses and code-switching labels at the word level, as well as sentence-level translations to French. Moreover, this treebank also contains 36% of French tokens. Since its creation, this treebank spawned two derived versions that first added a transliteration to the Arabic script at the word level and sentiment and topic annotation at the sentence level Youileb and Barnes (2021). In parallel to our own corrections and annotation work2, Touileb (2022) extended this work to include a named-entity annotation layer. Footnote 2: Released on November 26th, 2022, the same day as the publication of Touileb (2022). Treebanking for User-generated ContentTreebanks and annotated corpora have greatly impacted NLP tools, applications, and research in general. Despite the challenges of constructing large and structurally consistent corpora, which requires considerable effort and time, many in the field considered this pursuit valuable and necessary de Marneffe et al. (2021). However, constructing treebanks for user-generated content is more challenging due to the extensive variation in language usage and style, the prevalence of non-standard spellings and grammar, and the necessity for domain-specific annotations Sanguinetti et al. (2022). Interest in treebanking user-generated content, such as social media posts and online forum discussions, has risen, and numerous efforts have been undertaken to create treebanks for user-generated content Foster et al. (2011); Seddah et al. (2012); Sanguinetti et al., 2018; Rehbein et al., 2019; Sanguinetti et al., 2020). NER for Dialects and User-generated Content NER is an information extraction task that identifies and categorizes entities at the token level. It is an extensively investigated NLP task with numerous datasets and models for various languages. However, datasets for low-resource languages are rare, and NER datasets for social media platforms such as Twitter predominantly exist for English (Ritter et al., 2011; Derczynski et al., 2016, 2017). A prominent NER dataset for _lower-than-English_ resource languages is the CoNLL 2002 Shared Task dataset (Tjong Kim Sang, 2002), which provides NER annotations for four languages: Dutch, Spanish, Chinese, and Czech. Additionally, the WikiAnn dataset (Pan et al., 2017) includes NER annotations for several low-resource languages. Nevertheless, it is derived from Wikipedia content which is not well-suited for NER tasks involving user-generated content. As mentioned above, Touileb (2022) added a NER annotation for the first version of the NArabizi treebank. However, they did not address the tokenization issues inherent in the dataset and used a different annotation scheme. The following sections delve deeper into the tokenization challenges and the differences between the two datasets. ## 3 Extending a Low-resource Language treebank In this section, we outline our methodology for expanding and enhancing the NArabizi treebank. We start by re-annotating tokenization, morpho-syntactic, and syntactic layers to ensure consistency, followed by detailing the annotation guidelines and procedures for NER and Offensive Language detection. We refer to the initial treebank introduced by Seddah et al. (2020) as NArabiziV1 and our extended version as NArabiziV2. 
### Maintaining Consistency in Treebank Annotations We start with an extended clean-up of the NArabiziV1 formatting, which involves reinstating missing part-of-speech tags and rectifying Conflu formatting discrepancies. Then, we embark on general error mining in the lexical and syntactical annotation and correction phase. We implement this stage using semi-automated methods. We do not change the UD tagsets used in the original treebank. Error MiningWe use the UD validator Vr2.13, a tool designed to assess the annotation of treebanks in UD and ensure compliance with the UD specifications. The validator is specifically employed to detect common errors, such as invalid dependency relations, incorrect part-of-speech tags, and inconsistent usage of features like tense and aspect. By leveraging the UD validator, we guarantee that our dataset is syntactically consistent and conforms to the standards established by the UD project. These changes encompass correcting cycle and projectivity issues and removing duplicates. Footnote 3: [https://github.com/UniversalDependencies/too](https://github.com/UniversalDependencies/too) ls/releases/tag/r2.11 We also use Errator (Wisniewski, 2018), a data mining tool, to pinpoint inconsistencies in our dataset. It implements the annotation principle presented by Boyd et al. (2008), which suggests that if two identical word sequences have different annotations, one is likely erroneous. We remove the duplicated sentences when the text field is an exact match and fix duplicated sentence identification for different sentences. We also fixed some problems with the original text, such as Arabic characters encoding and sentence boundaries. TokenizationWe address tokenization concerns to uphold consistency in the NArabizi Treebank annotations. Furthermore, we introduce targeted adjustments to resolve issues related to segmenting specific word classes, including conjunctions, interjections (e.g., "ya"), determiners, and prepositions, especially when adjacent to noun phrases. For example, we segment determiners at the initial vowel ("a" or "e"), as demonstrated in the examples "e ssalam" ("the peace") and "e dounoub" ("the sins"). The lemma field for these terms is aligned with the French translation for the splitting (e.g., "e ssalam" \(\Rightarrow\) "la paix" ("the peace")). For prepositions, we perform splitting at the first letter followed by "i" when possible, as seen in "brabi" \(\Rightarrow\) "b rabi" ("with my god"). We also establish rules for segmenting determiners and proper nouns. When possible, we separate prepositions at the initial letter and "i" and instituted guidelines for segmenting determiners and proper nouns. We implement these alterations for splitting using the Grew graph rewriting tool for NLP (Guillaume, 2021) to improve the consistency and quality of the treebank annotations. Additionally, we fix all the problems mentioned by Touileb and Barnes (2021) regarding the incoherence of the tokenization, wrong translations, and incoherent annotations. TranslationThe translation quality is also enhanced; previously, translations were not consistently carried out by Algerian speakers, resulting in local expressions and phrases being frequently misinterpreted, either in a literal manner or, at times, entirely inaccurately. This had implications for lexical and syntactical annotation. For instance, the term "skara" was initially annotated as "on purpose" but was later revised to "taunting". 
Recognizing that "skara fi" represents a local expression facilitates annotation and promotes corpus harmonization. ExampleIn Figure 1, we illustrate a parse tree before and after applying several corrections. Tokenization errors in French were rectified ("jetaime" \(\Rightarrow\) "je t aime"), and Arabic prepositions, articles, and conjunctions were separated from the nouns or adverbs they were attached to ("fal3ali" \(\Rightarrow\) "f al 3ali", "wdima" \(\Rightarrow\) "w dima"). We also correct some dependency relations: the previous "obj" relation between the verb "aimer" and the proper noun "madjid" was altered to "vocative". Interesting PropertiesThe corpus displays several interesting linguistic features, including _parataxis_, _goeswith_, and dislocated structures, characteristic of oral productions and user-generated content. A deeper examination of the root/parataxis ratio and the average parataxis per tree in the corpus, which contains 2066 parataxis for 1287 sentences, shows that the corpus exhibits a high level of juxtaposed clauses resulting from the absence of punctuation. Given the initial data sources (web forums), it is likely that these end of sentences markers were initially present as carriage returns. As pointed out by Seddah et al. (2020) the corpus also exhibits a high level of spelling variation, reflecting the speakers' diversity in terms of geography and accents. Furthermore, analyzing the number of sentences without a verb and the average number of verbs per sentence shows that NArabizi speakers tend to favor nominalization, as seen in the abundance of ellipses (e.g., "rabbit m3ak" which translates in English to "God bless you"). ### Annotation Methodology for NER and Offensive Language Detection Named Entity RecognitionOur NER annotation guidelines are based on the revised tokenization of the NArabizi treebank, which ensures consistency between token-level annotations, an essential aspect of multi-task learning. We use the Inception tool Klie et al. (2018) for our manual annotation by two native speakers, adhering to the IOB2 Scheme Tjong Kim Sang and Veenstra (1999). Each word is labeled with a tag indicating whether it is at the beginning, inside, or outside of a named entity. In case of disagreement between annotators, the multiple annotations were subsequently discussed until agreement was reached, and one annotation was selected to be retained. We extend the FTB NER Figure 1: Illustration of an example from the NAarabizi treebank before and after the modifications. (Sagot et al., 2012) French treebank annotations. Our annotation contains the following NE types: PER for real or fictional persons, ORG for organizations, LOC for locations, COMP for companies, and OTH for brands, events, and products. In cases of ambiguity between products and companies, we adhere to the decision made in the FTB dataset. For person names, we exclude grammatical or contextual words from the mention. We annotate football teams as organizations, and we annotate mentions of "Allah" or "Rabi" as PERderivA. The PERderivA annotation is applied to groups of individuals who originate from or share the same location. Country names are consistently labeled as locations, irrespective of the context. TV channels and ambiguous brand names are annotated as companies, while religious groups are not designated entities. The names of football stadiums are classified under OTH, whereas journal names are identified as organizations. 
Table 1 presents the distribution of entities, with a similar distribution observed across both the development and test splits. The most frequent entity type is PERderivA, while the least frequent is COMP. Table 2 displays the number of unique words which can provide information about the language used in the corpus. The fact that the count of unique tokens constitutes nearly half of the total tokens suggests that the language used in the corpus is complex and diverse, with a wide range of vocabulary and expressions. This can make it more challenging for NER algorithms to accurately identify and classify named entities in the corpus. Touileb (2022) recently introduced NERDz, a version of the NArabizi treebank annotated for NER. As our dataset's annotation labels differ from theirs, we establish a mapping between the two annotation schemes to enable comparisons (cf. see Table 10 in the appendix A). Our schemes also differ in named entities' scope, as we split contracted forms, ours only cover the nominal phrase parts. Regarding nouns, such as "_bled_", which means _country_, some are annotated as entity GPE in NERDz, which is not the case in our dataset. Also, the names of stadiums are annotated as LOC in NERDz while they are considered OTH in our dataset. Similarly, for "_equipe nationale_", which means _national team_ is annotated ORG in NERDz, while we do not consider it as an entity, following the FTB NER's guidelines. Added to annotator divergences, this may explain the differences in the count of the entities. Offensive Language ClassificationThe annotation process for offensive language classification was conducted manually by three annotators with diverse backgrounds. The annotators consisted of two females and one male, each bringing unique expertise to the task. One female annotator is a Ph.D. student in NLP, the other is a Ph.D. student in political sciences, and the male annotator is an engineer with in-depth knowledge of North African football, a prominent topic in the dataset. The annotators were asked to annotate every sentence as offensive (OFF) or non-offensive (NOTEOFF). Offensive posts included any form of unacceptable language, targeted offense (veiled or direct), insults, threats, profane language, and swear words. To maintain objectivity and minimize potential bias, the annotators were not granted access to the other annotators' work and were not allowed to discuss their annotations with one another. This approach ensured the independence of their judgments, allowing for a more reliable evaluation of the offensive language classification process. For the offensive annotation, the two female annotators did not usually agree with the male annotator as they have different backgrounds and hence different opinions about football-related sentences. The final label is determined through a majority voting process. Additionally, we calculate the average \begin{table} \begin{tabular}{l r r r r} \hline \hline Type & train & dev & test & Total \\ \hline PER & 371 & 61 & 47 & 479 \\ LOC & 358 & 58 & 50 & 466 \\ ORG & 200 & 23 & 28 & 251 \\ COMP & 6 & 5 & 3 & 14 \\ OTH & 44 & 6 & 7 & 57 \\ PERderiv & 96 & 14 & 13 & 123 \\ PERderivA & 386 & 57 & 66 & 509 \\ \hline Total & 1461 & 224 & 214 & 1899 \\ \hline \hline \end{tabular} \end{table} Table 1: Named entity type distribution across train, dev, and test splits. 
\begin{table} \begin{tabular}{l r r r} \hline \hline Type & train & dev & test \\ \hline nb sentences & 1003 & 139 & 145 \\ nb tokens & 15522 & 2124 & 2118 \\ nb unique tokens & 6652 & 1284 & 1327 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of the deduplicated corpus across train, dev, and test splits. The train-dev intersection contains 549 tokens, the train-test intersection contains 551 tokens, and the dev-test intersection contains 266 tokens. pair-wise Cohen's \(\kappa\)(Cohen, 1960) to highlight how hard this task was. The average \(\kappa\) value is 0.54, indicating a moderate agreement between annotators, common in sentence level annotation for annotators with different backgrounds and topic familiarity (Bobicev and Sokolova, 2017). This disagreement likely stems from the interpretation of terms that can be considered offensive or non-offensive depending on either the dialect or context. Table 3 presents the distribution of non-offensive and offensive language instances. The dataset features an imbalance between non-offensive and offensive classes, with non-offensive samples being considerably more frequent in each split. ## 4 Dataset Evaluation We evaluate the NarabiziV2 dataset on UD parsing tasks and NER using standard transfer learning architectures on which we vary the pre-trained language model and the tokenization scenario. New NArabizi CharacterBert ModelFollowing Riabi et al. (2021), we train a CharacterBERT El Boukkouri et al. (2020) model, a character-based BERT variant, on a NArabizi new filtered corpus. The authors demonstrate that Character-BERT achieves significant results when dealing with noisy data while being extremely data efficient. We improve the initial pre-training dataset used by Riabi et al. (2021) by more stringently filtering non-NArabizi examples from the 99k instances provided by Seddah et al. (2020), as well as incorporating new samples from the CTAB corpus Amara et al. (2021) and 12k comments extracted from various Facebook and forum posts, mostly in the Tunisian dialect taken from different datasets listed by Younes et al. (2020). This results in a 111k sentence corpus. To exclude non-NArabizi content, we first use a language detection tool Nakatani (2010) with a 0.9 confidence threshold to eliminate text in French, English, Hindi, Indonesian, and Russian, which are commonly found in mixed Arabizi data. Following the filtering process, a bootstrap sampling method is adopted to randomly select a subset of the remaining text for manual annotation. This annotated text is then used to train an SVM classifier for NArabizi detection. The final dataset, containing 91k annotated text instances after deduplication, focuses on North African Arabizi text. We make this corpus publicly available. Sub-word ModelsWe also evaluate the performance of subword-based language models, monolingual and multilingual. For the multilingual subword-based language model, we use mBERT, the multilingual version of BERT Devlin et al. (2018). It is trained on data from Wikipedia in 104 different languages, including French and Arabic. Muller et al. (2020) demonstrated that such a model could be transferred to NArabizi to some degree. Finally, our monolingual model is DziriBERT Abdaoui et al. (2021), a monolingual BERT model trained on 1.2M tweets from major and highly-populated Algerian cities scrapped using a set of popular keywords in the Algerian spoken dialect in both Arabic and Latin scripts. 
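Referring back to the corpus-filtering step described in this section, the snippet below sketches the two-stage idea of a confidence-thresholded language-identification pass followed by an SVM classifier trained on a small manually labelled sample. Only the language-id tool, the 0.9 threshold, and the use of an SVM come from the text; the features, data, and hyper-parameters are hypothetical.

```python
# Illustrative two-stage filtering: language-id pass, then an SVM detector.
# All texts, labels, and hyper-parameters here are hypothetical.
from langdetect import detect_langs                       # Nakatani (2010)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def keep_after_langid(text, excluded=("fr", "en", "hi", "id", "ru"), thr=0.9):
    """Drop texts confidently identified as one of the excluded languages."""
    try:
        guesses = detect_langs(text)
    except Exception:                     # very short strings may fail detection
        return True
    return not any(g.lang in excluded and g.prob >= thr for g in guesses)

# Hypothetical hand-annotated bootstrap sample: 1 = NArabizi, 0 = other.
sample = ["rabi m3ak khouya", "ceci est une phrase en francais"]
labels = [1, 0]
svm = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
                    LinearSVC())
svm.fit(sample, labels)

candidates = [s for s in ["wach rak khouya", "hello how are you"] if keep_after_langid(s)]
narabizi = [s for s in candidates if svm.predict([s])[0] == 1]
```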
## 5 Results ### New Results for UD For our updated version of the treebank, we present results for models trained and tested on NArabiziV2, as shown in Table 4 and highlighted by a red box. These results represent the new state-of-the-art performance for the treebank, and we report findings for three previously used models. The DziriBERT model exhibits the best performance; however, CharacterBERT delivers competitive results while being trained on a mere 7.5% of the data used for training DziriBERT. This observation is consistent with the conclusions drawn by Riabi et al. (2021). In order to assess the influence of the implemented corrections, we use NArabiziV1 and eliminate duplicate sentences 4. For this comparison, we focused on the DziriBERT model's performance when trained on either NArabiziV1 or NArabiziV2 and tested on NArabiziV2, as denoted by the blue highlights in Table 4. Training on NArabiziV2 enhances the average scores for UPOS, UAS, and LAS by 3.5 points, illustrating the favorable outcomes of the refinements introduced in the NArabiziV2 dataset. This observation is further substan \begin{table} \begin{tabular}{l c c} \hline \hline Split & Non-Offensive & Offensive \\ \hline Train & 804 & 199 \\ Dev & 86 & 53 \\ Test & 118 & 27 \\ \hline \hline \end{tabular} \end{table} Table 3: Offensive language detection distributions across train, dev, and test splits. tiated by examining the performance of CharacterBERT and mBERT, reinforcing the validity of the noted improvements. A comparative analysis of the results for models trained and tested on NArabiziV1, denoted by the blue box, and those for models trained and tested on NArabiziV2, denoted by the red box, reveals that NArabiziV2 generally yields superior evaluation scores. This observation underlines the impact of the treebank's consistency on the overall performance of the models. When we test on NArabiziV1, the model trained on NArabiziV1 gets better results than the model trained on NArabiziV2. The modifications in tokenization can explain this drop in performance. ### Results for NER and Offensive Language Detection NerTable 5 presents the results for NER6. The CharacterBERT model achieves the highest F1 scores for LOC and OTH categories, as well as the best performance for PERderiv and PERderivA. On the other hand, the DziriBERT model outperforms the other models in the ORG and PER categories. It is important to note that the performance varies significantly across the different categories, reflecting the diverse challenges posed by each entity type. For instance, some categories contain named entities with variations of the same word, such as "Alah"/"Alah"/"Elah", which translates into God for PERderivA. Since CharacterBERT uses character-level information, it is more robust to noise, which explains the high performances for those entities. Footnote 6: We use Seqeval Nakayama (2018) classification report. Offensive Language DetectionThe imbalance between non-offensive and offensive instances is challenging during the models' training and evaluation. For example, we fail to train mBERT as it only predicts non-offensive labels corresponding to the majority class. This can also be explained by how hard the distinction between offensive and non-offensive content is without context and external knowledge, as explained before. This also raises the question of how relevant is the backgrounds of the annotators for the offensive detection dataset Basile et al. (2020); Uma et al. (2021); Almanea and Poesio (2022). 
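As a small illustration of the entity-level evaluation mentioned in footnote 6, the snippet below shows the seqeval classification report on IOB2 tag sequences; the gold and predicted sequences are made up for illustration.

```python
# Entity-level scoring with seqeval on IOB2 tag sequences (illustrative data).
from seqeval.metrics import classification_report, f1_score

y_true = [["B-PER", "O", "B-LOC", "I-LOC", "O"],
          ["B-PERderivA", "O", "O"]]
y_pred = [["B-PER", "O", "B-LOC", "O", "O"],
          ["B-PERderivA", "O", "B-ORG"]]

print(f1_score(y_true, y_pred))            # micro-averaged entity-level F1
print(classification_report(y_true, y_pred))
```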
## 6 Discussion ### Impact of the Pre-training Corpus In Appendix A, we present the results of all our experiments using the CharacterBERT model trained by Riabi et al. (2021). We observe a heterogeneous improvement in performance, with predominantly better outcomes for our CharacterBERT. We hypothesize that the impact of filtering the training data may not be overly beneficial, possibly due to some smoothing during the training process. Both models' final training data sizes are comparable: 99k for CharacterBERT Riabi et al. (2021) and 91k for our CharacterBERT. Nevertheless, we believe this new corpus can be a valuable resource for this language. ### Impact of Tokenization In this section, we investigate the tokenization influence on the enhanced NArabizi Treebank, with a particular emphasis on the homogenization of the tokenization 7 and its subsequent impact on our tasks. We also evaluate the models in a realistic scenario where gold tokenization is unavailable. We use the UDPipe tokenizer Straka et al. (2016) that employs a Gated Linear Units (GRUs) Cho \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\(\underbrace{\text{Test}}\)} & \multicolumn{3}{c}{_NArabiziV1_} & \multicolumn{3}{c}{_NArabiziV2_} \\ \cline{3-7} & & UPOS & UAS & LAS & UPOS & UAS & LAS \\ \hline mBERT & \multirow{3}{*}{\(\underbrace{\text{DziriBERT}}\)} & 77.42 \(\pm\) 1.52 & 68.91 \(\pm\) 0.05 & 56.19 \(\pm\) 0.06 & 74.59 \(\pm\) 1.42 & 66.01 \(\pm\) 0.47 & 53.19 \(\pm\) 0.87 \\ DziriBERT & & 83.57 \(\pm\) 0.92 & 73.97 \(\pm\) 0.72 & 62.04 \(\pm\) 0.54 & 80.19 \(\pm\) 0.82 & 70.28 \(\pm\) 0.83 & 58.63 \(\pm\) 0.78 \\ CharacterBERT & & 76.19 \(\pm\) 2.48 & 68.78 \(\pm\) 0.36 & 55.14 \(\pm\) 0.38 & 73.01 \(\pm\) 2.05 & 66.10 \(\pm\) 0.48 & 52.41 \(\pm\) 0.50 \\ \hline mBERT & \multirow{3}{*}{\(\underbrace{\text{DziriBERT}}\)} & 74.48 \(\pm\) 0.95 & 66.03 \(\pm\) 0.35 & 52.82 \(\pm\) 0.66 & 79.65 \(\pm\) 0.90 & 70.56 \(\pm\) 0.32 & 58.08 \(\pm\) 0.76 \\ DziriBERT & & 78.75 \(\pm\) 1.29 & 70.51 \(\pm\) 0.43 & 57.51 \(\pm\) 0.67 & 83.10 \(\pm\) 1.60 & 74.26 \(\pm\) 0.27 & 62.66 \(\pm\) 0.52 \\ CharacterBERT & & 72.24 \(\pm\) 2.62 & 65.74 \(\pm\) 0.24 & 51.86 \(\pm\) 0.51 & 76.34 \(\pm\) 2.68 & 69.84 \(\pm\) 0.27 & 56.27 \(\pm\) 0.54 \\ \hline \hline \end{tabular} \end{table} Table 4: Results for UD on test set, DEV set is used for validation (with gold tokenization) (We report average of F1 scores over 5 seeds with the standard deviation) et al., 2014) artificial neural network for the identification of token and sentence boundaries in plain text. It processes fixed-length segments of Unicode characters and assigns each character to one of three classes: token boundary follows, sentence boundary follows, or no boundary. The tokenizer is trained using the Adam stochastic optimization method, employing randomly shuffled input sentences to ensure effective tokenization across various NLP tasks. We conduct a 5-fold evaluation using the UDP-Pipe tokenizer and assess its performance based on the token-level, multiword, and word-level scores. The results in Table 7 show high scores for the tokens and words F1 scores demonstrate the tokenizer's efficacy in handling various tokens and words, which shows that the tokenization for NArabizi is learnable. We also notice sub-optimal performance regarding multi-words, due to their random occurrence nature.8. 
Footnote 8: It is important to note that tokens refer to surface tokens (e.g., French “au” counts as one token), while words represent syntactic words (“au” is split into two words, “a” and “le”). For our following experiments, we train a tokenizer using the train and dev as held-out and tokenize the test set for evaluation. We do not predict the boundaries of the sentence. Pos-tagging and Dependency ParsingTable 8 presents the results for models trained on the NArabiziV2 training set and tested on both the predicted tokenization and the previous version of tokenization with gold annotations from NArabiziV2. The outcomes for the predicted tokenization indicate that despite having a well-performing tokenizer, as demonstrated in Table 7, there is still a substantial loss in performance when compared to the gold tokenization results, highlighted by the red box in Table 4. Similarly, using the tokenization from NArabiziV1 and gold annotations from NArabiziV2 also exhibits a significant drop in performance. This observation first highlights the impact of the corrections brought to standardize the treebank tokenization and then, given the difference of performance between predicted and gold tokens, calls for the development of morphological-analysers, crucial for Arabic-based dialects, as UD tokenization is indeed a morpho-syntactic process. best performance on gold and predicted tokenization. Moreover, when evaluated using predicted tokenization, all models demonstrate a similar performance drop. This demonstrates that there is an important gap when evaluating using gold tokenization, which raises the question of how much the current evaluation of NER models reflects the actual model performance in a realistic setting for noisy UGC. ## 7 Conclusion In this paper, we present a comprehensive study on the development and refinement of the NArabizi Treebank Seddah et al. (2020) by improving its annotations, consistency, and tokenization, as well as providing new annotations for NER and offensive language. Our work contributes to the enhancement of the NArabizi Treebank, making it a valuable resource for research on low-resource languages and user-generated content with high variability. We explore the impact of tokenization on the refined NArabizi treebank, employing the UDPipe tokenizer for our evaluation. The results demonstrate the tokenizer's effectiveness in handling various tokens and multiword expressions. Our experiments show that training and testing on the NArabiziv2 improve the UD tasks performances. Furthermore, we show the impact of the tokenization for NER and UD tasks, and we report results using predicted tokenization for evaluation to estimate the models' performance on raw data. Future research could emphasize expanding the NArabizi Treebank towards other dialects and examining the treebank's potential applications in various NLP tasks. Our dataset is made freely available as part of the new version of the Narabizi Treebank9. The next release will additionally contain a set of other sentence translations prepared by a Tunisian speaker. These translations will be interesting for cross-dialect studies, given that the Narabizi corpus is predominantly made of Algerian dialect. Footnote 9: [https://gitlab.inria.fr/ariabi/release-narabizi-treebank](https://gitlab.inria.fr/ariabi/release-narabizi-treebank) ## Acknowledgements We warmly thank the reviewers for their very valuable feedback. 
This work received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101021607. We are grateful to Roman Castagné for his valuable feedback and proofreading and wish to gratefully acknowledge the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
2308.02570
Learning Implicit Entity-object Relations by Bidirectional Generative Alignment for Multimodal NER
The challenge posed by multimodal named entity recognition (MNER) is mainly two-fold: (1) bridging the semantic gap between text and image and (2) matching the entity with its associated object in image. Existing methods fail to capture the implicit entity-object relations, due to the lack of corresponding annotation. In this paper, we propose a bidirectional generative alignment method named BGA-MNER to tackle these issues. Our BGA-MNER consists of \texttt{image2text} and \texttt{text2image} generation with respect to entity-salient content in two modalities. It jointly optimizes the bidirectional reconstruction objectives, leading to aligning the implicit entity-object relations under such direct and powerful constraints. Furthermore, image-text pairs usually contain unmatched components which are noisy for generation. A stage-refined context sampler is proposed to extract the matched cross-modal content for generation. Extensive experiments on two benchmarks demonstrate that our method achieves state-of-the-art performance without image input during inference.
Feng Chen, Jiajia Liu, Kaixiang Ji, Wang Ren, Jian Wang, Jingdong Wang
2023-08-03T10:37:20Z
http://arxiv.org/abs/2308.02570v1
# Learning Implicit Entity-object Relations by Bidirectional Generative Alignment for Multimodal NER ###### Abstract. The challenge posed by multimodal named entity recognition (MNER) is mainly two-fold: (1) bridging the semantic gap between text and image and (2) matching the entity with its associated object in image. Existing methods fail to capture the implicit entity-object relations, due to the lack of corresponding annotation. In this paper, we propose a bidirectional generative alignment method named BGA-MNER to tackle these issues. Our BGA-MNER consists of image2text and text2image generation with respect to entity-salient content in two modalities. It jointly optimizes the bidirectional reconstruction objectives, leading to aligning the implicit entity-object relations under such direct and powerful constraints. Furthermore, image-text pairs usually contain unmatched components which are noisy for generation. A stage-refined context sampler is proposed to extract the matched cross-modal content for generation. Extensive experiments on two benchmarks demonstrate that our method achieves state-of-the-art performance without image input during inference. Named entity recognition, Multimodal alignment, Transformer, Generation
As shown in Figure 1 (a), attention and visual grounding-based methods tend to align the concrete dog in text and image but ignore the desired 'Sebastian', leading to recognizing 'Sebastian' as the PER type depending only on the textual representation. As for the knowledge-based methods shown in Figure 1 (b), the top related prompt retrieved for the image is about a person, which is the main object in the image. However, such information misleads the model into believing that 'Sebastian' is a person, so as to classify it as the PER type. The crux of the issue lies in two aspects. (1) Previous methods are learned under text-dominated NER annotations. This is insufficient for enhancing cross-modal alignment (Zhou et al., 2017), causing heavy modality bias towards textual information and misalignment of entity-object relations. (2) The potential entities in the sentence are often presented with names, which are harder than simple nouns for cross-modal alignment. Therefore, existing models may focus more on noun-object relations, while the entity-object relations are still under-explored (Zhou et al., 2017). In this paper, we propose a novel bidirectional generative alignment method named BGA-MNER to address the above issues. The main idea of BGA-MNER is to use matched two-modality content for latent image2text and text2image generation. Such cross-modal generation supervision focuses on cross-modal alignment and alleviates the modality bias caused by NER supervision. Besides, by involving the entity and its corresponding object in bidirectional cross-modal generation, our method aligns the implicit entity-object relations under a direct and powerful constraint. For example, by enforcing the entity 'Sebastian' to generate the dog object and vice versa in Figure 1 (c), our model learns the entity-object relation mapping directly during training and generates the desired entity-aligned visual features according to the given entity during inference. Thus, we refer to this generation as generative alignment. To be specific, first, we propose a Stage-refined Context Sampler (SCS) to extract the most relevant content in image-text pairs by pruning unmatched tokens/patches. As shown in Figure 1 (c), SCS samples the words 'Sebastian' & 'dogs' and removes the irrelevant content. Second, we design a Multi-level Cross-modal Generator (MCG) to generate corresponding content of one modality with sampled content of the other modality, e.g., generating the visual dog content with the given 'Sebastian' & 'dogs' and vice versa. This bidirectional generation directly learns the implicit relations between 'Sebastian' and the dog with supervision. To ensure successful mutual translation, we further take advantage of cycle consistency from CycleGAN (Zhou et al., 2017) (e.g., the generated visual dog content is expected to be generated back to 'Sebastian' & 'dogs' in Figure 1 (c)), to avoid an under-constrained cross-modal mapping.
With the help of aforementioned modules, the alignment of implicit entity-object relations is consolidated. Our contributions could be summarized as follows: * **A novel framework for MNER.** We propose a new end-to-end Transformer MNER framework, which aligns the two modal features in a generative manner. To the best of our knowledge, this is the first attempt to introduce bidirectional generative alignment for MNER. * **Image-free merit during inference.** By replacing the real visual feature with the generated one for cross-modal interaction, our framework is practical in dealing with text-only inputs and robust to noises from images. * **State-of-the-art performance.** Extensive experiments on Twitter2015 and Twitter2017 benchmarks indicate that our method outperforms existing state-of-the-art methods. Our model also shows superiority of aspects in cross-domain generalization and modality-missing robustness. ## 2. Related Work **Multimodal NER.** Existing MNER methods mainly exploit the effective visual information and suppress the interference information (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2019). For attention-based works, MAF (Wang et al., 2018) proposes a cross-modal co-attention module to calculate the similarity score between text and image, then uses it as the proportion of visual information for cross-modal fusion. To solve the imprecise and biased cross-modal correspondence in attention-based methods, FMIT (Wang et al., 2018) extends the lattice structure from Chinese NER (Hu et al., 2019) to MNER, which depends on noun phrases in sentences and general domain words to obtain visual cues. Besides, promptMNER (Zhou et al., 2017), MRC-MNER (Wang et al., 2018) and CAT-MNER (Zhou et al., 2017) interpolate external knowledge, such as machine reading comprehension, prompt and extended label words, to help multimodal understanding. However, the above studies still fail to capture implicit entity-object relations, due to insufficient NER supervision and specific entity names for cross-modal alignment. In this work, our method aims to enhance entity-object correlation by additional cross-modal generation. **Cross-modal Generation.** Different from multimodal recognition tasks (Wang et al., 2018), image caption (Chen et al., 2019) and text-conditional image generation (Chen et al., 2019) are two typical cross-modal generation tasks for multimodal understanding. Chen et.al. (Chen et al., 2019) explore diverse description modes to produce controllable and informative image descriptions. Imagen Video (Chen et al., 2019) is not only capable of generating videos of high fidelity but also has a high degree of controllability and knowledge in various artistic styles. Inspired by cross-modal generation, Lat (Chen et al., 2019) introduces a latent cross-modal translation with global features to alleviate the information distortion for text-video retrieval. In this work, we focus on fine-grained cross-modal generation in latent space. However, directly applying previous generation methods on our Twitter datasets works poorly, due to the fact that image-text correlation from real-world social media is far worse than image caption datasets. To alleviate the semantic gap in our case, we first estimate the content that has the same meaning in two modalities, and then use such content for cross-modal generation with cycle-consistency loss (Zhou et al., 2017). ## 3. 
Our Method ### Task Formulation Given a sentence \(S=(w_{1},w_{2},...,w_{n})\) composed of \(n\) words and a corresponding image \(I\), MNER aims to assign an NER tag \(y_{i}\in\mathcal{Y}\) to each word \(w_{i}\), where \(\mathcal{Y}\) is a pre-defined label set with the standard BIO schema (Zhou et al., 2017). The predefined entity types usually contain person (PER), location (LOC), organization (ORG), and miscellaneous (MISC). ### Overview As illustrated in Figure 2, we present a novel bidirectional generative alignment method named BGA-MNER, consisting of a Transformer layer for each modality, \(N\) bidirectional generative alignment layers (BGA layers) and a CRF decoder. The BGA layers in the visual branch and textual branch are designed similarly. Taking the textual branch as an example, the Stage-refined Context Sampler module first extracts matched content from image-text pairs. Then, we feed the extracted textual part to the text2image generator \(G_{t2v}\) to synthesize the pseudo visual features. Such cross-modal generation is further ensured by cycle consistency. Finally, the Textual Feature Extractor is a text-conditional hybrid attention layer over real textual features and generated visual features. **Input Embedding.** Given a sentence \(S\), we follow BERT (Dong et al., 2017) to tokenize it into a sequence of word embeddings. Then the special tokens [CLS] and [SEP] are inserted at the beginning and end positions of the word embeddings. As a result, we feed \(\text{T}=\{T_{i}\}_{i=1}^{N_{t}}\in\mathbb{R}^{N_{t}\times d}\) with \(N_{t}\) tokens to the text branch as input. To extract features from images, we leverage ViT-B/32 (Dong et al., 2017) from CLIP (Liu et al., 2019) as the visual feature extractor. Following (Dong et al., 2017), the combination of the image and salient objects according to the text is projected and flattened into patch embeddings \(\mathbf{V}=\{V_{i}\}_{i=1}^{N_{v}}\in\mathbb{R}^{N_{v}\times d}\) with \(N_{v}\) patches. We use the subscripts \(t\) and \(v\) to represent the textual and visual modality respectively. Since the design of the visual branch is almost identical to that of the text branch, we will elaborate our method on the textual branch for simplicity. ### Stage-refined Context Sampler Image and text normally contain irrelevant content which is unsuitable for generation. To boost the generation process, the Stage-refined Context Sampler (SCS) is proposed to adaptively extract the matched content from the two modalities. To avoid excessive computation, we design our SCS with multiple MLP layers to estimate a finer token/patch mask, which denotes keeping the content for generation, by considering the coarse estimation from previous layers. Taking the textual SCS as an example, as illustrated in the bottom-right corner of Figure 2, our SCS is designed in a recursive-refined fashion where the mask assignment depends on both current local features and previously selected global content. Concretely, in the \(l\)-th layer, the current local feature \(z_{t}^{l}\in\mathbb{R}^{N_{t}\times d}\) and the previously selected global content \(g_{t}^{l}\in\mathbb{R}^{1\times d}\) are obtained by two linear projectors: \[z_{t}^{l}=\text{MLP}_{\phi}(T^{l}),\qquad g_{t}^{l}=\text{GAP}(\text{MLP}_{\varphi}(T^{l})*m_{t}^{l-1}), \tag{1}\] where GAP denotes global average pooling and \(m_{t}^{l-1}=\{m_{ti}^{l-1}\}_{i=1}^{N_{t}}\in\mathbb{R}^{N_{t}\times 1}\) is the token-wise textual mask decision from the previous layer. For the first layer, \(m_{t}^{0}\) is initialized as an all-one vector.
Then, the probability \(p_{t}\) of retaining tokens is calculated by \(\text{Softmax}(\text{MLP}_{\Phi}([z_{t}^{l},g_{t}^{l}]))\), where \([\cdot,\cdot]\) denotes the concatenation operation along the channel dimension and \(\phi,\varphi,\Phi\) represent different MLPs. We follow (Kang et al., 2017) and use Gumbel-Softmax to sample the binary decision mask \(m_{t}^{l}\) from \(p_{t}\) in a differentiable manner. In the visual branch, the mask decision \(m_{v}^{l}\) for patch embeddings is obtained similarly. During the optimization procedure, the distributions of the two-modality content extracted by SCS tend to become consistent under the generation supervision. Figure 2. Overview of our BGA-MNER. Our bidirectional generative alignment layer (BGA layer) consists of two Stage-refined Context Samplers (SCS), a Multi-level Cross-modal Generator (MCG) and two feature extractors. During inference, we only process the textual branch without cycle generation. ### Multi-level Cross-modal Generator The Multi-level Cross-modal Generator (MCG) generates the pseudo content of one modality from the sampled content of the other modality. It has a shared text2image generator \(G_{t2v}\) and a shared image2text generator \(G_{v2t}\) across layers to avoid an excessive number of parameters. **Cross-modal Generator.** We utilize transformer decoders with modality queries as the generator (Golovolovolov et al., 2016; Zhang et al., 2017). Taking text2image generation in the \(l\)-th layer as an example, the pseudo visual embeddings \(\hat{V}^{l}\) are calculated with the learnable visual query \(Q_{v}\in\mathbb{R}^{N_{v}\times d}\) (Chen et al., 2017) and the textual content \(T^{l}\) with decision mask \(m_{t}^{l}\): \[\hat{V}^{l}=G_{t2v}(T^{l},Q_{v},m_{t}^{l}). \tag{2}\] In detail, we reformulate the attention of the transformer decoders in \(G_{t2v}\) as1: Footnote 1: We omit the FeedForward Network and LayerNorm. \[\hat{V}^{l}=\text{Softmax}\left(\frac{(Q_{v}W^{Q})(T^{l}W^{K})^{T}}{\sqrt{d}}+\mathcal{M}\right)\cdot T^{l}W^{V}, \tag{3}\] where \(W^{Q}\in\mathbb{R}^{d\times d_{q}},W^{K}\in\mathbb{R}^{d\times d_{k}},W^{V}\in\mathbb{R}^{d\times d_{v}}\) are randomly initialized projection matrices. We set \(d_{q}=d_{k}=d_{v}=d/h\), where \(h\) is the number of heads in each multi-head attention layer. \(\mathcal{M}\in\{0,-\infty\}^{N_{v}\times N_{t}}\) is the attention mask. An element of \(\mathcal{M}\) is set to 0 to keep the corresponding unit and to negative infinity to remove it. \(\mathcal{M}\) is obtained by transposing \(m_{t}^{l}\) and then broadcasting \((m_{t}^{l}-1)\cdot\infty\) from \(1\times N_{t}\) to \(N_{v}\times N_{t}\) resolution. **Reconstruction for Generation.** The reconstruction loss is used to ensure that the desired features are generated. Specifically, the generated textual feature \(\hat{T}^{l}\) is calculated by: \[\hat{T}^{l}=G_{v2t}(V^{l},Q_{t},m_{v}^{l}), \tag{4}\] where \(Q_{t}\in\mathbb{R}^{N_{t}\times d}\) is the learnable textual query. Thus, the reconstruction loss for generation is: \[\mathcal{L}_{recon}^{l}=\sum_{i=1}^{N_{v}}\mathcal{D}_{KL}(\hat{V}^{l}_{i}\,||\,V^{l}_{i})\cdot m_{vi}^{l}+\sum_{i=1}^{N_{t}}\mathcal{D}_{KL}(\hat{T}^{l}_{i}\,||\,T^{l}_{i})\cdot m_{ti}^{l}, \tag{5}\] where \(\mathcal{D}_{KL}\) denotes the Kullback-Leibler divergence. **Inference.** Notably, different from interaction with real two-modality features, our method uses the generated visual feature to simulate the real one for cross-modal interaction. The real visual and textual features are not integrated directly.
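To make the sampler and generator above concrete, the following PyTorch sketch reflects our reading of Eqs. (1)-(5). It is a simplified, single-head illustration rather than the authors' released implementation; the MLP sizes, the Gumbel-Softmax temperature, and the softmax normalization inside the KL term are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageRefinedContextSampler(nn.Module):
    """Sketch of SCS (Eq. 1): decide which tokens/patches to keep at layer l."""
    def __init__(self, d: int):
        super().__init__()
        self.local_mlp = nn.Linear(d, d)      # MLP_phi: per-unit local feature
        self.global_mlp = nn.Linear(d, d)     # MLP_varphi: pooled over previously kept units
        self.score_mlp = nn.Linear(2 * d, 2)  # MLP_Phi: keep / drop logits

    def forward(self, x, prev_mask):
        # x: (B, N, d) token or patch features, prev_mask: (B, N, 1) in {0, 1}
        z = self.local_mlp(x)                                  # local features
        kept = self.global_mlp(x) * prev_mask                  # zero out dropped units
        g = kept.sum(1, keepdim=True) / prev_mask.sum(1, keepdim=True).clamp(min=1)
        logits = self.score_mlp(torch.cat([z, g.expand_as(z)], dim=-1))
        # Differentiable binary decision via Gumbel-Softmax; channel 0 = "keep".
        return F.gumbel_softmax(logits, tau=1.0, hard=True)[..., :1]  # (B, N, 1)

class CrossModalGenerator(nn.Module):
    """Sketch of the masked cross-attention of Eqs. (2)-(3), single head."""
    def __init__(self, d: int, n_query: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(n_query, d))     # learnable modality query
        self.wq, self.wk, self.wv = (nn.Linear(d, d, bias=False) for _ in range(3))

    def forward(self, src, src_mask):
        # src: (B, N_src, d) content of the source modality, src_mask: (B, N_src, 1)
        q = self.wq(self.query).expand(src.size(0), -1, -1)    # (B, N_q, d)
        attn = q @ self.wk(src).transpose(1, 2) / src.size(-1) ** 0.5
        attn = attn.masked_fill(src_mask.transpose(1, 2) == 0, float("-inf"))
        return attn.softmax(dim=-1) @ self.wv(src)             # (B, N_q, d)

def reconstruction_loss(generated, real, mask):
    """Sketch of one term of Eq. (5): KL between generated and real features,
    counted only on units kept by the mask (features softmax-normalized here)."""
    kl = F.kl_div(F.log_softmax(generated, dim=-1),
                  F.softmax(real, dim=-1), reduction="none").sum(-1, keepdim=True)
    return (kl * mask).sum() / mask.sum().clamp(min=1)
```

In the full model, one such sampler and generator pair would run in every BGA layer, with the two generators shared across layers as described above.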
Therefore, it is unnecessary to process the visual branch during inference, which is another merit of our work. Thanks to the SCS that samples salient entity content in text, our MCG directly generates its corresponding visual features, rather than extracting potential visual clues from noisy images. ## 4. Experiments ### Datasets and Evaluation Metrics We test on two publicly benchmark Twitter datasets (Twitter2015 and Twitter2017) which are provided by (Wang et al., 2017) and (Tweweiler et al., 2017), respectively. Table 1 shows the detail of two datasets, including the number of entities for each type and the size of train/dev/test data split. We use the Micro-F1 score (F1) of each type and overall precision (P), recall (R), and Micro-F1 score (F1) to evaluate the performance of the MNER models, which are widely used in many recent works (Kumar et al., 2017; Wang et al., 2017). In all experiments, we use the evaluation code provided by UMT (Wang et al., 2017) for fair comparison. ### Implementation Details Our method is implemented on one NVIDIA P100 GPU with Pytorch 1.7.0. We use the pre-trained uncased BERT-based model (Chen et al., 2016) as textual encoder, and ViT-B/32 from CLIP (Liu et al., 2017) as the visual encoder. Thus, we stack \(N=11\) BGA layers and 1 Transformer layer. The maximum length of the sentence input is set to 128. The visual input includes the global image and objects detected by Faster-RCNN (Fan et al., 2015). All optimizations are performed with the AdamW optimizer and a linear warmup of ratio 0.01. We set batch size to 16, learning rate to \(3\times 10^{-5}\), and training epoch to 30. The coefficient factor \(\alpha\) for balancing two-task loss is set to 0.001. ### Baselines We compare two groups of baseline models with our method. The first group is the representative text-based approaches for NER: (1) _BiLSTM-CRF_, _CNN-BiLSTM-CRF_(Kumar et al., 2017) and _HBiLSTM-CRF_(Kumar et al., 2017) are the NER models with bidirectional LSTM and CRF layer. The difference between them lies in the encoding layers to obtain character-level embedding. (2) _BERT_(Chen et al., 2016) and _BERT-CRF_ exploit more powerful pretrained BERT compared with above methods. The second group includes several MNER approaches. (1) _AdaCAN-CNN-BiLSTM-CRF_(Wang et al., 2017) is a classical CNN+LSTM+CRF combination with an adaptive co-attention network to decide whether to attend to the image. (2) _UMT_(Wang et al., 2017) empowers Transformer with a multimodal interaction module to capture the inter-modality dynamics. (3) _MRC-MNER_(Kumar et al., 2017) and _CAT-MNER_(Wang et al., 2017) use external knowledge to provide prior information about entity types and image regions. For fair comparison, we compare the results of CAT-MNER using uncased BERT-base as text encoder. (4) _FMIT_(Kumar et al., 2017) is the state-of-the-art method using unified lattice structure and entity boundary detection for joint noun-object detection. (5) _HVPNet_(Chen et al., 2016) proposes a hierarchical visual prefix to achieve more effective and robust performance. (6) _R-GCN_(Wang et al., 2017) constructs an inter-modal relation graph and an intra-modal relation graph to gather the image information most relevant to the current text and image from the dataset. ### Main Results Table 2 shows the performance comparison of our method with baseline models on two benchmarks. In the text-based approaches, the powerful BERT and post-doc CRF could bring better performance. 
Apparently, the external knowledge of pretrained text encoder facilitates the understanding of complicated Twitter posts. For the CRF layer, it benefits models by sequential link constraint between two consecutive labels, which reduces the possibility of predicting unreasonable labels like assigning B-PER after I-PER. Besides, the multimodal approaches achieve 1.60%-4.35% improvement over the best text-based approaches. It demonstrates that the additional visual information is helpful for NER. We also compare our method with previous multimodal approaches. It is obvious that our method achieves state-of-the-art results on two datasets simultaneously. Particularly, our method obtains 76.31% and 87.71% overall F1 score on two datasets. Compared with previous best approaches, i.e., R-GCN and FMIT, our method also has great advantages. For example, even though only outperforming R-GCN with 0.6% on Twitter2017, our method obtains 1.31% improvement over it on Twitter2015. The significant improvement over attention-based approaches (UMT, R-GCN), visual grounding based approach (FMT) and knowledge-based approaches (CAT-MNER, MRC-MNER) shows the superiority of our method over current approaches. ### Discussion and Analysis **Ablation Study** To verify the effectiveness of each component, we report the ablation results in Table 3. First, after removing the stage-refined context sampler, the performance drops by 0.43% and 0.18%, respectively. It shows our SCS refines image-text alignment by removing irrelevant content. We further provide the visualization of SCS results in Figure 4. It is easy to observe that our SCS extracts the matched content, including desired entities and matched nouns in image and text. In Figure 4 (a), our SCS simplifies the sentence to two names and a sport which are semantically consistent to the extracted image content. Such matched two-modality content is necessary for following cross-modal generation. Besides, removing the redundant content further boosts the generation process. As shown in Figure 4 (b), the word 'Great day' is hard for generation. Removing such subjective descriptions which are common in social media can alleviate the semantic gap in understanding MNER examples. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Entity Type} & \multicolumn{3}{c|}{Twitter2015} & \multicolumn{3}{c}{Twitter2017} \\ & Train & Dev & Test & Train & Dev & Test \\ \hline Person & 2217 & 552 & 1816 & 2943 & 626 & 621 \\ Location & 2091 & 522 & 1697 & 731 & 173 & 178 \\ Organization & 928 & 247 & 839 & 1674 & 375 & 395 \\ Miscellaneous & 940 & 225 & 726 & 701 & 150 & 157 \\ \hline Total & 6176 & 1546 & 5078 & 6049 & 1324 & 1351 \\ Num of Twitters & 4000 & 1000 & 3257 & 3373 & 723 & 723 \\ \hline \hline \end{tabular} \end{table} Table 1. The statistics of two multimodal Twitter datasets. As for the cycle consistency in MCG, it is proved to be helpful in enhancing multimodal understanding. Solely removing it decreases the performance by 0.61% and 0.39% while eliminating it with SCS deteriorates further. Finally, without SCS and MCG, our method degenerates to the text-based BERT-CRF, with 4.35% and 4.27% drops on two datasets. The significant improvement brought by MCG is from bidirectional generation with entity and its corresponding object, which introduce a direct and powerful constraint on entity-object alignment. To further investigate the effectiveness of our learned bidirectional generation, we conduct the following experiment. 
Given an image-text pair, we first generate the pseudo visual feature from MCG with extracted textual content produced by SCS. Then the similarity is calculated between pseudo visual feature, real visual feature from paired image and real visual feature from other images from dataset. As shown the example in Figure 3 (a), the multisense entity 'Harry Potter' needs additional information to classify, as it can stand for a movie, a book or a character. As we can see, the boy in the paired image is clearly not the character, which makes the entity semantically vague. In our method, we can obtain entity-related pseudo visual information by generation. The generated visual content is more related to the entity-related images (two images in the middle that contain character from Harry Potter story) than the irrelevant image (the forth image). \begin{table} \begin{tabular}{c|c c} \hline \hline Features & Twitter2015 & Twitter2017 \\ \hline real textual + real visual & 75.08 & 86.48 \\ real textual + generated visual & 76.31 & 87.71 \\ \hline \hline \end{tabular} \end{table} Table 4. Ablation study on using real or generated visual features for cross-modal interaction during inference. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{8}{c|}{Twitter2015} & \multicolumn{8}{c}{Twitter2017} \\ \cline{2-13} & \multicolumn{4}{c|}{Single Type (F1)} & \multicolumn{4}{c|}{Overall} & \multicolumn{4}{c}{Single Type (F1)} & \multicolumn{4}{c}{Overall} \\ \cline{2-13} & PER & LOC & ORG & MISC & P & R & F1 & PER & LOC & ORG & MISC & P & R & F1 \\ \hline BiLSTM-CRF & 76.77 & 72.56 & 41.33 & 26.80 & 68.14 & 61.09 & 64.42 & 85.12 & 72.68 & 72.50 & 52.56 & 79.42 & 73.43 & 76.31 \\ CNN-BiLSTM-CRF & 80.86 & 75.39 & 47.77 & 32.61 & 66.24 & 68.09 & 67.15 & 87.99 & 77.44 & 74.02 & 60.82 & 80.00 & 78.76 & 79.37 \\ HBiLSTM-CRF & 82.34 & 76.83 & 51.59 & 32.52 & 70.32 & 68.05 & 69.17 & 87.91 & 78.57 & 76.67 & 59.32 & 82.69 & 78.16 & 80.37 \\ BERT & 84.72 & 79.91 & 58.26 & 38.81 & 68.30 & 74.61 & 71.32 & 90.88 & 84.00 & 79.25 & 61.63 & 82.19 & 83.72 & 82.95 \\ BERT-CRF & 84.74 & 80.51 & 60.27 & 37.29 & 69.22 & 74.59 & 71.81 & 90.25 & 83.05 & 81.13 & 62.21 & 83.32 & 83.57 & 83.44 \\ \hline A-C-BiLSTM-CRF & 81.98 & 78.95 & 53.07 & 34.02 & 72.75 & 68.74 & 70.69 & 89.63 & 77.46 & 79.24 & 62.77 & 84.16 & 80.24 & 82.15 \\ UMT & 85.24 & 81.58 & 63.03 & 39.45 & 71.67 & 75.23 & 73.41 & 91.56 & 84.73 & 82.24 & 70.10 & 85.28 & 85.34 & 85.31 \\ FMIT & 86.77 & **83.93** & **64.88** & 42.97 & 75.11 & **77.43** & 76.25 & 93.14 & **86.52** & 83.93 & 70.90 & 87.51 & 86.08 & 86.79 \\ MRC-MNER & 85.71 & 81.97 & 61.12 & 40.20 & 78.10 & 71.45 & 74.63 & 92.64 & 86.47 & 83.16 & **72.66** & **88.78** & 85.00 & 86.85 \\ HVDNet & - & - & - & - & 73.87 & 76.82 & 75.32 & - & - & - & - & 85.84 & 87.93 & 86.87 \\ CAT-MNER & 85.57 & 82.53 & 63.77 & **43.38** & 76.19 & 74.65 & 75.41 & 91.90 & 85.96 & 83.38 & 68.67 & 87.04 & 84.97 & 85.99 \\ R-GCN & 86.36 & 82.08 & 60.78 & 41.56 & 73.95 & 76.18 & 75.00 & 92.86 & 86.10 & 84.05 & 72.38 & 86.72 & 87.53 & 87.11 \\ BGA-MNER(ours) & **86.80** & 83.62 & 63.60 & 42.65 & **78.60** & 74.16 & **76.31** & **93.71** & 85.55 & **85.71** & 71.05 & 87.71 & **87.71** & **87.71** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance comparison of different competitive text-based and multi-modal approaches on two Twitter datasets. AdaCAN-CNN-BiLSTM-CRF is abbreviated to A-C-BiLSTM-CRF. Figure 3. 
Generative alignment analysis for implicit entity-object relations. We first generate the pseudo visual feature from MCG with the extracted textual content produced by SCS. Then the similarity is calculated between the pseudo visual feature, the real visual feature from the paired image (the first image in each row), and the real visual features from other images in the dataset (the last three images). \begin{table} \begin{tabular}{c|c|c} \hline \hline Settings & Twitter2015 & Twitter2017 \\ \hline BGA-MNER & 76.31 & 87.71 \\ \hline w/o SCS & 75.88 (\(\downarrow\)0.43) & 87.53 (\(\downarrow\)0.18) \\ w/o cycle & 75.70 (\(\downarrow\)0.61) & 87.32 (\(\downarrow\)0.39) \\ w/o SCS \& w/o cycle & 74.90 (\(\downarrow\)1.41) & 86.96 (\(\downarrow\)0.75) \\ w/o SCS \& w/o MCG & 71.81 (\(\downarrow\)4.50) & 83.44 (\(\downarrow\)4.27) \\ \hline \hline \end{tabular} \end{table} Table 3. Ablation study of each component on the overall F1 score of the two datasets. “cycle” denotes the cycle consistency loss. **Image-free Inference.** A natural concern is why BGA-MNER does not use the real visual information during inference. The main reason is that our BGA-MNER can generate well-aligned pseudo visual features during inference, thereby avoiding the potential noise introduced by images. We first compare the results of using real and generated visual features in Table 4. Using real textual and generated visual features is about 1.2% better than its counterpart. We believe the generated visual feature is more effective than the real one, especially for unmatched text-image pairs. As shown in Figure 3 (b), the entity 'LeBron James' does not exist in the paired image. However, we can still obtain its pseudo visual information by generation, where the generated visual content is closer to the images of the well-known player and remains discriminative against irrelevant images. **Influence of Visual Encoder** Since most previous methods [30, 36] adopt ResNet as the visual encoder, we compare our method with them using the same encoder, _i.e._, ViT-B/32 from CLIP, for a fair comparison. As shown in Table 5, first, our BGA-MNER shows great superiority over CAT-MNER [28], which originally uses the same visual encoder as our method. Second, we also fairly compare with other SOTA methods by replacing ResNet152 with ViT-B/32 from CLIP as the visual encoder in UMT and R-GCN. It is obvious that this encoder boosts their results by a 0.1%-0.3% gain; however, our method still outperforms them by a large margin. **Data and Model Efficiency Discussion.** For data efficiency, we randomly sample [50, 100, 200, 400] image-text pairs from the train split as the data-limited training set and then evaluate the model on the original test set. As shown in Figure 5, the curve of BGA is generally higher than the others across the different sample sizes, indicating that our method is more data-efficient than the compared approaches. Specifically, in the extremely limited case with 50 examples, our BGA still outperforms R-GCN by nearly 20% in F1-PER and 10% in F1-Overall on Twitter2015. For the infrequent entity types, i.e., ORG in Twitter2015 and LOC, MISC in Twitter2017, our method also achieves better results than other works. For model efficiency, as shown in Table 5, our method is more lightweight than other methods. In detail, the textual SCS modules contain 11.5M parameters in total and the text2image generator has 7.9M parameters. We can easily observe that (1) the additional parameters from SCS and MCG are negligible.
(2) During inference we drop the whole visual branch, including SCS for image, image2text generation, and visual feature extractor. This design slims our method to a pure text-based BERT-CRF model. **Cross-domain Generalization.** The difference in type distribution and data characteristics often brings significant performance gaps in practice. In Table 6, we compare our method with other approaches in cross-domain scenarios. Cross-domain generalization analysis is implemented by training on the source dataset while testing on the target dataset. In all metrics, our BGA-MNER outperforms existing approaches by a large margin. Compared with UMT, our method outperforms it by 4.58% and 4.04% on F1 score. For CAT-MNER, our BGA-MNER shows superiority with 0.28% and 0.33% gain. These results demonstrate the strong generalization ability of our model. **Modality-missing Evaluation.** In social media, users do not always post with additional images. During inference, this modality-missing problem usually results in the failure of multimodal models trained on full-modality samples. We further analyze the property of existing MNER approaches against this issue in Table 7 by replacing all images in the test set with a uniform empty image. It can be observed that (1) UMT, MAF and R-GCN suffer from the unavailability of images with 0.2%-0.6% drops. These approaches rely on valid visual information for cross-modal interaction to recognize entities, leading to the performance decrease in modality-missing condition. (2) BGA-MNER and ITA [26] do not need the visual input, so as to be resistant to this issue. Moreover, our method outperforms ITA with 0.33% and 2.22% improvement on two datasets. We believe the multiple BGA layers facilitate the entity-object alignment by ensuring mutual translation at different semantic levels. **Case Study** To better understand the advantage of our BGA-MNER, we select the representative samples with predictions of BERT-CRF [5] and UMT [30] in Figure 6. First, we can see that BERT-CRF and UMT heavily rely on nouns in text to recognize entities which may introduce harmful textual prior. For example, 'County' and 'Center' in case (b) and (c) mislead these two models with emphasis bias of location, leading to predict 'Dickson County's and 'Frank Erwin Center' as LOC type. In contrary, our BGA-MNER takes advantage of bidirectional gener \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Twitter2017\(-\)Twitter2015} & \multicolumn{2}{c}{Twitter2015\(-\)Twitter2017} \\ & P & R & F1 & P & R & F1 \\ \hline UMT & 64.67 & 63.59 & 64.13 & 67.80 & 55.23 & 60.87 \\ UMOF & 67.00 & 62.18 & 66.21 & 69.88 & 56.92 & 62.74 \\ FMT & 66.72 & 69.73 & 68.19 & 70.65 & 59.22 & 64.43 \\ CAT-MNER & 74.86 & 63.01 & 68.43 & 70.69 & 59.44 & 64.58 \\ BGA-MNER(ours) & 72.17 & 67.98 & 68.71 & 70.81 & 59.60 & 64.91 \\ \hline \hline \end{tabular} \end{table} Table 6. Comparison of the cross-domain generalization ability. Results are from [15, 28]. Figure 4. Visualization of sampled content in image and text by SCS. 
\begin{table} \begin{tabular}{c|c|c|c c} \hline \hline Method & Visual Enc & Total Param & Twitter2015 & Twitter2017 \\ \hline UMT & ResNet152 & 206.3M & 73.41 & 85.31 \\ UMT & ViT-B/32 & 233.6M & 73.52 & 85.64 \\ R-GCN & ResNet152 & 172.2M & 75.00 & 87.11 \\ R-GCN & ViT-B/32 & 199.5M & 75.14 & 87.21 \\ CAT-MNER & ViT-B/32 & 198.5M & 75.41 & 85.99 \\ BGA-MNER & ViT-B/32 & 130.3M & **76.31** & **87.71** \\ \hline \hline \end{tabular} \end{table} Table 5. Ablation study on the visual encoder and model efficiency. All these methods use uncased BERT-base model as textual encoder. We report the total parameters of the whole network for model efficiency analysis. mutual translation with additional generation supervision. We treat two modalities equally in bidirectional generation and highlight the content of each modality as input and output, alleviating the potential bias from text. Thus, our method could recover their true meaning as ORG type. Furthermore, the understanding of multimodal context is superficial in UMT. The image of baseball sport in case (a) should be interpolated as event hosting and the personal photo in (c) actually describes an interview in a celebrating. However, the sport and person information simply understood by UMT is noise for MNER. In our BGA-MNER, we understand the pair of text and image in generative way. This manner focuses more on the content understanding with generation supervision, endowing our model with a powerful reasoning capability. ## 5. Conclusion and Future Work In this paper, we propose a bidirectional generative alignment method named BGA-MNER. The main idea of our BGA-MNER is to use matched two-modality content for bidirectional generation, so as to provide direct and powerful constraints for entity-object cross-modal correlation. To be specific, we propose a new end-to-end Transformer MNER framework, which aligns the two modal features in a generative manner, leading to effectively capture the implicit entity-object relations. In our future work, we would like to apply our framework to vision-and-language pretraining as a more uniform structure than a conventional two-stream design. \begin{table} \begin{tabular}{c|c c} \hline \hline Methods & Twitter2015 & Twitter2017 \\ \hline UMT & 73.01(\(\lfloor\)0.25) & 83.49(\(\lfloor\)0.31) \\ MAF & 73.06(\(\lfloor\)0.41) & 85.79(\(\lfloor\)0.22) \\ R-GCN & 74.07(\(\lfloor\)0.52) & 86.52(\(\lfloor\)0.41) \\ ITA & 75.98 & 85.49 \\ BGA-MNER(ours) & 76.31 & 87.71 \\ \hline \hline \end{tabular} \end{table} Table 7. Modality-missing evaluation on overall F1 score of two datasets. The results are achieved by their official implementation. Figure 5. Comparison in data efficiency evaluation on two datasets. TEXT denotes the text-based BERT-CRF while our BGA-MNER is abbreviated to BGA. Figure 6. Predictions of BERT-CRF, UMT and our BGA-MNER on three test samples.
2303.17569
Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
We propose a novel unsupervised backlit image enhancement method, abbreviated as CLIP-LIT, by exploring the potential of Contrastive Language-Image Pre-Training (CLIP) for pixel-level image enhancement. We show that the open-world CLIP prior not only aids in distinguishing between backlit and well-lit images, but also in perceiving heterogeneous regions with different luminance, facilitating the optimization of the enhancement network. Unlike high-level and image manipulation tasks, directly applying CLIP to enhancement tasks is non-trivial, owing to the difficulty in finding accurate prompts. To solve this issue, we devise a prompt learning framework that first learns an initial prompt pair by constraining the text-image similarity between the prompt (negative/positive sample) and the corresponding image (backlit image/well-lit image) in the CLIP latent space. Then, we train the enhancement network based on the text-image similarity between the enhanced result and the initial prompt pair. To further improve the accuracy of the initial prompt pair, we iteratively fine-tune the prompt learning framework to reduce the distribution gaps between the backlit images, enhanced results, and well-lit images via rank learning, boosting the enhancement performance. Our method alternates between updating the prompt learning framework and enhancement network until visually pleasing results are achieved. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of visual quality and generalization ability, without requiring any paired data.
Zhexin Liang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
2023-03-30T17:37:14Z
http://arxiv.org/abs/2303.17569v2
# Iterative Prompt Learning for Unsupervised Backlit Image Enhancement ###### Abstract We propose a novel unsupervised backlit image enhancement method, abbreviated as CLIP-LIT, by exploring the potential of Contrastive Language-Image Pre-Training (CLIP) for pixel-level image enhancement. We show that the open-world CLIP prior not only aids in distinguishing between backlit and well-lit images, but also in perceiving heterogeneous regions with different luminance, facilitating the optimization of the enhancement network. Unlike high-level and image manipulation tasks, directly applying CLIP to enhancement tasks is non-trivial, owing to the difficulty in finding accurate prompts. To solve this issue, we devise a prompt learning framework that first learns an initial prompt pair by constraining the text-image similarity between the prompt (negative/positive sample) and the corresponding image (backlit image/well-lit image) in the CLIP latent space. Then, we train the enhancement network based on the text-image similarity between the enhanced result and the initial prompt pair. To further improve the accuracy of the initial prompt pair, we iteratively fine-tune the prompt learning framework to reduce the distribution gaps between the backlit images, enhanced results, and well-lit images via rank learning, boosting the enhancement performance. Our method alternates between updating the prompt learning framework and enhancement network until visually pleasing results are achieved. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of visual quality and generalization ability, without requiring any paired data. Code for our method will be made available. ## 1 Introduction Backlit images are captured when the primary light source is behind some objects. The images often suffer from highly imbalanced illuminance distribution, which affects the visual quality or accuracy of subsequent perception algorithms. Correcting backlit images manually is a laborious task given the intricate challenge of preserving the well-lit re gions while enhancing underexposed regions. One could apply an automatic light enhancement approach but will find that existing approaches could not cope well with backlit images [14]. For instance, many existing supervised light enhancement methods [26, 27, 33] cannot precisely perceive the bright and dark areas, and thus process these regions using the same pipeline, causing over-enhancement in well-lit areas or under-enhancement in low-light areas. Unsupervised light enhancement methods, on the other hand, either rely on ideal assumptions such as average luminance and a gray world model [9, 15] or directly learn the distribution of reference images via adversarial training [10]. The robustness and generalization capability of these methods are limited. As for conventional exposure correction methods [1, 29], they struggle in coping with real-world backlit images due to the diverse backlit scenes and luminance intensities. The problem cannot be well resolved by collecting backlit images that consist of ground truth images that are retouched by photographers [19], since these images can never match the true distribution of real backlit photos. In this work, we propose an unsupervised method for backlit image enhancement. 
Different from previous unsupervised methods that learn curves or functions based on some physical hypothesis or learn the distribution of well-lit images via adversarial training that relies on task-specific data, we explore the rich visual-language prior encapsulated in a Contrastive Language-Image Pre-Training (CLIP) [21] model for pixel-level image enhancement. While CLIP can serve as an indicator to distinguish well-lit and backlit images to a certain extent, using it directly for training a backlit image enhancement network is still non-trivial. For example, for a well-lit image (Fig. 2 top left), replacing similar concepts "normal light" with "well-lit" brings a huge increase in CLIP score. In the opposite case (Fig. 2 top right), "normal light" becomes the correct prompt. This indicates the optimal prompts could vary on a case-by-case basis due to the complex illuminations in the scene. In addition, it is barely possible to find accurate 'word' prompts to describe the precise luminance conditions. Prompt engineering is labor-intensive and time-consuming to annotate each image in the dataset. Moreover, the CLIP embedding is often interfered by high-level semantic information in an image. Thus, it is unlikely to achieve optimal performance with fixed prompts or prompt engineering. To overcome the problems, we present a new pipeline to tailor the CLIP model for our task. It consists of the following components: 1) _Prompt Initialization._ We first encode the backlit and well-lit images along with a learnable prompt pair (positive and negative samples) into the latent space using the pre-trained CLIP's image and text encoder. By narrowing the distance between the images and text in the latent space, we obtain an initial prompt pair that can effectively distinguish between backlit and well-lit images. 2) _CLIP-aware Enhancement Training._ With the initialized prompt, we train an enhancement network using the text-image similarity constraints in the CLIP embedding space. 3) _Prompt Refinement._ We introduce a prompt fine-tuning mechanism, in which we update the prompt by further distinguishing the distribution gaps among backlit images, enhanced results, and well-lit images via rank learning. We iteratively update the enhancement network and prompt learning framework until achieving visually pleasing results. Our method stands apart from existing backlit image enhancement techniques as we leverage the intrinsic perceptual capability of CLIP. Rather than solely utilizing CLIP as a loss objective [7, 35], we incorporate prompt refinement as an essential component of the optimization process to further enhance performance. Our approach surpasses state-of-the-art methods in both qualitative and quantitative metrics, without requiring any paired training data. We demonstrate the generalization capability and robustness of our method through the preview of our results shown in Fig. 1, and we compare our results with existing methods in Fig. 3. ## 2 Related Work **Backlit Image Enhancement.** Several approaches have been proposed in the literature. Li and Wu [17] employ a region segmentation technique in combination with a learning-based restoration network to separately process the back Figure 2: Motivation. CLIP scores of proper prompts demonstrate alignment with human annotations (_e.g_., well-lit images), suggesting that CLIP can serve as an indicator to differentiate between well-lit and backlit images. 
However, the best wordings could differ on a case-by-case basis due to complex illumination. In contrast, the learnable positive/negative prompts are more robust and consistent with the labels. -lit and front-lit areas of an image. Buades et al. [3] and Wang et al. [24] use fusion-based techniques to combine pre-processed images. Zhang et al. [31] learn a parametric "S-curve" using a small image-specific network, ExCNet, to correct ill-exposed images. More recently, Lv et al. [19] have created the first paired backlit dataset, named BAID, in which the ground truth images are edited by photographers so that the quality is still sub-optimal, shown in Fig. 8. **Light Enhancement.** Backlit image enhancement is closely related to low-light image enhancement and exposure correction. Traditional methods for low-light image enhancement [8, 16] typically employ the Retinex model to restore normal-light images. With the availability of paired data [26] and simulated data [38], several supervised methods [27, 28] have been proposed, which design various networks for low-light image enhancement. Despite their success, supervised methods suffer from limited generalization capability. Consequently, unsupervised methods[9, 15, 18, 20] have garnered increasing attention. Since low-light image enhancement cannot effectively process both underexposed and overexposed regions, exposure correction methods [1, 5, 29] have also been proposed. For example, Afifi et al. [1] propose an exposure correction network based on Laplacian pyramid decomposition and reconstruction. **CLIP and Prompting in Vision.** CLIP [21] has shown remarkable performance in zero-shot classification, thanks to the knowledge learned from large-scale image-text data. Its generalizability has been shown in high-level tasks[30, 12, 35]. A recent study[23] shows that the rich visual language prior encapsulated in CLIP can be used for assessing both the quality and abstract perception of images in a zero-shot manner. These studies inspire our work to exploit CLIP for backlit image enhancement. Prompt learning, as the core of vision-and-language models, is a recent emerging research direction. CoOp [37] introduces prompt learning into the adaptation of vision-language models for downstream vision tasks. CoCoOp [36] further improves the generalizability by allowing a prompt to be conditioned on each input instance rather than fixed once learned. Existing prompt learning methods focus solely on obtaining better prompts for high-level vision tasks. In contrast, our approach uses prompt learning to extract more accurate low-level image representations, such as color, exposure, and saturation, while ignoring high-level semantic information in CLIP. ## 3 Methodology **Overview.** Our proposed approach consists of two stages, as illustrated in Fig. 4. In the first stage, we learn an initial prompt pair (negative/positive prompts referring to backlit/well-lit images) by constraining the text-image similarity between the prompt and the corresponding image in the CLIP embedding space. With the initial prompt pair, we use a frozen CLIP model to compute the text-image similarity between the prompts and the enhanced results to train the initial enhancement network. In the second stage, we refine the learnable prompts by utilizing backlit images, enhanced results, and well-lit images through rank learning. The refined prompts can be used to fine-tune the enhancement network for further performance improvement. 
We alternate the prompt refinement and fine-tuning of the enhancement network until we achieve visually pleasing results. It should be noted that the CLIP model remains fixed throughout the learning process, and our method does not introduce any additional computational burden apart from prompt initialization and refinement. We provide further details on the key components of our approach below. ### Initial Prompts and Enhancement Training The first stage of our approach involves the initialization of negative and positive (learnable) prompts to roughly characterize backlit and well-lit images, as well as the training of the initial enhancement network. **Prompt Initialization.** The process of prompt initialization is depicted in Fig. 5(a). Given a backlit image \(I_{b}\in\mathbb{R}^{H\times W\times 3}\) and a well-lit image \(I_{w}\in\mathbb{R}^{H\times W\times 3}\) (as reference), we randomly initialize a positive prompt \(T_{p}\in\mathbb{R}^{N\times 512}\) and a negative prompt \(T_{n}\in\mathbb{R}^{N\times 512}\). \(N\) represents the number of embedded tokens in each prompt. Then, we feed the backlit and well-lit images to the image encoder \(\Phi_{image}\) of the pre-trained CLIP to obtain their latent code. Meanwhile, we also extract the latent code of the positive and negative prompts by feeding them to the text Figure 3: Visual comparison between our method and the state-of-the-art light enhancement methods, including exposure correction method (Afifi et al. [1]), backlit enhancement method (ExCNet [31]), low-light image enhancement methods (SCI [20], Zero-DCE [9], SNR-aware [28], EnlightenGAN [10]). Our method effectively enhances the backlit image without introducing artifacts and over-/under-enhancement. encoder \(\Phi_{text}\). Based on the text-image similarity in the CLIP latent space, we use the binary cross entropy loss of classifying the backlit and well-lit images to learn the initial prompt pair: \[\mathcal{L}_{initial} =-(y*\log(\hat{y})+(1-y)*\log(1-\hat{y})), \tag{1}\] \[\hat{y} =\frac{e^{cos(\Phi_{image}(I),\Phi_{text}(T_{p}))}}{\sum_{i\in\{ n,p\}}e^{cos(\Phi_{image}(I),\Phi_{text}(T_{i}))}}, \tag{2}\] where \(I\in\{I_{b},I_{w}\}\) and \(y\) is the label of the current image, \(0\) is for negative sample \(I_{b}\) and \(1\) is for positive sample \(I_{w}\). **Training the Initial Enhancement Network.** Given the initial prompts obtained from the first stage, we can train an enhancement network with a CLIP-aware loss. As a baseline model, we use a simple Unet [22] to enhance the backlit images, though more complex networks can also be employed. Inspired by the Retinex model [13], which is widely used for light enhancement, the enhancement network estimates the illumination map \(I_{i}\in\mathbb{R}^{H\times W\times 1}\) and then produces the final result via \(I_{t}=I_{b}/I_{i}\). To train the enhancement network, we employ CLIP-Enhance loss \(\mathcal{L}_{clip}\) and identity loss \(\mathcal{L}_{identity}\). The CLIP-Enhance loss measures the similarity between the enhanced result and the prompts in the CLIP space: \[\mathcal{L}_{clip}=\frac{e^{cos(\Phi_{image}(I_{t}),\Phi_{text}(T_{n}))}}{ \sum_{i\in\{n,p\}}e^{cos(\Phi_{image}(I_{t}),\Phi_{text}(T_{i}))}}. 
\tag{3}\] The identity loss encourages the enhanced result to be similar to the backlit image in terms of content and structure: \[\mathcal{L}_{identity}=\sum_{l=0}^{4}\alpha_{l}\cdot||\Phi_{image}^{l}(I_{b})- \Phi_{image}^{l}(I_{t})||_{2}, \tag{4}\] where \(\alpha_{l}\) is the weight of the \(l^{th}\) layer of the image encoder in the ResNet101 CLIP model. The final loss for training the enhancement network is the combination of the two losses: \[\mathcal{L}_{enhance}=\mathcal{L}_{clip}+w\cdot\mathcal{L}_{identity}, \tag{5}\] where \(w\) is the weight to balance the magnitude of different loss terms. We divide the training schedule into two parts. First, we use the identity loss to implement self-reconstruction as it encourages the enhanced result to be similar to the backlit image in the pixel space. Then, we use both the identity loss and the CLIP-Enhance loss to train the network. For the identity loss, we set \(\alpha_{l=0,1,\dots,4}\) in Eq. (4) to \(1.0\) during the self-reconstruction stage. During training of the backlit enhancement network, we set \(\alpha_{l=0,1,2,3}=1.0\) and \(\alpha_{l=4}=0.5\). This is because we found that the features of the last layer are more related to the color of the images, which is what we want to adjust. ### Prompt Refinement and Enhancement Tuning In the second stage, we iteratively perform prompt refinement and enhancement network tuning. The prompt refinement and the tuning of the enhancement network are conducted in an alternating manner. The goal is to improve the accuracy of learned prompts for distinguishing backlit images, enhanced results, and well-lit images, as well as perceiving heterogeneous regions with different luminance. **Prompt Refinement.** We observed that in some cases, using only the initial prompts obtained from the backlit and well-lit images is insufficient for enhancing the color and illuminance. This is because the initial prompts may fail to capture the fine-grained differences among the backlit images, enhanced results, and well-lit images. To address this, we propose a further refinement of the learnable positive and negative prompts. Given the result \(I_{t}\in\mathbb{R}^{H\times W\times 3}\) enhanced by the current enhancement network, we use a margin ranking loss to update the prompts. The process of prompt refinement is illustrated in Fig. 5(b). Formally, we define the negative similarity score between Figure 4: Our proposed method involves two main stages. (a) The first stage constitutes prompt initialization and the initial training of an enhancement network. (b) The second stage involves prompt refinement and enhancement model fine-tuning. The two components here are updated in an alternating manner. The prompt refinement in the second stage aims at learning accurate prompts that distinguish among backlit images, enhanced results, and well-lit images. By employing these learned prompts, the enhancement network produces enhanced results that are similar to well-lit images and distinct from backlit images in the CLIP embedding space, ultimately leading to visually pleasing results. 
the prompt pair and an image as: \[S(I)=\frac{e^{cos(\Phi_{image}(I),\Phi_{text}(T_{n}))}}{\sum_{i\in\{n,p\}}e^{cos( \Phi_{image}(I),\Phi_{text}(T_{i}))}}, \tag{6}\] Then, the margin ranking loss can be expressed as: \[\begin{split}\mathcal{L}_{prompt1}&=\max(0,S(I_{w} )-S(I_{b})+m_{0})\\ &+\max(0,S(I_{t})-S(I_{b})+m_{0})\\ &+\max(0,S(I_{w})-S(I_{t})+m_{1}),\end{split} \tag{7}\] where \(m_{0}\in[0,1]\) represents the margin between the score of well-lit/enhanced results and the backlit images in the CLIP embedding space. We set \(m_{0}\) to 0.9 to extend the distance between backlit images and well-lit images as much as possible. Meanwhile, \(m_{1}\) represents the margin between the score of the enhanced results and the well-lit images in the CLIP embedding space. We set \(m_{1}\) to 0.2 to ensure that the enhanced results are similar to well-lit images. These hyper-parameters are chosen empirically based on the performance of the algorithm on the validation set. To ensure that the iterative learning can improve the performance in each iterative round, we preserve the previous enhanced results \(I_{t-1}\) obtained by the previous enhancement network in the ranking process. We add the two groups of enhanced results, \(I_{t-1}\) and \(I_{t}\), into the constraints, enabling the newly learned prompts to focus more on the light and color distribution of images, rather than high-level content in the image (see Fig. 10). The loss function is modified as: \[\begin{split}\mathcal{L}_{prompt2}&=\max(0,S(I_{w} )-S(I_{b})+m_{0})\\ &+\max(0,S(I_{t-1})-S(I_{b})+m_{0})\\ &+\max(0,S(I_{w})-S(I_{t})+m_{1})\\ &+\max(0,S(I_{t})-S(I_{t-1})+m_{2}),\end{split} \tag{8}\] where \(m_{2}\) represents the margin between the newly enhanced results and previously enhanced results. We set \(m_{2}=m_{1}\) as the margins \(m_{1}\) and \(m_{2}\) have the same target, keeping the two image groups similar. **Tuning the Enhancement Network.** The tuning of the enhancement network follows the same process in Sec. 3.1 except we use the refined prompts to compute for the CLIP-Enhance loss \(\mathcal{L}_{clip}\) and generate the enhanced training data from the updated network to further refine the prompt. **Discussion.** To show the effectiveness of iterative learning, following Chefer _et al_. [6], we visualize the attention maps in the CLIP model for the interaction between the learned negative prompt and an input image at different alternate rounds. The heatmap, as shown in Fig. 6, represents the relevance between each pixel in the image and the learned prompt. The heatmap shows that during iterations, the learned negative prompt becomes increasingly relevant to the regions with unpleasant lighting and color. We also show the enhanced results with different iterative rounds in Fig. 7. At the intermediate round, the color in some enhanced regions of the outputs is over-saturated. After enough iterations, the over-saturation is corrected while the dark regions are closer to the well-lit state compared with the previous outputs. The Figure 5: Illustration of the prompt learning framework. **1.** Prompt Initialization. A cross-entropy loss constrains the learned prompts, maximizing the distance between the representation of negative and positive samples in the CLIP latent space. 
**2.** Adding the enhanced results from the current round \(I_{t}\) into the ranking process (i.e., ranking loss) to make the enhanced results \(I_{t}\) closer to the representation of the well-lit images \(I_{w}\) in the CLIP latent space and far from the representation of the input image \(I_{b}\). **3.** Adding the images inferred from the previous round \(I_{t-1}\) to constrain the result of updated enhancement network. \(I_{t}\) being closer to the representation of positive samples than the previous round \(I_{t-1}\), and far from the representation of negative ones \(I_{b}\) in CLIP latent space. Figure 6: Attention map changes with iterative learning. Figure 7: Enhanced results of different iteration rounds. observation here suggests the capability of our approach in perceiving heterogeneous regions with different luminance. We will provide the quantitative comparison in Sec. 4.3. ## 4 Experiments **Dataset.** For training, we randomly select 380 backlit images from BAID [19] training dataset as input images and select 384 well-lit images from DIV2K [2] dataset as reference images. We test our methods on the BAID test dataset, which includes 368 backlit images taken in diverse light scenarios and scenes. To examine the generalization ability, we collected a new evaluation dataset, named Backlit300, which consists of 305 backlit images from Internet, Pexels, and Flickr. The data will be made available. **Training.** We implement our method with PyTorch on a single NVIDIA GTX 3090Ti GPU. We use Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\). The number \(N\) of embedded tokens in each learnable prompt is set to \(16\). We set the total training iterations to \(50K\), within which, the number of self-reconstruction iterations is set to \(1K\), the number of prompt pair initialization learning iterations is set to \(10K\). We set the learning rate for the prompt initialization/refinement and enhancement network training to \(5\cdot 10^{-6}\) and \(2\cdot 10^{-5}\). The batch size for prompt initialization/refinement and enhancement network training is set to \(8\) and \(16\). During training, we resize the input images to \(512\times 512\) and use flip, zoom, and rotate as augmentations. **Inference.** The sizes of some input images from the BAID and Backlit300 test datasets are large, and some methods are unable to handle such high-resolution images directly. To ensure a fair comparison, we resize all test images to have a long side of 2048 pixels if their size is larger than \(2048\times 2048\). **Compared Methods.** As there are very few publicly available deep learning-based methods for backlit image enhancement, we compare our approach with representative methods that solve related tasks, including low-light image enhancement methods such as Zero-DCE [9], Zero-DCE++ [15], SCI [20], URetinex-Net [27], SNR-Aware [28], Zhao et al. citeINN, and EnlightenGAN [10]; exposure correction methods such as Afifi et al. [1]; and backlit enhancement methods such as ExCNet [31]. Some methods provide different models trained on different datasets. We compare our method with all released models of different methods to ensure a fair comparison. To further validation, we also provide retrained supervised methods' results in supplementary material. For unsupervised methods, we retrained them on the same training data as our method to ensure that they are evaluated under the same conditions. 
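The training settings above can be condensed into a short configuration sketch. The numeric values are those reported in this section, but the single convolution is only a stand-in for the illumination-map U-Net, and all names are illustrative rather than taken from the released code.

```python
import torch

# Hedged sketch of the reported training configuration (values from the text above).
N_TOKENS = 16                # embedded tokens per learnable prompt
TOTAL_ITERS = 50_000
SELF_RECON_ITERS = 1_000     # identity-loss-only warm-up
PROMPT_INIT_ITERS = 10_000   # prompt pair initialization

prompt_pair = torch.nn.Parameter(torch.randn(2, N_TOKENS, 512))   # positive / negative
illum_net = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)       # stand-in for the U-Net

prompt_opt = torch.optim.Adam([prompt_pair], lr=5e-6, betas=(0.9, 0.99))
enhance_opt = torch.optim.Adam(illum_net.parameters(), lr=2e-5, betas=(0.9, 0.99))

PROMPT_BATCH, ENHANCE_BATCH = 8, 16   # batch sizes for prompt and network updates
CROP_SIZE = 512                       # inputs resized to 512x512; flip/zoom/rotate augments
```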
### Results **Visual Comparison.** We present visual comparisons of some typical samples from the BAID test dataset in Fig. 8. Due to space limitations, we only show the results of the best-performing methods. The complete comparisons of all methods can be found in the supplementary material. Our method consistently produces visually pleasing results with improved color and luminance without over- or under-exposure. Moreover, our method excels in handling challenging backlit regions, restoring clear texture details and satisfactory luminance without introducing any artifacts, while other methods may either fail to address such regions or produce unsatisfactory results with visible artifacts. We also evaluate our method on the Backlit300 test dataset, and present the comparison results in Fig. 9. We can see that compared to EnlightenGAN [10] and ExCNet [31], our method produces results without visible distortion artifacts. Our method is also more effective in enhancing dark regions, unlike Afifi et al. [1] and EXCNet [31]. Moreover, our results exhibit better color contrast and input-output consistency in well-lit regions. We emphasize that our method achieves these results without the need for paired data, which is not available in many real-world scenarios. **Quantitative Comparison.** We use three full-reference image quality evaluation (IQA) metrics, i.e., PSNR, SSIM [25], and LPIPS [32] (Alex version) and one non-reference IQA metric MUSIQ [11] to evaluate the quantitative results. As current non-reference IQA metrics only evaluate the overall image quality, they may not accurately measure the results of Figure 8: Visual comparison on the backlit images sampled from the Backlit300 test dataset. backlit image enhancement. Hence, we primarily rely on the state-of-the-art MUSIQ metric to evaluate the performance. The quantitative comparison on the BAID test dataset is presented in Tab. 1. Our method outperforms all state-of-the-art methods in terms of the full-reference IQA metrics, indicating that the results generated by our method preserve the content and structure of the original images well and are close to the reference images retouched by photographers. Our method also performs the best in the non-reference MUSIQ metric when compared to other methods, demonstrating the good image quality of our results. We also report the quantitative comparison on the Backlit300 test dataset in Tab. 2, where our method continues to achieve the best performance, further indicating the effectiveness of our method. ### User Study We conducted a user study to more comprehensively evaluate the visual quality of enhanced results obtained by different methods. In addition to our results, we chose the results obtained from the top-3 PSNR methods: Zero-DCE [9], EXCNet [31], and URetinex [27], as well as EnlightenGAN [10] since it is a related work to our method. We randomly selected 20 images from the Backlit300 test partition as the evaluation set. For each image, we provided the input backlit image, the corresponding images enhanced by our method and a baseline. A total of 40 participants were invited to select their preferred image. The statistics of the user study are summarized in Fig. 11. The vote distribution shows that our results are the most favored by participants, with obvious advantages over the other methods. For each image, over 60% of the participants voted for our result, indicating that our method generates more visually pleasing results when compared to other methods. 
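For reference, the evaluation protocol behind the full-reference numbers (resizing so that the long side does not exceed 2048 pixels, then scoring against the retouched ground truth) can be sketched as below. This is an assumed reimplementation with scikit-image; LPIPS and MUSIQ are computed with their own packages and omitted here for brevity.

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def load_and_resize(path, max_side=2048):
    """Resize so the longer side is at most 2048 px, as done for the test images."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = max_side / max(w, h)
    if scale < 1.0:
        img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    return np.asarray(img, dtype=np.float32) / 255.0

def full_reference_scores(pred, ref):
    """PSNR and SSIM between an enhanced result and its retouched reference."""
    psnr = peak_signal_noise_ratio(ref, pred, data_range=1.0)
    ssim = structural_similarity(ref, pred, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```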
### Ablation Studies **Effectiveness of Iterative Learning.** In addition to the observation provided in Sec. 3.2, to further validate the effectiveness of iterative learning, we provide the quantitative comparison in Tab. 3. As presented, fine-tuning the prompts using the loss functions Eq. (7) and Eq. (8) improve the \begin{table} \begin{tabular}{c|c|c|c|c||c} \hline \multirow{2}{*}{Type} & Methods & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & MUSIQ\(\uparrow\) \\ \cline{2-6} & Input & 16.641 & 0.768 & 0.197 & 52.115 \\ \hline \multirow{6}{*}{Supervised} & Afifi et al. [1] & 15.904 & 0.745 & 0.227 & 52.863 \\ & Zhao et al.-MIT5K [34] & 18.228 & 0.774 & 0.189 & 51.457 \\ & Zhao et al.-LOL [34] & 17.947 & 0.822 & 0.272 & 49.334 \\ & URetinex-Net [27] & 18.925 & 0.865 & 0.211 & **54.402** \\ & SNR-Aware-LOLv1 [28] & 15.472 & 0.747 & 0.408 & 26.425 \\ & SNR-Aware-LOLv2real [28] & 17.307 & 0.754 & 0.398 & 26.438 \\ & SNR-Aware-LOLv2synthetic [28] & 17.364 & 0.752 & 0.403 & 23.960 \\ \hline \multirow{6}{*}{Unsupervised} & Zero-DCE [9] & **19.74**0 & 0.871 & 0.183 & 51.804 \\ & Zero-DCE++ [15] & 19.658 & **0.883** & 0.182 & 48.573 \\ & RUAS-LOL [18] & 9.920 & 0.656 & 0.523 & 37.207 \\ & RUAS-MIT5K [18] & 13.312 & 0.758 & 0.347 & 45.008 \\ & RUAS-DarkFace [18] & 9.696 & 0.642 & 0.517 & 39.655 \\ & SCI-easy [20] & 17.819 & 0.840 & 0.210 & 51.984 \\ & SCI-medium [20] & 12.766 & 0.762 & 0.347 & 44.176 \\ & SCI-diffucult [20] & 16.993 & 0.837 & 0.232 & 52.369 \\ & EnlightenGAN [10] & 17.550 & 0.864 & 0.196 & 48.417 \\ & ExCNet [31] & 19.437 & 0.865 & **0.168** & 52.576 \\ \hline \multirow{6}{*}{Unsupervised} & Zero-DCE [9] & 18.553 & 0.863 & 0.194 & 49.436 \\ & Zero-DCE++ [15] & 16.018 & 0.832 & 0.240 & 47.253 \\ \cline{1-1} & RUAS [18] & 12.922 & 0.743 & 0.362 & 45.056 \\ \cline{1-1} & SCI [20] & 16.639 & 0.768 & 0.197 & 52.265 \\ \cline{1-1} & EnlightenGAN [10] & 17.957 & 0.849 & 0.182 & 53.871 \\ \cline{1-1} & CLIP-LIT (Ours) & **21.579** & **0.883** & **0.159** & **55.682** \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison on the BAID test dataset. The best and second performance are marked in red and blue. \begin{table} \begin{tabular}{c|c|c} \hline \multicolumn{2}{c|}{Methods} & MUSIQ\(\uparrow\) \\ \hline Input & 51.900 \\ \hline Afifi et al. [1] & 51.930 \\ Zhao et al.-MIT5K [34] & 50.354 \\ Zhao et al.-LOL [34] & 48.334 \\ URetinex-Net [27] & 51.551 \\ SNR-Aware-LOLv1 [28] & 29.915 \\ SNR-Aware-LOLv2real [28] & 30.903 \\ SNR-Aware-LOLv2synthetic [28] & 29.149 \\ \hline Zero-DCE [9] & 51.250 \\ Zero-DCE++ [15] & 48.216 \\ RUAS-LOL [18] & 40.329 \\ RUAS-MIT5K [18] & 44.523 \\ RUAS-DarkFace [18] & 48.216 \\ SCI-easy [20] & 50.642 \\ SCI-medium [20] & 48.216 \\ SCI-difficult [20] & 49.428 \\ EnlightenGAN [10] & 48.308 \\ ExCNet [31] & 50.278 \\ \hline Zero-DCE [9] & 48.491 \\ Zero-DCE++ [15] & 46.000 \\ RUAS [18] & 45.251 \\ SCI [20] & **51.960** \\ EnlightenGAN [10] & 48.261 \\ CLIP-LIT (Ours) & **52.921** \\ \hline \end{tabular} \end{table} Table 2: Quantitative comparison on the Backlit300 test dataset. Figure 9: Visual comparison on the backlit images sampled from the Backlit300 test dataset. enhancement performance. **Necessity of Prompt Refinement.** Compared to the selected words or sentences, our learned prompts can better distinguish between backlit and well-lit images (see Fig. 12). Results in Tab. 3 also indicate that the enhancement model trained under the constraint of our refined prompts performs better than a fixed prompts. 
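For completeness, the refinement objective ablated above, Eq. (8) built on the score of Eq. (6), can be expressed compactly over precomputed CLIP image and prompt features; this is a hedged sketch with illustrative names, not the released code.

```python
import torch
import torch.nn.functional as F

def negative_score(img_feat, t_neg, t_pos):
    """S(I): softmax weight of the negative prompt for an image feature, Eq. (6)."""
    sims = torch.stack([F.cosine_similarity(img_feat, t_neg, dim=-1),
                        F.cosine_similarity(img_feat, t_pos, dim=-1)], dim=-1)
    return torch.softmax(sims, dim=-1)[..., 0]

def prompt_refine_loss(s_well, s_backlit, s_cur, s_prev, m0=0.9, m1=0.2):
    """Margin ranking terms of Eq. (8); m2 is tied to m1 as stated in the text."""
    m2 = m1
    loss = (torch.clamp(s_well - s_backlit + m0, min=0)
            + torch.clamp(s_prev - s_backlit + m0, min=0)
            + torch.clamp(s_well - s_cur + m1, min=0)
            + torch.clamp(s_cur - s_prev + m2, min=0))
    return loss.mean()
```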
**Impact of Training Data.** To investigate the impact of the reference data (the well-lit images) on our method, we conducted an experiment in which we retrained our method on another dataset containing 1000 images selected from DIV2K [2] and MIT5K [4], which has more diverse well-lit images. The results, shown in Fig. 13 and Tab. 4, indicate that the two sets of results obtained by our method using different training data are similar, and the quantitative scores differ only slightly. These results demonstrate that the number and variety of well-lit images used as training data have little impact on the performance of our method. **Advantage of CLIP-Enhance Loss over the Adversarial Loss.** To show the advantage of our CLIP-Enhance loss over the adversarial loss, we trained our enhancement network on the same unpaired training data using an adversarial loss, with the same discriminator as EnlightenGAN [10]. The results in Tab. 5 indicate that our CLIP-Enhance loss achieves better enhancement performance than the adversarial loss. This may be because the CLIP prior is more sensitive to color and luminance distribution, enabling it to differentiate between images with varied lighting conditions (see Fig. 10) and perceive unbalanced luminance regions (see Fig. 6). A visual comparison is provided in the supplementary material.
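As a reference point for this comparison, the objective that takes the place of an adversarial loss is the combined loss of Eq. (5), i.e., the CLIP-Enhance term plus the layer-weighted identity term of Eq. (4). A minimal sketch is given below, assuming per-layer CLIP image-encoder features are precomputed; the balancing weight w is not specified in this excerpt, so its default here is only a placeholder.

```python
import torch

# feats_* are lists of per-layer features from CLIP's ResNet101 image encoder
# (layers l = 0..4); the alpha weights follow the values stated in Sec. 3.1.

def identity_loss(feats_backlit, feats_enhanced, alphas=(1.0, 1.0, 1.0, 1.0, 0.5)):
    """Layer-weighted feature-space identity loss, Eq. (4)."""
    return sum(a * torch.norm(fb - fe, p=2)
               for a, fb, fe in zip(alphas, feats_backlit, feats_enhanced))

def enhance_loss(clip_term, feats_backlit, feats_enhanced, w=1.0):
    """Total enhancement objective, Eq. (5): CLIP-Enhance term plus identity term."""
    return clip_term + w * identity_loss(feats_backlit, feats_enhanced)
```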
2310.17451
Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings
Despite the great success of neural visual generative models in recent years, integrating them with strong symbolic reasoning systems remains a challenging task. There are two levels of symbol grounding problems among the core challenges: the first is symbol assignment, i.e., mapping latent factors of neural visual generators to semantically meaningful symbolic factors from the reasoning systems by learning from limited labeled data. The second is rule learning, i.e., learning new rules that govern the generative process to enhance the symbolic reasoning systems. To deal with these two problems, we propose a neurosymbolic learning approach, Abductive visual Generation (AbdGen), for integrating logic programming systems with neural visual generative models based on the abductive learning framework. To achieve reliable and efficient symbol grounding, the quantized abduction method is introduced for generating abduction proposals by nearest-neighbor lookup within semantic codebooks. To achieve precise rule learning, the contrastive meta-abduction method is proposed to eliminate wrong rules with positive cases and avoid less informative rules with negative cases simultaneously. Experimental results show that compared to the baseline approaches, AbdGen requires significantly less labeled data for symbol assignment. Furthermore, AbdGen can effectively learn underlying logical generative rules from data, which is beyond the capability of existing approaches. The code is released at this link: https://github.com/candytalking/AbdGen.
Yifei Peng, Yu Jin, Zhexu Luo, Yao-Xiang Ding, Wang-Zhou Dai, Zhong Ren, Kun Zhou
2023-10-26T15:00:21Z
http://arxiv.org/abs/2310.17451v2
# Generating by Understanding: ###### Abstract Despite the great success of neural visual generative models in recent years, integrating them with strong symbolic knowledge reasoning systems remains a challenging task. The main challenges are two-fold: one is _symbol assignment_, i.e. bonding latent factors of neural visual generators with meaningful symbols from knowledge reasoning systems. Another is _rule learning_, i.e. learning new rules, which govern the generative process of the data, to augment the knowledge reasoning systems. To deal with these _symbol grounding_ problems, we propose a neural-symbolic learning approach, _Abductive Visual Generation (AbdGen)_, for integrating logic programming systems with neural visual generative models based on the abductive learning framework. To achieve reliable and efficient symbol assignment, the _quantized abduction_ method is introduced for generating abduction proposals by the nearest-neighbor lookups within semantic codebooks. To achieve precise rule learning, the _contrastive meta-abduction_ method is proposed to eliminate wrong rules with positive cases and avoid less-informative rules with negative cases simultaneously. Experimental results on various benchmark datasets show that compared to the baselines, AbdGen requires significantly fewer instance-level labeling information for symbol assignment. Furthermore, our approach can effectively learn underlying logical generative rules from data, which is out of the capability of existing approaches. ## 1 Introduction Neural visual generative models (NVGMs) Bond-Taylor _et al._ (2021) have received great attention in recent AI research due to their wide applications in AI-based content generation. Following the pursuit of next-generation AI systems which are able to conduct high-level symbolic knowledge reasoning Lake _et al._ (2017); Scholkopf (2019); LeCun (2022), it is meaningful to study the integration of NVGMs and symbolic knowledge reasoning (SKR) systems, where the generating process of NVGMs is conditioned on the reasoning output of the SKR systems. _Symbol grounding_ is among the most fundamental challenges for building symbolic knowledge integrated NVGMs. The basic mechanism of NVGMs is to model the generative process from _latent semantic factors (LSFs)_ to visual objects. In most NVGMs, the latent semantic factors have no explicit meanings other than following specific prior distributions. However, knowledge-integrated NVGMs are necessary to generate outputs based on _symbolic semantic factors (SSFs)_ provided by the SKR systems. For these reasons, grounding the LSFs from NVGMs with the SSFs is a key link in the knowledge-based visual generation pipeline. The symbol grounding problem becomes even more challenging when few explicit labeling for the mapping between LSFs and SSFs are available, a common situation in real-world applications. Even though it is known that many NVGMs can obtain LSFs with preliminary semantics by purely unsupervised generative training Kingma and Welling (2013); Goodfellow et al. (2014), unsupervised symbol grounding of NVGMs remains a challenging task. In recent years, studies of unsupervised disentanglement (UD) Higgins et al. (2016) show sparks of learning semantics for LSFs with as little supervision as possible. However, some significant obstacles still remain: 1) it is proven that purely UD is impossible in general Locatello et al. 
(2019); 2) the basic assumption of UD is the independence among LSFs, which is hardly satisfied when symbol grounding is necessary, since SSFs have complex relationships governed by the SKR system Scholkopf et al. (2021). Alternatively, learning symbol groundings based on _weak supervisions from the SKR systems_ would be a more reasonable solution. In this paper, we study the symbol grounding of _NVGMs integrated with logic programming SKR systems_, based on two fundamental tasks. one is _symbol assignment_: bonding LSFs and SSFs based on weak supervision from the SKR system, under the condition that _few instance-level labeling_ is available. Another is _rule learning_: learning symbolic rules which govern the visual generative process, from limited-labeled data and the SKR background knowledge. As far as we know, there is no previous studies on rule learning under the neural-symbolic visual generation area. While this task is very meaningful since grasping the underlying data generation rules would be beneficial for symbol grounding due to a better understanding of the meaning of the SSFs. Furthermore, it enables NVGMs to discover high-level symbolic rules from data, which could be used in future generation tasks. We propose a neural-symbolic learning, AbdGen, to integrate logic programming systems with NVGMs based on the abductive learning framework Zhou (2019); Dai et al. (2019), which has shown appearing effectiveness to bridge neural perception and logical reasoning in perceptual learning tasks. For reliable and efficient symbol assignment, _quantized abduction_ is introduced to conduct distance-based abduction proposal generation by exploiting the vector-quantized structure of the nerual generator. For precise rule learning, _contrastive meta-abduction_ is proposed for learning precise generative rules through contrastive verification among positive and negative cases simultaneously. Experimental results show that compared to the existing approaches, AbdGen could learn under weak supervision from logic programming systems with significantly fewer instance-level labeling for symbol assignment. Furthermore, it can effectively learn logical generative rules from data to enhance NVGMs, which is the first approach to achieve this target. ## 2 Related Work ### Content-Based Visual Generation With the advancements in deep learning, NVGMs already have the strong ability to generate visual objects based on input contents. For instance, some research focuses on tasks like text-to-image generation Reed et al. (2016, 2016); van den Oord et al. (2016); Zhang et al. (2017, 2018); Hong et al. (2018); Du et al. (2020); Liu et al. (2021); Ramesh et al. (2022); Crowson et al. (2022); Saharia et al. (2022), while others explore scene-graph-to-image generationJohnson et al. (2018); Ashual and Wolf (2019); Gu et al. (2019); Li et al. (2019); Mittal et al. (2019); Herzig et al. (2020); Hua et al. (2021); Zhu et al. (2022), which provides clearer and more direct information. However, these works primarily rely on non-symbolic neural learning. In this work, we focus on neural-symbolic visual generation based on outputs of logic programming systems, which aligns more closely to the pursuit of higher-level AI. On the other hand, despite a few researches Jiang and Ahn (2021); Feinman and Lake (2020); Gothoskar et al. (2021), there is not much work studying knowledge-integrated NVGMs in the field of neural-symbolic learning, in special the integration of logic programming systems. 
The only exception is VAEL Misino et al. (2022), which pioneers the study of logic programming integrated NVGMs. VAEL integrates the DeepProblog Manhaeve et al. (2018) framework with autoencoder visual generators, achieving impressive ability in generating visual objects based on logic rules, as well as strong generalization when the training and testing logic rules differ. However, considering symbol grounding, VAEL still assumes that sufficient instance-level labels are given for symbol assignment, and no mechanism for rule learning is provided. In our view, achieving these two abilities requires highly efficient mechanisms of _grounding search_ and _logic program synthesis_, which is not the advantage of DeepProblog. The basic mechanism of DeepProblog is to conduct logical reasoning by calculating possible world probabilities, which may involve intractable calculation of joint distributions. In comparison, abductive learning is more suitable for symbol grounding due to its more direct search over the logic program space, making effective weakly-supervised symbol assignment and rule learning realizable. ### Symbol Grounding in Neural-Symbolic Learning Most neural-symbolic learning research assumes that the grounding between neural and symbolic factors is given before learning starts. The symbol grounding problem becomes essential when the neural model should discover LSFs autonomously while explicit labeling between LSFs and SSFs is lacking. For example, the grounding of neural representations to causal variables has become an important research topic recently Scholkopf _et al._ (2021), and the symbol grounding problem has been studied for SATNet Topan _et al._ (2021). However, such studies are scarce for NVGMs integrated with logic reasoning systems, except for VAEL. On the other hand, our work is built upon the abductive learning (ABL) framework Zhou (2019); Dai _et al._ (2019), which is a suitable choice to address the symbol grounding problem for NVGMs. ABL is a neural-symbolic learning framework that unifies sub-symbolic perception and symbolic reasoning by employing logical abduction. In the pioneering work of Dai _et al._ (2019), ABL is proposed for addressing the weakly-supervised classification problem where the training data are unlabeled and a logic programming system is available to generate _abduced_ labels for learning the neural classification model. The abduction process involves both the logic programming system, which generates high-level abduced labels, and the neural classifier, which generates lower-level perceptual labels; the two refine their predictions interactively during the learning process. Subsequently, in Dai and Muggleton (2020), MetaAbd is proposed to further enhance ABL with the ability to learn new logic programs during the abduction process based on meta-interpretative learning Muggleton (2017). Both ABL and MetaAbd offer promising ways to address symbol grounding of NVGMs, but significant technical challenges remain: 1) LSFs in NVGMs are usually multi-dimensional representation vectors instead of simple classification labels, which require more complicated training mechanisms; 2) In visual generation tasks, the logic programming systems need to model generative process rules instead of classification criteria, which is usually more complicated, making the time cost of abduction a significant issue.
A new abduction strategy should be designed which can utilize the prior information from the model design of NVGMs to improve abduction efficiency; 3) The logic rule learned for generative learning should be as _precise_ as possible, since generative tasks require more information than justifying the prediction results in classification problems. As a result, a stronger rule learning method should be proposed for rule learning in this paper. ## 3 Problem Setup In this paper, we consider symbol-grounded autoencoder-style NVGMs for 2D images. Accordingly, a symbol-grounded NVGM consists of a _generator (decoder)_\(G\) for transforming _SSFs_\(z^{\prime}\) into visual objects \(x^{\prime}\), and an _encoder_\(E\) for transforming input visual objective \(x\) into _LSFs_\(z\). This transformation can be expressed as \[G(z^{\prime})=x^{\prime},\quad E(x)=z.\] We further assume that there exists a _symbol grounding module_\(V\) bonding \(z\) and \(z^{\prime}\). For knowledge-based NVGMs, \(V\) is responsible for grounding sub-symbolic LSFs \(z\) into symbolic SSFs \(z^{\prime}\) to generate symbol-guided visual objects. Furthermore, for providing symbolic guidance, we assume the existence of a knowledge reasoning model \(H\), which is a logic programming system, as well as a set of background knowledge \(B\), which are logical programs and facts, in charge of symbolic reasoning. Now we can define the visual generation task considered in this paper. **Symbol-grounded conditional generation**: Given NVGM modules \(E,G,V\), reasoning model \(H\), and background knowledge \(B_{test}\), for input \(x\), the generation task can be defined as \[\hat{x}^{\prime}=G(\hat{z}^{\prime}),\] \[s.t.\ B_{test}\cup H\cup z^{\prime}\,\models\,\hat{z}^{\prime},\ z^ {\prime}=V(z),\ z=E(x),\] where \(\models\) symbolizes _logically entails_, which means that SSF \(\hat{z}^{\prime}\) is consistent with the knowledge reasoning system defined on the left-hand side. From the definition, the new visual object \(\hat{x}^{\prime}\) is generated conditioned on the input \(x\), which is also guided by the symbolic reasoning process defined by \(B_{test}\) and \(H\) as in the second equation. To obtain this visual generation model, the following NVGM with symbol grounding learning problem is introduced. **Learning NVGM with symbol grounding**: The learner is provided with training data \(X\) which consists of two kinds of instances. The first kind, _positive cases_, are generated from the unknown symbol-guided generation process, which is decided by \(B_{under},H\) as well as the ground-truth NVGM. \(B_{under}\) is the _training-stage ground-truth background knowledge set, which can be different from \(B_{test}\)_. The positive cases usually consist of a sequence of images following a specific rule in \(B_{under}\), such as an agent moving towards a direction for several steps. The other kind, _negative cases_, follows a similar structure but is inconsistent with \(B_{under}\). We assume that the learner knows whether a training case is positive or negative in general. Furthermore, the training images are with ground-truth symbol groundings \(Z^{\prime}\), we assume that only a _minority or none_ of \(Z^{\prime}\) is given to the learner, and denote \(Z^{\prime}_{U}\) the unknown subset of \(Z^{\prime}\). Finally, we also assume that the learner is given the reasoning model \(H\) and a _training background knowledge set_\(B_{train}\), which _can be a subset of \(B_{under}\)_. 
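A minimal sketch of the symbol-grounded generation interface defined above is given below. The encoder, decoder, and codebook are toy stand-ins (the actual model uses convolutional networks), and the reasoning step, which in AbdGen is carried out by the Prolog-based reasoning model with the background knowledge, is abstracted as a callable that maps grounded SSF indices to knowledge-consistent ones.

```python
import torch
import torch.nn as nn

class SymbolGroundedNVGM(nn.Module):
    """Toy sketch of the E / V / G interface defined above (not the actual model)."""

    def __init__(self, n_codes=9, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(code_dim))  # E
        self.decoder = nn.Linear(code_dim, 3 * 32 * 32)                      # G
        self.codebook = nn.Embedding(n_codes, code_dim)                      # codes of V

    def ground(self, z):
        """V: nearest-neighbor lookup of the LSF z within the semantic codebook."""
        dists = torch.cdist(z, self.codebook.weight)   # (batch, n_codes)
        return dists.argmin(dim=-1)                    # grounded SSF indices z'

    def generate(self, x, reason):
        z = self.encoder(x)                            # z  = E(x)
        idx = self.ground(z)                           # z' = V(z)
        new_idx = reason(idx)                          # B ∪ H ∪ z' ⊨ ẑ'
        x_new = self.decoder(self.codebook(new_idx))   # x̂' = G(ẑ')
        return x_new.view(-1, 3, 32, 32)
```

Purely as a plumbing test, a rule such as "move right on a 3x3 grid" could be mimicked by `reason = lambda idx: (idx + 1) % 9`; this stand-in is hypothetical and only replaces the logical reasoning step for illustration.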
Now we define the two symbol grounding learning tasks. **Symbol Assignment**: Symbol assignment can be formulated into the following learning problem: \[\arg\max_{\theta,Z^{\prime}_{U}}\mathcal{L}(X,Z;\theta,H,B_{train}),\] in which \(\mathcal{L}\) is the data-likelihood, and \(\theta\) is the parameters of NVGM. Under this situation, \(B_{train}=B_{under}\), meaning that the learner is given the full data generation background knowledge. The objective is to learn the optimal NVGM as well as discovering the missing symbolic groundings \(Z^{\prime}_{U}\). **Rule learning**: Rule learning can be formulated into the following learning problem: \[\arg\max_{\theta,Z^{\prime}_{U},B_{U}}\mathcal{L}(X,Z;\theta,H,B_{train}).\] Under this situation, we assume that \(B_{train}\subset B_{under}\), meaning that besides symbol assignment, the learner should also learn \(B_{U}=B_{under}/B_{train}\), which is the missing logic programs of \(B_{train}\) in \(B_{under}\). If \(B_{U}\) appears in \(B_{test}\), then it will be useful for future testing generation tasks. ## 4 Abductive Visual Generation In this section, we introduced the proposed abduction visual generation (AbdGen) approach. Due to the space limitation, we mainly describe the high-level process of the algorithm. The implementation details are included in the appendix. The symbol-integrated NVGM model in AbdGen is illustrated in Fig. 2. Compared with traditional autoencoders, the major difference of the proposed model lies in the vector-quantized symbolic grounding module \(V\), as well as the integrated logic programming system. Taking sub-symbolic LSFs \(z\) as input, the vector-quantized module \(V\) conducts the nearest-neighbor look-up to get the corresponding semantic SSF \(z^{\prime}\) as output, which shares the similar coding mechanism as VQVAE Van Den Oord _et al._ (2017). The obtained symbolic SSF \(z^{\prime}\), which have explicit semantic groundings, are used for abductive learning for generating abduced groundings \(z^{\prime}_{abd}\). Figure 1: Example of symbol-grounded conditional generation. Given the first picture and the background knowledge rule “Mario moves with right priority followed by up priority”, a sequence of images would be generated. The algorithmic process of AbdGen is illustrated in Alg. 1. The crucial step lies in the abduction process in Line 10-15. Two abductive learning processes are introduced for generating abduced groundings, which refine NVGM-generated groundings by knowledge reasoning based on \(B_{train}\) and \(H\). After that, the abduced groundings are used to update the model using \(L_{abd}\), which supervises \(z^{\prime}\) to get closer to \(z^{\prime}_{abd}\), together with traditional reconstruction loss and vector-quantized loss as proposed in Van Den Oord _et al._ (2017). Detailed descriptions of the losses are included in the appendix. ### Symbol Assignment by Knowledge Abduction For symbol assignment task, AbdGen utilizes the Quantized Abduction (QuantAbd) subroutine, which is illustrated in Alg. 2. QuantAbd makes use of the quantized structure of SSFs \(z^{\prime}\). This allows the utilization of the distance between codes, thereby enhancing the efficiency of the code displacement process. During the abduction procedure, candidate groundings are iteratively chosen from the codebook of the vector quantized module based on the nearest distance criterion. 
The Figure 2: The structure of _AbdGen_ model selected candidate groundings are then fed into the reasoning model \(H\) equipped with background knowledge \(B_{train}\). If the candidate groundings are consistent with \(H\) and \(B_{train}\), then they are used as the output groundings. The procedure continues until successful return is achieved, or the time limit is exceeded. When the abduction procedure fails, the instance will not be used for training. ### Learning New Rules for Knowledge Augmentation For rule learning task, AbdGen utilizes the Contrastive Meta-Abduction (ConMetaAbd) subroutine illustrated in Alg. 3. The basic mechanism follows the meta-abductive learning framework (MetaAbd) Dai and Muggleton (2020) (Line 7). Fixing \(B_{train}\) and \(H\), to generate abduced groundings for a set of SSFs \(z^{\prime}_{P}\), MetaAbd takes a prior probability distribution \(\mathrm{Prob}_{P}\) over all possible groundings, and conducts searching to find the best output as follows: \[(z^{\prime}_{abd},B_{U}) =\arg\max_{z^{\prime}_{P}\in Z^{\prime},B_{U}\in\mathcal{B}} \mathrm{Prob}_{P}(z^{\prime}_{abd}),\] \[s.t. Z^{\prime}=\{z^{\prime}:B_{train}\cup B_{U}\cup H\vDash z^{ \prime}\},\] in which \(\mathcal{B}\) is the set of all possible new rules defined by \(B_{train},H\) and a set of pre-defined second-order logic meta-rules. The prior distribution \(\mathrm{Prob}_{P}\) is usually generated by neural prediction models, which is NVGM in our case. ``` 1:Input: background knowledge \(B_{train}\), reasoning model \(H\), positive SSF \(z^{\prime}_{P}\), positive grounding probabilities \(\mathrm{prob}_{P}\), negative SSF \(z^{\prime}_{N}\); 2:Output: abduced grounding \(z^{\prime}_{abd}\), \(B_{U}\); 3:for\(i\gets 1\) to \(N\)do 4:\(z^{\prime}_{i}\leftarrow\arg\min_{z^{\prime}\in z^{\prime}_{P}}dist(z^{\prime},z^{\prime}_{N,i}),\ z^{\prime}_{N,i}\in z^{\prime}_{N}\); 5:endfor 6:Bonded negative ordered set: \(b^{\prime}_{N}\leftarrow(z^{\prime}_{1},z^{\prime}_{2},...,z^{\prime}_{N})\); 7:\(z^{\prime}_{abd},B_{U}\leftarrow\texttt{MetaAbd}(\mathrm{Prob}_{P},z^{\prime} _{P},b^{\prime}_{N},B_{train},H)\); ``` **Algorithm 2** Quantized Abduction (QuantAbd) In the previous study, MetaAbd only utilizes positive cases for abduction. The reason is that negative cases can be trivially grounded in abductive learning, making them have no effect on learning. Consider the example in Fig. 4, in which the grounding semantics are the positions of Mario in the images. If the negative cases are input directly to MetaAbd, any abduction result that the two images lie at the same position would be legal, while the result would be incorrect and useless in this case. On the other hand, for generative learning, involving only positive cases would also lead to _less-informative_ rules: if only the positive case is considered in Fig. 4, the correct rule that allow going right only, and the less-informative rule that allows both going left and right, would both be legal. For this reason, there should be a method that includes negative cases of abduction meanwhile avoiding trivial solutions. The proposed method is quite easy to understand: by bonding the groundings of negative cases to their most similar ones in the positive cases, as done by the dashed arrow in Fig. 4, the trivial solutions on negative cases can be easily eliminated, since trivializing negative cases would also affect the grounding of the positive cases. 
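The bonding step itself reduces to a nearest-neighbor assignment in the quantized embedding space. A minimal sketch is shown below; `meta_abduce` is a hypothetical stand-in for the MetaAbd call in the pseudocode above, not a real API.

```python
import torch

def bond_negative_cases(z_pos, z_neg):
    """Bond every negative-case grounding to its closest positive-case grounding."""
    dists = torch.cdist(z_neg, z_pos)    # (n_neg, n_pos) pairwise distances
    return dists.argmin(dim=-1)          # index of the nearest positive grounding

# The bonded groundings are then handed to the meta-abduction call together with the
# positive case, mirroring the pseudocode above (hypothetical call):
#   bonded = z_pos[bond_negative_cases(z_pos, z_neg)]
#   z_abd, B_U = meta_abduce(prob_pos, z_pos, bonded, B_train, H)
```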
In ConMetaAbd, the bonding is done by the nearest neighbor calculation, similar to the situation in QuantAbd. This is benefited from the vector quantized module for which the metric distance is closely related to grounding similarity. ## 5 Experiments This section presents experiments conducted to verify the effectiveness of AbdGen under symbol assignment and rule learning tasks. The primary objectives are: 1. To verify whether AbdGen can achieve desirable symbol assignment accuracy under weak supervision from logic programming systems with few instance-level labeling information. 2. To determine if AbdGen can achieve symbol grounding with predefined rules and also can generate new images based on new given rules, thereby demonstrating its generalization capability. 3. To assess whether AbdGen can learn the appropriate rules using provided background knowledge and symbolically grounded images, and then produce new images following the learned rules. The neural and logical parts of AbdGen are implemented with PyTorch Paszke _et al._ (2019) and SWI-Prolog Wielemaker (2003) respectively. The source code is released on [https://github.com/candytalking/AbdGen](https://github.com/candytalking/AbdGen). Detailed descriptions of experimental tasks, datasets, and implementations are provided in the appendix. ### Symbol Assignment In this experiment, we will assess the performance of AbdGen in symbol assignment tasks. We will verify that our model that links symbols with knowledge will have greater advantages in symbol assignment. We mainly utilize dSprites Matthey _et al._ (2017) and Mario Misino _et al._ (2022) Datasets to compare and evaluate symbol assignment performance with state-of-art disentanglement models, VQVAE Van Den Oord _et al._ (2017) and \(\beta\)-VAE Higgins _et al._ (2017). To evaluate the grounding accuracy under weak supervision, AbdGen will be tested under specific percentage of instance-level labeled training data and compared with other models across varying label percentage. The grounding output of all approaches are generated using a single-layer linear classifier. The SSFs \(z^{\prime}\) obtained from AbdGen and VQVAE after passing through the symbol grounding module \(V\) will serve as input to the classifier, whereas \(\beta\)-VAE directly utilizes the latent Figure 4: Illustration of bonding negative case. Figure 3: The first two images respectively illustrate the classification performance of AbdGen compared to beta-VAE and VQVAE on dSprites (left) and Mario (right) datasets. The latter two images demonstrate the performance of AbdGen ’s abduction and classification over iterations on dSprites (left) and Mario (right) datasets. The line corresponding to each model represents the average accuracy \(\bar{a}\), while the shaded area indicates the accuracy range with standard error \(\bar{a}\pm\sigma\). variable \(z^{\prime}\) obtained after sampling. The main loss function will combine the reconstruction \(L_{rec}\) (MSE) and the classification \(L_{abd}\) (Cross-Entropy) components whose relative weights are optimally tuned. Each experiment was conducted five times, and the performance metric was the average accuracy on the test set. **Results**: The symbol assignment performance of different models is shown in first two images from Fig. 3. On dSprites dataset, unlike beta-VAE and VQVAE, which depend on the label percentage, our model achieves comparable performance even with a small proportion of given labeled data. 
While on Mario dataset, the rapid convergence of \(\beta\)-VAE and VQVAE with only a small number of labeled datas can be attributed to the clear distinctions between the nine positions in Mario's images from a classification perspective. Nevertheless, AbdGen still exhibits a significant advantage even when no labels are provided. The last two images illustrate the changes of abductive and classification performance of AbdGen with the number of iterations. AbdGen demonstrates outstanding performance on the datasets especially dSprites, where it achieves nearly perfect alignment between the pseudo-labels generated through abduction and the ground truths after only a few iterations. Furthermore, the classification accuracy also increases in accordance with the rise of abduction. ### Rule-Based Generalization In this task, our objective is to contrast the ability of our proposed method with that of VAEL Misino _et al._[2022] in the conditional generation of images, based on testing rules that is different from those used for training. We carry out this comparative analysis using the Mario dataset. The training rules are limited to going one step _right_ and _up_, while the testing rules can be _left_ and _down_. For our method, we utilize unlabeled image sequences that first move right and then upwards, with a maximum of five images per sequence. While, VAEL employs pairs of images for training, utilizing \(100\%\), \(50\%\), and \(10\%\) of the 'right' and 'up' labels, respectively. **Results**: As shown in Fig. 5, our method, even in the absence of labels, can match or even surpass the performance of VAEL with \(100\%\) labels in the conditional generation task. VAEL exhibits some instability in the generation of positions within the Mario dataset. Furthermore, as we diminish the label count for VAEL, it becomes evident that VAEL's accuracy in determining Mario's position drops considerably, leading to generated outcomes that are not in line with our desired results. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Wrong} & Role-Example & \multirow{2}{*}{Control} \\ & & Loss-Informer & \\ \hline \multirow{4}{*}{Mario} & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,B), & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,B), & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,B), \\ & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,C,B), & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,B), & \(\{\)A,B\(\}\)-doun(\(\Delta\),C,C,C,B), \\ & \(\{\)A,J\(\}\)-doun(\(\Delta\) ### Rule Learning In this experiment, we aim at testing the rule learning performance of AbdGen. For this task, the reasoning system should not only acquire new knowledge but also effectively integrate it with symbol grounding and image generation. We'll assess the efficacy of AbdGen's Knowledge Augmentation using the Mario and FIW Robinson _et al._ (2016) datasets. The original FIW dataset include face images of different family members. In our experiments, we design logic roles based on human age and genders. Towards this purpose, we choose 800 images as training data from FIW and label them with corresponding ages and genders as ground-truth. Subsequently, we design the underlying generation rule (correct rules in Tab. 1) on mario dataset as _move with right priority if possible, and then move up_. Under FIW dataset, we design the rule as _generate with age descending priority, and then with female gender priority_. 
Our testing procedure involves presenting an image from the test set and then generating subsequent images based on acquired rules. While the both task focuses on image generation, the FIW dataset is more intricate not only because we make use of a relatively small number of training data, but also because it comprise randomly chosen face images. This randomness compelling the model to discern symbol-related factors in more intricate image combinations. We compare AbdGen with the gerative modification of MetaAbd Dai and Muggleton (2020), which is the only existing method capable for rule learning of logical symbolic NVGMs according to our knowledge. To make MetaAbd, which is designed for solving classification tasks under weak supervision of logic programming systems, suitable for generative learning, we transfer the original classification module of MetaAbd into the same vector quantized structure to AbdGen, making MetaAbd capable for generating symbol-grounded visual objects. As discussed above, the major difference of MetaAbd to AbdGen is the lacking of making use of negative cases to do contrastive learning, in order to generating more precise rules. **Results**: The experimental results are shown in Tab. 2 and Fig. 6. Tab. 1 illustrates examples of learned Prolog-style logic rules during learning, which include wrong, less-informative, and the ground-truth correct rules. From Tab. 2, it can be observed that AbdGen significantly outperforms MetaAbd in learning correct rules. Without making use of information from negative cases, MetaAbd fails to distinguish between correct and incorrect rules, as well as eliminate less-informative rules. This phenomenon verifies the importance of including contrastive information from both positive and negative cases for rule abduction. Fig. 6 shows grounded conditional generation results of the trained AbdGen model. Given input images on the left, the model generates subsequent images on the right, following the learned hidden generative rules as expected. In special, under FIW, the model capture complex semantic groundings of age and gender even with a small training dataset, which shows \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Rule Type (\% )} \\ & & Wrong & Less-Informative & Correct \\ \hline \multirow{2}{*}{Mario} & MetaAbd & 0.11 & 0.89 & 0 \\ & AbdGen & 0.05 & 0.12 & 0.83 \\ \hline \multirow{2}{*}{Face} & MetaAbd & 1 & 0 & 0 \\ & AbdGen & 0.03 & 0.00 & 0.97 \\ \hline \hline \end{tabular} \end{table} Table 2: The percentages of correct, incorrect, and less informative rules produced by the AbdGen and MetaAbd methods under Mario and FIW datasets, respectively. Figure 6: Grounded conditional generation results of AbdGen on FIW (left) and Mario (right). Given the input image on the left, AbdGen generates a sequence of images based on the learned rules (corresponding to the correct rules in Tab.1). promising results that AbdGen could achieve high data efficiency for symbol-grounded generative learning. ## 6 Limitations and Future Work 1) Currently, we assume that the number of semantic factors to ground, as well as the number of situations for each factor, are known as prior information to the learner. We believe that this assumption is reasonable when the logic reasoning system is relatively complete, so that the prior information of the grounding semantics can be easily obtained from the system. 
Furthermore, this assumption is also introduced in some classical semantic grounding methods, e.g. DC-IGN Kulkarni _et al._ (2015). While relaxation of this assumption would be very challenging and interesting future work. 2) Since we focus on symbol grounding rather than generative quality in this work, due to limitations of time and computation resources, we make use of relatively simple generative models in our experiments. Large-scale benchmark datasets with more complicated visual objects are also not included. On the other hand, we believe that the proposed method can be scaled-up to complex generative models and datasets, which is left as important future research task. ## 7 Negative Societal Impact As with all kinds content-based visual generation methods, one potential negative impact is that the danger of generating fake visual objects for malicious applications. For our method, an additional risk is that malicious users can inject harmful background knowledge to affect the generation results. However, we believe that all these risks can be alleviated by restricting background knowledge to include only safe logic rules. As our approach is based on logic programming systems, which are highly interpretative and controllable by human users, this kind of restrictions are reasonably easy to set comparing to subsymbolic methods, which are usually black-box and the outputs are hard to interpret. ## 8 Conclusion In this paper, a neural-symbolic learning approach, abductive visual generation (AbdGen), is proposed for integrating logic programming systems with neural visual generative models. Through AbdGen, the LSFs of the learned NVGM is symbolically grounded with the SSFs from the logic programming system, thus could generate visual contents based on complex logical generative rules. The core of AbdGen lies in two novel abductive learning strategies, quantized abduction and contrastive meta-abduction, which can achieve reliable and efficient symbol assignment with few instance-level supervision, meanwhile effectively learn precise missing generative rules from data. From experimental results, AbdGen provides sound evidence that the abductive learning framework can be advantageous on symbol grounding when symbolic reasoning systems and neural generative models should be integrated to obtain visual content generators with higher-level intelligence. Therefore, we expect our work could inspire more future researches on symbol grounding of neural-symbolic generative models. ## 9 Acknowledgement This work is supported by National Key R&D Program of China (2022ZD0114804) and National Science Foundation of China (62206245).
2307.15851
High-Energy Neutrino and Gamma Ray Production in Clusters of Galaxies
We compute the contribution from clusters of galaxies to the diffuse neutrino and $\gamma-$ray background. Due to their unique magnetic-field configuration, cosmic rays (CRs) with energy $\leq10^{17}$ eV can be confined within these structures over cosmological time scales, and generate secondary particles, including neutrinos and $\gamma-$rays, through interactions with the background gas and photons. We employ three-dimensional (3D) cosmological magnetohydrodynamical (MHD) simulations of structure formation to model the turbulent intergalactic and intracluster media. We propagate CRs in these environments using multi-dimensional Monte Carlo simulations across different redshifts (from $z \sim 5$ to $z = 0$), considering all relevant photohadronic, photonuclear, and hadronuclear interactions. We also include the cosmological evolution of the CR sources. We find that for CRs injected with a spectral index $1.5 - 2.7$ and cutoff energy $E_\text{max} = 10^{16} - 10^{17}$~eV, clusters contribute to a substantial fraction to the diffuse fluxes observed by the IceCube and Fermi-LAT, and most of the contribution comes from clusters with $M > 10^{14} \, M_{\odot}$ and redshift $z < 0.3$. We also estimated the multimessenger contributions from the local galaxy cluster.
Saqib Hussain, Giulia Pagliaroli, Elisabete M. de Gouveia Dal Pino
2023-07-29T00:50:29Z
http://arxiv.org/abs/2307.15851v1
# High-Energy Neutrino and Gamma Ray Production in Clusters of Galaxies ###### Abstract: We compute the contribution from clusters of galaxies to the diffuse neutrino and \(\gamma-\)ray background. Due to their unique magnetic-field configuration, cosmic rays (CRs) with energy \(\leq 10^{17}\) eV can be confined within these structures over cosmological time scales, and generate secondary particles, including neutrinos and \(\gamma-\)rays, through interactions with the background gas and photons. We employ three-dimensional (3D) cosmological magnetohydrodynamical (MHD) simulations of structure formation to model the turbulent intergalactic and intracluster media. We propagate CRs in these environments using multi-dimensional Monte Carlo simulations across different redshifts (from \(z\sim 5\) to \(z=0\)), considering all relevant photohadronic, photonuclear, and hadronuclear interactions. We also include the cosmological evolution of the CR sources. We find that for CRs injected with a spectral index \(1.5-2.7\) and cutoff energy \(E_{\rm max}=10^{16}-10^{17}\) eV, clusters contribute a substantial fraction of the diffuse fluxes observed by IceCube and Fermi-LAT, and most of the contribution comes from clusters with \(M>10^{14}\,M_{\odot}\) and redshift \(z<0.3\). We also estimated the multimessenger contributions from the local galaxy cluster. ## 1 Introduction The diffuse neutrino and \(\gamma-\)ray backgrounds provide a unique view of the high-energy Universe. The origin of these messengers is unknown. They may originate from various astrophysical sources such as galaxy clusters [1, 2], active galactic nuclei (AGNs) [3, see also references therein], star-forming galaxies (SFGs) [see e.g., 4], supernova remnants [e.g., 5], and gamma-ray bursts [see e.g., 6]. These high-energy messengers can be produced by a single class of sources, i.e., clusters of galaxies [1]. Galaxy clusters are the largest astrophysical objects, with sizes of \(\sim\) Mpc and magnetic field strengths of about 1 \(\mu\)G [7, 8]. They can accelerate CRs up to very high energies, \(\sim 10^{18}\) eV [8, 9], through shock and large-scale turbulence acceleration potentially involving magnetic reconnection. Due to the large size and magnetic field strength, these structures can confine CRs of energy \(\lesssim 10^{17}\) eV for up to a few Gyr [7, 10]. While confined, CRs can interact with the gas present within the intracluster medium (ICM) and with photon fields, including the cosmic microwave background (CMB) and extragalactic background light (EBL). This interaction leads to the production of high-energy secondary particles such as neutrinos and \(\gamma-\)rays. Consequently, clusters of galaxies emerge as potential sources for generating high-energy multi-messenger signals [1, 7]. Previous analytical and semi-analytical investigations [e.g., 1, 11] have estimated the contribution of galaxy clusters to the diffuse neutrino background, revealing that these sources could contribute up to 100% of it. Additionally, their impact on the diffuse \(\gamma-\)ray background is also pronounced, particularly for energies exceeding 10 GeV [2]. In this work, we summarize our calculations of diffuse backgrounds originating from galaxy clusters, employing the most advanced numerical methods available to date [36, 37]. In Section 2, we present our assessment of the contribution of galaxy clusters to the production of neutrinos and \(\gamma\)-rays, taking into account the entire population of galaxy clusters.
Following that, in Section 3, we specifically focus on a cluster with properties like the local one, the Perseus cluster. Finally, in Section 4, we discuss our results and draw our conclusions. ## 2 Neutrinos and gamma rays from galaxy clusters To study the diffuse neutrino and \(\gamma-\)ray backgrounds we used the most detailed numerical method i.e., combining 3D-MHD simulations with the multi-dimensional Monte-Carlo simulations of test particle propagation using CRPropa code [39] The background of ICM is probed by 3D-MHD simulations performed by [12]. Our assumptions are the following: (i) CRs consist of only protons; (ii) neutrinos and \(\gamma-\)rays are initially produced by purely hadronic interactions inside the clusters; and (iii) the extragalactic magnetic field is not considered during the propagation of CRs and \(\gamma-\)rays because it is highly uncertain, and most likely, it will not produce any significant change in the results, especially above 10 GeV energy. Our simulation setup has two steps: the first one is to propagate CRs inside clusters considering the background magnetic field and density distribution directly from the MHD simulations. We considered all the relevant CR interactions namely: photopion production, Bethe-Heitler pair production, pair production (single, double, triplet), inverse Compton scattering (ICS), and proton-proton (pp) interactions. We also take into account the adiabatic energy losses due to the expansion of the Universe and the synchrotron losses. Since the energy of synchrotron photons is most likely \(<\) Gev, it is beyond of the scope of this work. At the end of the first step, we collect the CRs escaped as well as their byproducts \(\gamma-\)rays and neutrinos, at the edge of the cluster. In the second step, we propagate those escaped CRs and the \(\gamma-\)rays from the edge of the cluster to the Earth. During CR propagation in the intergalactic medium, we implemented the photopion and Bethe-Heitler pair production due to interactions with CMB and EBL photon fields. Furthermore, through the propagation of \(\gamma-\)rays, we accounted for the electromagnetic cascade processes (single, double, triplet pair production, as well as inverse Compton scattering (ICS), which take place both within the clusters and in the intergalactic medium due to CMB and EBL. We summarize our results in Fig. 1, comparing them with the IceCube [13] and Fermi-LAT data [14]. It shows the fluxes of neutrinos and \(\gamma-\)rays from the entire population of clusters, considering the CR sources embedded in the center. These fluxes are obtained by injecting CRs in the center of clusters with spectral indices \(\alpha=1.5-2.5\) and maximum energy in the range \(10^{16}-10^{17}\) eV. The mass range of clusters considered in our simulations is \(10^{12}<M/M_{\odot}\lesssim 5\times 10^{15}\) and the redshift interval is \(z\leq 5.0\). Furthermore, we have assumed that 1% of the cluster luminosity goes into CRs to be consistent with the prediction of Fermi-LAT [15]. Results obtained from our simulations are quite comparable with observed diffuse fluxes of neutrinos and \(\gamma-\)rays. Fig. 1 shows the connection between neutrinos and \(\gamma-\)rays, but it depends on the adopted assumptions such as spectral indices, maximum energy, and the luminosity of CRs [36, 37]. In the lower panel of Fig. 1, we show the contribution from misaligned AGN [20], blazars [21], and SFGs [4] to diffuse \(\gamma-\)ray background. 
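As a side note on the setup described above, the sketch below illustrates two of its ingredients in simplified form: sampling CR energies from a power law with an exponential cutoff, \(dN/dE\propto E^{-\alpha}\exp(-E/E_{\rm max})\), and checking that the Larmor radius at the cutoff energy stays well below the \(\sim\) Mpc cluster size for a \(\mu\)G field, which is the confinement condition invoked earlier. The parameter values are those quoted in the text; the code is only an illustrative parameterization, not the CRPropa/MHD pipeline used for the results.

```python
import numpy as np

def larmor_radius_kpc(E_eV, B_muG, Z=1):
    """Larmor radius of an ultra-relativistic nucleus: r_L = E / (Z e B c)."""
    e, c, pc = 1.602e-19, 3.0e8, 3.086e16          # SI units
    return (E_eV * 1.602e-19) / (Z * e * B_muG * 1e-10 * c) / pc / 1e3

def sample_injection_energies(n, alpha, E_min=1e15, E_max=1e17, seed=0):
    """Draw CR energies [eV] from dN/dE ~ E^-alpha * exp(-E/E_max):
    inverse-CDF sampling of the power law plus rejection on the cutoff."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        u = rng.random()
        E = (E_min**(1 - alpha) + u * (E_max**(1 - alpha) - E_min**(1 - alpha)))**(1 / (1 - alpha))
        if rng.random() < np.exp(-E / E_max):      # accept with the cutoff weight
            out.append(E)
    return np.array(out)

# Spectral index and cutoff within the ranges quoted in the text
E = sample_injection_energies(10000, alpha=2.3, E_max=1e17)
print("median injected energy: %.2e eV" % np.median(E))

# Confinement check: Larmor radius at the cutoff vs. a ~Mpc cluster radius
print("r_L(1e17 eV, 1 muG) ~ %.2f kpc  (cluster radius ~ 1000 kpc)"
      % larmor_radius_kpc(1e17, 1.0))
```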
The contribution of individual sources is dominant up to energy 100 GeV while the cluster contribution starts to dominate above this energy. More importantly, our results are comparable with the sensitivity curves of the High Altitude Water Cherenkov (HAWC) [22], the Large High Altitude Air Shower Observatory (LHAASO) [23] and the upcoming Cherenkov Telescope Array (CTA) [24]. ## 3 Multi-messenger from Perseus-like clusters Recently, the Telescope Array (TA) collaboration has observed an excess of CR events of energy \(\gtrsim 10^{19.4}\) eV with \(\sim 3.5\)\(\sigma\) standard deviations toward the center of the Perseus-Pisces supercluster (PPSC), which is about 75 Mpc away from the Earth [25]. In this section, we focus on the multi-messenger emission including high-energy neutrinos and \(\gamma-\)rays from an individual cluster with properties similar to Perseus cluster where there is a high probability of existence of UHECR sources. We have extracted a single cluster from the global MHD simulation described in the previous section [36, 37] to probe the background of a cluster like Perseus with mass \(\sim 10^{14.5}\)\(M_{\odot}\), and studied the propagation of CRs in that medium. We injected CRs at the center of the cluster with spectral index \(\alpha=2.3\), \(E_{\rm max}=10^{17}\) eV, and considered that 1% cluster luminosity goes to CRs. Initially, neutrinos and \(\gamma-\)rays are produced by CRs interactions in the ICM. We have considered all the relevant CR interactions during their propagation both in the cluster and in the intergalactic medium and also taken into account the \(\gamma-\)ray cascade inside and outside the cluster, as described above. In Fig.2, we present the multi-messenger picture of a Perseus-like cluster. The Fermi-LAT [15] collaboration estimated the upper limit for \(\gamma-\)rays from individual clusters (Fig. 2) and our results are consistent with their predictions. Nevertheless, the \(\gamma-\)ray flux from the central source of Perseus Figure 1: Multi-messenger emission from clusters of galaxies. Neutrino [36] and \(\gamma-\)ray [37] from the entire population of galaxy clusters are represented by blue and gray bands, respectively. The neutrino flux is compared with the IceCube data (error bars correspond to the 68% confidence intervals) [16]. The \(\gamma-\)ray flux is compared with the DGRB observed by Fermi-LAT (error bars denote the total uncertainties, statistical and systematic) [17], and upper limits from HAWC (95% confidence level) [18] and CASA-MIA (90% confidence level) [19]. The lower panel compares our \(\gamma-\)ray flux (pink band) with the sensitivity curves (gray lines) obtained for point sources from LHAASO [23], HAWC [22], and the forthcoming CTA observatories [24] only for reference purposes. The contribution from individual sources, namely, blazars [21], AGNs [20], and SFGs [4] is also shown. Extracted from [36] and [37]. cluster (NGC-1275) observed by the SHALON experiment (1996 - 2012) [27] is much larger than our results. In Fig. 3, we show the total fluxes of CRs, \(\gamma-\)rays, and neutrinos from an entire sample of Perseus-like sources in the local Universe. The number density of clusters with mass \(\gtrsim 2\times 10^{14}\,M_{\odot}\) is obtained from our MHD simulation [12, 36, 37], which is around \(N\simeq 10^{-5}\log M\,{\rm Mpc}^{-3}\) and also comparable with [38]. Overall, Fig. 3 indicates that Perseus-like clusters can significantly contribute to the emissions of UHECRs if their composition consists of protons only. 
However, the CR flux is much smaller than the observed UHECR flux from Auger data [see e.g., 28, also references therein]. It is worth noting that the acceleration of CRs depends on rigidity, which suggests that heavier nuclei may not be dominant. Nevertheless, it is also important to acknowledge that individual sources within clusters, such as starburst galaxies and compact objects like magnetars, possess the capability to accelerate heavier nuclei to extremely high energies. Their inclusion in our analysis could potentially enhance the results for CRs, though the production of neutrinos and \(\gamma-\)rays is more efficient for protons compared to heavier nuclei. This is primarily because these particles are predominantly produced through pion decay processes. In the case of heavier nuclei, photodisintegration becomes more prominent than pion production. Therefore, the assumption of considering only protons in our study seems to be reliable [see e.g., 29]. Still, the contribution of individual sources should be further explored. In Fig. 3 we also showed the flux of secondary particles produced by the interaction of CRs during their propagation in the ICM and intergalactic medium. The neutrino flux we obtained for Perseus-like sources is comparable with upper limits recently estimated by the IceCube [34]. On the other hand, the \(\gamma-\)ray flux is smaller by an order of magnitude from the diffuse flux observed by the Fermi-LAT [17]. Figure 2: Multi-messenger flux of Perseus-like cluster at 75 Mpc. Red arrows represent the upper limit from Fermi-LAT [26] for five clusters (top to bottom: A400, A3112, A1367, Coma, EXO0422). We also show the high-energy \(\gamma-\)ray emission from the active galaxy NGC-1275 in the center of Perseus cluster, observed by SHALON experiment (1996-2012) [27]. ## 4 Discussion and Conclusion Our results predict that clusters of galaxies can contribute up to a sizeable percentage to diffuse neutrinos and \(\gamma-\)rays. Our results on neutrinos [36] match with the previous studies [1, 7, 33, 11, 34] and the \(\gamma-\)rays flux we obtained [37] roughly agree with the predictions of [2, 33]. The contribution to diffuse \(\gamma-\)rays by individual sources such as active galactic nuclei (AGN) [3], star forming galaxies (SFGs) [4], and blazars [21] is dominant over clusters below 100 GeV energy. However, above energy 100 GeV, we showed that the total \(\gamma-\)ray flux from the entire population of clusters is dominant over individual source contribution. Therefore, our estimation is extremely important provided that high-energy CRs are present in the clusters. Our results are comparable with the diffuse fluxes of neutrinos and \(\gamma-\)rays observed by IceCube [16] and Fermi-LAT [17], respectively. Moreover, the \(\gamma-\)ray flux we obtained from the cluster population up to redshift \(z\leq 5.0\) is comparable with the sensitivity curves of the HAWC [18, 35], the LHAASO [23], and even the CTA [24] which indicates possible observations of \(\gamma-\)rays from these sources [36, 37]. Further, we have calculated the multi-messenger emission from Perseus-like sources within a distance of about 75 Mpc, in our local Universe. Our evaluation suggests that the CR flux from these sources (calculated for protons only) can account for a sizeable percentage of UHECR detected through the Telescope Array (TA). The current prediction by the TA collaboration of a probable source of UHECRs in the direction of the Perseus cluster provides significance to our findings. 
Figure 3: Multi-messenger fluxes from an entire population of Perseus-like sources located at a distance of about 75 Mpc. The total \(\gamma-\)ray flux from the entire population is presented by black curves. The DGRB from Fermi-LAT [17] and the upper limits for DGRB from HAWC-collaboration [18] are also depicted. The total neutrino flux from the Perseus-like clusters is presented with blue curves, the IceCube diffuse flux (blue circle)[13, 16], and the upper limits for clusters (blue diamonds) [34] is also shown. The red curve shows the spectrum of CRs arrived from Perseus-like sources within a distance \(\sim 75\) Mpc and the red marker shows the CR spectrum observed by the telescope array and telescope array low energy extension (TALE (\(E\times 0.91\)))[30], pink marker represents the Auger data (\(E\times 1.05\)) [31, 32]. However, our results are not consistent with the Auger data which indicate that the composition of UHECRs mainly consists of heavier nuclei. In the forthcoming work we will account for this contribution. Nevertheless, the production of neutrinos and \(\gamma-\)rays calculated from this population of local clusters shows that the flux of neutrino is comparable with the estimated upper limits of IceCube for clusters [34], while the \(\gamma-\)ray flux is about one order of magnitude less than the Fermi-LAT [17] observations. This indicates that lower mass clusters should also be contributing to this emission, as found in [37] (see also Fig. 1. We acknowledge that our estimations may be subject to changes if certain parameters are modified, such as the luminosity of the CRs and the spatial distribution of CR sources within clusters. It is important to recognize that the presence of the intergalactic magnetic field can also have minor effects on our results. However, it is worth noting that the parameters we have chosen for our calculations are in general well-established and similar to those used in previous research. Any modifications to these parameters are expected to have only minor impacts on our overall conclusions. Moreover, our results establish a clear connection between the three messengers (neutrinos, gamma-rays, and cosmic rays), enabling us to indirectly investigate the properties of CRs within clusters. This connection provides valuable insights into the nature and behavior of CRs in these environments. AcknowledgmentsThis work is partially supported by the Brazilian agencies FAPESP (grant 2013/10559 - 5 & 17/12828 - 4) and CNPq (grant 308643/2017 - 8). The work of SH and GP is partially supported by the research grant number 2017W4HA7S "NAT-NET:Neutrino and Astroparticle Theory Network" under the program PRIN 2017 funded by the Italian Ministero dell'Istruzione, dell'Universita' e della Ricerca (MIUR).
2304.13970
Spin-Peierls transition to a Haldane phase
We present an organic compound exhibiting a spin-Peierls (SP) transition to an effective spin-1 antiferromagnetic uniform chain, that is, the Haldane chain. The clear disappearance of magnetization, accompanied by a structural phase transition, is well explained by the deformation to an effective spin-1 Haldane chain. The flexibility of the molecular orbitals in the organic radical compound allows the transformation of the exchange interactions into the Haldane state with different topologies. The SP transition in the present compound demonstrates a mechanism different from that of the conventional systems, paving another path for research in quantum phenomena originating from spin-lattice couplings.
Hironori Yamaguchi, Hiroki Takahashi, Takashi Kawakami, Kiyomi Okamoto, Toru Sakai, Takeshi Yajima, Yoshiki Iwasaki
2023-04-27T06:15:01Z
http://arxiv.org/abs/2304.13970v1
# Spin-Peierls transition to a Haldane phase ###### Abstract We present an organic compound exhibiting spin-Peierls (SP) transition to an effective spin-1 antiferromagnetic uniform chain, that is, the Haldane chain. The clear disappearance of magnetization, accompanied by a structural phase transition, is well explained by the deformation to an effective spin-1 Haldane chain. The flexibility of the molecular orbitals in the organic radical compound allows the transformation of the exchange interactions into the Haldane state with different topologies. The SP transition in the present compound demonstrates a mechanism different from that of the conventional systems, paving a new path for research in quantum phenomena originating from spin-lattice couplings. pacs: 75.10.Jm, + Footnote †: preprint: APS/123-QED One-dimensional (1D) spin chains, in which localized spins are linearly arranged through exchange interactions, represent the simplest spin model with strong quantum fluctuations. Various quantum many-body phenomena occur according to the degrees of freedom in 1D spin chains. Spin-lattice couplings in spin-1/2 chains give rise to a spin-Peierls (SP) transition to a nonmagnetic quantum phase [1; 2], which is the magnetic analog of the Peierls instability of 1D metals [3]. The antiferromagnetic (AF) spin-1/2 uniform chain acquires a spin gap through lattice deformation, resulting in an AF alternating chain, revealing that the increase in magnetic energy exceeds the loss of elastic energy due to lattice distortion. The SP transition, which corresponds to the boundary between the gapless uniform and gapped alternated states, is a second-order phase transition and has been observed in a variety of 1D materials [4; 5; 6; 7; 8; 9]. Organic materials have particularly many examples because of the lattice softness inherent to molecular-based systems. In addition, they are mostly unstable upon pressurization, yielding pressure-induced phase transitions to superconducting states via SP ground states [10; 11; 12; 13]. The Haldane state is a well-known example of quantum many-body phenomena in 1D spin models, in which the ground-state topology changes depending on the spin-size [14]. The Heisenberg AF chain with integer spins demonstrates an energy gap between the nonmagnetic ground state (Haldane state) and the first excited state, whereas that with half-integer spins demonstrates no energy gap. The topology of the Haldane state can be described by a valence-bond picture [15], in which each integer spin is considered a collection of \(S\)=1/2 and forms a singlet state between \(S\)=1/2 spins at the different sites. In recent research on quantum computers, the application of the edge state of symmetry-protected topological phases in cases with odd-integer spins has been proposed and has attracted attention [16; 17; 18]. The valence-bond picture of the Haldane state can be mapped onto the strong ferromagnetic (F) coupling limit of an F-AF alternating chain with half-integer spin values [19; 20; 21; 22; 23]. The ground-state properties of the spin-1/2 F-AF chains have investigated for various exchange constants ratios, \(J_{\rm F}/J_{\rm AF}\)[24; 25]. No discontinuous change in the ground state associated with a phase transition was observed between the Haldane (\(|J_{\rm F}|\gg J_{\rm AF}\)) and AF dimer (\(|J_{\rm F}|\ll J_{\rm AF}\))) states, indicating that the ground state of the spin-1/2 F-AF chain is equivalent to the spin-1 Haldane state. 
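The mapping onto an effective spin-1 chain in the strong ferromagnetic-coupling limit rests on an elementary fact: two spin-1/2's coupled ferromagnetically have a threefold-degenerate triplet (total spin 1) ground state, whereas an antiferromagnetic pair selects the singlet. A minimal numerical check of this two-site building block, included here purely for illustration and not part of the paper's analysis, is:

```python
import numpy as np

sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

def heisenberg_pair(J):
    """H = J * S1.S2 for two spin-1/2 sites (a 4x4 matrix)."""
    H = sum(np.kron(s, s) for s in (sx, sy, sz))
    return (J * H).real

for J in (-1.0, +1.0):
    levels = np.linalg.eigvalsh(heisenberg_pair(J))
    print(f"J = {J:+.1f}: levels = {np.round(levels, 3)}")
# J < 0 (ferromagnetic): the threefold-degenerate triplet at J/4 is lowest,
# so the pair acts as an effective spin-1; J > 0: the singlet at -3J/4 wins.
```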
Our material design using verdazyl radicals with diverse molecular structures realizes unconventional spin-1/2 systems, such as the ferromagnetic-leg ladder, quantum pentagon, and random honeycomb, which have not been realized in conventional inorganic materials [26; 27; 28]. The flexibility of the molecular orbital (MO) in the verdazyl radicals enabled us to design spin arrangements composed of intermolecular exchange interactions through molecular design [29]. Moreover, the alternating spin density distribution in \(\pi\)-conjugated verdazyl systems can readily induce F intermolecular exchange interactions depending on the overlapping molecular orbitals [30; 31; 32]. In this letter, we present a model compound that exhibits an unconventional SP transition. We synthesized single crystals of the verdazyl-based salt (\(p\)-MePy-V-\(p\)-CN)PF\({}_{6}\)-CH\({}_{3}\)CN [\(p\)-MePy-V-\(p\)-CN = 3-(4-methylpyridyl)-1-phenyl-5-(4-cyanophenyl)-verdazyl]. Our molecular orbital (MO) calculations and the analysis of the magnetic behavior indicated the SP transition from a spin-1/2 uniform AF chain to a spin-1/2 F-AF alternating chain forming the Haldane state. Furthermore, we demonstrated that the flexibility of the molecular orbitals in this compound allows the transformation of the exchange interactions into the Haldane state. We synthesized \(p\)-MePy-V-\(p\)-CN using a conventional procedure [33] and prepared an iodide salt of the radical cation (\(p\)-MePy-V-\(p\)-CN)I using a reported procedure for salts with similar chemical structures [34]. The crystal structures were determined on the basis of intensity data collected using a Rigaku AFC-8R Mercury CCD RA-Micro7 diffractometer and XtaLAB Synergy-S. The magnetic susceptibility was measured using a commercial SQUID magnetometer (MPMS-XL, Quantum Design). The experimental result was corrected for the diamagnetic contributions calculated by Pascal's method. The specific heat was measured using a commercial calorimeter (PPMS, Quantum Design) by using a thermal relaxation method. Considering the isotropic nature of organic radical systems, all experiments were performed using small randomly oriented single crystals. \(Ab\)\(initio\) MO calculations were performed using the UB3LYP method with the basis set 6-31G and 6-31G(\(d\),\(p\)) in the Gaussian 09 program package. For the estimation of intermolecular magnetic interaction, we applied our evaluation scheme that have been studied previously [35]. The quantum Monte Carlo (QMC) code is based on the directed loop algorithm in the stochastic series expansion representation [36]. The calculations for the spin-1/2 uniform and alternating Heisenberg chains were performed for \(N=256\) under the periodic boundary condition using the ALPS application [37; 38]. The numerical diagonalization based on the Lanczos algorithm is performed to obtain the energy eigenvalue and the wave function of the ground state as well as the first excited state of the spin Hamiltonian under the periodic boundary condition up to \(N\)=28. For the calculation of the string order parameter Ostr, considering the periodic boundary condition we calculated \(O_{\rm str}(N/2)\) with the distance of \(N/2\) for the \(N\)-spin system, and extrapolated them to \(N\rightarrow\infty\). We observed an SP transition to an effective Haldane state at \(T_{\rm SP}=70\) K in (\(p\)-MePy-V-\(p\)-CN)PF\({}_{6}\)-CH\({}_{3}\)CN, whose molecular structure is shown in Fig.1(a). 
The crystallographic parameters at room temperature are as follows: orthorhombic, space group \(Pna2_{1}\), \(a=7.3758(3)\) A, \(b=15.4561(8)\) A, \(c=21.2673(11)\) A, V = 2424.5(2) A\({}^{3}\)[39]. For \(T<T_{\rm SP}\), the space group changed to monoclinic \(P2_{1}\) owing to the structural phase transition associated with the SP transition. The crystallographic parameters at 25 K are as follows: monoclinic, space group \(P2_{1}\), \(a=7.2203(8)\) A, \(b=20.834(2)\) A, \(c=15.4026(14)\) A, \(\beta=91.776(11)^{\circ}\), V = 2315.9(4) A\({}^{3}\)[39]. Each verdazyl radical \(p\)-MePy-V-\(p\)-CN has a spin-1/2, and approximately 62 % of the total spin density is present on the central verdazyl ring (including four N atoms). The phenyl and cyanophenyl rings account for approximately 15-18 % of the relatively large total spin density, whereas the methylpyridine ring accounts for less than 5 % of the total spin density, yielding the shape of a singly occupied molecular orbital (SOMO), as shown in Fig. 1(b). For \(T>T_{\rm SP}\), the \(\pi\)-\(\pi\) stacking of radicals with glide reflection symmetry forms a 1D uniform structure along the \(a\)-axis, as shown in Figs. 1(c) and 1(d). When \(T<T_{\rm SP}\), the glide reflection symmetry disappears, and two crystallographically independent molecules form a 1D alternating structure, as shown in Figs. 1(e) and 1(f). Here, the N-N short contacts in the central verdazyl indicate lattice shrinkage at low temperatures. Because the nonmagnetic PF\({}_{6}\) and CH\({}_{3}\)CN are located between the 1D chains, the one-dimensionality of the present system is enhanced. MO calculations were performed to evaluate the exchange interactions between the spins of the molecules forming the 1D chains. The evaluation presented the following values: \(J_{\rm AF1}/k_{\rm B}=93\) K for \(T>T_{\rm SP}\), \(J_{\rm AF2}/k_{\rm B}=174\) K, and \(J_{\rm F}/k_{\rm B}=-56\) K for \(T<T_{\rm SP}\); these are defined in the Heisenberg spin Hamiltonian given by \(\mathcal{H}=J_{n}\sum_{<i,j>}\)\(\mathcal{S}_{i}\)\(\cdot\)\(\mathcal{S}_{j}\), where \(\sum_{<i,j>}\) denotes the sum of the neighboring spin pairs. The MO calculations indicate that a spin-1/2 AF uniform chain changes to a spin-1/2 F-AF alternating chain at \(T_{\rm SP}\), as shown in Figs. 1(c) and 1(e). Figure 2(a) shows the temperature dependence of the magnetic susceptibility (\(\chi=M/H\)) at 1.0 T. We observed a broad peak at approximately 74 K, indicating AF correlations in the 1D spin chain. When the temperature was further decreased, \(\chi\) abruptly decreased at \(T_{\rm SP}\) = 70 K, suggesting the formation of a nonmagnetic singlet state with an excitation gap below \(T_{\rm SP}\). We calculated the magnetic susceptibilities of the spin-1/2 Heisenberg AF uniform and F-AF alternating chains using the QMC method, where the ratio \(J_{\rm F}/J_{\rm AF2}\) = -0.32, evaluated from the MO calculation, was assumed. The experiment and calculation were in good agreement for both temperature regions using the parameters: \(J_{\rm AF1}/k_{\rm B}=119\) K, \(J_{\rm AF2}/k_{\rm B}=177\) K, and \(J_{\rm F}/k_{\rm B}=-57\) K, as shown in Fig. 2(a). The obtained parameters for the spin-1/2 F-AF alternating chain indicate that the ground state has an energy gap of 150 K. It is confirmed that the evaluation of exchange interactions from MO calculations is reliable in the present case, as in other verdazyl compounds. At \(T_{\rm SP}\), temperature hysteresis was not observed in \(\chi\), as shown in Fig. 
2(b), which is consistent with the characteristics of the second-order SP transition. In contrast, there was no anomalous behavior associated with the second-order phase transition at \(T_{\rm SP}\) in specific heat \(C_{\rm p}\), as shown in Fig. 2(c). Several organic compounds exhibiting an SP transition did not exhibit phase transition signals at corresponding specific heat values because their crystal structures were not significantly distorted by the SP transitions [40; 41; 42]. The structural change in the present compound was also relatively small, as shown in Figs. 1(c)-1(f). The phase boundary of a conventional SP system is predicted to have magnetic field dependence [43]. If we assume the predicted relation, the temperature shift for the present system is expected to be approximately 0.2 K even at 10 T, which is difficult to examine under experimental conditions. The observed \(\chi\) at 3 T confirms that the magnetic field dependence of the \(T_{\rm SP}\) is not significant in the present compound, as shown in Fig. 2(b). Here, we compare the ground state energies of the AF uniform and F-AF alternating chains assuming the parameters evaluated from the magnetization analysis. The ground state of the spin-1/2 Heisenberg AF chain is a well-known Tomonaga-Luttinger liquid (TLL), which is a quantum critical state with fermionic spin-1/2 spinon excitation. In the case of the spin-1/2 Heisenberg F-AF alternating chain, the ground state is essentially the same as the spin-1 Haldane state [24; 25]. Two spins coupled by F interaction can be regarded as an effective spin-1, and two spin-1/2 particles on different spin-1 sites form a singlet dimer via AF interaction, as illustrated in Fig. 2(d). We consider the Heisenberg spin Hamiltonian \(H_{\rm uni}\) for uniform chain and \(H_{\rm alt}\) for alternating chain given by \[H_{\rm uni}=\sum_{i=1}^{N}(J_{\rm AF1}\mathbf{S}_{i}\!\cdot\!\mathbf{S}_ {i+i}),\\ H_{\rm alt}=\sum_{i=1}^{N}(J_{\rm F}\mathbf{S}_{2i-1}\!\cdot\!\mathbf{S}_ {2i}+J_{\rm AF2}\mathbf{S}_{2i}\!\cdot\!\mathbf{S}_{2i+1}), \tag{1}\] where \(\mathbf{S}\) is the spin-1/2 operator, and \(N\) is the system size. The string order parameter for this system is defined by \[O_{\rm str}=-4\langle S_{2i}^{z}{\rm exp}[i\pi(S_{2i+1}^{z}+S_{2i+2}^{z}+\cdots +S_{2j-2}^{z})]S_{2j-1}^{z}\rangle, \tag{2}\] which indicates hidden topological order with a specific value in the Haldane phase [44; 45; 46; 47]. \(O_{\rm str}\) was evaluated as 0.989 for the present F-AF alternating chain, whereas \(O_{\rm str}\) for the AF uniform chain approached zero, as shown in Fig. 2(e). The evaluated value of \(O_{\rm str}\) was consistent with the range of 0.38 \(<\)\(O_{\rm str}\)\(<\)1 for the effective spin-1 Haldane phase [24]. We calculated the ground state energies of the AF uniform and F-AF alternating chains by numerically diagonalizing the Hamiltonian. Figure 2(f) shows the system size dependence of the calculated ground state energy \(E\) per spin site, which was normalized by \(J_{\rm AF1}\). It was confirmed that the effective Haldane state for the F-AF alternating chain has a distinctly lower energy than that of the TLL in the AF uniform chain. If we start from \(J_{\rm F}\)=0, the ground state energy of the alternation chain with \(J_{\rm AF2}\)\(>\)\(|J_{\rm F}|\) is primarily lowered by the second-order perturbations of \(J_{\rm F}\); hence, the sign of \(J_{\rm F}\) does not have a significant effect on the value of \(E\). 
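As an aside, readers who want to reproduce the qualitative comparison above can use the minimal brute-force diagonalization of the two Hamiltonians in Eq. (1) sketched below. It builds both chains on a short periodic lattice with the exchange constants quoted in the text and compares the ground-state energies per site normalized by \(J_{\rm AF1}\). This is an illustration only, feasible for small \(N\), and is not the Lanczos or QMC code used for the published numbers.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

sx = csr_matrix(np.array([[0, 0.5], [0.5, 0]]))
sy = csr_matrix(np.array([[0, -0.5j], [0.5j, 0]]))
sz = csr_matrix(np.array([[0.5, 0], [0, -0.5]]))

def site_op(op, i, N):
    """Embed a single-site spin operator at site i of an N-site chain."""
    return kron(identity(2**i), kron(op, identity(2**(N - i - 1))), format="csr")

def chain_hamiltonian(bonds, N):
    """H = sum_i J_i S_i.S_{i+1} with periodic boundary; bonds[i] couples sites i, i+1."""
    H = csr_matrix((2**N, 2**N), dtype=complex)
    for i, J in enumerate(bonds):
        j = (i + 1) % N
        for op in (sx, sy, sz):
            H = H + J * (site_op(op, i, N) @ site_op(op, j, N))
    return H.real          # the Heisenberg Hamiltonian is real in the S^z basis

N = 12                                         # small periodic chain (illustration only)
J_AF1, J_AF2, J_F = 119.0, 177.0, -57.0        # exchange constants (K) quoted in the text

H_uni = chain_hamiltonian([J_AF1] * N, N)
H_alt = chain_hamiltonian([J_F if i % 2 == 0 else J_AF2 for i in range(N)], N)

for name, H in (("uniform AF chain", H_uni), ("F-AF alternating chain", H_alt)):
    E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
    print(f"{name:>24s}: E0 / (N * J_AF1) = {E0 / (N * J_AF1):+.4f}")
```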
If we assume the AF-AF alternating chain with a positive ratio of \(J_{\rm F}\)/\(J_{\rm AF2}\) = 0.32, the ground state energy is evaluated as \(E\) = -0.565 at \(N\)=20, which is extremely close to \(E\) = -0.564 of the F-AF alternating chain. Moreover, the \(E\) for the AF dimer (\(J_{\rm F}\) =0) given by -(3/4)(\(J_{\rm AF2}\)/\(J_{\rm AF1}\))/2 = -0.558 is also close to that of the alternating chains, demonstrating that the most critical factor that lowers Figure 1: (color online) (a) Molecular structure of (\(p\)-MePy-V-\(p\)-CN)PF\({}_{6}\). (b) Singly occupied molecular orbital of \(p\)-MePy-V-\(p\)-CN. The purple (green) color indicates isosurfaces of the wave function with positive (negative) sign. (c) 1D structure forming the spin-1/2 AF uniform chain and (d) interchain structure viewed along the chain direction for \(T>T_{\rm SP}\). (e) 1D structure forming the spin-1/2 F-AF alternating chain and (f) interchain structure viewed along the chain direction for \(T<T_{\rm SP}\). Hydrogen atoms, PF\({}_{6}\) anions, and CH\({}_{3}\)CN molecules are omitted for clarity. The broken lines indicate N-N short contacts in the molecular pairs associated with \(J_{\rm AF1}\), \(J_{\rm AF2}\), and \(J_{\rm F}\). The purple spheres represent spin-1/2 in each molecule and are mainly located in the central ring with four N atoms. The broken circle encloses \(p\)-MePy-V-\(p\)-CN radicals comprising each spin chain structure. the ground state energy (gain of magnetic energy) in the SP transition is the absolute value of \(J_{\rm AF2}\). Here, we examine the conversion of the AF uniform chain to the F-AF alternating chain, that is, the conversion of \(J_{\rm AF1}\) to \(J_{\rm F}\) and \(J_{\rm AF2}\), in terms of molecular orbital coupling. Our verdazyl radical can exhibit a delocalized \(\pi\)-electron spin density even on non-planar molecular structures, yielding flexible MOs, which allows the modulation of intermolecular exchange interactions. The MO of the \(p\)-MePy-V-\(p\)-CN radical was modulated by changing the dihedral angle at the SP transition. The dihedral angles \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) in Fig. 3(a) for the two crystallographically independent molecules in the low-temperature phase exhibited changes from high-temperature phase of (\(\Delta\theta_{1}\), \(\Delta\theta_{2}\), \(\Delta\theta_{3}\))=(3.0\({}^{\circ}\), -4.4\({}^{\circ}\), 0.9\({}^{\circ}\)) and (1.6\({}^{\circ}\), 7.2\({}^{\circ}\), -4.7\({}^{\circ}\)), respectively. Moreover, to evaluate the changes in intermolecular distance associated with the SP transition, we defined a coordinate system with the origin as the average position of four nitrogen atoms, as shown in Fig. 3(b). As the \(xy\)-plane is defined to be parallel to the central ring of the molecule, the changes in the \(z\)-direction and the \(xy\)-plane almost correspond to the intermolecular distance and the lateral shift of the facing molecular pairs associated with the exchange interactions. Accordingly, we evaluated the changes in the values mentioned above from the position in the high-temperature phase: (\(\Delta x\), \(\Delta y\), \(\Delta z\)) = (0.03 A, -0.03 A, -0.14 A) for \(J_{\rm AF2}\) and (\(\Delta x\), \(\Delta y\), \(\Delta z\)) = (0.07 A, 0.26 A, -0.16 A) for \(J_{\rm F}\). For \(J_{\rm AF2}\), the lateral shift was very slight, revealing that the main structural change accompanying the SP transition was a contraction in the 1D stacking direction. 
The evaluated \(\Delta z\) was actually identical to the change in the N-N short contact. Conversely, a relatively large lateral shift for the molecular pair associated with \(J_{\rm F}\) was observed. Lateral shifts often reduce the energy gap between the highest occupied MO (HOMO) and the lowest unoccupied MO (LUMO) and produce mostly degenerate HOMO and LUMO, which leads to intermolecular F exchange interactions. We simulated the change in \(J_{\rm F}\) with respect to the lateral shift using the MO calculation, in which the origin is defined at the actual position of the molecular pair associated with the \(J_{\rm F}\), as shown in Fig. 3(c). The sign of \(J_{\rm F}\) changed dramatically depending on the lateral shift. In particular, the changes in the \(y\)-direction were remarkable. Considering that the largest shift of 0.26 A is estimated in the \(y\)-direction, the calculated data demonstrate that the drastic change in the intermolecular exchange interaction can be caused by the structural change associated with the SP transition. The face-to-face approach observed in the molecular pair associated with \(J_{\rm AF2}\) enhances the overlapping of SOMOs, which generally increases the AF exchange interactions. Given that the ground state energy of the alternation chain strongly depends on the magnitude of the AF interaction, as described in the numerical analysis, the present system gains magnetic energy by increasing the AF interaction, that is, the conversion of \(J_{\rm AF1}\) to \(J_{\rm AF2}\). Therefore, the lattice distortion that enhance the AF interaction at the SP transition is considered to induce the lateral shift of another molecular pair in the 1D stacking contributing to a stable MO overlap, resulting in the conversion of \(J_{\rm AF1}\) to \(J_{\rm F}\). Figure 2: (color online) (a) Temperature dependence of magnetic susceptibility (\(\chi=M/H\)) of (\(p\)-MePy-V-\(p\)-CN)PF\({}_{6}\)-CH\({}_{3}\)CN at 1.0 T, where 3.4 % paramagnetic impurities due to the lattice defects are subtracted assuming the Curie contribution. The arrow indicates SP transition temperature \(T_{\rm SP}\). The solid lines with open triangles and squares represent the results calculated by the QMC method. (b) Temperature dependence of \(\chi\) near \(T_{\rm SP}\) at 1.0 T and 3.0 T. \(T\)-up and \(T\)-down represent measurements with heating and cooling processes, respectively. (c) Temperature dependence of the specific heat \(C_{\rm p}\) of (\(p\)-MePy-V-\(p\)-CN)PF\({}_{6}\)-CH\({}_{3}\)CN at 0 T. (d) Valence bond picture of the effective Haldane state in the spin-1/2 F-AF alternating chain. The ovals represent the valence bond singlet pairs of the two \(S=1/2\) spins. (e) System size \(N\) dependence of string order parameter \(O_{\rm str}\) and (f) ground state energy \(E\) for the spin-1/2 AF uniform chain and the spin-1/2 F-AF alternating chain. Here, \(E\) is normalized by \(J_{\rm AF1}\) for both cases. The solid lines indicate fitting curves with \(N^{-1/4}\) for \(O_{\rm str}\) and \(N^{-2}\) for \(E\). The arrows indicate the values evaluated by extrapolation to \(N\)\(\rightarrow\)\(\infty\). The conventional SP transition to the AF-AF alternating chain predicted \(T_{\rm SP}=0.8\eta J/k_{\rm B}\), where \(J\) is the exchange interaction in the uniform chain, and \(\eta\) is the generalized spin-lattice coupling parameter [48]. If this relationship is applied to the present system, we obtain \(\eta\approx 0.74\). 
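The quoted coupling parameter follows directly from inverting \(T_{\rm SP}=0.8\eta J/k_{\rm B}\) with \(J=J_{\rm AF1}\); a one-line check using the values given above is:

```python
T_SP = 70.0      # K, observed spin-Peierls transition temperature
J_AF1 = 119.0    # K, exchange coupling of the uniform chain (J / k_B)

eta = T_SP / (0.8 * J_AF1)
print(f"eta = {eta:.2f}")    # ~ 0.74, matching the value quoted in the text
```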
The SP systems reported thus far, including inorganic compounds, have smaller \(\eta\) values. The large value of \(\eta\) in the present system suggests that the flexibility of the MOs in our radical systems yields an effectively strong spin-lattice coupling. In summary, we successfully synthesized a model compound that exhibited an unconventional SP transition. The spin-1/2 uniform AF chain was converted to a spin-1/2 F-AF alternating chain, forming the effective spin-1 Haldane state at the SP transition. The present results demonstrate the flexibility of MOs in organic radical systems, which can easily change the magnitude and sign of exchange interactions, realizing unconventional quantum phenomena caused by spin-lattice couplings. This study thus opens a research area focusing on quantum phenomena generated by strong spin-lattice couplings in radical-based magnets. ###### Acknowledgements. We thank S. Shimono and Y. Kubota for valuable discussions and Y. Hosokoshi for letting us use the laboratory equipment. This research was partly supported by the Asahi Glass Foundation and the joint-research program of the Institute for Molecular Science.
2308.04101
Extensions of Yamamoto-Nayak's Theorem
A result of Nayak asserts that $\underset{m\to \infty}\lim |A^m|^{1/m}$ exists for each $n\times n$ complex matrix $A$, where $|A| = (A^*A)^{1/2}$, and the limit is given in terms of the spectral decomposition. We extend the result of Nayak, namely, we prove that the limit of $\underset{m\to \infty}\lim |BA^mC|^{1/m}$ exists for any $n\times n$ complex matrices $A$, $B$, and $C$ where $B$ and $C$ are nonsingular; the limit is obtained and is independent of $B$. We then provide generalization in the context of real semisimple Lie groups.
Huajun Huang, Tin-Yau Tam
2023-08-08T07:30:45Z
http://arxiv.org/abs/2308.04101v3
# Extensions of Yamamoto-Nayak's theorem ###### Abstract. A result of Nayak asserts that \(\lim\limits_{m\to\infty}\lvert A^{m}\rvert^{1/m}\) exists for each \(n\times n\) complex matrix \(A\), where \(\lvert A\rvert=(A^{*}A)^{1/2}\), and the limit is given in terms of the spectral decomposition. We extend the result of Nayak, namely, we prove that the limit of \(\lim\limits_{m\to\infty}\lvert BA^{m}C\rvert^{1/m}\) exists for any \(n\times n\) complex matrices \(A\), \(B\), and \(C\), where \(B\) and \(C\) are nonsingular; the limit is obtained and is independent of \(B\). We then provide a generalization in the context of real semisimple Lie groups. 2020 Mathematics Subject Classification: 15A18, 15A45, 22E46 ## 1. Introduction Let \(\mathbb{N}\) (resp. \(\mathbb{R}\), \(\mathbb{C}\)) be the set of positive integers (resp. real numbers, complex numbers). Let \(M_{n}(\mathbb{C})\) (resp. \(M_{n}(\mathbb{R})\)) denote the set of \(n\times n\) complex (resp. real) matrices, \(\operatorname{GL_{n}}(\mathbb{C})\) the group of \(n\times n\) complex nonsingular matrices, and \(\operatorname{U}(n)\) the group of \(n\times n\) unitary matrices. Let \(\mathbb{P}_{n}\) (resp. \(\overline{\mathbb{P}}_{n}\)) be the set of positive definite (resp. positive semidefinite) matrices in \(M_{n}(\mathbb{C})\). For \(A\in M_{n}(\mathbb{C})\), let \(\lvert A\rvert=(A^{*}A)^{1/2}\) and let \(\lambda_{1}(A),\dots,\lambda_{n}(A)\) denote the eigenvalues of \(A\), counting multiplicities, in the way that \(\lvert\lambda_{1}(A)\rvert\geq\lvert\lambda_{2}(A)\rvert\geq\dots\geq\lvert\lambda_{n}(A)\rvert\). Define the following: * \(\lambda(A)=(\lambda_{1}(A),\dots,\lambda_{n}(A))\) the \(n\)-tuple of eigenvalues; * \(\lvert\lambda\rvert(A)=(\lvert\lambda_{1}(A)\rvert,\dots,\lvert\lambda_{n}(A)\rvert)\) the \(n\)-tuple of eigenvalue moduli in non-increasing order; * \(s(A)=(s_{1}(A),\dots,s_{n}(A))\) the \(n\)-tuple of singular values of \(A\) with \(s_{1}(A)\geq s_{2}(A)\geq\dots\geq s_{n}(A)\). Beurling-Gelfand's spectral radius formula asserts that [1, p.70] for \(A\in M_{n}(\mathbb{C})\): \[\lim\limits_{m\to\infty}\lVert A^{m}\rVert^{1/m}=\rho(A), \tag{1.1}\] where \(\rho(A)\) denotes the spectral radius of \(A\) and \(\lVert A\rVert=s_{1}(A)\) denotes the spectral norm of \(A\). Indeed it is true for all matrix norms [2, p.349]. It is an interesting asymptotic result as it relates algebraic and analytic properties of \(A\) in a nice way. Yamamoto generalized the result to all singular values [3] \[\lim\limits_{m\to\infty}[s_{i}(A^{m})]^{1/m}=\lvert\lambda_{i}(A)\rvert,\quad i=1,\dots,n. \tag{1.2}\]
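Yamamoto's limit (1.2) is easy to probe numerically: for a generic matrix \(A\), the \(m\)-th roots of the singular values of \(A^{m}\) approach the eigenvalue moduli of \(A\). The sketch below is not part of the paper; the normalization by the spectral radius is only there to keep \(A^{m}\) within floating-point range, and convergence can be slow when eigenvalue moduli nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A /= np.max(np.abs(np.linalg.eigvals(A)))   # scale so rho(A) = 1 (avoids overflow in A^m)

# eigenvalue moduli in non-increasing order: the claimed limit in (1.2)
target = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]

for m in (1, 16, 64, 256):
    s = np.linalg.svd(np.linalg.matrix_power(A, m), compute_uv=False)
    print(f"m = {m:4d}: s_i(A^m)^(1/m) = {np.round(s ** (1.0 / m), 4)}")
print(f"    target |lambda_i(A)| = {np.round(target, 4)}")
```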
**Lemma 2.1**.: _Suppose that the sequences \(\{A_{m}\}_{m\in\mathbb{N}}\), \(\{B_{m}\}_{m\in\mathbb{N}}\), \(\{C_{m}\}_{m\in\mathbb{N}}\subseteq\overline{\mathbb{P}}_{n}\) satisfy_ 1. \(A_{m}\leq B_{m}\leq C_{m}\) _for all_ \(m\in\mathbb{N}\)_, and_ 2. \(\lim_{m\to\infty}A_{m}=B=\lim_{m\to\infty}C_{m}\) _for some_ \(B\in\mathbb{P}_{n}\)_,_ _then \(\lim_{m\to\infty}B_{m}=B\)._ Proof.: Since \(\lim_{m\to\infty}C_{m}=B\), the sequence \(\{C_{m}\}_{m\in\mathbb{N}}\) is contained in a compact set of \(\mathbb{P}_{n}\). So is the sequence \(\{B_{m}\}_{m\in\mathbb{N}}\). For every limit point \(B^{\prime}\) of \(\{B_{m}\}_{m\in\mathbb{N}}\) we have \(B\leq B^{\prime}\leq B\) so that \(B^{\prime}=B\). Therefore, \(\lim_{m\to\infty}B_{m}=B\). **Lemma 2.2**.: _Suppose that the sequence \(\{B_{m}\}_{m\in\mathbb{N}}\subseteq\mathrm{GL}_{n}(\mathbb{C})\) satisfies_ \[\lim_{m\to\infty}s_{n}(B_{m})^{1/m}=1=\lim_{m\to\infty}s_{1}(B_{m})^{1/m}. \tag{2.1}\] _Then for any \(\{A_{m}\}_{m\in\mathbb{N}}\subseteq M_{n}(\mathbb{C})\) and \(A\in\mathbb{P}_{n}\),_ \[\lim_{m\to\infty}|A_{m}|^{1/m}=A\quad\Rightarrow\quad\lim_{m\to\infty}|B_{m}A_{m}|^{1/m}=A. \tag{2.2}\] Proof.: In the Löwner order, \[s_{n}(B_{m})^{2}I_{n}\leq B_{m}^{*}B_{m}\leq s_{1}(B_{m})^{2}I_{n}. \tag{2.3}\] Hence for \(A_{m}\in M_{n}(\mathbb{C})\): \[A_{m}^{*}(B_{m}^{*}B_{m}-s_{n}(B_{m})^{2}I_{n})A_{m}\geq 0, \tag{2.4}\] \[A_{m}^{*}(s_{1}(B_{m})^{2}I_{n}-B_{m}^{*}B_{m})A_{m}\geq 0, \tag{2.5}\] which give \[s_{n}(B_{m})^{2}A_{m}^{*}A_{m}\leq A_{m}^{*}B_{m}^{*}B_{m}A_{m}\leq s_{1}(B_{m})^{2}A_{m}^{*}A_{m}. \tag{2.6}\] By [7, Theorem 1.5.9], the function \(X\mapsto X^{1/(2m)}\) is monotone on \(\mathbb{P}_{n}\). So \[s_{n}(B_{m})^{1/m}|A_{m}|^{1/m}\leq|B_{m}A_{m}|^{1/m}\leq s_{1}(B_{m})^{1/m}|A_{m}|^{1/m}. \tag{2.7}\] By (2.1) and Lemma 2.1, when \(\lim_{m\to\infty}|A_{m}|^{1/m}=A\), we get \[\lim_{m\to\infty}|B_{m}A_{m}|^{1/m}=\lim_{m\to\infty}|A_{m}|^{1/m}=A.\qed\] **Remark 2.3**.: Condition (2.1) is equivalent to each of the following conditions: 1. \(\lim_{m\to\infty}|B_{m}|^{1/m}=I_{n}\). 2. \(\lim_{m\to\infty}\rho(B_{m})^{1/m}=1=\lim_{m\to\infty}\rho(B_{m}^{-1})^{1/m}\). 3. For every norm \(\|\cdot\|^{\prime}\) of \(M_{n}(\mathbb{C})\), \[\lim_{m\to\infty}\|B_{m}\|^{\prime 1/m}=1=\lim_{m\to\infty}\|B_{m}^{-1}\|^{\prime 1/m},\] since every norm is equivalent to the spectral norm and \(s_{n}(B_{m})=\|B_{m}^{-1}\|^{-1}\). **Remark 2.4**.: Lemma 2.2 still holds when \(\{A_{m}\}_{m\in\mathbb{N}}\) are \(n\times r\) complex matrices and \(A\in\mathbb{P}_{r}\), and the proof is analogous. Given \(A\in M_{n}(\mathbb{C})\) and \(I,J\subseteq[n]\), let \(A[I,J]\) denote the submatrix of \(A\) with rows indexed by \(I\) and columns indexed by \(J\), and abbreviate \(A[p]=A[[p],[p]]\) for \(p\in[n]\). **Lemma 2.5**.: _Let \(D=\mathrm{diag}(d_{1},\ldots,d_{n})\in M_{n}(\mathbb{R})\), where \(d_{1}\geq\cdots\geq d_{n}\geq 0\). Then for any nonsingular lower triangular matrix \(L\in\mathrm{GL}_{\mathrm{n}}(\mathbb{C})\), we have_ \[\lim_{m\to\infty}|D^{m}L|^{1/m}=D.
\tag{2.8}\] Proof.: Rewrite \[D=\mathrm{diag}(d_{1},\ldots,d_{n})=\mu_{1}I_{n_{1}}\oplus\cdots\oplus\mu_{k}I_ {n_{k}}, \tag{2.9}\] where \(k\in\mathbb{N}\), \(\mu_{1}>\cdots>\mu_{k}\geq 0\), and \(n_{1},\ldots,n_{k}\in\mathbb{N}\) such that \(n_{1}+\cdots+n_{k}=n\). If \(k=1\) then \(D=d_{1}I_{n}\) and (2.8) is obviously true. We assume \(k\geq 2\) in the following proof. Denote \[|D^{m}L|^{1/m}:=X_{m}:=U_{m}D_{m}U_{m}^{*}, \tag{2.10}\] where \(U_{m}\in\mathrm{U}(n)\) and \[D_{m}=\mathrm{diag}(d_{m,1},d_{m,2},\ldots,d_{m,n}), \tag{2.11}\] with \(d_{m,1}\geq d_{m,2}\geq\cdots\geq d_{m,n}\geq 0\). The spectral norm \(\|\cdot\|\) is submultiplicative, that is \(\|AB\|\leq\|A\|\|B\|\) for all \(A,B\in M_{n}(\mathbb{R})\). So \[\|X_{m}\|=\|D^{m}L\|^{1/m}\leq(\|D\|^{m}\|L\|)^{1/m}=(\|L\|)^{1/m}d_{1}. \tag{2.12}\] Since \(\lim_{m\to\infty}(\|L\|)^{1/m}=1\), the sequence \(\{X_{m}\}_{m\in\mathbb{N}}\) is contained in a bounded closed subset of \(\overline{\mathbb{P}}_{n}\), which is compact. It remains to show that every limit point of \(\{X_{m}\}_{m\in\mathbb{N}}\) equals to \(D\). For every \(p\in[n]\) and \(K\subseteq[n]\) with \(|K|=p\), first apply Cauchy-Binet formula to \(X_{m}^{2m}=U_{m}D_{m}^{2m}U_{m}^{*}\) to have \[\det((X_{m}^{2m})[[p],K]) \tag{2.13}\] \[= \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}\det(U_{m}[[p],J])\ \det(D_{m}^{2m}[J,J])\ \det(U_{m}^{*}[J,K])\] \[= \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}\det(U_{m}[[p],J])\ \overline{\det(U_{m}[K,J])}\ (\prod_{j\in J}d_{m,j})^{2m}.\] Applying Cauchy-Binet formula to \(X_{m}^{2m}=|D^{m}L|^{2}=L^{*}D^{2m}L\) yields \[\det((X_{m}^{2m})[[p],K]) = \det((L^{*}D^{2m}L)[[p],K]) \tag{2.14}\] \[= \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}\det(L^{*}[[p],J])\det(D^{2m}[J,J])\det(L[J,K])\] \[= \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}\overline{\det(L[J,[p]])}\det(L[J,K])(\prod_{j\in J}d_{j}) ^{2m}.\] In particular, (2.13) and (2.14) imply that for \(p\in[n]\), \[\det((X_{m}^{2m})[p]) = \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}|\det(U_{m}[[p],J])|^{2}(\prod_{j\in J}d_{m,j})^{2m} \tag{2.15}\] \[= \sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}|\det(L[J,[p]])|^{2}(\prod_{j\in J}d_{j})^{2m}\] (2.16) \[\geq |\det(L[p])|^{2}(\prod_{j=1}^{p}d_{j})^{2m}. \tag{2.17}\] We also have \[\sum_{\begin{subarray}{c}J\subseteq[n]\\ |J|=p\end{subarray}}|\det(U_{m}[[p],J])|^{2}=\det((U_{m}U_{m}^{*})[p])=1. \tag{2.18}\] Suppose that \(X\) is a limit point of \(\{X_{m}\}_{m\in\mathbb{N}}\), and \(\{X_{m\ell}\}_{\ell\in\mathbb{N}}\subseteq\{X_{m}\}_{m\in\mathbb{N}}\) satisfying \[X=\lim_{\ell\to\infty}X_{m_{\ell}}=\lim_{\ell\to\infty}U_{m_{\ell}}D_{m_{\ell} }U_{m_{\ell}}^{*}. \tag{2.19}\] By (1.3), \(s(X)=|\lambda|(D)=(d_{1},\ldots,d_{n})\). So \(\lim_{\ell\to\infty}D_{m_{\ell}}=D\). The sequence \(\{U_{m_{\ell}}\}_{\ell\in\mathbb{N}}\) is in the compact unitary group \(\mathrm{U}(n)\); hence it has a converging subsequence. By refining and reindexing the subsequence, we may assume that \(\lim_{\ell\to\infty}U_{m_{\ell}}=U\) for some \(U\in\mathrm{U}(n)\). Then \(X=UDU^{*}\). Since \(\lim_{m\to\infty}D_{m}=D=\mu_{1}I_{n_{1}}\oplus\cdots\oplus\mu_{k}I_{n_{k}}\), for \(J\subseteq[n]\), \(|J|=n_{1}\) and \(J\neq[n_{1}]\): \[\lim_{m\to\infty}\Big{(}\prod_{j\in J}d_{m,j}\Big{)}^{2m}/\Big{(}\prod_{i\in[ n_{1}]}d_{i}\Big{)}^{2m}=\lim_{m\to\infty}\Big{(}\prod_{j\in J}\frac{d_{m,j}}{ \mu_{1}}\Big{)}^{2m}=0. 
\tag{2.20}\] Consider \(\{X_{m_{\ell}}\}_{\ell\in\mathbb{N}}\). By (2.15), (2.16), and (2.18) for \(p=n_{1}\), we have \[\lim_{\ell\to\infty}\det((X_{m_{\ell}}^{2m_{\ell}})[n_{1}])/\Big{(} \prod_{i\in[n_{1}]}d_{i}\Big{)}^{2m_{\ell}} \tag{2.21}\] \[= |\det(U[n_{1}])|^{2}\lim_{\ell\to\infty}\Big{(}\prod_{j\in[n_{1}] }\frac{d_{m_{\ell},j}}{\mu_{1}}\Big{)}^{2m_{\ell}}\] \[= |\det(L[p])|^{2}>0. \tag{2.22}\] Now let \(K\subseteq[n]\), \(|K|=n_{1}\), and \(K\neq[n_{1}]\). Since \(L\) is lower triangular, we have \(\det(L[[n_{1}],K])=0\), so that by (2.14), \[\lim_{\ell\to\infty}\det((X_{m_{\ell}}^{2m_{\ell}})[[n_{1}],K])/\Big{(}\prod_{ i\in[n_{1}]}d_{i}\Big{)}^{2m_{\ell}}=0. \tag{2.23}\] Together with (2.13), we have \[0 = \lim_{\ell\to\infty}\det((X_{m_{\ell}}^{2m_{\ell}})[[n_{1}],K])/ \Big{(}\prod_{i\in[n_{1}]}d_{i}\Big{)}^{2m_{\ell}} \tag{2.24}\] \[= \det(U[n_{1}])\;\overline{\det(U[K,[n_{1}]])}\;\lim_{\ell\to\infty }\Big{(}\prod_{j\in[n_{1}]}\frac{d_{m_{\ell},j}}{\mu_{1}}\Big{)}^{2m_{\ell}}.\] Therefore, (2.22) and (2.24) imply that \(\det(U[n_{1}])\neq 0\) and \(\det(U[K,[n_{1}]])=0\) for any \(K\subseteq[n]\), \(|K|=n_{1}\), and \(K\neq[n_{1}]\). The rows of \(U[n_{1}]\) form a basis of the row vector space of dimension \(n_{1}\). If there exists \(j\in[n]\setminus[n_{1}]\) such that \(U[\{j\},[n_{1}]]\neq 0\), then there is \((c_{1},\ldots,c_{n_{1}})\neq(0,\ldots,0)\) such that \[U[\{j\},[n_{1}]]=\sum_{i\in[n_{1}]}c_{i}U[\{i\},[n_{1}]]. \tag{2.25}\] Suppose \(c_{s}\neq 0\) for certain \(s\in[n_{1}]\). Then for \(K=([n_{1}]\cup\{j\})\setminus\{s\}\) we get \(\det(U[K,[n_{1}]])\neq 0\), which is a contradiction. Hence \(U\in\mathrm{U}(n)\) has the form \[U=\begin{bmatrix}U[n_{1}]&*\\ 0&*\end{bmatrix}=\begin{bmatrix}U[n_{1}]&0\\ 0&*\end{bmatrix}. \tag{2.26}\] For each \(t\in[k-1]\), applying (2.13) - (2.18) for \(p=n_{1}+\cdots+n_{t}\) and analogous arguments to \(\{X_{m_{\ell}}\}_{\ell\in\mathbb{N}}\), we have \[U=\begin{bmatrix}U[n_{1}+\cdots+n_{t}]&0\\ 0&*\end{bmatrix}. \tag{2.27}\] Therefore, \(U=U_{1}\oplus\cdots\oplus U_{k}\), where each \(U_{i}\in\mathrm{U}(n_{i})\). Overall, \[X=UDU^{*}=(U_{1}\oplus\cdots\oplus U_{k})(\mu_{1}I_{n_{1}}\oplus\cdots\oplus \mu_{k}I_{n_{k}})(U_{1}^{*}\oplus\cdots\oplus U_{k}^{*})=D. \tag{2.28}\] So every limit point of \(\{X_{m}\}_{m\in\mathbb{N}}\) equals to \(D\) and thus (2.8) is proved. The complete multiplicative Jordan decomposition (CMJD) of \(A\in\mathrm{GL}_{n}(\mathbb{C})\) is \(A=EHU\), where * \(E\) is _elliptic_, that is, \(E\) is diagonalizable and \(|\lambda|(E)=(1,\ldots,1)\); * \(H\) is _hyperbolic_, that is, \(H\) is diagonalizable and \(\lambda(H)=|\lambda|(H)\); * \(U\) is _unipotent_, that is, \(\lambda(U)=(1,\ldots,1)\); * the components \(E\), \(H\), and \(U\) mutually commute. The CMJD of \(A\) is unique [8]. Explicitly, suppose that \(A\) is similar to the Jordan canonical form \[A=M\big{(}\bigoplus_{i=1}^{k}J_{A}(\mu_{i})\big{)}M^{-1}, \tag{2.29}\] where \(\mu_{1},\ldots,\mu_{k}\) are the distinct eigenvalues of \(A\) and each \(J_{A}(\mu_{i})\) for \(i\in[k]\) is the direct sum of Jordan blocks of \(A\) associated to the eigenvalue \(\mu_{i}\); we arrange \(\mu_{1},\ldots,\mu_{k}\) in the way that \(|\mu_{1}|\geq\cdots\geq|\mu_{k}|\); let \(n_{i}\) (\(i\in[k]\)) be the algebraic multiplicity of the eigenvalue \(\mu_{i}\). 
The CMJD \(A=EHU\) is given by: \[E = M\big{(}\bigoplus_{i=1}^{k}\frac{\mu_{i}}{|\mu_{i}|}I_{n_{i}}\big{)} M^{-1}, \tag{2.30}\] \[H = M\big{(}\bigoplus_{i=1}^{k}|\mu_{i}|I_{n_{i}}\big{)}M^{-1},\] (2.31) \[U = M\big{(}\bigoplus_{i=1}^{k}\frac{1}{\mu_{i}}J_{A}(\mu_{i}) \big{)}M^{-1}, \tag{2.32}\] in which \(\bigoplus_{i=1}^{k}|\mu_{i}|I_{n_{i}}=\mathrm{diag}(|\lambda_{1}(A)|,\ldots,| \lambda_{n}(A)|)\). When \(A\in M_{n}(\mathbb{C})\) is singular, the CMJD of \(A\) is not well-defined. However, \(A\) still has the hyperbolic component \(H\) defined by (2.31), where we assume that \(|\mu_{1}|\geq\cdots\geq|\mu_{k-1}|>|\mu_{k}|=0\). Let \[E^{\prime} = M\big{(}\bigoplus_{i=1}^{k-1}\frac{\mu_{i}}{|\mu_{i}|}I_{n_{i} }\oplus I_{n_{k}}\big{)}M^{-1}, \tag{2.33}\] \[U^{\prime} = M\big{[}\bigoplus_{i=1}^{k-1}\frac{1}{\mu_{i}}J_{A}(\mu_{i}) \oplus(I_{n_{k}}+J_{A}(0))\big{]}M^{-1}. \tag{2.34}\] Then \(E^{\prime}\) is elliptic, \(U^{\prime}\) is unipotent, \(E^{\prime}\), \(H\), and \(U^{\prime}\) mutually commute, and when \(m\) is no less than the maximal size of the Jordan blocks of \(A\) associated to eigenvalue \(0\), we have \[A^{m}=E^{\prime m}H^{m}U^{\prime m}. \tag{2.35}\] **Theorem 2.6**.: _Suppose \(A\in M_{n}(\mathbb{C})\). Let \(H=MDM^{-1}\) be the hyperbolic element of \(A\) given in (2.31), in which \(D=\mathrm{diag}(|\lambda_{1}(A)|,\ldots,|\lambda_{n}(A)|).\) For any \(B,C\in\mathrm{GL}_{\mathrm{n}}(\mathbb{C})\), let \(M^{-1}C=LQ\) in which \(L\) is lower triangular and \(Q\) is unitary. Then_ \[\lim_{m\to\infty}|BA^{m}C|^{1/m}=Q^{*}DQ, \tag{2.36}\] _in which the limit is independent of \(B\) and the choice of \(M\). Analogous result is true for \(A\in M_{n}(\mathbb{R})\), \(B,C\in\mathrm{GL}_{\mathrm{n}}(\mathbb{R})\) and the unitary matrix \(Q\) may be replaced by a real orthogonal matrix._ Proof.: We prove for the nonsingular case \(A\in\mathrm{GL}_{\mathrm{n}}(\mathbb{C})\) and use the CMJD \(A=EHU\). The proof for singular \(A\) is similar, in which we use \(A^{m}=E^{\prime m}H^{m}U^{\prime m}\) when \(m\) is sufficiently large. For the nonsingular \(A=EHU\), the components \(E\), \(H=MDM^{-1}\), and \(U\) mutually commute, so that \[BA^{m}C=BE^{m}U^{m}MD^{m}M^{-1}C=(BE^{m}U^{m}M)(D^{m}LQ). \tag{2.37}\] By Lemma 2.5, \[\lim_{m\to\infty}|D^{m}LQ|^{1/m} = \lim_{m\to\infty}(Q^{*}L^{*}D^{2m}LQ)^{1/(2m)} \tag{2.38}\] \[= Q^{*}(\lim_{m\to\infty}|D^{m}L|^{1/m})Q=Q^{*}DQ.\] We claim that \[\lim_{m\to\infty}s_{n}(BE^{m}U^{m}M)^{1/m}=1=\lim_{m\to\infty}s_{1}(BE^{m}U^{m} M)^{1/m}, \tag{2.39}\] so that by (2.37), (2.38), and Lemma 2.2, we get (2.36). The independence of the choice of \(B\) regarding the limit (2.36) is shown in Lemma 2.2. Next we will prove (2.39). By (2.30), the matrices \(E^{m}\) and \(E^{-m}\) for all \(m\in\mathbb{N}\) are in the following compact subset of \(\operatorname{GL}_{\mathrm{n}}(\mathbb{C})\): \[\{M\operatorname{diag}(t_{1},t_{2},\cdots,t_{n})M^{-1}\mid t_{i}\in\mathbb{C},\ |t_{i}|=1,\ i\in[n]\}. \tag{2.40}\] The function \(A\mapsto\|A\|\) is continuous in \(M_{n}(\mathbb{C})\). So it has a maximum \(c>0\) in the compact set (2.40). We get \(\|E^{m}\|<c\) and \(\|E^{-m}\|<c\) for all \(m\in\mathbb{N}\). By (2.32), the unipotent element \(U=M(I+N)M^{-1}\) in which \(N\in M_{n}(\mathbb{C})\) is strictly upper triangular. 
Since \(N^{n}=0\), for all \(m\in\mathbb{N}\), we have \(U^{m}=M(I+N)^{m}M^{-1}\) and \(U^{-m}=M(I+N)^{-m}M^{-1}\) in which the entries of matrices \[(I+N)^{m}=\sum_{i=0}^{n-1}\binom{m}{i}N^{i}\quad\text{and}\quad(I+N)^{-m}= \sum_{i=0}^{n-1}\binom{-m}{i}N^{i} \tag{2.41}\] can be expressed as polynomials of \(N\) of degrees less than \(n\). So are the entries of \(U^{m}\) and \(U^{-m}\). There exists a fixed polynomial \(g(x)\in\mathbb{R}[x]\) such that \(\|U^{m}\|\leq|g(m)|\) and \(\|U^{-m}\|\leq|g(m)|\) for all \(m\in\mathbb{N}\). Therefore, \[s_{1}(BE^{m}U^{m}M) \leq \|B\|\|E^{m}\|\|U^{m}\|\|M\|\leq(c\|B\|\|M\|)|g(m)|, \tag{2.42}\] \[s_{n}(BE^{m}U^{m}M) = \|(BE^{m}U^{m}M)^{-1}\|^{-1}=\|M^{-1}U^{-m}E^{-m}B^{-1}\|^{-1}\] (2.43) \[\geq \|M^{-1}\|^{-1}\|U^{-m}\|^{-1}\|E^{-m}\|^{-1}\|B^{-1}\|^{-1}\] \[\geq (c^{-1}s_{n}(B)s_{n}(M))|g(m)|^{-1}.\] Since \(\lim_{m\to\infty}|g(m)|^{1/m}=1\) for every nonzero polynomial \(g(x)\), the classical Sandwich Theorem implies (2.39). The limit in (2.36) is described in terms of \(Q\), which depends on \(M\). We shall show that the limit in (2.36) is independent of the choice of \(M\) as long as the hyperbolic element \(H\) of \(A\) satisfies that \(H=MDM^{-1}\) for \(D=\operatorname{diag}(|\lambda_{1}(A)|,\ldots,|\lambda_{n}(A)|).\) Suppose that \(\hat{M}\in\operatorname{GL}_{\mathrm{n}}(\mathbb{C})\) is another choice, that is, \(H=\hat{M}D\hat{M}^{-1}\). If \(D=|\gamma_{1}|I_{m_{1}}\oplus\cdots\oplus|\gamma_{s}|I_{m_{s}}\), where \(\gamma_{1}>\cdots>\gamma_{s}\geq 0\), \(m_{1},\ldots,m_{s}\in\mathbb{N}\) and \(m_{1}+\cdots+m_{s}=n\), then \(\hat{M}=M(M_{1}\oplus\cdots\oplus M_{s})\), where \(M_{i}\in\operatorname{GL}_{\mathrm{m_{i}}}(\mathbb{C})\). Consider \[\hat{M}^{-1}C=(M_{1}^{-1}\oplus\cdots\oplus M_{s}^{-1})M^{-1}C=(M_{1}^{-1} \oplus\cdots\oplus M_{s}^{-1})LQ.\] Note that \(\hat{L}:=(M_{1}^{-1}\oplus\cdots\oplus M_{s}^{-1})L\) is in block lower triangular form and the main diagonal blocks are of size \(m_{1},\ldots,m_{s}\). Performing Gram-Schmidt process on the rows of \(\hat{L}\) from the top row to the bottom row, we have \(\hat{L}=L_{1}\hat{Q}\) where \(L_{1}\) is lower triangular and \(\hat{Q}=Q_{1}\oplus\cdots\oplus Q_{s}\) is unitary. Hence \(\hat{M}^{-1}C=L_{1}\hat{Q}Q\). Thus \[(\hat{Q}Q)^{*}D(\hat{Q}Q)=Q^{*}(\hat{Q})^{*}D\hat{Q}Q=Q^{*}DQ,\] which is independent of the choice of \(M\). **Remark 2.7**.: When \(B=C=I_{n}\), Theorem 2.6 recovers Nayak's result (see Theorem 1.1). Suppose \[D=\operatorname{diag}(|\lambda_{1}(A)|,\ldots,|\lambda_{n}(A)|)=\gamma_{1}I_{ m_{1}}\oplus\cdots\oplus\gamma_{s}I_{m_{s}},\] where \(\gamma_{1}>\cdots>\gamma_{s}\geq 0\). The hyperbolic part of \(A\) is \(H=MDM^{-1}\), and \(M=Q^{*}L^{-1}\) where \(Q^{*}\) is unitary and \(L^{-1}\) is lower triangular. If \(M\) and \(Q^{*}\) are partitioned according to the column partition \((m_{1},\ldots,m_{s})\vdash n\) such that \[M=[M_{1}\mid\cdots\mid M_{s}],\qquad Q^{*}=[Q_{1}\mid\cdots\mid Q_{s}],\] then \(E_{j}\) in Theorem 1.1 is the orthogonal projection onto \(\operatorname{Im}[M_{j}\mid\cdots\mid M_{s}]=\operatorname{Im}[Q_{j}\mid \cdots\mid Q_{s}]\). The columns of \(Q^{*}\) are orthonormal. Hence \(E_{j}-E_{j+1}=Q_{j}Q_{j}^{*}\) and we get Theorem 1.1(i): \[\lim_{m\to\infty}|A^{m}|^{1/m}=Q^{*}DQ=\sum_{j=1}^{s}\gamma_{j}Q_{j}Q_{j}^{*}= \sum_{j=1}^{s}\gamma_{j}(E_{j}-E_{j+1}). 
\tag{2.44}\] From (2.29), each \(\operatorname{Im}E_{j}=\operatorname{Im}[M_{j}\mid\cdots\mid M_{s}]\) is the direct sum of generalized eigenspaces of \(A\) with the eigenvalue moduli no more than \(\gamma_{j}\). Apparently, each \(\operatorname{Im}E_{j}\) (and thus each \(\operatorname{Im}E_{j}\setminus\operatorname{Im}E_{j+1}\)) is invariant under the action of \(A\) and all \(A^{k}\) for \(k\in\mathbb{N}\). We get Theorem 1.1(iii). Finally, every nonzero vector \(x\in\mathbb{C}^{n}\) can be uniquely written as \(x=x_{1}+\cdots+x_{s}\) where \(x_{i}\in\operatorname{Im}Q_{i}\) for \(1\leq i\leq s\). Note that \(\operatorname{Im}Q_{i}\) is the \(\gamma_{i}\)-eigenspace for the hyperbolic element \(H\). Given \(x\), suppose \(x\in\operatorname{Im}E_{j}\setminus\operatorname{Im}E_{j+1}\) for certain \(1\leq j\leq s\). Then \(x_{1}=\cdots=x_{j-1}=0\) and \(x_{j}\neq 0\), so that \[A^{m}x=\sum_{i=j}^{s}A^{m}x_{i}=\sum_{i=j}^{s}E^{m}U^{m}(H^{m}x_{i})=\sum_{i=j} ^{s}\gamma_{i}^{m}E^{m}U^{m}x_{i}. \tag{2.45}\] Theorem 2.6 shows that \(\lim_{m\to\infty}|E^{m}U^{m}|^{1/m}=I_{n}\). Remark 2.4 implies that for each nonzero \(y\in\mathbb{C}^{n}\), \[\lim_{m\to\infty}\|E^{m}U^{m}y\|^{1/m}=\lim_{m\to\infty}\|y\|^{1/m}=1. \tag{2.46}\] Therefore, \[\lim_{m\to\infty}\|A^{m}x\|^{1/m} = \lim_{m\to\infty}\|\sum_{i=j}^{s}\gamma_{i}^{m}E^{m}U^{m}x_{i}\|^{1/m}\] \[= \gamma_{j}\lim_{m\to\infty}\|E^{m}U^{m}x_{j}+\sum_{i=j+1}^{s}( \gamma_{i}/\gamma_{j})^{m}E^{m}U^{m}x_{i}\|^{1/m}.\] By (2.46), \(\lim_{m\to\infty}\|E^{m}U^{m}x_{j}\|^{1/m}=1\) and \(\lim_{m\to\infty}(\gamma_{i}/\gamma_{j})^{m}E^{m}U^{m}x_{i}=0\) for each \(j+1\leq i\leq s.\) Therefore, it is not hard to get Theorem 1.1(ii): \[\lim_{m\to\infty}\|A^{m}x\|^{1/m}=\gamma_{j}. \tag{2.47}\] The positive semidefinite part \(|A|^{\prime}\) of the other polar decomposition of \(A=|A|^{\prime}U\), where \(U\in\mathrm{U}(n)\), is \[|A|^{\prime}:=(AA^{*})^{1/2}=|A^{*}|. \tag{2.48}\] It is easy to see that the hyperbolic component of \(A^{*}\) is \(H^{*}\) where \(H\) is defined by (2.31) for all \(A\in M_{n}(\mathbb{C})\). Moreover, if \(A\in\mathrm{GL_{n}}(\mathbb{C})\) and \(A\) has the CMJD \(A=EHU\), then \(A^{*}=E^{*}H^{*}U^{*}\) is the CMJD of \(A^{*}\). We have the following result. **Corollary 2.8**.: _Suppose \(A\in M_{n}(\mathbb{C})\). Let \(H=MDM^{-1}\) be the hyperbolic element of \(A\) given in (2.31), in which \(D=\mathrm{diag}(|\lambda_{1}(A)|,\ldots,|\lambda_{n}(A)|).\) For any \(B,C\in\mathrm{GL_{n}}(\mathbb{C})\), let \(BM=QR\) in which \(Q\) is unitary and \(R\) is upper triangular. Then_ \[\lim_{m\to\infty}|BA^{m}C|^{1/m}=QDQ^{*}, \tag{2.49}\] _in which the limit is independent of \(C\) and the choice of \(M\)._ Proof.: We have \[\lim_{m\to\infty}|BA^{m}C|^{\prime 1/m}=\lim_{m\to\infty}|C^{*}(A^{*})^{m}B^{* }|^{1/m}.\] The hyperbolic component of \(A^{*}\) is \(H^{*}=M^{-*}DM^{*}\), and \(M^{*}B^{*}=R^{*}Q^{*}\) where \(R^{*}\) is lower triangular and \(Q^{*}\) is unitary. Then apply Theorem 2.6 to get the limit. ## 3. Semisimple Lie Group Extensions In [4], Yamamoto's theorem (1.2) was extended in the context of real semisimple Lie groups. Here we will extend Theorem 2.6 in the same context. Theorem 2.6 involves CMJD, polar decomposition/SVD, and the QR decomposition \(C^{*}M^{-*}=Q^{*}L^{*}\) in \(\mathrm{GL_{n}}(\mathbb{C})\). They correspond to CMJD, Cartan decomposition/\(KA_{+}K\) decomposition, and the Iwasawa decomposition in the real semisimple Lie groups. 
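Before passing to Lie groups, the matrix-level statement is easy to probe numerically. The following is a minimal sanity-check sketch (not part of the original text; it assumes NumPy, and the helper names are chosen here purely for illustration): it builds a diagonalizable \(A\) with distinct eigenvalue moduli, extracts the unitary factor \(Q\) of \(M^{-1}C=LQ\) from a QR factorization of the conjugate transpose, and compares \(|BA^{m}C|^{1/m}\) with the limit \(Q^{*}DQ\) predicted by Theorem 2.6.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A diagonalizable A with eigenvalue moduli 1.25 > 1 > 0.8, so its hyperbolic
# part is H = M D M^{-1} with D = diag(1.25, 1, 0.8).
eigvals = np.array([1.25, -1.0, 0.8])
M = rng.standard_normal((n, n))
A = M @ np.diag(eigvals) @ np.linalg.inv(M)
D = np.diag(np.abs(eigvals))

B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# M^{-1} C = L Q with L lower triangular and Q unitary: take a QR factorization
# of the conjugate transpose, so that Q is the adjoint of its unitary factor.
Qt, _ = np.linalg.qr((np.linalg.inv(M) @ C).conj().T)
Q = Qt.conj().T
predicted = Q.conj().T @ D @ Q        # Theorem 2.6: Q* D Q

def abs_root(Y, m):
    """|Y|^{1/m} = (Y* Y)^{1/(2m)}, computed from the SVD Y = U diag(s) V*."""
    _, s, Vh = np.linalg.svd(Y)
    return Vh.conj().T @ np.diag(s ** (1.0 / m)) @ Vh

for m in (20, 40, 70):
    Ym = B @ np.linalg.matrix_power(A, m) @ C
    print(m, np.linalg.norm(abs_root(Ym, m) - predicted))
# The Frobenius error shrinks roughly like O(1/m); for much larger m the smallest
# singular value of B A^m C falls below roundoff and the check degrades.
```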
### Decompositions on real semisimple Lie groups Let \(G\) be a non-compact connected real semisimple Lie group with the corresponding Lie algebra \(\mathfrak{g}\), which must be real semisimple. The Cartan decompositions on \(\mathfrak{g}\) and \(G\) are discussed in [9, VI.2, VI.3]. Explicitly, given a Cartan involution \(\theta\) of \(\mathfrak{g}\), let \(\mathfrak{k}\) (resp. \(\mathfrak{p}\)) be the \(+1\) (resp. \(-1\)) eigenspace of \(\theta\). The decomposition \[\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p} \tag{3.1}\] is a _Cartan decomposition of \(\mathfrak{g}\)_. Let \(\Theta\) be the global Cartan involution on \(G\) with th differential \(\theta\). Denote \[g^{*}=\Theta(g),\qquad g\in G. \tag{3.2}\] The analytic subgroup \(K\) of \(G\) with Lie algebra \(\mathfrak{k}\) is exactly the subgroup of \(G\) fixed by \(\Theta\). Denote \[P:=\exp\mathfrak{p}. \tag{3.3}\] For \(p=\exp X\in P\) where \(X\in\mathfrak{p}\) and \(r\in\mathbb{R}\), \[p^{r}:=\exp(rX)\in P, \tag{3.4}\] which is well defined since the map \(K\times\mathfrak{p}\to G\) defined by \((k,X)\mapsto k\exp X\) is a diffeomorphism onto. Every \(g\in G\) can be uniquely written as \[g=k(g)p(g),\qquad k(g)\in K,\ p(g)\in P, \tag{3.5}\] which is called the _Cartan decomposition of \(g\) in \(G\)_. We have \(k^{*}=k^{-1}\) for \(k\in K\) and \(p^{*}=p\) for \(p\in P\). Hence for \(g\in G\), we have \(g^{*}g\in P\) and \[p(g)=(g^{*}g)^{1/2}. \tag{3.6}\] For simplicity, we denote \(|g|:=p(g)\), which is consistent with the notation in matrices. The analogy of matrix QR decomposition to semisimple Lie groups is the Iwasawa decomposition [9, VI.4]. Let \(\mathfrak{a}\) be a maximal abelian subspace of \(\mathfrak{p}\). With respect to the ad-action, \(\mathfrak{g}\) has the _restricted root space decomposition_ \[\mathfrak{g}=\mathfrak{g}_{0}\oplus\bigoplus_{\lambda\in\Sigma}\mathfrak{g}_ {\lambda} \tag{3.7}\] in which \(g_{\lambda}\) and the set \(\Sigma\) of restricted roots are given by \[\mathfrak{g}_{\lambda} := \{X\in\mathfrak{g}\mid(\operatorname{ad}H)X=\lambda(H)X\ \text{for all}\ H\in\mathfrak{a}\}, \tag{3.8}\] \[\Sigma := \{\lambda\in\mathfrak{a}^{*}\setminus\{0\}\mid\mathfrak{g}_{ \lambda}\neq 0\}. \tag{3.9}\] Fix a _closed_ Weyl chamber \(\mathfrak{a}_{+}\) in \(\mathfrak{a}\). In Lie group \(G\), set \[A:=\exp\mathfrak{a},\qquad A_{+}:=\exp\mathfrak{a}_{+}. \tag{3.10}\] The set \(\Sigma^{+}\) of positive roots in the dual space \(\mathfrak{a}^{*}\) are also fixed by \(\mathfrak{a}_{+}\). Then \[\mathfrak{n}:=\bigoplus_{\lambda\in\Sigma^{+}}\mathfrak{g}_{\lambda} \tag{3.11}\] is a nilpotent Lie subalgebra of \(\mathfrak{g}\). _The Iwasawa decomposition of Lie algebra \(\mathfrak{g}\)_ is the vector space direct sum [9, Proposition 6.43]: \[\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}. \tag{3.12}\] Let \[N:=\exp\mathfrak{n}. \tag{3.13}\] The _Iwasawa decomposition_ of \(G\): \[K\times A\times N\to G,\qquad(k,a,n)\mapsto kan. \tag{3.14}\] is a diffeomorphism onto [9, Proposition 6.46]. The \(KA_{+}K\)_decomposition_ of \(G\) says that [8, Theorem 1.1]: \[G=KA_{+}K. \tag{3.15}\] In particular, we have \(P=\operatorname{Ad}(K)A_{+}\) where \(\operatorname{Ad}(k)a=kak^{-1}\). This decomposition is related to the Cartan decomposition in the way that if \(g=k_{1}ak_{2}\) for \(k_{1},k_{2}\in K\) and \(a\in A_{+}\), then \(g=(k_{1}k_{2})(k_{2}^{-1}ak_{2})\) is the Cartan decomposition of \(g\), where \(k_{1}k_{2}\in K\) and \(k_{2}^{-1}ak_{2}\in P\). 
An element \(h\in G\) is called _hyperbolic_ if \(h=\exp X\) where \(X\in\mathfrak{g}\) is real semisimple, that is, \(\operatorname{ad}X\in\operatorname{End}(\mathfrak{g})\) is diagonalizable over \(\mathbb{R}\). An element \(u\in G\) is called _unipotent_ if \(u=\exp(N)\) where \(N\in\mathfrak{g}\) is nilpotent, that is, \(\operatorname{ad}N\in\operatorname{End}(\mathfrak{g})\) is nilpotent. An element \(e\in G\) is _elliptic_ if \(\operatorname{Ad}(e)\in\operatorname{Aut}(\mathfrak{g})\) is diagonalizable over \(\mathbb{C}\) with eigenvalues of modulus \(1\). _The complete multiplicative Jordan decomposition (CMJD)_ for \(G\) asserts that each \(g\in G\) can be uniquely written as [10, Proposition 2.1] \[g=e(g)h(g)u(g), \tag{3.16}\] where \(e(g)\) is elliptic, \(h(g)\) is hyperbolic, \(u(g)\) is unipotent, and the three elements mutually commute. Moreover, an element \(h\in G\) is hyperbolic if and only if \(h\) is conjugate to a unique element \(b(h)\in A_{+}\)[10, Proposition 2.4]. Denote \[b(g):=b(h(g)). \tag{3.17}\] ### The limit of \(|g_{1}g^{m}g_{2}|^{1/m}\) Theorem 2.6 can be extended to real semisimple Lie groups below. **Theorem 3.1**.: _Let \(G\) be a noncompact connected real semisimple Lie group. Then for any \(g,g_{1},g_{2}\in G\),_ \[\lim_{m\to\infty}|g_{1}g^{m}g_{2}|^{1/m}=kb(g)k^{*}=kb(g)k^{-1}, \tag{3.18}\] _in which_ 1. \(g=ehu\) _is the CMJD of_ \(g\) _in_ \(G\)_, where_ \(e\) _is elliptic,_ \(h=qb(g)q^{-1}\) _is hyperbolic with_ \(b(g)\in A_{+}\) _and_ \(q\in G\)_, and_ \(u\) _is unipotent,_ 2. \(k\) _comes from the Iwasawa decomposition_ \(g_{2}^{*}q^{-*}=kan\)_, where_ \(k\in K\)_,_ \(a\in A\)_, and_ \(n\in N\)_._ _In particular, the choices of \(g_{1}\) and \(q\) do not affect the limit (3.18)._ Proof.: Let \(G\) have dimension \(n\). We look at the adjoint representations \(\operatorname{Ad}:G\to\operatorname{Aut}(\mathfrak{g})\). There exists an orthonormal basis (with respect to the Killing form of \(\mathfrak{g}\)) such that \(\operatorname{Ad}(K)\) (resp. \(\operatorname{Ad}(P)\), \(\operatorname{Ad}(A)\), \(\operatorname{Ad}(N)\)) consists of unitary (resp. positive definite, positive diagonal, unit upper triangular) matrices in \(\operatorname{GL}_{\operatorname{n}}(\mathbb{C})\). Moreover, we have \(\operatorname{Ad}(g^{*})=\operatorname{Ad}(g)^{*}\) and \(\operatorname{Ad}(|g|)=|\operatorname{Ad}(g)|\) for \(g\in G\) under the basis. For \(g\in G\) and the corresponding CMJD \(g=ehu\), we have the decomposition \(\operatorname{Ad}(g)=\operatorname{Ad}(e)\operatorname{Ad}(h)\operatorname{ Ad}(u)\) in which \(\operatorname{Ad}(e)\) is diagonalizable over \(\mathbb{C}\) with eigenvalues of modulus \(1\), \(\operatorname{Ad}(u)=\operatorname{Ad}(\exp(N))=\exp(\operatorname{ad}(N))\) is unipotent, and \(\operatorname{Ad}(h)=\operatorname{Ad}(\exp(X))=\exp(\operatorname{ad}(X))\) is hyperbolic. Thus the CMJD of the matrix \(\operatorname{Ad}(g)\) is \(\operatorname{Ad}(g)=\operatorname{Ad}(e)\operatorname{Ad}(h)\operatorname {Ad}(u)\). As \(h\) is hyperbolic, there is \(q\in G\) such that \(h=qb(g)q^{-1}\), where \(b(g)\in A_{+}\)[10, Proposition 2.4]. So \(\operatorname{Ad}(h)=\operatorname{Ad}(q)\operatorname{Ad}(b(g))\operatorname {Ad}(q)^{-1}\). Note that \((q^{-1}g_{2})^{*}=kan\) by Iwasawa decomposition. 
Thus \[\operatorname{Ad}(q^{-1}g_{2})=\operatorname{Ad}(n^{*})\operatorname{Ad}(a^{*})\operatorname{Ad}(k^{*})=(\operatorname{Ad}(n))^{*}(\operatorname{Ad}(a))(\operatorname{Ad}(k))^{-1}\] in which \((\operatorname{Ad}(n))^{*}\) is unit lower triangular and \(\operatorname{Ad}(a)\) is diagonal. By Theorem 2.6, \[\lim_{m\to\infty}|\operatorname{Ad}(g_{1})\operatorname{Ad}(g)^{m}\operatorname{Ad}(g_{2})|^{1/m}=\operatorname{Ad}(k)\operatorname{Ad}(b(g))\operatorname{Ad}(k)^{-1}=\operatorname{Ad}(kb(g)k^{-1}).\] As the adjoint representation \(\operatorname{Ad}:G\to\operatorname{Aut}\mathfrak{g}\), \(g\mapsto\operatorname{Ad}(g)\) is continuous, we have \[\operatorname{Ad}(\lim_{m\to\infty}|g_{1}g^{m}g_{2}|^{1/m}) = \lim_{m\to\infty}\operatorname{Ad}(|g_{1}g^{m}g_{2}|^{1/m})\] \[= \lim_{m\to\infty}(\operatorname{Ad}(|g_{1}g^{m}g_{2}|))^{1/m}\] \[= \lim_{m\to\infty}|\operatorname{Ad}(g_{1}g^{m}g_{2})|^{1/m}\] \[= \lim_{m\to\infty}|\operatorname{Ad}(g_{1})\operatorname{Ad}(g)^{m}\operatorname{Ad}(g_{2})|^{1/m}\] \[= \operatorname{Ad}(kb(g)k^{-1}).\] Hence every limit point of \(\{|g_{1}g^{m}g_{2}|^{1/m}\}_{m\in\mathbb{N}}\) has the form \(zkb(g)k^{-1}\), where \(z\) is in the center \(Z\) of \(G\). On one hand, the CMJD of \(zkb(g)k^{-1}\) is \(z(kb(g)k^{-1})\) in which \(z\) is elliptic and \(kb(g)k^{-1}\) is hyperbolic. On the other hand, every limit point of \(\{|g_{1}g^{m}g_{2}|^{1/m}\}_{m\in\mathbb{N}}\subseteq P\) must be in \(P\), which is hyperbolic [10, Proposition 6.2]. By the uniqueness of CMJD, we conclude that \(z\) is the identity. As a result, we have \[\lim_{m\to\infty}|g_{1}g^{m}g_{2}|^{1/m}=kb(g)k^{-1}.\] We are going to show that the limit (3.18) is independent of the choice of \(q\). Let \(\hat{q}\in G\) such that \(h=\hat{q}b(g)\hat{q}^{-1}\). Then \(\hat{q}=qr\), where \(r\) fixes \(b(g)\) via conjugation, that is, \(rb(g)r^{-1}=b(g)\). Let \(g_{2}^{*}q^{-*}=kan\) and \(g_{2}^{*}\hat{q}^{-*}=\hat{k}\hat{a}\hat{n}\) according to the Iwasawa decomposition. By Theorem 2.6, \[\operatorname{Ad}(\lim_{m\to\infty}|g_{1}g^{m}g_{2}|^{1/m})=\operatorname{Ad}(kb(g)k^{-1})=\operatorname{Ad}(\hat{k}b(g)\hat{k}^{-1})\] as the limit is independent of \(\operatorname{Ad}(q)\). So \(kb(g)k^{-1}=z(\hat{k}b(g)\hat{k}^{-1})\), where \(z\in Z\) is elliptic. As \(kb(g)k^{-1}\) and \(\hat{k}b(g)\hat{k}^{-1}\) are both in \(P\), they are hyperbolic. By the uniqueness of CMJD, \(z\) is the identity and hence \(kb(g)k^{-1}=\hat{k}b(g)\hat{k}^{-1}\). Thus the limit (3.18) is independent of the choice of \(q\). Similarly the limit is independent of \(g_{1}\). **Remark 3.2**.: Once we fix the Cartan decomposition, \(G=KP\), the left side \(\lim_{m\to\infty}|g_{1}g^{m}g_{2}|^{1/m}\) of (3.18) is clearly fixed. In other words, the right side \(kb(g)k^{-1}\) of (3.18) is independent of the choice of \(A\) and \(A_{+}\) which determines the fundamental roots and vice versa, and \(N\), as long as the Iwasawa decomposition is of the form \(KAN\). Here is an explanation. If we choose another maximal abelian subspace \(\tilde{\mathfrak{a}}\) of \(\mathfrak{p}\), set \(\tilde{A}:=\exp\tilde{\mathfrak{a}}\) and fix a positive Weyl chamber \(\tilde{A}_{+}\), then [9, p.378] \(\tilde{\mathfrak{a}}=\operatorname{Ad}(v)\mathfrak{a}\) for some \(v\in K\) and thus \(\tilde{A}=vAv^{-1}\); furthermore, we may choose \(v\) in the way that \(\tilde{A}_{+}=vA_{+}v^{-1}\) and thus \(\tilde{N}=vNv^{-1}\).
So \[g_{2}^{*}\tilde{q}^{-*}=g_{2}^{*}(qv^{-1})^{-*}=g_{2}^{*}(q)^{-*}v^{-1}=kanv^{-1}=\tilde{k}\tilde{a}\tilde{n},\] where \[\tilde{k}=kv^{-1}\in K,\quad\tilde{a}=vav^{-1}\in\tilde{A},\quad\tilde{n}=vnv^{-1}\in\tilde{N}.\] Regarding the unique hyperbolic element \(h\) of \(g\), \[h=qb(g)q^{-1}=\tilde{q}\tilde{b}(g)\tilde{q}^{-1},\] where \(\tilde{b}(g)=vb(g)v^{-1}\in\tilde{A}_{+}\) and \(\tilde{q}=qv^{-1}\). As a result, we have \[\tilde{k}\tilde{b}(g)\tilde{k}^{-1}=(kv^{-1})vb(g)v^{-1}(kv^{-1})^{-1}=kb(g)k^{-1},\] that is, the limit is independent of the choice of \(A\) and \(A_{+}\), and thus independent of the choice of \(N\).
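As a concrete special case of Theorem 3.1, one can take \(G=\mathrm{SL}_{2}(\mathbb{R})\) with \(K=\mathrm{SO}(2)\), \(A\) the positive diagonal matrices of determinant one, \(N\) the unit upper triangular matrices, and \(g^{*}=g^{T}\); the Iwasawa decomposition is then an ordinary QR factorization. The short sketch below (ours, not from the text; plain NumPy with illustrative variable names) checks the limit \(kb(g)k^{-1}\) numerically for a hyperbolic \(g\).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_sl2(rng):
    x = rng.standard_normal((2, 2))
    if np.linalg.det(x) < 0:
        x[:, 0] = -x[:, 0]
    return x / np.sqrt(np.linalg.det(x))   # determinant one

# A hyperbolic g in SL(2, R): a conjugate of b(g) = diag(lam, 1/lam) in A_+.
lam = 1.2
q = random_sl2(rng)
b_g = np.diag([lam, 1.0 / lam])
g = q @ b_g @ np.linalg.inv(q)
g1, g2 = random_sl2(rng), random_sl2(rng)

# Iwasawa g2^T q^{-T} = k a n: a QR factorization whose triangular factor has a
# positive diagonal; k lands in SO(2) because the determinant is positive.
k, r = np.linalg.qr(g2.T @ np.linalg.inv(q).T)
k = k @ np.diag(np.sign(np.diag(r)))
predicted = k @ b_g @ k.T               # Theorem 3.1: k b(g) k^{-1}

def abs_root(y, m):
    """|y|^{1/m} = (y^T y)^{1/(2m)} via the SVD of y."""
    _, s, vh = np.linalg.svd(y)
    return vh.T @ np.diag(s ** (1.0 / m)) @ vh

for m in (20, 40, 80):
    ym = g1 @ np.linalg.matrix_power(g, m) @ g2
    print(m, np.linalg.norm(abs_root(ym, m) - predicted))
# As in the matrix case, the error decays slowly (roughly O(1/m)).
```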
2310.00893
Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss
Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) for classification tasks that makes use of similarities in the embedding space to allow for richer representations. In this work, we propose methods to engineer the geometry of these learnt feature embeddings by modifying the contrastive loss. In pursuit of adjusting the geometry we explore the impact of prototypes, fixed embeddings included during training to alter the final feature geometry. Specifically, through empirical findings, we demonstrate that the inclusion of prototypes in every batch induces the geometry of the learnt embeddings to align with that of the prototypes. We gain further insights by considering a limiting scenario where the number of prototypes far outnumber the original batch size. Through this, we establish a connection to cross-entropy (CE) loss with a fixed classifier and normalized embeddings. We validate our findings by conducting a series of experiments with deep neural networks on benchmark vision datasets.
Jaidev Gill, Vala Vakilian, Christos Thrampoulidis
2023-10-02T04:23:17Z
http://arxiv.org/abs/2310.00893v1
# Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss ###### Abstract Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) for classification tasks that makes use of similarities in the embedding space to allow for richer representations. In this work, we propose methods to engineer the geometry of these learnt feature embeddings by modifying the contrastive loss. In pursuit of adjusting the geometry we explore the impact of prototypes, fixed embeddings included during training to alter the final feature geometry. Specifically, through empirical findings, we demonstrate that the inclusion of prototypes in every batch induces the geometry of the learnt embeddings to align with that of the prototypes. We gain further insights by considering a limiting scenario where the number of prototypes far outnumber the original batch size. Through this, we establish a connection to cross-entropy (CE) loss with a fixed classifier and normalized embeddings. We validate our findings by conducting a series of experiments with deep neural networks on benchmark vision datasets. ## 1 Introduction Understanding the structure of the learned features of deep neural networks has gained significant attention through a recent line of research surrounding a phenomenon known as _Neural Collapse_ (NC) formalized by [1]. The authors of [1] have found that, when training a deep-net on balanced datasets with cross-entropy (CE) loss beyond zero training error, the feature embeddings collapse to their corresponding class mean and align with the learned classifier, overall forming a simplex equiangular tight frame (ETF) geometry. In other words, at the terminal phase of training, the class-mean embeddings form an implicit geometry described by vectors of equal norms and angles that are maximally separated. Following [2], we call this geometry "implicit," since it is not enforced by an explicit regularization, but rather induced by common optimizers, such as SGD. A number of followup studies have provided further analysis on the NC phenomenon [3, 4, 5] and extended the implicit-geometry characterization of CE to imbalanced data [6, 2]. At the same time, [7, 8, 9] have shown, both empirically and theoretically, that such characterizations can be extended to other loss functions, specifically the supervised-contrastive loss (SCL). Figure 1: Comparison of Gram matrices \(\mathbf{G_{M}}\) at last epoch (350) trained on STEP imbalanced CIFAR-10 and ResNet-18 with (Top) vanilla SCL (\(n_{w}=0\)) (Middle) Class averaging (BCL) [8] satisfying class representation requirements through batch binding [9] (Bottom) SCL with (\(n_{w}=100\)) prototypes. Drawing inspiration from unsupervised contrastive learning [10], SCL was proposed by [11] as a substitute to CE for classification. Specifically, SCL makes use of semantic information by directly contrasting learned features. [7] was the first to theoretically analyze the implicit geometry of SCL, demonstrating that it forms an ETF when data is balanced. However, when the label distribution is imbalanced, the geometry changes. To combat this, [8] proposed a training framework, which they called balanced contrastive learning (BCL), improving the generalization test accuracy under imbalances. Their framework uses a class averaging modification to SCL alongside a set of \(k\) _trainable_ prototypes, representing class centers, trained using a logit adjusted cross-entropy [12, 13]. According to the authors, the BCL framework drives the implicit geometry to an ETF.
In another related work, drawing inspiration from unsupervised frameworks such as MoCo [14], [15] introduced PaCo, a supervised contrastive method that also takes advantage of such trainable class centers to improve test accuracy under imbalances. These works collectively suggest that prototypes can play a crucial role in determining the implicit geometry when training with SCL. However, in their respective frameworks, prototypes are treated as trainable parameters, optimized alongside various other heuristics and modifications. Thus, it is challenging to ascertain their specific impact on the training process. This raises the question _what is the direct impact of prototypes on the SCL geometry when isolated from other modifications?_ In order to answer this question, this paper investigates the implicit geometry of SCL with _fixed_ prototypes, departing from the conventional approach of using trainable prototypes. We introduce a new method to incorporate fixed prototypes to the SCL training by augmenting each batch with copies of all class prototypes. Our experimental results demonstrate that choosing prototypes that form an ETF leads to a remarkably accurate convergence of the embeddings' implicit geometry to an ETF, _regardless of imbalances_. Furthermore, this convergence is achieved with a moderate number of prototype copies per batch. Importantly, we argue that the computational overhead incurred by our SCL approach with fixed prototypes remains independent of the number of copies of prototypes, motivating an investigation into its behavior as the number of copies increases. In this limit, we derive a simplified form of SCL and subsequently prove that the implicit geometry indeed becomes an ETF when ETF prototypes are selected. Intriguingly, this simplified SCL form resembles the CE loss with fixed classifiers, albeit utilizing normalized embeddings. Finally, realizing the flexibility of choosing prototypes that form an arbitrary target geometry, we pose the question: _Is it possible to tune the learned features to form an arbitrary and possibly asymmetric geometry?_ Through experiments on deep-nets and standard datasets, we demonstrate that by selecting prototypes accordingly we can achieve implicit geometries that deviate from symmetric geometries, such as ETFs. ## 2 Tuning Geometry with Prototypes **Setup.** We consider a \(k\)-class classification task with training dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i}):i\in[N]\}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) are the \(N\) training points with labels \(y_{i}\in[k]\).1 The SCL loss is optimized over batches \(B\subset[N]\) belonging to a batch-set \(\mathcal{B}\). Concretely, \(\mathcal{L}:=\sum_{B\in\mathcal{B}}\mathcal{L}_{B}\), where the loss for each batch \(B\) is given below as Footnote 1: We denote \([N]:=\{1,2,\ldots,N\}\). \[\mathcal{L}_{B}:=\sum_{i\in B}\frac{1}{n_{B,y_{i}}-1}\sum_{\begin{subarray}{c}j\in B\\ j\neq i\\ y_{j}=y_{i}\end{subarray}}\log\big{(}\sum_{\begin{subarray}{c}\ell\in B\\ \ell\neq i\end{subarray}}\exp\left(\mathbf{h}_{i}^{\top}\mathbf{h}_{\ell}-\mathbf{h}_{i}^{\top}\mathbf{h}_{j}\right)\big{)}\,. \tag{1}\] Here, \(\mathbf{h}_{i}:=\mathbf{h}_{\mathbf{\theta}}(\mathbf{x}_{i})\in\mathbb{R}^{d}\) is the last-layer learned feature-embedding corresponding to the original training point \(\mathbf{x}_{i}\) for a network parameterized by parameters \(\mathbf{\theta}\). Also, \(n_{B,y_{i}}\) is the number of samples sharing the same label as \(\mathbf{h}_{i}\) in the batch \(B\).
Lastly, we let \(|B|=n\) be the batch size. As per standard practice [10, 11], we assume a normalization layer as part of the last layer, hence \(\|\mathbf{h}_{i}\|=1\)\(\forall i\in[N]\). It is also common to include a scaling of the inner products by a temperature parameter \(\tau\)[11]; since this can be absorbed in the normalization, we drop it above for simplicity. **Methodology.** Inspired by the class-complement method of [8], the learnable class centers of [15], and the batch-binding algorithm of [9], we propose using _fixed_ prototypes. These prototypes collectively form a desired reference geometry for the embeddings to learn. **Definition 1** (Prototype).: A _prototype_\(\mathbf{w}_{c}\in\mathbb{R}^{d}\) for class \(c\in[k]\) is a fixed vector that represents the desired representation of embeddings \(\{\mathbf{h}_{i}\}_{y_{i}=c}\) in class \(c\). Our method optimizes SCL with a new batch \(\{\mathbf{h}_{i}\}_{i\in B}\cup\mathcal{W}\), where \(\mathcal{W}:=\bigcup_{i=1}^{n_{w}}\{\mathbf{w}_{1},\mathbf{w}_{2},\ldots, \mathbf{w}_{k}\}\) and \(n_{w}\) is the number of added prototypes per class. We highlight two key aspects of this strategy. (i) First, as \(n_{w}\) increases, there is _no increase_ in the computational complexity of the loss computation. This is because the number of required inner product computations between embeddings increases from \(\nicefrac{{n^{2}}}{{2}}\) in vanilla SCL (Eq. (1)) to \(\nicefrac{{n^{2}}}{{2}}+nk\) when prototypes are introduced. This increase is solely due to the presence of \(k\) distinct prototypes and remains constant regardless of the value of \(n_{w}.\) As we will see, this aspect becomes critical as increasing the number of prototypes can help the learned embeddings converge faster to the chosen prototype geometry (see Defn. 2) with minimal added computational overhead (at least when \(k=O(n)\)). (ii) Second, we guarantee that prototypes are fixed and form a suitable, engineered geometry, defined formally in Definition 2 below. In particular, this is in contrast to [15] where prototypes are learned, and [8] which conjectures that the trained prototypes form an ETF. **Definition 2** (Prototype Geometry).: Given a set of prototypes \(\{\mathbf{w}_{c}\}_{c\in[k]}\) the prototype geometry is characterized by a symmetric matrix \(\mathbf{G}_{*}=\mathbf{W}^{\top}\mathbf{W}\) where \(\mathbf{W}=[\mathbf{w}_{1}\cdots\mathbf{w}_{k}].\) Experiments.To display the impact of prototypes on feature geometry, we train a ResNet-18 [16] backbone with a two layer MLP projector head [11] using prototypes. We train the model with batch doubling [9] resulting in a batch size of \(n=2048\), a constant learning rate of \(0.1\), and temperature parameter \(\tau=0.1\) as in [9]. We modify CIFAR-10 (\(k=10\)) such that the first 5 classes contain 5000 examples and the last 5 classes have \(5000/R\) samples with imbalance ratios \(R=10,100\). In Fig. 1, we compare the final epoch geometry of models trained with vanilla SCL, BCL [8] without prototype training and logit-adjusted CE [13], and, SCL with 100 prototypes per class. The figure suggest that the embeddings trained with SCL and prototypes form an ETF geometry irrespective of imbalance ratio. On the other hand, SCL and BCL geometries are noticeably less symmetric with angles between minority centers decreasing with higher imbalance ratios. This highlights the impact of prototypes on achieving an ETF geometry, and further emphasizes their importance within frameworks such as BCL. 
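To make the batch-augmentation scheme concrete, the following is a minimal NumPy sketch (ours, not the authors' training code; `etf_prototypes` and `scl_with_prototypes` are illustrative names). It constructs \(k\) unit-norm ETF prototypes as in Definition 2 and evaluates the SCL objective of Eq. (1) on a batch augmented with \(n_{w}\) copies of every prototype, per Definition 1.

```python
import numpy as np

def etf_prototypes(k, d, seed=0):
    """k unit-norm columns in R^d (d >= k) with pairwise inner products -1/(k-1)."""
    rng = np.random.default_rng(seed)
    u, _ = np.linalg.qr(rng.standard_normal((d, k)))        # orthonormal columns
    return np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)

def scl_with_prototypes(H, y, W, n_w):
    """Eq. (1) on the augmented batch {h_i} U (n_w copies of each prototype).

    H: (n, d) unit-norm embeddings, y: (n,) labels in {0,...,k-1},
    W: (d, k) fixed prototypes, n_w >= 2 so every anchor has a positive."""
    Z = np.vstack([H] + [W.T] * n_w)
    labels = np.concatenate([y] + [np.arange(W.shape[1])] * n_w)
    S = Z @ Z.T                                              # pairwise inner products
    idx = np.arange(len(labels))
    loss = 0.0
    for i in idx:
        pos = idx[(labels == labels[i]) & (idx != i)]        # same-label partners of i
        for j in pos:
            loss += np.log(np.exp(S[i, idx != i] - S[i, j]).sum()) / len(pos)
    return loss

# toy usage: 4 classes, 16-dim embeddings, 3 prototype copies per class
k, d, n, n_w = 4, 16, 32, 3
W = etf_prototypes(k, d)
print(np.round(W.T @ W, 3))                                  # 1 on the diagonal, -1/3 off it
rng = np.random.default_rng(1)
H = rng.standard_normal((n, d))
H /= np.linalg.norm(H, axis=1, keepdims=True)
y = rng.integers(0, k, size=n)
print(scl_with_prototypes(H, y, W, n_w))
```

Because the prototype rows are exact duplicates, a practical implementation only needs the \(nk\) extra inner products between the embeddings and the \(k\) distinct prototypes, which is the point made in item (i) above.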
To study the impact of the number of prototypes we define the concept of Geometric Convergence (see Defn. 3 below) and compare the convergence to ETF (\(\Delta_{\mathbf{G}_{ETF}}\)) when training with SCL using different number of prototypes \(n_{w}=0,10,50,100\). As illustrated in Fig. 2, without prototypes (\(n_{w}=0\)) SCL does _not_ converge to ETF. However, simply adding 100 total prototype examples to the batch significantly improves convergence to the ETF geometry (\(n_{w}=10\)). Moreover, once the prototypes make up \(\sim 20\%\) of the batch size, convergence is nearly perfect (see \(n_{w}=50\)). This observation motivates the study of SCL when \(n_{w}\) outnumbers the training datapoints within the batch. **Definition 3** (Geometry Convergence).: We say that the geometry of learned embeddings has successfully converged if \(\mathbf{G}_{\mathbf{M}}\rightarrow\mathbf{G}_{*},\) where \(\mathbf{G}_{*}\) is as given in Defn. 2. Here, \(\mathbf{G}_{\mathbf{M}}=\mathbf{M}^{\top}\mathbf{M},\)\(\mathbf{M}=[\boldsymbol{\mu}_{1}\cdots\boldsymbol{\mu}_{k}]\) where \(\boldsymbol{\mu}_{c}=\text{Ave}_{y_{i}=c}\mathbf{h}_{i}\). As a measure of convergence, we track \(\Delta_{\mathbf{G}_{*}}=\|\mathbf{G}_{\mathbf{M}/\|\mathbf{G}_{\mathbf{M}}\| _{F}}-\mathbf{G}_{*}/\|\mathbf{G}_{*}\|_{F}\|_{F}\). ## 3 Connection to Cross-Entropy Having seen the impact of increasing the number of prototypes (\(n_{w}\)) on the learnt geometry, it is natural to ask how the prototypes impact the loss as these prototypes begin to outnumber the original number of samples in the batch. Further, this sheds light on prototype-based methods that help improve test accuracy such as BCL [8] and PaCO [15] as both losses include multiplicative hyperparameters to tune the impact of prototypes. **Proposition 1**.: _Let \(\hat{n}:=k\cdot n_{w}\) be the total number of prototypes added to the batch, and \(n\) be the original batch size. Then in the limit \(\hat{n}\gg n\) the batch-wise SCL loss becomes,_ \[\mathcal{L}_{B}\rightarrow-\sum_{i\in B}\left[\log\left(\frac{\exp(\mathbf{ w}_{y_{i}}^{\top}\mathbf{h}_{i})}{\sum\limits_{c\in[k]}\exp(\mathbf{w}_{c}^{ \top}\mathbf{h}_{i})}\right)+\mathbf{w}_{y_{i}}^{\top}\mathbf{h}_{i}\right]\] As shown in Prop. 1, in the presence of a large number of prototypes, optimizing SCL is akin to optimizing cross-entropy with a fixed classifier. **Remark 1**.: _This setting is remarkably similar to [17] that trains CE loss with fixed classifiers forming an ETF geometry. However two key differences emerge: (i) the features and prototypes are normalized, i.e. \(\|\mathbf{h}_{i}\|=1\)\(\forall i\in B\), \(\|\mathbf{w}_{c}\|=1\)\(\forall c\in[k]\), and (ii) here, there is an additional alignment-promoting regularization induced by the inner product between \(\mathbf{h}_{i}\) and \(\mathbf{w}_{y_{i}}\). As we will see below, we also explore choices of prototypes that deviate from the ETF geometry._ Figure 2: Convergence metric \(\Delta_{\mathbf{G}_{ETF}}\) tracked throughout training of ResNet-18 on STEP-imbalanced (\(R=10,100\)) CIFAR-10 while varying the number \(n_{w}\) of prototypes per class. As \(n_{w}\) increases, the feature geometry exhibits a stronger convergence to ETF. As described in Rem. 1, optimizing CE with a fixed classifier has been previously studied [17, 18]; however, typically embeddings are not normalized and different geometries have yet to be considered. 
In particular, we have empirically found that normalizing embeddings leads to faster geometry convergence consistent with the results of [18]. Lastly, we arrive at this setting from an entirely different view, one of understanding SCL with prototypes. Below, we use the simplified loss in Proposition 1, to analytically study the geometry of embeddings in the specific setting (that of the experiments of Section 2) where prototypes form an ETF. To facilitate the analysis, we adopt the unconstrained-features model (UFM) [5], where the embeddings, \(\mathbf{h}_{i}\), are treated as free variables. **Proposition 2**.: _If \(\{\mathbf{w}_{c}\}_{c\in[k]}\) form an equiangular tight frame, i.e. \(\mathbf{w}_{c}^{\top}\mathbf{w}_{c^{\prime}}=\frac{-1}{k-1}\) for \(c\neq c^{\prime}\), then the optimal embeddings align with their corresponding prototype, \(\mathbf{h}_{i}^{*}=\mathbf{w}_{y_{i}}\)._ Prop. 1 and Prop. 2 (the proofs of which are deferred to the appendix) emphasize the impact of prototypes on SCL optimization showing that in the limit \(n_{w}\gg n\), ETF is the optimal geometry. However, as mentioned in [19, 20, 12] allowing for better separability for minority classes can potentially improve generalization. Thus we now consider convergence to non-symmetric geometries, which could potentially favor minority separability. In Fig. 3, we use the limiting form of SCL given in Prop. 1 to illustrate the final learnt geometry \(\mathbf{G_{M}}\) of features trained using three possible prototype geometries: 1) **ETF** which assigns equal angles between prototypes 2) **Improved Minority Angles** which assigns a larger angle between prototypes belonging to the minority classes and 3) **Majority Collapse**, an extreme case which assigns the same prototype for all majority classes, forcing the majority class features to collapse to the same vector. Models are trained with a similar setup as in Fig. 2 and Fig. 1 albeit with learning rate annealing of 0.1 at epochs 200 and 300 as we observed that it expedites convergence. It is clear in Fig. 3 that the learnt features can be significantly altered based on the choice of prototypes allowing for geometries with more clear separability of minority classes. In summary, these experiments demonstrate the flexibility of SCL with prototypes, and create an opportunity to explore a wide variety of prototype geometries. This exploration could lead to identifying geometries that result in improved test performance when trained under label imbalances. We leave this to future work. ## 4 Concluding Remarks In this work, we have isolated and explored the effects of prototypes on supervised-contrastive loss. In doing so, we have identified a reliable method in tuning the learnt embedding geometry. In addition, a theoretical link to cross-entropy was established. Overall, our discoveries indicate that employing fixed prototypes offers a promising avenue for streamlining framework modifications that typically treat prototypes as trainable parameters without a clear understanding of their direct contribution. Moreover, this opens up an exciting avenue for future research to explore how choosing prototype geometries favoring larger angles for minority classes can positively impact generalization performance. ## Appendix A Proof of Propositions ### Proof of Proposition 1 Let \(|B|=n\), and note that \(\|\mathbf{w}_{c}\|=1,\forall c\in[k]\). 
Then we have that \(\mathcal{L}_{B}\) with prototypes is of the form, Figure 3: Comparison of the Gram matrices (\(\mathbf{G_{M}}\)) of learned embeddings with different prototypes trained with the limiting form of SCL given in Thm. 1. (Top) ETF prototypes (Middle) Large Minority Angles (Bottom) Majority Collapse. \[\mathcal{L}_{B}=\sum\limits_{i\in B}\frac{n_{w}}{n_{B,y_{i}}+n_{w}-1}\mathcal{L}_ {s}+\sum\limits_{c\in[k]}\frac{n_{w}}{n_{B,y_{i}}+n_{w}-1}\mathcal{L}_{p}\] Here, \(\mathcal{L}_{s}\) is the loss accrued while iterating over each sample in the batch and \(\mathcal{L}_{p}\) is the loss accrued while iterating through each prototype and are given as follows: \[\mathcal{L}_{s} :=\sum\limits_{\begin{subarray}{c}j\neq i\\ y_{j}=y_{i}\end{subarray}}\frac{-1}{n_{w}}\log\left(\frac{\exp(\mathbf{h}_{i}^ {\top}\mathbf{h}_{j})}{\sum\limits_{\begin{subarray}{c}j\in[k]}\exp(\mathbf{h }_{i}^{\top}\mathbf{h}_{i})+n_{w}\sum\limits_{c\in[k]}\exp(\mathbf{w}_{c}^{ \top}\mathbf{h}_{i})\end{subarray}}\right)\] \[\qquad-\log\left(\frac{\exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{y_{i }})}{\sum\limits_{\begin{subarray}{c}j\in[n]\\ y_{j}=c\end{subarray}}\exp(\mathbf{h}_{i}^{\top}\mathbf{h}_{i})+n_{w}\sum \limits_{c\in[k]}\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{i})}\right)\,.\] \[\mathcal{L}_{p} :=\sum\limits_{\begin{subarray}{c}j\in[k]\\ y_{j}=j\end{subarray}}-\log\left(\frac{\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{j}) }{\sum\limits_{\begin{subarray}{c}j\in[k]\\ c\in[k]\end{subarray}}\exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{c})+n_{w}\Delta+(n_ {w}-1)e}\right)\] \[-(n_{w}-1)\log\left(\frac{e}{\sum\limits_{\begin{subarray}{c}j \in[k]\\ \neq i\end{subarray}}\exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{c})+n_{w}\Delta+(n_ {w}-1)e}\right)\,.\] Above, we have used \(\Lambda=\sum_{c\neq\hat{c}\hat{c}}\exp(\mathbf{w}_{c}^{\top}\mathbf{w}_{c})\) to denote a fixed constant that is determined after selecting the desired geometry, and for compact notation, we have denote \(e=\exp(1)\). For clarity, we analyze each term \(\mathcal{L}_{s}\) and \(\mathcal{L}_{p}\) separately. First, as \(n_{w}\gg n_{B,y_{i}}\) the first term of \(\mathcal{L}_{s}\) is proportional to \(\nicefrac{{1}}{{n_{w}}}\), we can neglect it. Moreover, as \(n_{w}\) increases, \(\sum\limits_{\begin{subarray}{c}\ell\neq i\\ c\in[k]\end{subarray}}\exp(\mathbf{h}_{\ell}^{\top}\mathbf{h}_{i})\ll n_{w} \sum\limits_{c\in[k]}\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{i})\). Thus, for large \(n_{w}\) we have that, \[\mathcal{L}_{s}\approx-\log\left(\frac{\exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{ y_{i}})}{n_{w}\sum\limits_{c\in[k]}\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{i})}\right)\] Now considering \(\mathcal{L}_{p}\), we have that the denominators of the logarithms are approximately given as, \[\sum\limits_{\begin{subarray}{c}\ell\in[n]\\ \neq i\end{subarray}}\exp(\mathbf{h}_{\ell}^{\top}\mathbf{w}_{c})+n_{w}\Delta+( n_{w}-1)e\approx n_{w}\Delta+(n_{w}-1)e\] Thus we get that, \[\mathcal{L}_{p}\approx\sum\limits_{\begin{subarray}{c}j\in[n]\\ y_{j}=c\end{subarray}}-\log\left(\frac{\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{j}) }{n_{w}\Lambda+(n_{w}-1)e}\right)-\Phi\,,\] where \(\Phi:=(n_{w}-1)\log\left(\frac{e}{n_{w}\Lambda+(n_{w}-1)e}\right)\). Furthermore, in the limit \(\frac{n_{w}}{n_{B,y_{i}}+n_{w}-1}\to 1\). 
Combining the above, the per-batch loss (\(\mathcal{L}_{B}\)) can be expressed as, \[\mathcal{L}_{B}\approx\sum\limits_{i\in B}-\log\left(\frac{\exp (\mathbf{h}_{i}^{\top}\mathbf{w}_{y_{i}})}{\sum\limits_{c\in[k]}\exp(\mathbf{ w}_{c}^{\top}\mathbf{h}_{i})}\right)+\log(n_{w})\] \[+\sum\limits_{\begin{subarray}{c}\ell\in[k]\\ y_{j}=c\end{subarray}}\sum\limits_{\begin{subarray}{c}j\in[n]\\ y_{j}=c\end{subarray}}-\log\left(\exp(\mathbf{w}_{c}^{\top}\mathbf{h}_{j}) \right)-\Phi+\log(n_{w}\Lambda+(n_{w}-1)e)\,.\] Since the optimal embeddings \(\mathbf{h}_{i}^{*}\) are independent of any additive constants on the objective it suffices to drop them during optimization. Thus we arrive at the desired: \[\mathcal{L}_{B}\rightarrow-\sum\limits_{i\in B}\left[\log\left(\frac{\exp( \mathbf{w}_{y_{i}}^{\top}\mathbf{h}_{i})}{\sum\limits_{c\in[k]}\exp(\mathbf{w}_ {c}^{\top}\mathbf{h}_{i})}\right)+\mathbf{w}_{y_{i}}^{\top}\mathbf{h}_{i} \right]\,.\] ### Proof Sketch of Proposition 2 We follow a similar proof technique to [17], thus we only mention the delicate aspects necessary to handle the alignment term. Consider the minimization program given below, with the objective as given by Prop. 1, while relaxing the norm constraint on the embeddings. \[\min_{\|\mathbf{h}_{i}\|^{2}\leq 1}-\sum\limits_{i\in B}\left[\log\left(\frac{\exp (\mathbf{w}_{y_{i}}^{\top}\mathbf{h}_{i})}{\sum\limits_{c\in[k]}\exp(\mathbf{w}_ {c}^{\top}\mathbf{h}_{i})}\right)+\mathbf{w}_{y_{i}}^{\top}\mathbf{h}_{i}\right]\] Now, as a first step, one can define the Lagrangian \(L(\{\mathbf{h}_{i}\},\{\lambda_{i}\})\) for \(i\in[n]\). Noting that \(\{\lambda_{i}\}_{i\in[n]}\) are the dual variables associated with the norm constraints, as in [17] we can prove by contradiction that \(\lambda_{i}\neq 0\). This implies that \(\|\mathbf{h}_{i}\|^{2}=1\) and \(\lambda_{i}>0\) from the KKT conditions. As a next step, we can define \(p_{y}=\frac{\exp(\mathbf{w}_{y}^{\top}\mathbf{h}_{i})}{\sum_{c\in[k]}\exp( \mathbf{w}_{y}^{\top}\mathbf{h}_{i})}\) (as in [17]). From here, one can establish that for \(\tilde{c}\neq\hat{c}\neq y_{i}\) we have that, \[\frac{p_{\hat{c}}}{p_{\hat{c}}}=\frac{\exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{c})}{ \exp(\mathbf{h}_{i}^{\top}\mathbf{w}_{c})}=\frac{\frac{1}{k-1}-2\lambda_{i} \mathbf{h}_{i}^{\top}\mathbf{w}_{\hat{c}}}{\frac{1}{k-1}-2\lambda_{i}\mathbf{ h}_{i}^{\top}\mathbf{w}_{\hat{c}}}\] Taking \(x=\mathbf{h}_{i}^{\top}\mathbf{w}_{\hat{c}}\) we define the function \(\frac{a-bx}{\exp(x)}\). For CE with fixed ETF classifier, the authors of [17] use the monotonicity of \(\frac{\exp(x)}{x}\) to complete the proof. In our case, the function \(\frac{a-bx}{\exp(x)}\) is strictly decreasing under the constraints \(0\leq a\leq 1,b>0\) in the interval \(x\in[-1,1]\). Therefore, it holds that \(\mathbf{h}_{i}^{\top}\mathbf{w}_{\hat{c}}=\mathbf{h}_{i}^{\top}\mathbf{w}_{ \hat{c}}\) and \(p_{\hat{c}}=p_{\hat{c}}=p\). With this fact established, one can directly take the gradient of the Lagrangian, and solve for \(\mathbf{h}_{i}^{*}\), i.e. set \(\nabla_{\mathbf{h}_{i}}L=0\). Using the established facts in this proof sketch one will find that \(\mathbf{h}_{i}^{*}=\mathbf{w}_{y_{i}}\).
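For completeness, here is a small NumPy sketch (ours; names are illustrative, not the authors' code) of the limiting objective in Proposition 1 and the convergence metric \(\Delta_{\mathbf{G}_{*}}\) of Definition 3; evaluated at the Proposition 2 optimum \(\mathbf{h}_{i}=\mathbf{w}_{y_{i}}\), the metric vanishes up to numerical precision.

```python
import numpy as np

def limiting_scl_loss(H, y, W):
    """Proposition 1 limit (additive constants dropped):
    -sum_i [ log softmax(W^T h_i)[y_i] + w_{y_i}^T h_i ]."""
    logits = H @ W                                   # (n, k): inner products w_c^T h_i
    log_z = np.log(np.exp(logits).sum(axis=1))       # log-partition per sample
    align = logits[np.arange(len(y)), y]             # alignment term w_{y_i}^T h_i
    return -np.sum(align - log_z + align)

def geometry_gap(H, y, G_star):
    """Definition 3 metric || G_M/||G_M||_F - G_*/||G_*||_F ||_F,
    with G_M the Gram matrix of the class-mean embeddings."""
    k = G_star.shape[0]
    mu = np.stack([H[y == c].mean(axis=0) for c in range(k)], axis=1)
    G_M = mu.T @ mu
    return np.linalg.norm(G_M / np.linalg.norm(G_M) - G_star / np.linalg.norm(G_star))

# check at the Proposition 2 optimum h_i = w_{y_i} for ETF prototypes
rng = np.random.default_rng(0)
k, d = 4, 16
u, _ = np.linalg.qr(rng.standard_normal((d, k)))
W = np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)   # ETF, unit-norm columns
y = np.repeat(np.arange(k), 5)
H = W[:, y].T
print(limiting_scl_loss(H, y, W), geometry_gap(H, y, W.T @ W))     # second value ~ 0
```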
2310.11746
Realizing topologically protected ghost surface polaritons by lattice transformation optics
While conventional surface waves propagate along the surface and decay perpendicularly from the interface, the ghost surface polaritons show oblique propagation direction with respect to the interface. Here, we have discovered topologically protected ghost surface polaritons by applying the lattice transformation optics method to gyromagnetic photonic crystals. By introducing the transformation optics method to periodic systems, we develop the lattice transformation optics method to engineer the band structures and propagation directions of the surface polaritons. We show that a simple shear transformation on the square lattice can tailor the propagation directions with ease. The reversed ghost surface polariton is discovered by setting a negative shear factor. Interestingly, we find the topological invariant Chern number will change sign when the orientation of the Brillouin zone flipped during the transformation. Our findings open up new avenues for studying ghost surface polaritons and provide a general engineering method for periodic systems.
Xianghong Kong, Chuanjie Hu, Xingsi Liu, Chunqi Zheng, Jianfeng Chen, Huanyang Chen, Cheng-Wei Qiu
2023-10-18T07:04:28Z
http://arxiv.org/abs/2310.11746v2
# Engineering band structures and topological invariants by transformation optics ###### Abstract By introducing the transformation optics method to periodic systems, we show the tunability of the band structures by comparing the results from original spaces and transformed spaces. Interestingly, we find the topological invariant Chern number will change sign when the orientation of the Brillouin zone flipped. The new platform we provided for engineering the band diagram and topological invariant might lead to the development of both transformation optics and photonic topological states. ## I Introduction Since the discovery of transformation optics (TO) [1; 2], it has become a powerful analytical tool for designing various applications such as cloak [1], field concentrator [3], optical black hole [4], and illusion optics [5]. By relating the complex transformed structure to the simple original structure, an intuitive and insightful understanding of the transformed structure can be achieved. Among all the different applications designed by TO, only a very few cover the periodic structure design [6; 7; 8]. However, due to the existence of phase factor in the Bloch function, the band diagram of the transformed structure cannot be predicted from the original structure except at \(\Gamma\) point [8]. The recent discovery of photonic topological insulator [9; 10] has attracted many researchers to develop new structures and mechanisms in the photonic platforms, such as photonic spin Hall effect [11; 12; 13], photonic valley Hall effect [14; 15], high-order photonic topological insulator [16; 17], nodal lines [18; 19], and so on. Due to the bulk-boundary correspondence, edge mode could be discovered at the boundary of nontrivial photonic crystals, which are immune to local disorders and defects. Although the symmetry indicator [20; 21] and recently developed deep learning techniques [22; 23] do help in designing various photonic topological structures, a more intuitive analytic design is still waiting to be discovered. In this paper, the TO method is applied to the period system, where we discover the precise relation between the band diagram of the transformed structure and the original structure. Furthermore, the topological invariant Chern number is calculated and compared, where we find the Chern number would change its sign if the orientation of the Brillouin zone flips. By introducing the TO method into the topology study, we can not only broaden the research scope of the TO method but also engineer the topological invariant in an insightful way, which helps deepen the understanding of the photonic topological insulator. ## II Band structures engineering As shown in Fig. 1a, we consider a square lattice (period \(a=1\)m) with a Yttrium-Iron-Garnet (YIG) rod (\(r=0.11a\)) in the center. The permittivity of the YIG rod is \(\epsilon=15\epsilon_{0}\) and the permeability tensor is [9]: \[\vec{\mu}=\begin{bmatrix}\mu&i\kappa&0\\ -i\kappa&\mu&0\\ 0&0&\mu_{0}\end{bmatrix} \tag{1}\] where \(\mu=14\mu_{0}\) and \(\kappa=12.4\mu_{0}\). The authors show in Ref. [9] that the lowest four bands of the TM mode are well separated and nontrivial Chern numbers can be achieved in such gyromagnetic photonic crystal. If a linear coordinate transformation is applied to the square lattice as shown in Fig. 1a, the structure will transform into a diamond lattice and the shape of the Brillouin zone will also change. 
According to the transformation optics, the field distributions and materials are transformed in the form of [1]: \[\vec{\epsilon}^{\prime}=\frac{\vec{J}\cdot\vec{\epsilon}\cdot\vec{J}^{T}}{\det\left(\vec{J}\right)},\ \ \vec{\mu}^{\prime}=\frac{\vec{J}\cdot\vec{\mu}\cdot\vec{J}^{T}}{\det\left(\vec{J}\right)} \tag{2a}\] \[\vec{E}^{\prime}=\left(\vec{J}^{T}\right)^{-1}\cdot\vec{E},\ \ \vec{H}^{\prime}=\left(\vec{J}^{T}\right)^{-1}\cdot\vec{H} \tag{2b}\] where \[\vec{J}=\begin{bmatrix}\frac{\partial x^{\prime}}{\partial x}&\frac{\partial x^{\prime}}{\partial y}&\frac{\partial x^{\prime}}{\partial z}\\ \frac{\partial y^{\prime}}{\partial x}&\frac{\partial y^{\prime}}{\partial y}&\frac{\partial y^{\prime}}{\partial z}\\ \frac{\partial z^{\prime}}{\partial x}&\frac{\partial z^{\prime}}{\partial y}&\frac{\partial z^{\prime}}{\partial z}\end{bmatrix}\] is the Jacobian matrix of the transformation. The Bloch theorem shows the electric field should have the form \(E_{z}=e^{i\vec{k}^{T}\cdot\vec{r}}u_{E}\left(\vec{r}\right)\) where \(u_{E}\left(\vec{r}\right)\) is a periodic function and satisfies \(u_{E}\left(\vec{r}\right)=u_{E}\left(\vec{r}+\vec{a}_{1}\right)=u_{E}\left(\vec{r}+\vec{a}_{2}\right)\). Here, \(\vec{a}_{1}=(1,0)^{T}\) and \(\vec{a}_{2}=(0,1)^{T}\) are lattice vectors of the square crystal. The phase difference between point \(A\) and point \(B\) (see Fig. 1a) in the original space should be the same as that between the points \(A^{\prime}\) and \(B^{\prime}\) in the transformed space. Hence, we can conclude that: \[e^{i\vec{k}^{\prime T}\cdot\vec{a}_{1}^{\prime}}=e^{i\vec{k}^{T}\cdot\vec{a}_{1}} \tag{3a}\] \[e^{i\vec{k}^{\prime T}\cdot\vec{a}_{2}^{\prime}}=e^{i\vec{k}^{T}\cdot\vec{a}_{2}} \tag{3b}\] Although Eq. (3) is not restricted to linear transformations, we want to emphasize that not all nonlinear transformations can satisfy the condition shown in Eq. (3). The existence of the lattice vectors \(\vec{a}_{1}^{\prime}\) and \(\vec{a}_{2}^{\prime}\) in the transformed space indicates that two pairs of periodic boundaries should be preserved during the transformation. The line shape of the periodic boundary does not have to be straight (see Appendix A). In our linear case, the relation between \(\vec{a}_{1}\), \(\vec{a}_{2}\) and \(\vec{a}_{1}^{\prime}\), \(\vec{a}_{2}^{\prime}\) can be expressed as: \[[\vec{a}_{1}^{\prime},\vec{a}_{2}^{\prime}]=\begin{bmatrix}\frac{\partial x^{\prime}}{\partial x}&\frac{\partial x^{\prime}}{\partial y}\\ \frac{\partial y^{\prime}}{\partial x}&\frac{\partial y^{\prime}}{\partial y}\end{bmatrix}[\vec{a}_{1},\vec{a}_{2}] \tag{4}\] Combining Eq. (3) and Eq. (4), we can figure out how the k space transforms: \[\vec{k}=\begin{bmatrix}\frac{\partial x^{\prime}}{\partial x}&\frac{\partial y^{\prime}}{\partial x}\\ \frac{\partial x^{\prime}}{\partial y}&\frac{\partial y^{\prime}}{\partial y}\end{bmatrix}\vec{k}^{\prime} \tag{5}\] As shown in Fig. 1b, the four corners of the Brillouin zone of the original square lattice are \((-\pi/a,-\pi/a)\), \((\pi/a,-\pi/a)\), \((\pi/a,\pi/a)\), and \((-\pi/a,\pi/a)\). When the transformation matrix is \(\bar{J}_{1}=\begin{bmatrix}0.75&0.25&0\\ 0.25&0.75&0\\ 0&0&1\end{bmatrix}\), we can figure out the four corners of the transformed Brillouin zone \((-\pi/a,-\pi/a)\), \((2\pi/a,-2\pi/a)\), \((\pi/a,\pi/a)\), and \((-2\pi/a,2\pi/a)\) by applying Eq. (5).
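This corner bookkeeping can be reproduced with a few lines of NumPy. The sketch below (ours, not from the paper; \(\mu_{0}\) set to \(1\)) applies Eq. (5) to the corners of the original Brillouin zone and Eq. (2a) to the YIG permeability tensor for the shear \(\bar{J}_{1}\).

```python
import numpy as np

a = 1.0
J1 = np.array([[0.75, 0.25, 0.0],
               [0.25, 0.75, 0.0],
               [0.00, 0.00, 1.0]])

# Eq. (5) reads k = J2^T k' with J2 the in-plane 2x2 block, hence k' = (J2^T)^{-1} k.
J2d = J1[:2, :2]
corners = (np.pi / a) * np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)
corners_prime = (np.linalg.inv(J2d.T) @ corners.T).T
print(corners_prime / (np.pi / a))
# -> [[-1, -1], [2, -2], [1, 1], [-2, 2]], i.e. the transformed corners quoted above

# Eq. (2a): transformed permeability of the YIG rod (mu = 14, kappa = 12.4 in units of mu0).
mu, kappa = 14.0, 12.4
mu_yig = np.array([[mu, 1j * kappa, 0.0],
                   [-1j * kappa, mu, 0.0],
                   [0.0, 0.0, 1.0]])
mu_prime = J1 @ mu_yig @ J1.T / np.linalg.det(J1)
print(np.round(mu_prime, 3))
```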
Similarly, for the transformation matrix \(\bar{J}_{2}=\begin{bmatrix}0.25&0.75&0\\ 0.75&0.25&0\\ 0&0&1\end{bmatrix}\), we can find out the corners of the transformed Brillouin zone are \((-\pi/a,-\pi/a)\), \((-2\pi/a,2\pi/a)\), \((\pi/a,\pi/a)\), and \((2\pi/a,-2\pi/a)\). Although the Brillouin zones look the same under the transformation \(\bar{J}_{1}\) and \(\bar{\bar{J}}_{2}\) (see Fig. 1b), the orientation has been flipped by comparing the chirality (right-handed for \(\bar{J}_{1}\) and left-handed for \(\bar{J}_{2}\)) of the four corners just calculated. No matter how complicated the transformation may behave in the real space, the reaction in the k space is always a simple linear stretch or compression as shown in Eq. (5). In the middle of Fig. 1c we show the Comsol simulation result of \(E_{z}\) field distribution at \(k_{x}=0.8\pi/a\), \(k_{y}=0.6\pi/a\). By applying Eq. (5), we can figure out the transformed \(\vec{k}^{\prime}\), which are \(k^{\prime}_{x}=0.9\pi/a\), \(k^{\prime}_{y}=0.5\pi/a\) and \(k^{\prime}_{x}=0.5\pi/a\), \(k^{\prime}_{y}=0.9\pi/a\) for \(\bar{J}_{1}\), \(\bar{J}_{2}\) respectively. The transformation of the electric field in the real space satisfies the relation given in Eq. (2b). The field is compressed in the direction of \(y^{\prime}=-x^{\prime}\) under the transformation \(\bar{J}_{1}\) while for \(\bar{\bar{J}}_{2}\) the field is flipped along the direction of \(y^{\prime}=x^{\prime}\) after the compression. Figure 1: The band structures and eigenfield distributions under the linear transformation. (a) Center: A square lattice (period \(a=1\)m) with a YIG rod in the center (radius \(r=0.11a\), \(\epsilon=15\epsilon_{0}\), \(\mu=14\mu_{0}\), \(\kappa=12.4\mu_{0}\)). Right and left: The diamond lattice is transformed from the square lattice in the middle under the matrix \(\bar{J}_{1}\), \(\bar{\bar{J}}_{2}\) respectively. (b) Band diagrams of the second band of the corresponding structures in (a). The white dots denote the locations of the eigenfields in (c). (c) \(|E_{z}|\) field distributions at \(k^{\prime}_{x}=0.5\pi/a\), \(k^{\prime}_{y}=0.9\pi/a\) (left), \(k_{x}=0.8\pi/a\), \(k_{y}=0.6\pi/a\) (middle), and \(k^{\prime}_{x}=0.9\pi/a\), \(k^{\prime}_{y}=0.5\pi/a\) (right) of the corresponding structures in (a). ## III Topological invariant engineering Following the definition from Ref. [9], the Berry connection can be expressed as: \[A_{x}=\iint E_{z}^{*}\epsilon_{zz}\left(\vec{r}\right)\partial_{k_{x}}E_{z}\, \mathrm{d}x\,\mathrm{d}y \tag{6a}\] \[A_{y}=\iint E_{z}^{*}\epsilon_{zz}\left(\vec{r}\right)\partial_{k_{y}}E_{z}\, \mathrm{d}x\,\mathrm{d}y \tag{6b}\] where the integration domain is the unit cell in real space. The Chern number is defined as: \[C=\frac{1}{2\pi i}\iint\left(\partial_{k_{x}}A_{y}-\partial_{k_{y}}A_{x}\right) \mathrm{d}k_{x}\,\mathrm{d}k_{y} \tag{7}\] The authors have shown the nontrivial Chern number in the gyromagnetic structure [9], where \(C=1\) for the second band and \(C=-2\) for the third band. If we apply the time-reversal transformation to the gyromagnetic material, we will change the permeability tensor \(\bar{\bar{\mu}}=\begin{bmatrix}\mu&i\kappa&0\\ -i\kappa&\mu&0\\ 0&0&\mu_{0}\end{bmatrix}\) into \(\bar{\bar{\mu}}^{\prime}=\begin{bmatrix}\mu&-i\kappa&0\\ i\kappa&\mu&0\\ 0&0&\mu_{0}\end{bmatrix}\) (the permittivity of the YIG rod and the background vacuum won't change). 
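The statement that the Chern number is obtained by integrating the Berry curvature of Eq. (7) over the Brillouin zone, and that it is sensitive to the orientation of that zone, can be illustrated with a generic two-band lattice model (not the photonic crystal itself). The sketch below uses the standard Fukui-Hatsugai-Suzuki link-variable method; the toy Hamiltonian, mass parameter, and grid size are our own choices, and swapping \(k_x\) and \(k_y\) plays the role of an orientation-reversing transformation of the Brillouin zone.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_toy(kx, ky, m=1.0):
    """Two-band toy Hamiltonian with a nonzero Chern number (not the photonic crystal)."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(hamiltonian, n_k=24):
    """Lower-band Chern number via the Fukui-Hatsugai-Suzuki lattice method,
    i.e. summing the Berry-curvature flux (plaquette link variables) over the BZ."""
    ks = np.linspace(0, 2 * np.pi, n_k, endpoint=False)
    u = np.empty((n_k, n_k, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hamiltonian(kx, ky))
            u[i, j] = vecs[:, 0]                     # lower-band eigenvector
    total_flux = 0.0
    for i in range(n_k):
        for j in range(n_k):
            u00, u10 = u[i, j], u[(i + 1) % n_k, j]
            u11, u01 = u[(i + 1) % n_k, (j + 1) % n_k], u[i, (j + 1) % n_k]
            link = (np.vdot(u00, u10) * np.vdot(u10, u11)
                    * np.vdot(u11, u01) * np.vdot(u01, u00))
            total_flux += np.angle(link)             # gauge-invariant plaquette flux
    return total_flux / (2 * np.pi)

c = chern_number(h_toy)
# Swapping kx and ky is an orientation-reversing reparametrization of the BZ
# (determinant -1), so the computed Chern number flips sign.
c_flipped = chern_number(lambda kx, ky: h_toy(ky, kx))
print(round(c), round(c_flipped))   # opposite integers, e.g. -1 and 1
```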
Obviously, the Chern number of the time-reversal transformed structure will change its sign, since it is related to the sign of \(\kappa\) in the permeability tensor. Interestingly, the transformation can also be explained from the perspective of TO. If we apply the Jacobian matrix \(\bar{\bar{J}}=\begin{bmatrix}1&0&0\\ 0&-1&0\\ 0&0&-1\end{bmatrix}\) to the gyromagnetic permeability tensor, we achieve exactly the same transformed permeability tensor as the time-reversal transformation does (again, the permittivity of the YIG rod and the background vacuum does not change). Hence, TO also provides a way to engineer the topological invariant, the Chern number. Through a detailed derivation (see Appendix B), we discover the relation between the Chern number in the transformed space and the original space: \[C^{\prime}=\mathrm{sign}\left(\det\left(\frac{\partial\left(k_{x}^{\prime},k_{y}^{\prime}\right)}{\partial\left(k_{x},k_{y}\right)}\right)\right)C \tag{8}\] According to Eq. (8), we can conclude that the Chern number in the transformed space changes its sign compared with the original space when the orientation of the Brillouin zone is flipped by the transformation. The Berry curvatures of the original structure and of the transformed structures are plotted in Fig. 2. Here, the Berry curvature is the integrand in Eq. (7), which means we can obtain the Chern number by summing up the Berry curvature shown in Fig. 2. For our linear transformations \(\bar{\bar{J}}_{1}\), \(\bar{\bar{J}}_{2}\), combining Eq. (5) and Eq. (8) gives \(C^{\prime}=\mathrm{sign}\left(\det\left(\frac{\partial\left(x^{\prime},y^{\prime}\right)}{\partial\left(x,y\right)}\right)\right)C\). For transformation \(\bar{\bar{J}}_{1}\), since \(\det\left(\frac{\partial\left(x^{\prime},y^{\prime}\right)}{\partial\left(x,y\right)}\right)>0\), the Chern number is \(C^{\prime}=1\) for the second band and \(C^{\prime}=-2\) for the third band, the same as for the original structure. However, since \(\det\left(\frac{\partial\left(x^{\prime},y^{\prime}\right)}{\partial\left(x,y\right)}\right)<0\) for \(\bar{\bar{J}}_{2}\), the Chern number changes its sign and becomes \(C^{\prime}=-1\) for the second band and \(C^{\prime}=2\) for the third band. These results are readily verified by inspecting the distributions of the Berry curvatures shown in Fig. 2. ## IV Conclusions In this paper, we have shown that the TO method can be applied to periodic structures to tune the band diagrams and the eigenfield distributions. Although the transformation in real space can be nonlinear and complicated, its effect on the band diagram is always linear. Since we assume that the sign of the determinant of the Jacobian matrix does not change over real space, this may constrain the ability of the TO method to engineer the topological invariant. Whether TO can tune the Chern number to a value with a different absolute value remains an open question. ## Appendix A Curved periodic boundary As shown in Fig. 3, a square lattice can be transformed into a lattice with curved periodic boundaries. The period of the square is \(1\)m and the side length of the smaller square is \(0.4\)m. The permittivity and permeability of the smaller square are \(\epsilon=2\epsilon_{0}\) and \(\mu=2\mu_{0}\) respectively. Figure 2: Berry curvatures of the original structure and the transformed structures. Top: Berry curvatures of the second band. Bottom: Berry curvatures of the third band. Middle: Original square lattice.
Right and left: The diamond lattices transformed under \(\bar{\bar{J}}_{1}\), \(\bar{\bar{J}}_{2}\) respectively. The transformation matrix is \(\bar{\bar{J}}=\begin{bmatrix}1&\mp 0.2&0\\ 0&1&0\\ 0&0&1\end{bmatrix}\), where '\(-\)' is for the domain \(y>0\) and '\(+\)' is for the domain \(y<0\). The corresponding material transformation follows the rules governed by Eq. (2a). Since the transformed lattice vectors are the same as the original lattice vectors (\(\vec{a}_{1}^{\prime}=\vec{a}_{1}\), \(\vec{a}_{2}^{\prime}=\vec{a}_{2}\)), the k space also stays the same, \(\vec{k}^{\prime}=\vec{k}\), according to Eq. (3). The \(|E_{z}|\) field distribution of the original structure is plotted at \(k_{x}=\pi/3\), \(k_{y}=\pi/2\), and \(f=0.74c/a\) in Fig. 3. It matches \(k_{x}^{\prime}=\pi/3\), \(k_{y}^{\prime}=\pi/2\), and \(f^{\prime}=0.74c/a\) in the transformed space. By comparing the field distributions, we find they exactly follow the rules given in Eq. (2b). For more complicated transformed periodic boundaries, we can use piecewise linear boundaries to approximate them and run Comsol simulations to help understand the eigenfields. ## Appendix B Chern number calculation under transformation optics Here, we derive the Chern number in the original space and in the transformed space. The Berry connection in the 2D photonic crystal system can be written as [9]: \[A_{x}=\iint E_{i}^{*}\epsilon^{ij}\partial_{k_{x}}E_{j}\,\mathrm{d}x\,\mathrm{d}y \tag{B1a}\] \[A_{y}=\iint E_{i}^{*}\epsilon^{ij}\partial_{k_{y}}E_{j}\,\mathrm{d}x\,\mathrm{d}y \tag{B1b}\] where \(\epsilon^{ij}\), \(E_{i}\) represent the 3-by-3 permittivity and the 3-by-1 electric field in tensor form. In our 2D case, we have \(E_{1}=0,E_{2}=0,E_{3}=E_{z}\). Repeated indices \(i\) and \(j\) are summed over according to the Einstein summation rules. Hence, the Chern number in the original space is: \[C=\frac{1}{2\pi i}\iint\left(\partial_{k_{x}}A_{y}-\partial_{k_{y}}A_{x}\right)\mathrm{d}k_{x}\,\mathrm{d}k_{y}=\frac{1}{2\pi i}\iiiint\left(\partial_{k_{x}}E_{i}^{*}\epsilon^{ij}\partial_{k_{y}}E_{j}-\partial_{k_{y}}E_{i}^{*}\epsilon^{ij}\partial_{k_{x}}E_{j}\right)\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}k_{x}\,\mathrm{d}k_{y} \tag{B2}\] Similarly to Eq. (B1), the Berry connection defined in the transformed space is \[A^{\prime}_{x}=\iint\frac{\det\left(\frac{\partial\left(x^{\prime},y^{\prime},z^{\prime}\right)}{\partial\left(x,y,z\right)}\right)}{\left|\det\left(\frac{\partial\left(x^{\prime},y^{\prime}\right)}{\partial\left(x,y\right)}\right)\right|}E^{\prime*}_{i^{\prime}}\epsilon^{i^{\prime}j^{\prime}}\partial_{k^{\prime}_{x}}E^{\prime}_{j^{\prime}}\,\mathrm{d}x^{\prime}\,\mathrm{d}y^{\prime} \tag{B3}\] Here the normalization term \(\frac{\det\left(\frac{\partial\left(x^{\prime},y^{\prime},z^{\prime}\right)}{\partial\left(x,y,z\right)}\right)}{\left|\det\left(\frac{\partial\left(x^{\prime},y^{\prime}\right)}{\partial\left(x,y\right)}\right)\right|}\) does not depend on the variable \(k^{\prime}_{x}\), which means it can be taken out from the derivative with respect to \(k^{\prime}_{x}\) and put at the front of the integral term as shown in Eq. (B3).
Replacing the electric field \(E^{\prime}_{j^{\prime}}\) and permittivity \(\epsilon^{i^{\prime}j^{\prime}}\) of the transformed space with the electric field \(E_{j}\) and permittivity \(\epsilon^{ij}\) of the original space according to the transformation rules in Eq. (2), we can easily get: \[A^{\prime}_{x}=\iint E^{*}_{i}\epsilon^{ij}\partial_{k^{\prime}_{x}}E_{j}\,\mathrm{d}x\,\mathrm{d}y \tag{B4}\] Similarly, \[A^{\prime}_{y}=\iint E^{*}_{i}\epsilon^{ij}\partial_{k^{\prime}_{y}}E_{j}\,\mathrm{d}x\,\mathrm{d}y \tag{B5}\] The Chern number after the transformation can be calculated as \[C^{\prime}=\frac{1}{2\pi i}\iint\left(\partial_{k^{\prime}_{x}}A^{\prime}_{y}-\partial_{k^{\prime}_{y}}A^{\prime}_{x}\right)\mathrm{d}k^{\prime}_{x}\,\mathrm{d}k^{\prime}_{y}=\frac{1}{2\pi i}\iiiint\left(\partial_{k^{\prime}_{x}}E^{*}_{i}\epsilon^{ij}\partial_{k^{\prime}_{y}}E_{j}-\partial_{k^{\prime}_{y}}E^{*}_{i}\epsilon^{ij}\partial_{k^{\prime}_{x}}E_{j}\right)\left|\frac{\partial\left(k^{\prime}_{x},k^{\prime}_{y}\right)}{\partial\left(k_{x},k_{y}\right)}\right|\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}k_{x}\,\mathrm{d}k_{y}=\frac{1}{2\pi i}\iiiint\frac{\partial_{k_{x}}E^{*}_{i}\epsilon^{ij}\partial_{k_{y}}E_{j}-\partial_{k_{y}}E^{*}_{i}\epsilon^{ij}\partial_{k_{x}}E_{j}}{\det\left(\frac{\partial\left(k^{\prime}_{x},k^{\prime}_{y}\right)}{\partial\left(k_{x},k_{y}\right)}\right)}\left|\frac{\partial\left(k^{\prime}_{x},k^{\prime}_{y}\right)}{\partial\left(k_{x},k_{y}\right)}\right|\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}k_{x}\,\mathrm{d}k_{y}=\mathrm{sign}\left(\det\left(\frac{\partial\left(k^{\prime}_{x},k^{\prime}_{y}\right)}{\partial\left(k_{x},k_{y}\right)}\right)\right)C \tag{B6}\] As shown in Eq. (B6), the Chern number changes its sign after the transformation according to the change of the orientation of the Brillouin zone. ###### Acknowledgements. The authors acknowledge the financial support from the National Research Foundation (grant no. NRF-CRP22-2019-0006).
2301.08556
NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis
Expert demonstrations are a rich source of supervision for training visual robotic manipulation policies, but imitation learning methods often require either a large number of demonstrations or expensive online expert supervision to learn reactive closed-loop behaviors. In this work, we introduce SPARTN (Synthetic Perturbations for Augmenting Robot Trajectories via NeRF): a fully-offline data augmentation scheme for improving robot policies that use eye-in-hand cameras. Our approach leverages neural radiance fields (NeRFs) to synthetically inject corrective noise into visual demonstrations, using NeRFs to generate perturbed viewpoints while simultaneously calculating the corrective actions. This requires no additional expert supervision or environment interaction, and distills the geometric information in NeRFs into a real-time reactive RGB-only policy. In a simulated 6-DoF visual grasping benchmark, SPARTN improves success rates by 2.8$\times$ over imitation learning without the corrective augmentations and even outperforms some methods that use online supervision. It additionally closes the gap between RGB-only and RGB-D success rates, eliminating the previous need for depth sensors. In real-world 6-DoF robotic grasping experiments from limited human demonstrations, our method improves absolute success rates by $22.5\%$ on average, including objects that are traditionally challenging for depth-based methods. See video results at \url{https://bland.website/spartn}.
Allan Zhou, Moo Jin Kim, Lirui Wang, Pete Florence, Chelsea Finn
2023-01-18T23:25:27Z
http://arxiv.org/abs/2301.08556v1
# NeRF in the Palm of Your Hand: ###### Abstract Expert demonstrations are a rich source of supervision for training visual robotic manipulation policies, but imitation learning methods often require either a large number of demonstrations or expensive online expert supervision to learn reactive closed-loop behaviors. In this work, we introduce SPARTN (Synthetic Perturbations for Augmenting Robot Trajectories via NeRF): a fully-offline data augmentation scheme for improving robot policies that use eye-in-hand cameras. Our approach leverages neural radiance fields (NeRFs) to synthetically inject corrective noise into visual demonstrations, using NeRFs to generate perturbed viewpoints while simultaneously calculating the corrective actions. This requires no additional expert supervision or environment interaction, and distills the geometric information in NeRFs into a real-time reactive RGB-only policy. In a simulated 6-DoF visual grasping benchmark, SPARTN improves success rates by 2.8\(\times\) over imitation learning without the corrective augmentations and even outperforms some methods that use online supervision. It additionally closes the gap between RGB-only and RGB-D success rates, eliminating the previous need for depth sensors. In real-world 6-DoF robotic grasping experiments from limited human demonstrations, our method improves absolute success rates by \(22.5\%\) on average, including objects that are traditionally challenging for depth-based methods. See video results at [https://bland.website/sparth](https://bland.website/sparth). ## 1 Introduction Object grasping is a central problem in vision-based control and is fundamental to many robotic manipulation problems. While there has been significant progress in top-down bin picking settings [21, 34], 6-DoF grasping of arbitrary objects amidst clutter remains an open problem, and is especially challenging for shiny or reflective objects that are not visible to depth cameras. For example, the task of grasping a wine glass from the stem shown in Figure 1 requires precise 6-DoF control (using full 3D translation and 3D rotation of the gripper) and closed-loop perception of a transparent object. Traditional 6-DoF grasping pipelines [57, 8] synthesize only one grasp pose and use a motion planner to generate a collision-free trajectory to reach the grasp [54, 40, 38, 4]. However, the use of open-loop trajectory execution prevents the system from using perceptual feedback for reactive, precise grasping behavior. In this paper, we study how to learn closed-loop policies for 6-DoF object grasping from RGB images, which can be trained with imitation or reinforcement learning methods [58]. Imitation learning from expert demonstrations is a simple and promising approach to this problem, but is known to suffer from compounding errors [46]. As a result, complex vision-based tasks can require online expert supervision [19, 46] or environment interaction [13, 44], both of which are expensive and time-consuming to collect. On the other hand, offline "feedback augmentation" methods [22, 14] can be effective at combating compounding errors, but are severely limited in scope and thus far have not been ap Figure 1: SPARTN is an offline data augmentation method for behavior cloning eye-in-hand visual policies. It simulates recovery in a demonstration by using NeRFs to render high-fidelity observations (right) from noisy states, then generates corrective action labels. plied to visual observations. 
Other recent works have found that using eye-in-hand cameras mounted on a robot's wrist can significantly improve the performance of visuomotor policies trained with imitation learning [17, 20, 35], but still do not address the underlying issue of compounding errors. We develop an approach that helps address compounding errors to improve vision-based policies, while building on the success of eye-in-hand cameras. To improve imitation learning for quasi-static tasks like grasping, we propose a simple yet effective offline data augmentation technique. For an eye-in-hand camera, the images in each demonstration trajectory form a collection of views of the demonstration scene, which we use to train neural radiance fields (NeRFs) [37] of each scene. Then, we can augment the demonstration data with corrective feedback by injecting noise into the camera poses along the demonstration and using the demonstration's NeRF to render observations from the new camera pose. Because the camera to end-effector transform is known, we can compute corrective action labels for the newly rendered observations by considering the action that would return the gripper to the expert trajectory. The augmented data can be combined with the original demonstrations to train a reactive, real-time policy. Since the NeRFs are trained on the original demonstrations, this method effectively "distills" the 3D information from each NeRF into the policy. The main contribution of this work is a NeRF-based data augmentation technique, called SPARTN (Synthetic Perturbations for Augmenting Robot Trajectories via NeRF), that improves behavior cloning for eye-in-hand visual grasping policies. By leveraging view-synthesis methods like NeRF, SPARTN extends the idea of corrective feedback augmentation to the visual domain. The resulting approach can produce (i) reactive, (ii) real-time, and (iii) RGB-only policies for 6-DoF grasping. The data augmentation is fully offline and does not require additional effort from expert demonstrators nor online environment interactions. We evaluate SPARTN on 6-DoF robotic grasping tasks both in simulation and in the real world. On a previously-proposed simulated 6-DoF grasping benchmark [58], the augmentation from SPARTN improves grasp success rates by \(2.8\times\) compared to training without SPARTN, and even outperforms some methods that use expensive online supervision. On eight challenging real-world grasping tasks with a Franka Emika Panda robot, SPARTN improves the absolute average success rate by 22.5%. ## 2 Related Work **Robotic Grasping**. Grasping is a long-studied topic in robotics [50]; see the multitude of survey articles for a complete review [2, 4, 26]. Most data-driven grasping systems focus on learning how to predict some parameterized "grasp" (whether a full 6-DoF pose, 2-DoF table-top position, etc.), and leave intermediate motion generation to be open-loop, handled either through motion planners or simple heuristics, e.g. [57, 52, 43, 12, 34, 42]. Other works have trained _closed-loop_ grasping policies [58, 21, 39, 21], and bring all the benefits of closed-loop policies: including for example, the ability to avoid obstacles, to perform precise grasping without precise calibration, and to react to dynamic objects. Additionally, grasping policies are often designed for top-down (2- or 3-DoF) grasping [21, 43, 31], while 6-DoF grasping typically requires depth or 3D information [39, 53, 58, 42]. In contrast, our method trains a reactive 6-DoF grasping policy with only RGB data. 
See Table 1 for a summary of the assumptions of the most related grasping works. **Imitation learning and data augmentation**. Behavior cloning is known to struggle with covariate shift: small errors cause imitation policies to fall slightly off of the data distribution and it is then difficult to correct the mistake back onto the data manifold. DAgger [46] and its variants [16, 23, 36] mitigate this issue by obtaining expert corrections throughout training. Alternatively, DART [29] projects noise during expert demonstration collection, which is especially effective with algorithmic experts but interferes with human demonstration collection. Previous works [14, 22] have injected noise into the low-dimensional system state after data collection (in a fully offline manner), but the visual observations are left out, limiting the helpfulness of noise injection. Our method can be seen as a visual, fully-offline version of noise injection that does not require perturbing the expert during demonstration collection, using NeRF to synthetically render perturbed states post-hoc. Unlike standard image augmentations for policy learning [30, 61], our method uses NeRF to learn a 3D model of the demonstration scene, which enables us to generate high-fidelity novel views for data augmentation. In addition, while standard image augmentation approaches do not modify the action labels, we leverage hand-eye coordination to calculate corrective actions for augmented observations. **NeRF for Robotics**. A number of recent works have in \begin{table} \begin{tabular}{|c|c|c|c|} \hline Method & No Depth & Full 6-DoF & Closed-Loop \\ \hline [34, 45] & & & \\ [53] & & ✓ & ✓ \\ [18, 24, 40, 54] & & ✓ & \\ [21, 32] & ✓ & & ✓ \\ [43] & ✓ & & \\ [39] & & & ✓ \\ [56, 58] & & ✓ & ✓ \\ SPARTN (ours) & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Comparison of our approach with related grasping work. SPARTN is the only approach to learn closed-loop 6-DoF grasping policies from only RGB inputs. vestigated applications of NeRF and related methods in robotics, including localization [63], navigation [1], dynamics modeling [7, 9, 33], reinforcement learning [10], and data generation for other learning-based methods [18, 62]. NeRF-Supervision [62], for example, generates pixel-level correspondence to learn dense object descriptors, which are useful for manipulation tasks. For grasping, various methods have leveraged NeRF [3, 18, 24] for open-loop grasp synthesis. In contrast, our method uses NeRF offline to augment data for grasping and distills a reactive, real-time, RGB-only, closed-loop policy. ## 3 Methodology We now introduce SPARTN, which augments an eye-in-hand robot demonstration dataset using NeRF. We first review preliminaries, then describe a method overview, followed by details of training NeRFs and augmenting corrective behavior. ### Preliminaries **Imitation learning.** In imitation learning, we assume access to a dataset of \(N\) expert trajectories \(\mathcal{D}=\{\tau\}_{i=1}^{N}\), where each trajectory consists of a sequence of state-action pairs, \(\tau=\{(s_{k},a_{k})\}_{k=0}^{K}\). In purely _offline_ imitation learning, there exists no other primary data-collection assumptions other than this dataset \(\mathcal{D}\), i.e. no reward labels or online interactions. 
A standard method for this offline setting is behavior cloning (BC), which trains a policy \(\pi_{\theta}\) to mimic the expert via supervised learning, by minimizing the objective \(\mathcal{L}(\theta)=\mathbb{E}_{(s,a)\sim\mathcal{D}}[\ell\big{(}\pi_{\theta}( s),a\big{)}]\), where \(\ell\) is some loss function in the action space. **NeRF.** Our method uses novel-view synthesis as a building block, for which we use Neural Radiance Fields [37]. For each scene, NeRF performs novel-view synthesis by training a scene-specific neural radiance field \(F_{\Theta}\) from a "training set" of posed images \(\{(I_{k},T_{k})\}\), where each \(I_{k}\in\mathbb{R}^{w\times h\times 3},T_{k}\in SE(3)\). After \(F_{\Theta}\) is trained, through volume rendering NeRF can render new views of a scene from any requested pose, which can be summarized as \(I=\text{NeRF-Render}(T;F_{\Theta})\) - in particular, this works best "near" the training set of poses. Since we train many NeRFs (one per demonstration), we use an accelerated implementation of NeRF (Instant-NeRF [41]). **Corrective Noise Augmentation.** A simple method which has been shown to improve the robustness of behavior-cloned policies is to perform corrective noise augmentation [14, 22]. The idea is to take a nominal trajectory \(\tau=\{(s_{k},a_{k})\}_{k=0}^{K}\) and create a noise-distributed corrective trajectory \(\tilde{\tau}=\{(\tilde{s}_{k},\tilde{a}_{k})\}_{k=0}^{K}\), where each sampled state \(\tilde{s}_{k}\) is a noisy version of the measured state, i.e. \(\tilde{s}_{k}\sim s_{k}+\epsilon\) and \(\epsilon\) is sampled noise. To perform corrective feedback augmentation, \(\tilde{a}_{k}\) is chosen such that it will return the state to the nominal trajectory: given the true inverse dynamics \(f^{-1}\) of the environment, then \(\tilde{a}_{k}=f^{-1}(\tilde{s}_{k},s_{k+1})\), or compute the commanded control that can be tracked with a stabilizing controller to \(s_{k+1}\). An easy way to parameterize this is by choosing the action space of the learned policy to be the input to a stabilizing controller, as in [14, 22]. Note that it is also common in the imitation and reinforcement learning literature to apply noise to inputs, which is typically interpreted as a way to regularize the policy [27, 30]. Meanwhile, in addition to potential regularization effects, the _corrective_ noise augmentation has been interpreted to specifically reduce compounding errors [14, 22], but of course has limits if the scale of perturbations is too large or in highly non-smooth dynamics regimes. A critical limitation of prior works using corrective noise augmentation is that they have not been applied to visual observations. ### Overview: Visual Corrective Augmentation Consider the case where an agent receives partial visual observations \(I\) instead of the full state of the world, and also receives direct proprioceptive state measurements, \(s^{\text{robot}}\), as is commonly the case in robotics. In general, the corrective augmentation of Sec. 3.1 requires obtaining the visual observation \(\tilde{I}_{k}\) for each noisy state \(\tilde{s}_{k}\), which could be expensive or impossible in the actual environment. An insight of this work is that for _eye-in-hand_ robot policies (where the visual observation comes from a camera mounted on the wrist) in static scenes, we can readily generate novel visual observations using novel-view synthesis (i.e., NeRF) without further interactions in the environment. 
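As a concrete reference point before the visual version is introduced, the following minimal sketch (our own illustration, assuming a low-dimensional state and a desired-next-state action space tracked by a stabilizing controller, as in [14, 22]) shows what fully-offline corrective noise augmentation looks like: noisy states are sampled around the demonstration, and each is labeled with the action that returns the system to the nominal trajectory.

```python
import numpy as np

def corrective_augmentation(states, noise_std=0.01, n_aug=10, rng=None):
    """Offline corrective noise augmentation for a trajectory {s_k}.

    Assumes the action is a desired next state tracked by a stabilizing
    controller, so the corrective label for a noisy state around s_k is
    simply the next nominal state s_{k+1} (trivial inverse dynamics).
    """
    rng = rng or np.random.default_rng(0)
    aug_states, aug_actions = [], []
    for k in range(len(states) - 1):
        for _ in range(n_aug):
            noisy_state = states[k] + rng.normal(0.0, noise_std, size=states[k].shape)
            corrective_action = states[k + 1]   # drive back onto the nominal trajectory
            aug_states.append(noisy_state)
            aug_actions.append(corrective_action)
    return np.array(aug_states), np.array(aug_actions)

# Example: a straight-line expert trajectory in R^2.
demo_states = np.linspace([0.0, 0.0], [1.0, 1.0], num=20)
s_aug, a_aug = corrective_augmentation(demo_states)
print(s_aug.shape, a_aug.shape)   # (190, 2) (190, 2)
```

The augmented pairs can simply be appended to the behavior-cloning dataset; the missing piece in the visual setting, addressed below, is producing the observation that corresponds to each noisy state.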
In this setting, a primary subset of the _observations_ are posed images \((I,T)\), together with some _actions_\(a\). The key intuition of our method can be grasped by considering how to perform visually corrective augmentation via NeRF, as illustrated in Figure 2. Noisy states and corrective actions \((\tilde{T}_{k},\tilde{a}_{k})\) can be generated for a trajectory of posed observations and actions \(\tau=\{(I_{k},T_{k},a_{k})\}_{k=0}^{K}\) (Sec. 3.1). The key pre-processing step is to train a trajectory-specific NeRF \(F_{\Theta}^{\tau}\) for each demonstration \(\tau\) (Sec. 3.3). These trajectory-specific NeRFs enable us to render observations \(\tilde{I}_{k}\) for noisy states \(\tilde{T}_{k}\), completing the augmentation process and resulting in visually corrective transitions \((\tilde{I}_{k},\tilde{T}_{k},\tilde{a}_{k})\) (Sec. 3.4). Algorithm 1 and Figure 3 overview this process. ### Training NeRFs from Robot Demonstrations SPARTN uses novel-view synthesis, in particular NeRF, to generate observations \(\tilde{I}_{k}\) for noisy states without environment interaction. We train a NeRF \(F_{\Theta}^{\tau}\) for each demonstration trajectory using the image observations \((I_{1},\cdots,I_{K})\in\tau\) as the training set of views. After training \(F_{\Theta}^{\tau}\), we create observations \(\tilde{I}_{k}\) for perturbed robot states \(\tilde{T}_{k}\) by rendering the view from the perturbed camera pose using \(F_{\Theta}^{\tau}\). An important detail is that the end-effector reference frame used for control in demonstrations may differ from the reference frame for the camera itself, but through standard eye-in-hand calibration we can transform all visual observations used for training and augmenting the NeRFs into the cameras frame. Given a transform \({}^{\text{to}}T_{k}^{\text{from}}\) which transforms between two frames, we simply transform all NeRF-poses to the world frame: \({}^{W}T_{k}^{C}={}^{W}T_{k}^{EE}T^{C}\), where \(W\) is the world frame, \(E\) is the end-effector frame which changes at each step \(k\), and \(C\) is the camera (NeRF) frame statically linked to the end-effector frame, and \({}^{E}T^{C}\) is acquired through hand-eye calibration. **COLMAP camera poses.** Real-world calibration error means that our camera-to-world transforms \(\{{}^{W}T_{k}^{C}\}_{k=1}^{K}\) are noisy and we obtain higher-quality NeRFs by using camera poses estimated by COLMAP [48, 49]. Up to noise and a scale factor \(\beta\), the only difference from the world frame camera transforms is that COLMAP uses an arbitrary reference frame \(V\neq W\). We denote the COLMAP outputs \(\{{}^{V}H_{k}^{C}\}_{k=1}^{K}\), using \(H\) instead of \(T\) because of the difference in scale. We now introduce notation to separate the rotation and translation components of a transform: \[{}^{a}T^{b}:=\left(\text{Rot}\left[{}^{a}T^{b}\right],\text{Trans}\left[{}^{a }T^{b}\right]\right) \tag{1}\] Since we train the NeRFs on COLMAP's camera poses, we must convert perturbed camera poses \({}^{W}T_{k}^{\tilde{C}}\) to COLMAP's frame in order to render the observations. In other words, we must call NeRF-Render\(\left({}^{V}H_{k}^{\tilde{C}},F_{\Theta}^{\tau}\right)\), where: \[{}^{V}H_{k}^{\tilde{C}} =\left(\text{Rot}\left[{}^{V}T_{k}^{\tilde{C}}\right],\beta\ \text{Trans}\left[{}^{V}T_{k}^{\tilde{C}}\right]\right) \tag{2}\] \[{}^{V}T_{k}^{\tilde{C}} ={}^{V}T^{W}\ {}^{V}T_{k}^{\tilde{C}}. 
\tag{3}\] Both \(\beta\) and \({}^{V}T^{W}\) can be estimated from the pairs \(\{\left({}^{W}T_{k}^{C},{}^{V}H_{k}^{C}\right)\}_{k=1}^{K}\), as described in Appendix D.2. **Static Scene Assumption.** An additional consideration for robotic manipulation is that the standard NeRF formulation Figure 3: An overview of the SPARTN training process. A NeRF is trained for each of the original demonstrations in \(\mathcal{D}\). We use these NeRFs to generate visual corrective augmentations for each demonstration and collect them in \(\tilde{\mathcal{D}}\). The policy \(\pi_{\theta}\) can be trained on \(\mathcal{D}\) and \(\tilde{\mathcal{D}}\) using standard behavior cloning methods. Figure 2: An illustration of how SPARTN creates augmentations from an original demonstration (in reality, this process is repeated for every available demonstration). **(i)**: The eye-in-hand demonstration contains posed images \(\{(I_{k},T_{k})\}_{k=1}^{K}\). **(ii)**: We train a neural radiance field (NeRF) of the demonstration scene on the posed images. **(iii)**: We sample perturbations around each pose to **simulate** noise in the demonstration, and calculate the corrective action (in magenta) that would stabilize the trajectory. **(iv)**: We use the NeRF to render observations for the perturbed poses. The end result is augmented image-action pairs for improving behavior cloning. assumes a static scene, while manipulation tasks such as grasping will usually move objects in the scene. To address this, we apply the NeRF training and augmentation process to only the subsets of each demonstration trajectory where no robot-object interaction occurs, instead of all timesteps. For grasping, a simple and effective heuristic is to only apply SPARTN to the portions of each demonstration _before_ the expert closes the gripper. **Masking Out the Robot Gripper.** The robot's gripper is often within the view of an eye-in-hand camera, which breaks the static scene assumption. To address this, we leverage that the gripper is in the same location in each image assuming the camera is rigidly mounted and the gripper is open. Further, since NeRF is trained per-ray, we can simply mask pixels from training images. We construct a single binary mask \(M\in\{0,1\}^{w\times h}\), where a \(1\) indicates gripper pixels to mask out. Figure 4 shows how we use the same mask to splice the gripper back into each NeRF rendering output, \(\tilde{I}_{k}\leftarrow\neg M\odot\tilde{I}_{k}+M\odot I_{k}\), where \(\odot\) is element-wise multiplication broadcasted across the color channels. **NeRF Quality.** Even when the training views from a demonstration are suboptimal for NeRF training, SPARTN benefits from the fact that our augmentation is local and the perturbations \(\varepsilon\) are typically small, so we only need the NeRF to generalize in a small region around the demonstration trajectory. As we will verify in the experiments, SPARTN can be effective even with a limited number of training views compared to other NeRF applications. ### NeRFing Corrective Noise Augmentation Given a visual augmentation model (Sec. 3.3), we can adapt methods for corrective augmentation (Sec. 3.1) into the visual domain. Our goal is to create noise-distributed corrective transitions \(\{(\tilde{I}_{k},\tilde{T}_{k},\tilde{a}_{k})\}\). First, we describe this simply in the global frame. 
In order to sample from the noise-distributed corrective trajectory, one can first apply noise to the measured end-effector pose, \(\tilde{T}_{k}:=T_{k}\varepsilon\), where \(\varepsilon\sim\text{NoiseDist}(SE(3))\) is a randomly sampled rotation and translation. The high-fidelity perturbed image \(\tilde{I}_{k}\) corresponding to \(\tilde{T}_{k}\) can then be rendered using the trajectory-specific NeRF \(F^{\varepsilon}_{\Theta}\), without requiring access to the actual environment. For the actions, the simplest case is when they are inputs to a global-frame stabilizing Cartesian controller controller [25], in which case \(\tilde{a}_{k}=a_{k}=\hat{T}_{k}\) will provide stabilization to the nominal trajectory, where \(\hat{T}_{k}\) is the _desired_ pose sent to the lower-level controller. **Corrective Relative Actions.** As is common in prior works, we observe better performance by parameterizing the learned policy as a _relative_ rather than global action. Consider the as-discussed global-frame version, with (1) _observations_ as a global-frame measured SE(3) end-effector pose \({}^{W}T^{E}_{k}\), where \(W\) refers to world-frame, and \(E\) to the end-effector frame at timestep \(k\), and (2) _action_ as a global-frame desired SE(3) end-effector pose \({}^{W}T^{\tilde{E}}_{k}\). To switch to a relative action space, we adjust the action to \[a_{k}={}^{E}T^{\tilde{E}}_{k}=({}^{W}T^{E}_{k})^{-1}\;{}^{W}T^{\tilde{E}}_{k }=\;{}^{E}T^{WW}_{k}T^{\tilde{E}}_{k}. \tag{4}\] To additionally formulate the corrective noise augmentation in the relative frame, we consider the SE(3)-noise \(\varepsilon\) as transforming from the noisy end-effector frame to the measured end-effector frame, i.e. \({}^{E}T^{\tilde{E}}:=\varepsilon\). This accordingly adjusts the observation as \({}^{W}T^{\tilde{E}}_{k}=\;{}^{W}T^{E}_{k}T^{\tilde{E}}_{k}=\;{}^{W}T^{E}_{k}\varepsilon\) and the _relative_ action as: \[\tilde{a}_{k}={}^{\tilde{E}}T^{\tilde{E}}_{k}=({}^{W}T^{E}_{k}\;{}^{E}T^{ \tilde{E}}_{k})^{-1}\;{}^{W}T^{\tilde{E}}_{k}=\varepsilon^{-1}\;a_{k}. \tag{5}\] Concisely, this amounts to post-pending \(\varepsilon\) to the measured pose, and pre-pending \(\varepsilon^{-1}\) to the un-noised relative action. **Non-\(\text{SE}(3)\) actions.** Thus far we have assumed that the actions \(a\in\text{SE}(3)\) only command desired pose transforms. In practice, the action space may contain _additional_ dimensions, for example to open and close the gripper itself. The SPARTN augmentation does not concern these aspects of the action, so the corrective actions simply copy the original action for non-\(\text{SE}(3)\) dimensions. **Summary.** Figure 2 summarizes our augmentation procedure: given a demonstration dataset \(\mathcal{D}\), we first train all the neural radiance fields \(\{F^{\varepsilon}_{\Theta}\}_{\tau\in\mathcal{D}}\) and save the weights to disk. We then augment each transition in the original dataset with \(N_{\text{aug}}\) noisy corrective transitions produced using the process we described above, and save these transitions into an augmented dataset \(\tilde{\mathcal{D}}\). Appendix Algorithm 2 describes the precise augmentation procedure in detail. After the augmented dataset has been created, it can simply be combined with the original dataset to augment BC training. Various sampling strategies are possible, but in our experiments we simply construct mini-batches for BC training by sampling from the original and augmented datasets with equal probability. 
## 4 Simulation Experiments We evaluate SPARTN and related approaches in the simulated 6-DoF grasping benchmark first introduced in [58], which features a simulated Franka Emika Panda arm with Figure 4: An illustration of how the gripper is inserted into the result of the NeRF rendering process. Gray regions indicate pixels being masked out by the binary gripper mask \(M\in\{0,1\}^{w\times h}\), which denotes the pixels where the gripper is located in all frames. a parallel-jaw gripper. Objects are placed on a table and must be lifted above a height threshold in a successful grasp. Policies receive either RGB, RGBD, or point cloud observations from a wrist camera, and control the gripper by commanding relative pose changes in 6-DoF end-effector space. ### Data Collection and Evaluation Protocol We follow the training and evaluation procedure from [58]. The training dataset includes \(2{,}500\) demonstrations of grasping \(1{,}500\) ShapeNet [6] objects. Demonstrations are up to \(20\) timesteps and are generated by trajectory optimization to precomputed grasps from the ACRONYM dataset [11]. Policies are evaluated on grasping _held-out_ objects from the YCB [5] or ShapeNet datasets. Though it is more "out of distribution" relative to the training objects, the YCB evaluation is more realistic for tabletop grasping scenarios (ShapeNet includes a wide variety of objects, e.g. airplanes and bicycles). Each held-out object is evaluated ten times, each with a different initial robot configuration and object's initial pose. ### Comparisons We compare SPARTN against other approaches ranging from simple behavior cloning to online-supervised imitation and reinforcement learning: **Behavior Cloning (BC)**: Trains policies via supervised learning on the demonstration dataset. **DART**[29]: Introduces a modified demonstration setup where a continuous Gaussian noise is injected into the robot's state _during_ the expert demonstration, then trains policies on the modified demonstrations with BC. **Homography Augmentation (HA)**: A simplification of SPARTN where perturbations can be 3D rotations, but not translations, of the camera. For pure rotation, we can calculate the homography transform for rendering the rotated view without NeRF. Augmented actions are computed similarly to SPARTN. **DAgger**[46]: An online-supervised method where a policy is first trained using BC on offline demos, and then the expert provides action labels for states from the policy's rollouts throughout further policy training. **GA-DDPG**[58]: Jointly trains policies via BC and fine-tunes them with reinforcement learning (DDPG [51]). Of the methods considered, DAgger and GA-DDPG require online environment interaction and supervision from an expert or reward function, respectively. The other methods, including SPARTN, train only on pre-collected demonstration datasets. Since the DART data collection setup noisily perturbs the robot as the expert demonstrates, DART can interfere with human expert demonstration, though this challenge does not arise here with our simulated expert. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Supervision & Method & Input & YCB SR(\%) & SN SR(\%) \\ \hline \multirow{4}{*}{Offline} & BC & RGB & \(28.9\pm 2.4\) & \(57.4\pm 0.2\) \\ & & RGB & \(51.2\pm 5.2\) & \(57.8\pm 2.2\) \\ & DART\({}^{\dagger}\) & RGBD & \(4.6\) & \(4.5\) \\ & Point cloud & _65.6_ & _73.6_ \\ & \({}^{*}\)HA\({}^{*}\) & RGB & \(26.7\pm 2.4\) & \(57.5\pm 0.8\) \\ & SPARTN (ours) & RGB & \(74.7\pm 2.4\) & \(66.9\pm 1.6\) \\ \hline \multirow{4}{*}{Online\({}^{*}\)} & RGB & \(53.3\) & \(53.1\) \\ & DAgger & RGBD & \(67.1\) & \(60.4\) \\ & Point cloud & \(77.2\) & \(75.8\) \\ & GA-DDPG & Point cloud & \(88.2\) & \(57.3\) \\ \hline \hline \end{tabular} \end{table} Table 2: Grasping success rates (SR) on _held-out objects_ from YCB [5] or ShapeNet (SN) [6] in a simulated 6-DoF grasping benchmark [58]. We **bold** the best offline RGB-only results, though we include online and non-RGB methods for comparison. Online\({}^{*}\) requires additional environment interactions, while Offline only uses demonstration data. DART\({}^{\dagger}\) is offline but requires a special demonstration collection setup. SPARTN outperforms other offline RGB-only methods, while RL (GA-DDPG) performs best overall while requiring millions of interactions. We calculate average success rates and standard error over 4 random seeds. _Italicized_ success rates were reported in prior work’ [58]. Figure 5: Example tasks in the simulated 6-DoF grasping benchmark, first introduced in [58]. There are \(\sim 1{,}500\) ShapeNet objects in training demonstrations, and held out YCB objects for evaluation. A camera mounted to the Franka Panda robot’s arm provides observations, and the policy controls the 6-DoF end-effector pose. \begin{table} \begin{tabular}{c c c} \hline \hline Method & Image aug. & YCB SR(\%) \\ \hline \multirow{3}{*}{BC} & Without & \(25.3\pm 1.6\) \\ & With & \(28.9\pm 2.4\) \\ & Without & \(51.2\pm 3.2\) \\ & With & \(48.6\pm 2.8\) \\ & Without & \(23.9\pm 4.0\) \\ & With & \(29.7\pm 2.4\) \\ & Without & \(68.3\pm 3.6\) \\ & With & \(74.7\pm 2.4\) \\ \hline \hline \end{tabular} \end{table} Table 3: Ablating the effect of standard image augmentation on each method for RGB policies. Average success rates and standard errors are calculated over four seeds. ### Training Details We follow the RGB architectural and training hyperparameters of [58], with policies consisting of a ResNet-18 [15] image encoder followed by an MLP. We apply random crop and color jitter to the training images. We re-use the same training BC hyperparameters for SPARTN. Appendix C.1 describes the complete training details. For SPARTN, we create \(N_{aug}=100\) augmented transitions from each original transition and save the augmented dataset to disk before BC training. To sample the perturbations \(\varepsilon\sim\text{NoiseDist}(\text{SE}(3))\), we parameterize the rotations in terms of Euler angles \((\phi,\theta,\varphi)\) and uniformly sample both rotation and translation parameters: \[(\phi,\theta,\varphi),(t_{x},t_{y},t_{z}) \sim\mathcal{U}(-\alpha,\alpha),\mathcal{U}(-\beta,\beta) \tag{6}\] \[\varepsilon :=(R(\phi,\theta,\varphi),(t_{x},t_{y},t_{z})) \tag{7}\] In simulation, we set \(\alpha=0.2\) radians and \(\beta=3\) mm. Following the DART hyperparameters in [58], we only augment timesteps \(5-13\) of each demonstration. Appendix B contains samples of SPARTN's perturbed observations rendered via NeRF. 
### Results Table 2 shows grasping success rates on held-out objects from either the YCB or ShapeNet (SN) datasets. SPARTN significantly outperforms the other two offline-supervised approaches for RGB inputs. Since DART injects noise during expert collection, the amount of augmentation is limited by the number of demonstrations the expert can collect. Meanwhile, SPARTN can cheaply generate an arbitrary number of augmented examples on top of the existing demonstration dataset, leading to more data diversity without any additional effort from the expert. See the supplementary material for videos of rollouts for SPARTN and BC policies, with success and failure cases. Naive BC performs relatively well on ShapeNet evaluation, likely because the evaluation set is "in-distribution". Correspondingly, corrective methods (DART, SPARTN) achieve smaller improvements over BC on ShapeNet and larger improvements on YCB. On YCB, SPARTN's performance is closer to RL (GA-DDPG) than to other offline methods. It actually outperforms DAgger for RGB policies, perhaps because DAgger's data distribution changes online presenting a more challenging learning problem. Notably, on YCB SPARTN outperforms DART with point clouds and is comparable to DAgger with point clouds. Since SPARTN itself only requires RGB images during both NeRF training and policy training, it significantly closes the gap between the performance of RGB-only and the best depth-based methods. Since consumer depth cameras can struggle with common thin and reflective items [62], improving RGB-only grasping can enable more robust grasping of such objects. **Effect of Image Augmentations.** We try ablating the effect of standard image augmentations (random crop and color jitter) for the BC, DART, HA, and SPARTN methods. Table 3 shows that the image augmentations have a small effect on the performance of most methods, relative to the larger effect of SPARTN's corrective augmentations. ## 5 Real-World Experiments The simulation benchmark results show that SPARTN can improve grasping generalization in imitation learning without online supervision. Here, we verify that SPARTN can enable real-world robotic grasping of challenging objects from limited human demonstration. See the website in the supplement for video results. ### Experimental details **Robot setup.** The robotic manipulator is a Franka Emika Panda robot arm with a wrist-mounted consumer grade webcam. Policies take images of size \(256\times 256\) as input and output a 6-DoF action representing the desired change in end-effector position and orientation, as well as a binary \begin{table} \begin{tabular}{c c c c} \hline \hline Target Object & \# Demos & BC SR(\%) & **SPARTN SR(\%)** \\ \hline Banana & \(14\) & \(55\) & \(\mathbf{75}\) \\ Thin box & \(20\) & \(35\) & \(\mathbf{65}\) \\ Steel water bottle & \(15\) & \(20\) & \(\mathbf{40}\) \\ Wine glass & \(25\) & \(70\) & \(\mathbf{90}\) \\ Lubriderm bottle & \(17\) & \(25\) & \(\mathbf{60}\) \\ White tape roll & \(15\) & \(40\) & \(\mathbf{45}\) \\ Tupperware & \(20\) & \(40\) & \(\mathbf{75}\) \\ Fork & \(20\) & \(25\) & \(\mathbf{40}\) \\ \hline Average & & \(38.75\) & \(\mathbf{61.25}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Success rates (SR) of behavior cloning (BC) and SPARTN 6-DoF grasping policies on a suite of eight real-world target objects. SPARTN outperforms BC in every environment, achieving an average absolute performance boost of \(22.5\%\). Each success rate is computed over \(20\) trials. 
Figure 6: Real-world grasping environments. In each environment, the task is to grasp the labeled target object in a particular way. The target objects have various geometric shapes and exhibit a diverse range of characteristics, including reflectiveness, transparency, and radial symmetry. open/close gripper action. We use a Cartesian impedance controller to command the pose changes at a frequency of 4 Hz. We task the robot to grasp a target object in eight different environments depicted in Figure 6. The target objects include natural shapes that are common in the real world and exhibit a diverse range of attributes, such as reflective-ness, transparency, and radial symmetry. **Comparisons and Evaluation.** In each environment, we collect a small number of expert grasping demonstrations with a virtual reality controller. Because DART is difficult to use with human demonstration collection, we compare SPARTN to a vanilla BC policy trained on the same set of demonstrations. Policies are evaluated with the same objects seen in the demonstrations. Initial object and robot configurations are randomized during both data collection and evaluation. **Training.** To improve NeRF quality for SPARTN, we program the robot to collect a few images of the scene from a fixed set of poses before the start of each demonstration. This automatic process is only used to improve COLMAP's pose estimation and the subsequent NeRF training. For SPARTN, we generate \(N_{aug}=50\) augmented transitions from each original transition in the demonstrations. We sample perturbations \(\epsilon\) according to Eq. 6 with \(\alpha=0.05\) radians and \(\beta=0.4\) mm. Appendix D.2 describes COLMAP and NeRF training in more detail, and Appendix B shows sample SPARTN observations rendered by NeRF. Aside from using the augmented dataset, SPARTN policies are trained using the same BC architecture and training hyperparameters described in Appendix D.1. ### Results Table 4 shows grasping success rates in the eight real-world environments. Quantitatively, SPARTN policies outperform the baseline BC policies across the board, on average achieving an absolute \(22.5\%\) increase in success rate. Figure 7 shows qualitative differences in performance between the BC and SPARTN policies. SPARTN generally exhibits more reactive behaviors than the baseline policy: it navigates towards the target object better while occasionally avoiding obstacles, executes actions with greater precision, and even reattempts the grasp more successfully after an initial miss. In some cases, the differences are stark: for instance, SPARTN may successfully move toward and grasp the target object while the baseline fails to even reach it. We present further analysis of policy rollouts in Appendix D.3, showing how SPARTN qualitatively performs better than BC even in cases where both methods fail to grasp the target object. The supplementary videos of real-world policy rollouts illustrate all of these differences, revealing that SPARTN induces important reactive closed-loop behaviors that enable the manipulator to successfully execute grasps in the real world. ## 6 Conclusion We introduce SPARTN, which augments eye-in-hand demonstrations with perturbed visual observations and corrective actions. Our augmentation leverages novel-view synthesis, in particular NeRF, to produce these augmentations during _training_-time. SPARTN can improve behavior cloning training of robust, real-time, and closed-loop 6-DoF visual control policies. 
We show that SPARTN-trained policies outperform other offline-supervised methods in a simulated 6-DoF grasping generalization benchmark. Our policies can also perform on par with imitation methods that require depth information and online supervision. We verify that SPARTN can train policies to grasp a variety of objects in the real world from limited human demonstrations. Despite its strong performance, SPARTN also has some limitations. First, SPARTN is limited to tasks with static scenes like grasping: extending to a broader set of manipulation tasks would require effective view synthesis for dynamic scenes. Second, training a neural radiance field for every demonstration before training is computationally expensive, a limitation that may be mitigated through amortized NeRF models [55, 59, 64]. Finally, the noise distribution must be tuned for a given platform, e.g. it is tuned separately for the simulated and real experiments. An interesting direction for future work is to use policy actions to generate the noise, which would result in a fully offline variant of DAgger [46] that uses NeRF as a simulator. Figure 7: Sample real-world policy rollouts illustrating how SPARTN succeeds (green) in cases where BC fails (red). **Top left:** While BC fails to reach the steel bottle, SPARTN successfully reaches and grasps it. **Top right**: While BC collides into the orange bottle (a distractor object), SPARTN takes a more rounded path to avoid it before grasping the white Lubricherdam bottle. **Bottom left**: BC fails to recover after a missed grasp, while SPARTN successfully reattempts the grasp after failing the first time. **Bottom right**: SPARTN operates with higher precision than BC and successfully completes a difficult fork grasping task. ## 7 Acknowledgements We thank Jimmy Wu, Kaylee Burns, Bohan Wu, and Suraj Nair for technical advice on various aspects of the real robot setup, and Archit Sharma for helpful conceptual discussions. This project was funded by ONR grant N00014-21-1-2685. AZ acknowledges the support of the NSF Graduate Research Fellowship. CF is a fellow of the CIFAR Learning in Machines and Brains Program.
2302.11557
K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging
In this paper, we consider the problem of disease diagnosis. Unlike the conventional learning paradigm that treats labels independently, we propose a knowledge-enhanced framework, that enables training visual representation with the guidance of medical domain knowledge. In particular, we make the following contributions: First, to explicitly incorporate experts' knowledge, we propose to learn a neural representation for the medical knowledge graph via contrastive learning, implicitly establishing relations between different medical concepts. Second, while training the visual encoder, we keep the parameters of the knowledge encoder frozen and propose to learn a set of prompt vectors for efficient adaptation. Third, we adopt a Transformer-based disease-query module for cross-model fusion, which naturally enables explainable diagnosis results via cross attention. To validate the effectiveness of our proposed framework, we conduct thorough experiments on three x-ray imaging datasets across different anatomy structures, showing our model is able to exploit the implicit relations between diseases/findings, thus is beneficial to the commonly encountered problem in the medical domain, namely, long-tailed and zero-shot recognition, which conventional methods either struggle or completely fail to realize.
Chaoyi Wu, Xiaoman Zhang, Yanfeng Wang, Ya Zhang, Weidi Xie
2023-02-22T18:53:57Z
http://arxiv.org/abs/2302.11557v2
# K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging ###### Abstract In this paper, we consider the problem of disease diagnosis. Unlike the conventional learning paradigm that treats labels independently, we propose a knowledge-enhanced framework, that enables training visual representation with the guidance of medical domain knowledge. In particular, we make the following contributions: **First**, to explicitly incorporate experts' knowledge, we propose to learn a neural representation of medical knowledge graph via contrastive learning, implicitly establishing relations between different medical concepts. **Second**, while training the visual encoder, we keep the parameters of the knowledge encoder frozen and propose to learn a set of prompt vectors for efficient adaptation. **Third**, we adopt a Transformer-based disease-query module for cross-model fusion, which naturally enables explainable diagnosis results via cross attention. To validate the effectiveness of our proposed framework, we conduct thorough experiments on three x-ray imaging datasets across different anatomy structures, showing our model can exploit the implicit relations between diseases/findings, thus is beneficial to the commonly encountered problem in the medical domain, namely, long-tailed and zero-shot recognition, which conventional methods either struggle or completely fail to realize. ## 1 Introduction The application of artificial intelligence (AI) has delivered impressive results in diagnosing diseases from medical scans [23]. A commonly adopted framework is to train vision models by supervised learning with discrete labels and predict pathology categories within a fixed-size vocabulary at inference time [6, 34]. However, such a learning paradigm suffers from two limitations: _first_, the model is unable to generalize toward previously unseen categories; _second_, the labels are converted into one-hot vectors as illustrated in Fig. 1, that are **orthogonal** in the embedding space, leaving the intrinsic relations between different pathologies or diseases unexploited. In the recent literature, jointly training visual-language models has shown promising progress in computer vision [30, 31], often called Foundation Models. For example, CLIP [31] and ALIGN [17] have demonstrated remarkable "zero-shot" generalization for various downstream tasks by learning the joint representation of image and text with simple noise contrastive learning. Crucially, the data used to train these powerful foundation models can simply be crawled from the Internet without laborious manual annotation. However, as commonly known, collecting training data at scale in the medical domain is often impractical [6, 2], due to its safety-critical nature. In this paper, we, therefore, explore an alternative by injecting medical expert knowledge into the visual representation learning procedure. We present a novel knowledge-enhanced classification framework, as shown in the upper right of Fig. 
1, _first_, to explicitly incorporate experts' knowledge, we build a neural representation for the medical knowledge graph via contrastive learning, acting as a "knowledge encoder" that explicitly encodes the relations between medical concepts in the embedding space; _second_, while training the visual encoder, we keep the parameters of the knowledge encoder frozen, and only learn a set of prompt vectors for efficient adaptation; _third_, we adopt a Transformer-based disease-query module for text-image cross-modality fusion, where the disease names act as queries that cross-attend the visual features and infer the likelihood of disease existence; naturally, this enables explainable diagnosis results via the cross attentions. To demonstrate the effectiveness of the proposed knowledge-enhanced classification framework, we conduct thorough experiments and analyze them from three perspectives: _first_, we experiment on three disease diagnosis tasks across different anatomy structures, to show our method is efficient across various image distributions; _second_, we train on a combination of 11 public chest x-ray datasets, showing our model can better exploit the potential of various public datasets regardless of their annotation granularity, which the traditional training paradigm can only slightly benefit from; _third_, we perform zero-shot disease diagnosis, _i.e._, evaluating unseen classes on PadChest [4], achieving an AUC of at least 0.600 on 79 out of 106 unseen radiographic findings. Note that such a task is completely unachievable in conventional supervised training. Figure 1: In the conventional training scheme (left), manual annotations are often converted into **discrete** one-hot vectors that are **orthogonal** in the embedding space, thus ignoring the implicit relations between labels. In our proposed knowledge-enhanced classification framework (right), the labels are transformed into **continuous** vectors in a knowledge embedding space to capture the implicit relations, and further used to supervise visual representation learning. ## 2 Method In this section, we start by describing the considered problem scenario in Sec. 2.1, followed by the procedure of condensing the medical domain knowledge into a text encoder in Sec. 2.2. In Sec. 2.3, we detail the proposed knowledge-enhanced classification model, including the visual encoder, knowledge encoder, learnable prompt module, and disease-query module, and describe the training procedure. ### Problem Scenario Given a dataset with \(N\) sample pairs, _i.e._, \(\mathcal{D}_{\text{train}}=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{N},y_{N})\}\), where \(x_{i}\in\mathbb{R}^{H\times W\times 3}\) refers to the input image, and \(y_{i}\in\mathcal{T}=\{t_{1},\ldots,t_{Q}\}\) denotes the ground truth annotation from a pool of \(Q\) candidate diseases. Unlike conventional supervised learning that often converts the labels to one-hot vectors, our goal is to train a classification model that leverages the semantics encapsulated in the disease category texts, specifically, \[S_{i}=\Phi_{\text{query}}(\Phi_{\text{visual}}(x_{i}),\Phi_{\text{prompt}}(\Phi_{\text{knowledge}}(\mathcal{T}))), \tag{1}\] where \(S_{i}\in\mathbb{R}^{Q}\) refers to the inferred likelihood of the patient having any disease in \(\mathcal{T}\). 
\(\Phi_{\text{knowledge}}(\cdot),\Phi_{\text{prompt}}(\cdot),\Phi_{\text{visual}}(\cdot),\Phi_{\text{query}}(\cdot)\) refer to the trainable modules in our proposed knowledge-enhanced classification framework, which will be detailed in the following sections. ### Knowledge Encoder To explicitly incorporate experts' knowledge, we propose to inject the medical domain knowledge into a text encoder (\(\Phi_{\text{knowledge}}\)), by implicitly modeling the relations between medical entities in the textual embedding space. Specifically, we employ an off-the-shelf knowledge graph in the medical community, namely, Unified Medical Language System (UMLS) [3], to fine-tune a pre-trained BERT language model. In the following section, we detail the training procedure, as shown in Fig. 2. **Notation.** Let \(\mathcal{D}_{\text{UMLS}}=\{(n_{i},d_{i})\}_{i=1}^{|\mathcal{D}_{\text{UMLS}}|}\) denote a concept dictionary for UMLS in text form, where each concept (\(n_{i}\)) is associated with one corresponding definition (\(d_{i}\)); for example, the concept "pulmonary infiltrate" is defined as "A finding indicating the presence of an inflammatory or neoplastic cellular infiltrate in the lung parenchyma". **Training.** Here, we train the text encoder by maximizing the similarities between positive concept-definition pairs, _i.e._, pulling each language description close to its corresponding concept in the textual embedding space. Given \(N\) randomly sampled concepts and definitions, we pass them through a standard BERT architecture [8], and take the average-pooled features as their textual embeddings, _i.e._, the concept embeddings \(\mathbf{n}\in\mathbb{R}^{N\times d}\) and definition embeddings \(\mathbf{d}\in\mathbb{R}^{N\times d}\). At training time, each mini-batch can be expressed as \(\{(n_{i},d_{i})\}_{i=1}^{N}\), and the model can be trained via contrastive learning [28]: \[\mathcal{L}_{\text{contrastive}}=-\frac{1}{2N}\sum_{i=1}^{N}\Big(\log\frac{e^{\langle\mathbf{n}_{i},\mathbf{d}_{i}\rangle/\tau}}{\sum_{k=1}^{N}e^{\langle\mathbf{n}_{i},\mathbf{d}_{k}\rangle/\tau}}+\log\frac{e^{\langle\mathbf{d}_{i},\mathbf{n}_{i}\rangle/\tau}}{\sum_{k=1}^{N}e^{\langle\mathbf{d}_{i},\mathbf{n}_{k}\rangle/\tau}}\Big), \tag{2}\] where \(\tau\in\mathbb{R}^{+}\) is a scalar temperature parameter. Once this is trained, the text encoder effectively becomes a "knowledge encoder" with domain experts' knowledge injected. ### Knowledge-enhanced Classification Model After injecting the domain knowledge into the text encoder, we now describe the procedure to guide the visual representation learning with the knowledge encoder. Specifically, the classification model consists of four core modules, namely, the visual encoder, frozen knowledge encoder, prompt module, and disease-query module. **Visual Encoder.** Given an image scan \(x_{i}\in\mathbb{R}^{H\times W\times 3}\), we compute the features with a visual backbone, which can be either ResNet [13] or Vision Transformer [9], \(\mathbf{x}_{i}=\Phi_{\text{visual}}(x_{i})\in\mathbb{R}^{h\times w\times d}\), where \(d\) refers to the feature dimension and \(h,w\) denote the size of the output feature map; the feature dimension \(d\) is set to 256. **Frozen Knowledge Encoder.** Given the disease categories \(\mathcal{T}\), we compute the features with the pre-trained knowledge encoder: \(\mathbf{T}=\Phi_{\text{knowledge}}(\mathcal{T})\in\mathbb{R}^{Q\times d}\), where \(d\) refers to the feature dimension, and \(Q\) refers to the category number. 
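For concreteness, a minimal PyTorch-style sketch of the symmetric contrastive objective in Eq. (2) used to train the knowledge encoder is given below; the function name, the L2 normalisation, and the default temperature are our own assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(concept_emb, definition_emb, tau=0.07):
    """Symmetric InfoNCE over N concept-definition pairs, as in Eq. (2).

    concept_emb, definition_emb: (N, d) average-pooled BERT features.
    Matched pairs share the same row index; all other rows act as negatives.
    """
    n = F.normalize(concept_emb, dim=-1)       # assumption: embeddings are L2-normalised
    d = F.normalize(definition_emb, dim=-1)
    logits = n @ d.t() / tau                   # (N, N) pairwise similarities
    targets = torch.arange(n.size(0), device=n.device)
    loss_n2d = F.cross_entropy(logits, targets)        # concept -> definition direction
    loss_d2n = F.cross_entropy(logits.t(), targets)    # definition -> concept direction
    return 0.5 * (loss_n2d + loss_d2n)
```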
As the number of classes in downstream medical diagnosis datasets is usually extremely limited, _e.g._, 10 diseases on VinDr-Mammo [25], we keep the parameters of the knowledge encoder frozen to prevent it from over-fitting on certain training classes and deviating from the originally embedded knowledge graph, and use it to guide the learning of the visual encoder; effectively, such a training procedure resembles knowledge injection into visual representation learning. Figure 2: Overview of the knowledge-enhanced disease diagnosis workflow. The knowledge encoder (left) is first trained to learn a neural representation of the medical knowledge graph via contrastive learning, and then used to guide the visual representation learning in our knowledge-enhanced classification model (right). **Learnable Prompt Module.** To bring more flexibility, we also introduce a learnable prompt module (\(\Phi_{\text{prompt}}\)) for efficient knowledge adaptation. Specifically, as shown in the lower-right of Fig. 2, it consists of a set of learnable vectors, _i.e._, \(\mathbf{h}\in\mathbb{R}^{N\times d}\), where \(N\) denotes the number of prompt vectors and \(d\) is the embedding dimension of each vector. Given the disease embeddings (\(\mathbf{T}\in\mathbb{R}^{Q\times d}\)), we first use an MLP to project them into a probability distribution over the learnable prompt vectors, \(\mathbf{p}=\text{SoftMax}(\text{MLP}(\mathbf{T}))\in\mathbb{R}^{Q\times N}\). Then, the output of the Prompt Module can be calculated as the matrix multiplication between \(\mathbf{p}\) and \(\mathbf{h}\), _i.e._, \(\mathbf{k}=\Phi_{\text{prompt}}(\mathbf{T})=(\mathbf{p}\cdot\mathbf{h})\in\mathbb{R}^{Q\times d}\). **Disease-Query Module.** We use a 4-layer transformer decoder with an MLP to get the final prediction. Given the disease categories \(\mathcal{T}\), we have converted them into a set of disease embeddings (\(\mathbf{k}\)) with the pre-trained knowledge encoder and learnable prompt module. As inputs to the Transformer decoders, disease embeddings are treated as queries, and the encoded features (\(\mathbf{x}_{i}\)) act as the key and value of the disease-query module: \(s_{i}=\Phi_{\text{query}}(\mathbf{x}_{i},\mathbf{k})\in\mathbb{R}^{Q\times C}\), where \(C\) represents the class number and is set as 2, since the diagnosis tasks are all binary classification. Cross-entropy loss is used as the optimization function. ## 3 Experiments ### Datasets In this paper, we conduct experiments on datasets of X-ray images, since sufficient data exists across anatomy structures and annotated pathology categories in this field, supporting thorough evaluation. **VinDr-PCXR [27]** is a new pediatric CXR dataset of 9,125 studies, which was officially divided into a training set and a test set of 7,728 and 1,397 studies respectively. Each scan in the training set was manually annotated for the presence of 15 diseases by a pediatric radiologist who has more than ten years of experience, while in the official test set, there are 11 diseases. Additionally, for fair and robust evaluation, we further merged the rare diseases (fewer than 5 positive samples) into "other diseases", resulting in only 6 classes in the test set, including no finding, bronchitis, broncho-pneumonia, other disease, bronchiolitis, and pneumonia. **VinDr-Mammo [25]** is a full-field digital mammography dataset comprising 20,000 images (5,000 four-view scans). 
Each scan was manually annotated for no finding or the presence of 10 mammography findings, including mass, calcification, asymmetry, focal asymmetry, global asymmetry, architectural distortion, suspicious lymph node, skin thickening, skin retraction, and nipple retraction. The dataset was officially divided into a training set and a test set with 4,000 and 1,000 exams respectively. **VinDr-SpineXr [26]** is a spine X-ray dataset comprising 10,468 spine X-ray images from 5,000 studies. Each image was manually annotated by an experienced radiologist with no finding or abnormal findings in 7 categories, including osteophytes, foraminal stenosis, vertebral collapse, disc space narrowing, spondyllysis, surgical implant, and other lesions. The dataset was officially divided into a training set and a test set of 4,000 and 1,000 studies respectively. **PadChest [4]** is a chest X-ray dataset with 160,868 chest X-ray images labeled with 174 different radiographic findings and 19 differential diagnoses; only 27% of the labels (totaling 39,053 examples) come from board-certified radiologists, while the rest are obtained by using a recurrent neural network with attention trained on the radiology reports. For evaluation purposes, we only test on samples annotated by board-certified radiologists, and report the zero-shot test results. **CXR-Mix.** In this paper, we also construct a dataset by assembling 11 public datasets, including ChestXray-14 [33], GoogleNIH [22], Covid-CXR2 [29], CheXpert [15], Object-CXR [14], NLM-TB [16], RSNA [32], SIIM-ACR [11], VinDR [24], OpenI [7], and MIMIC-CXR [19], termed as **CXR-Mix**. We refer the readers to [6] for more details of these datasets. For the datasets [33, 15] with official train/val/test splits, we use them directly; for those [22, 24, 29, 14] with official train/test splits, we randomly split the train split with \(0.8/0.2\) for train/val; in the other cases, we randomly split the datasets [32, 11, 16, 7, 19] with \(0.7/0.1/0.2\) for train/val/test. As a result, our constructed CXR-Mix ends up with \(763,520\) chest X-rays for training, \(28,925\) for val, and \(28,448\) for test, spanning a total of 38 classes. **Note that** each of these datasets was originally collected to serve different purposes, so the annotations are often only partially available; for example, the images from the pneumonia dataset lack labels for pneumothorax. We use \(-1\) to denote a missing label and do not calculate the final CE loss on such entries. ### Implementation and Training Details **Knowledge-Enhanced Text Encoder.** To construct the knowledge encoder, we initialize the text encoder from ClinicalBERT [1], and finetune it for 100K training steps. In each mini-batch, 64 concept-definition pairs are used for training. We set the maximal sequence length as 256, though the definition could be long sometimes. We use AdamW [21] as the optimizer with \(lr=1\times 10^{-4}\) and \(lr_{\text{warm up}}=1\times 10^{-5}\). **Knowledge-Enhanced Classification Framework.** We freeze the knowledge encoder and set the other parts to be learnable, _i.e._, the visual encoder, prompt module, and disease-query module. We train the model for 100 epochs with batch size 128, and AdamW [21] is adopted as the optimizer with \(lr=1\times 10^{-4}\) and \(lr_{\text{warm up}}=1\times 10^{-5}\). ## 4 Results Here, we conduct experiments to validate the effectiveness of our knowledge-enhanced classification model. In Sec. 
4.1, we first compare knowledge-enhanced training with standard training using discrete labels across different architectures and then perform a thorough ablation study of proposed modules. In Sec. 4.2, we conduct an analysis on the proposed knowledge encoder by replacing it with other pre-trained language models. In Sec. 4.3, we experiment on **CXR-Mix**, to show that our model can effectively exploit knowledge across datasets with varying annotation granularity, we conduct analysis from three aspects: (i) the ability to combine various partial-labeled datasets, (ii) to leverage implicit relations between diseases, (iii) to diagnose diseases that are unseen at training time, resembling an open-set recognition scenario. ### Analysis of Knowledge-Enhanced Classification Model **Comparison to Conventional Training Scheme.** As baselines, we adopt the widely used ResNet-50 [13] and ViT-16 [10], and train with conventional learning scheme, _i.e._, using discrete labels. The results are summarized in Tab. 1, we refer the reader to detailed results for each category presented in the supplementary material (Tab. 4, 6 and 5). Our proposed knowledge-enhanced model achieves a higher average AUC on all three datasets across different architectures. **Ablation Study.** We conduct a thorough ablation study of the proposed model by removing individual modules and varying the hyper-parameters, as shown in Tab. 1. Specifically, the performance of ResNet-50 is improved to 71.39%, 83.23%, and 87.76% for the three different tasks equipped with the proposed knowledge encoder to guide the visual representation learning. While combining with the learnable prompt (LP) module, which potentially offers more flexibility for knowledge adaptation, we observe a significant performance gain, up to 2.58% on average AUC scores on VinDr-PCXR. The same conclusion can be drawn for the ViT-16 visual backbone. To analyze the effect of prompts, we experiment with different numbers of prompts. As shown, the optimal number of prompts varies from task to task, but in general, adding the LP module benefits all the downstream tasks. The only exception is for the VinDR-Mammo task with ResNet as a backbone, which is mainly caused by the extremely small test samples in some categories (Tab. 5), _e.g._, skin retraction and skin thickening. **Qualitative Visualisation.** To provide a visualization that can potentially be used for clinicians to discover and understand the evidence that AI algorithm bases its predictions on, the disease-query module in our proposed architecture enables detailed visualization for each of the queries with positive output. Specifically, we average the cross-attention map in each transformer layer in the disease query module, and visualize the results in Fig. 3. The model's attention well matches radiologists' diagnoses of different diseases, _i.e._ red boxes labeled by board-certified radiologists. 
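As a rough illustration of how such attention-based visualisations could be assembled from the disease-query module, the sketch below averages per-layer cross-attention weights for one disease query; the tensor layout and variable names are assumptions, not the authors' code.

```python
import numpy as np

def disease_attention_heatmap(attn_maps, query_idx, h, w):
    """Average cross-attention over decoder layers for one disease query.

    attn_maps: list of arrays, one per transformer decoder layer, each of
               shape (Q, h*w) -- attention from the Q disease queries to the
               h*w visual tokens (hypothetical layout).
    query_idx: index of the disease query to visualise.
    """
    stacked = np.stack([layer[query_idx] for layer in attn_maps], axis=0)  # (L, h*w)
    avg = stacked.mean(axis=0).reshape(h, w)                               # layer-averaged map
    avg = (avg - avg.min()) / (avg.max() - avg.min() + 1e-8)               # normalise to [0, 1]
    return avg  # can be resized to the input resolution and overlaid on the x-ray
```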
\begin{table} \begin{tabular}{l l l l|c c c} \hline \hline Model & Backbone & KE & LP & VinDr-PCXR & VinDr-Mammo & VinDr-SpineXr \\ \hline Res [13] & ResNet-50 & ✗ & ✗ & 70.53 \(\pm\) 1.00 & 82.54 \(\pm\) 1.79 & 87.35 \(\pm\) 0.38 \\ Res+KE & ResNet-50 & ✓ & ✗ & 71.39 \(\pm\) 0.66 & 83.23 \(\pm\) 1.98 & 87.76 \(\pm\) 0.41 \\ Res+KE+LP & ResNet-50 & ✓ & 32 & **73.97 \(\pm\) 0.28** & 80.59 \(\pm\) 3.11 & 88.46 \(\pm\) 0.22 \\ Res+KE+LP & ResNet-50 & ✓ & 64 & 72.88 \(\pm\) 0.19 & 81.27\(\pm\) 1.83 & **88.90 \(\pm\) 0.04** \\ Res+KE+LP & ResNet-50 & ✓ & 128 & 73.70 \(\pm\) 0.41 & **84.80 \(\pm\) 1.04** & 88.22 \(\pm\) 0.53 \\ \hline ViT [9] & ViT-16 & ✗ & ✗ & 69.06\(\pm\) 0.74 & 80.50 \(\pm\) 2.47 & 85.56 \(\pm\) 0.97 \\ ViT+KE & ViT-16 & ✓ & ✗ & 71.69 \(\pm\) 1.64 & 83.89 \(\pm\) 0.38 & 85.75 \(\pm\) 0.44 \\ ViT+KE+LP & ViT-16 & ✓ & 32 & **72.90 \(\pm\) 0.97** & 83.67 \(\pm\) 2.34 & 86.55 \(\pm\) 0.48 \\ ViT+KE+LP & ViT-16 & ✓ & 64 & 71.07 \(\pm\) 3.28 & 84.33 \(\pm\) 1.54 & **86.83 \(\pm\) 0.81** \\ ViT+KE+LP & ViT-16 & ✓ & 128 & 72.47 \(\pm\) 0.78 & **84.46 \(\pm\) 0.84** & 86.33 \(\pm\) 0.23 \\ \hline \hline \end{tabular} \end{table} Table 1: Compare with Baseline Models with ResNet-50 [13] and ViT-16 [9] as backbone on disease classification tasks. KE indicates the proposed knowledge encoder, LP indicates the proposed learnable prompt module, and the number denotes the prompt number. AUC scores averaged across different diseases are reported. We report the mean and standard deviation of three different seeds. ### Analysis of the Knowledge-Enhanced Text Encoder Here we investigate another way of incorporating prior knowledge, that is, to guide the visual representation with a text encoder pre-trained on the large medical corpus, such as the electronic health records MIMIC III [18] or scientific publications PubMed, to guide the classification task. As shown in Tab. 2, while comparing with the models that adopts ClinicalBERT [1] or PubMedBERT [12] as knowledge encoder, we can make two observations: (i) guiding visual representation learning with domain knowledge generally works better, _e.g._, results of using ClinicalBERT or PubMedBERT outperform conventional training with discrete labels, (ii) our proposed knowledge-enhanced text encoder consistently demonstrates superior results, that can be attributed to the explicitly injected domain knowledge, rather than implicitly learning it from the document corpus. ### Analysis on the CXR-Mix In this section, we experiment on the assembled dataset **CXR-Mix**, to demonstrate the effectiveness of our proposed framework in exploiting knowledge across datasets and annotation granularity. \begin{table} \begin{tabular}{l c|c c c} \hline \hline Model & Knowledge Encoder & VinDr-PCXR & VinDr-Mammo & VinDr-SpineXr \\ \hline Res [13] & - & 70.53 \(\pm\) 1.00 & 82.54 \(\pm\) 1.79 & 87.35 \(\pm\) 0.38 \\ Res+KE+LP & ClinicalBERT [1] & 70.83 \(\pm\) 0.96 & 83.55 \(\pm\) 0.72 & 88.19 \(\pm\) 0.25 \\ Res+KE+LP & PubMedBERT [12] & 71.73 \(\pm\) 0.75 & 81.96 \(\pm\) 1.82 & 88.25 \(\pm\) 0.65 \\ Res+KE+LP & Ours & **73.97 \(\pm\) 0.28** & **84.80 \(\pm\) 1.04** & **88.90 \(\pm\) 0.04** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on knowledge encoder with ResNet as a backbone, mean and standard deviation for AUC scores is reported with three different seeds. we use the optimal prompt numbers according to the ablation study, _i.e._, 32 for VinDr-PCXR, 128 for VinDr-Mammo, and 64 for VinDr-SpineXr. 
Figure 3: Sample visualization of randomly chosen samples from VinDr-SpineXr; we present both the original image (left) and an attention map generated from our proposed model with ResNet-50 as the backbone (right). The Ability to Combine Various Partial-labeled Datasets:Unlike the traditional approach, which requires carefully merging the label space from different datasets [6; 5; 20] to benefit from them, our formulation of embedding the 'disease name' with a knowledge encoder naturally enables us to train models on the mixture of multiple datasets, handling different granularities of diagnosis targets and inconsistent pathology expression. As shown in Tab. 3, compared to TorchXRayVision [6], which merges the label space and trains a baseline model with discrete labels, our knowledge-enhanced framework improves the performance from 82.60% to 85.13% and from 77.39% to 79.54% under the two commonly-used backbones, ResNet and ViT, respectively. The Ability to Leverage Class Diversity:In this part, we further show that our framework can significantly improve the performance on each dataset by training on data from other categories. Specifically, we consider each dataset separately, _i.e._, measuring the performance on their own test splits. We propose to decouple the data increment into two dimensions, "Diversity" and "Amount". "Diversity" refers to only adding the cases beyond the target classes and keeping the amount of data of target classes constant, while "Amount" refers to increasing the target class cases. As shown in Fig. 4, based on our structure, adding "Diversity" can improve the results on all 11 datasets. In particular, for some relatively small datasets, the gain is more significant, _e.g._, GoogleNIH, SIIM-ACR, and OpenI. Such an experiment has validated that a knowledge-enhanced model is able to leverage the shared information between classes, and can be greatly beneficial for dealing with long-tailed diseases, which are seldom annotated in common datasets, by leveraging the publicly available data. Figure 4: Analysis of the performance gain on the assembling dataset. “Separation” refers to using a single dataset to train our framework. “+Diversity” refers to adding the cases beyond the target classes, increasing the class diversity, and keeping the data amount of the target classes constant. “+Diversity+Amount” means directly mixing the 11 datasets; for most datasets, the data amount of the target classes will further increase. \begin{table} \begin{tabular}{l c c|l c c} \hline \hline Methods & Prompt & AvgAUC & Methods & Prompt & AvgAUC \\ \hline Res [6] & - & 82.60 \(\pm\) 1.27 & ViT [6] & - & 77.39 \(\pm\) 0.71 \\ Res+KE & - & 83.11 \(\pm\) 0.27 & ViT+KE & - & 78.30 \(\pm\) 0.93 \\ Res+KE+LP & 32 & 84.45 \(\pm\) 1.06 & ViT+KE+LP & 32 & 79.25 \(\pm\) 1.05 \\ Res+KE+LP & 64 & **85.13**\(\pm\) 0.78 & ViT+KE+LP & 64 & **79.54**\(\pm\) **0.60** \\ Res+KE+LP & 128 & 83.38 \(\pm\) 0.18 & ViT+KE+LP & 128 & 78.42 \(\pm\) 0.71 \\ \hline \hline \end{tabular} \end{table} Table 3: Compare with Baseline Models on disease classification tasks on the assembling dataset. AvgAUC refers to the AUC score averaged across different diseases. The first line refers to the training flow proposed by TorchXrayVision [6], using ResNet or ViT as the backbone. 
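As noted in the CXR-Mix description, missing labels are marked with \(-1\) and excluded from the cross-entropy loss; a minimal sketch of such masked supervision over partially labelled batches might look as follows (shapes and names are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def masked_disease_loss(logits, labels):
    """Cross-entropy over partially labelled chest x-ray batches.

    logits: (B, Q, 2) per-disease binary predictions from the disease-query module.
    labels: (B, Q) with 0/1 for annotated findings and -1 where the source
            dataset provides no label (as in CXR-Mix).
    """
    mask = labels != -1                        # keep only annotated entries
    if mask.sum() == 0:
        return logits.sum() * 0.0              # no supervised entries in this batch
    flat_logits = logits[mask]                 # (M, 2)
    flat_labels = labels[mask].long()          # (M,)
    return F.cross_entropy(flat_logits, flat_labels)
```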
The Ability to Diagnose Open-set Unseen Diseases:Conventional models can only handle closed-set classification, while, with the knowledge-enhanced design, our model can predict open-set diseases that never appear in the training set. This is meaningful for practical clinical usage, as rare or new diseases often have no off-the-shelf dataset available for re-training models. We test our model on the PadChest testset [4] and dismiss the classes that exist in the assembling dataset and those with very few cases (\(n\leq 50\)), which can hardly yield statistically convincing test results. During testing, to get the embedding, we simply input the names of unseen classes into the knowledge encoder, and continue the standard evaluation procedure, _i.e._, through the learnable prompt module and disease-query module. As shown in Fig. 5, without any example in the training set, our model can directly achieve an AUC of at least \(0.800\) on \(14\) findings, at least \(0.700\) on \(46\) findings and at least \(0.600\) on \(79\) findings (unseen at training time) out of \(106\) radiographic findings. This demonstrates that our model can break the limits of the label set and be adapted to more practical medical scenarios. ## 5 Conclusion In this paper, we propose a novel knowledge-enhanced classification model that enables learning visual representations by exploiting the relationship between different medical concepts in the knowledge graph. While conducting thorough experiments on x-ray image datasets across different anatomy structures, we show that injecting medical prior knowledge is beneficial for tackling (i) long-tailed recognition and (ii) zero-shot recognition. As for future work, we plan to generalize the idea towards self-supervised learning on pairs of image and text reports and more diverse modalities. Figure 5: AUC and \(95\%\) CI are shown on the unseen classes under the zero-shot setting. \(n\) represents the number of related cases. The top \(46\) classes are plotted in the figure to show what classes our model can achieve AUC \(>0.700\) on. Generally, our method achieves an AUC of at least \(0.800\) on \(14\) findings and at least \(0.600\) on \(79\) findings out of \(106\) radiographic findings where \(n>50\) in the PadChest test dataset (\(n=39,053\)).
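A hedged sketch of the zero-shot evaluation procedure described above, where unseen finding names are simply passed through the frozen knowledge encoder and the rest of the pipeline; all module names are placeholders for the components of Sec. 2.3, not the authors' code.

```python
import torch

@torch.no_grad()
def zero_shot_diagnosis(image, unseen_names, knowledge_enc, prompt_mod, visual_enc, query_mod):
    """Score unseen findings by feeding their names through the frozen knowledge encoder.

    No re-training is involved; only the class-name embeddings change at test time.
    """
    t = knowledge_enc(unseen_names)   # (Q', d) embeddings of unseen disease names
    k = prompt_mod(t)                 # (Q', d) after the learnable prompt module
    x = visual_enc(image)             # visual tokens from the image scan
    s = query_mod(x, k)               # (Q', 2) per-finding logits
    return s.softmax(dim=-1)[:, 1]    # probability that each unseen finding is present
```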
2301.10319
Designing Data: Proactive Data Collection and Iteration for Machine Learning
Lack of diversity in data collection has caused significant failures in machine learning (ML) applications. While ML developers perform post-collection interventions, these are time intensive and rarely comprehensive. Thus, new methods to track & manage data collection, iteration, and model training are necessary for evaluating whether datasets reflect real world variability. We present designing data, an iterative approach to data collection connecting HCI concepts with ML techniques. Our process includes (1) Pre-Collection Planning, to reflexively prompt and document expected data distributions; (2) Collection Monitoring, to systematically encourage sampling diversity; and (3) Data Familiarity, to identify samples that are unfamiliar to a model using density estimation. We apply designing data to a data collection and modeling task. We find models trained on ''designed'' datasets generalize better across intersectional groups than those trained on similarly sized but less targeted datasets, and that data familiarity is effective for debugging datasets.
Aspen Hopkins, Fred Hohman, Luca Zappella, Xavier Suau Cuadros, Dominik Moritz
2023-01-24T21:40:29Z
http://arxiv.org/abs/2301.10319v2
# Designing Data: Proactive Data Collection and Iteration for Machine Learning ###### Abstract. Lack of diversity in data collection has caused significant failures in machine learning (ML) applications. While ML developers perform post-collection interventions, these are time intensive and rarely comprehensive. Thus, new methods to track & manage data collection, iteration, and model training are necessary for evaluating whether datasets reflect real world variability. We present _designing data_, an iterative, bias mitigating approach to data collection connecting HCI concepts with ML techniques. Our process includes (1) Pre-Collection Planning, to reflexively prompt and document expected data distributions; (2) Collection Monitoring, to systematically encourage sampling diversity; and (3) Data Familiarity, to identify samples that are unfamiliar to a model through Out-of-Distribution (OOD) methods. We instantiate designing data through our own data collection and applied ML case study. We find models trained on "designed" datasets generalize better across intersectional groups than those trained on similarly sized but less targeted datasets, and that data familiarity is effective for debugging datasets. ## 1. Introduction Curating representative training and testing datasets is fundamental to developing robust, generalizable machine learning (ML) models. However, understanding what is representative for a specific task is an iterative process. ML practitioners need to change data, models, and their associated processes as they become more familiar with their modeling task, as the state of the world evolves, and as ML products are updated or maintained. Iteration directed by this evolving understanding seeks to improve model performance, often editing datasets to ensure desired outcomes. Failure to effectively recognize data quality and coverage needs can lead to biased ML models (Steintein et al., 2017). Such failures are responsible for the perpetuation--even exacerbation--of systemic power and access differentials and the deployment of inaccessible or defective product experiences. Yet building representative datasets is an arduous undertaking (Steintein et al., 2017; Steintein et al., 2017) that relies on the efficacy of human-specified data requirements. To ensure a dataset covers all, or as many, characteristics as possible, specifications must be the result of a comprehensive enumeration of possible categories--an open and hard problem that few have practically grappled with in the context of ML. Further contributing to this difficulty is the realization that it is not enough for the training datasets to be aligned with expected distributions: they must also include enough examples from conceptually harder or less common categories if said categories are to be learned [8]. Failure to sufficiently consider both the critical dimensions of data and their relative complexity can have troubling consequences. Instances of such misssteps span issues of justice, healthcare, hiring practices, voice and face recognition, and loan qualifications, wherein biases of data and algorithms limit technological use and cause harm [6; 8; 15; 56; 59]. Yet understanding these data requirements even after training is difficult; knowing them _a priori_ is exceptionally so. Enumerating critical axes along which data must be collected is at the discretion of teams responsible for collection; data coverage is ultimately influenced by the awareness of the collectors [36]. 
Prior work has shown that it is a significant challenge for data collection teams to do this enumeration, lacking effective tools for data planning [38], and as a result iteratively patch and recollect data as issues surface--a reactive and often expensive, time consuming process. Rather than emphasize tools that enable better collection and data iteration practices--that _design better data_--research in fairness and machine learning has largely focused on prescriptive "how-to" frameworks, definitions of fairness, and post-collection analysis techniques [5; 84]. While there are exceptions to this [35], the hidden technical debt [73] accumulated from poor data design remains an under explored space. To reduce this technical debt and encourage diverse datasets, methods of externalizing 1 data collection, iteration, and training are necessary for ensuring datasets reflect diverse experiences and are robust when deployed in real-world models. Footnote 1: For example, documenting design decisions and communicating the evolving state of the data. We present _designing data_, an iterative, bias mitigating approach to data planning, collection, and ML development; we implement this approach in an interactive dashboard that scaffolds each step for practitioner use. Motivated by the thematic analysis of 24 semi-structured interviews with ML practitioners, designing data is a structured, holistic parallel to the current standards for developing production-quality datasets [35]. Each step (shown in Figure 1) proactively introduces interventions to improve models prior to deployment: _(1) Pre-Collection Planning_ prioritizes reflexive consideration for domain and data needs prior to modeling, documents expected distributions of attributes, and highlights potential biases through questions related to class or characteristic representation within data. _(2) Collection Monitoring_ communicates insight into dataset evolution, allowing users to make targeted adjustments to collection processes based on new insight or disparities between expected distributions and existing data. _(3) Data Familiarity_ identifies data that an ML model, e.g., a neural network, perceives as unfamiliar, creating new directives for data and model iteration. We centralize the naturally disparate, iterative processes of data collection and model development through our _ML Collect_ dashboard, encompassing each designing data step. Through _Pre-Collection Planning_, axes of meta-information (such as demographics, behavioral specifics, or environmental factors) are documented along with their expected distributions. This documentation informs future data collection steps, and encourages reflexive thinking [76]. The metadata is used for evaluating evolving datasets and later for contextualizing familiarity scoring. _Collection Monitoring_ describes data as it evolves, a response to prior work showing that externalizing data evolution enables process verification and discovering data missteps [35]. Finally, _Data Familiarity_ highlights discrepancies between model performance and our expectations. Our familiarity metric incorporates density estimation, borrowing from Out-of-Distribution (OOD) work to characterize a model's layer responses to a given sample. While density estimation for OOD detection is well studied [24], our use of density estimation to direct data work (e.g. collection and annotation) is unique. By gaining insight on how a partially trained model perceives data, we can focus efforts on the most useful subsets. 
In our experiments we use Gaussian Mixture Models (GMMs).2 The log-likelihood of these densities is used to score how "familiar" samples are to the model: data that is unfamiliar to the model, falling within a region of low density, is comparatively less well-known to the model, highlighting a potential mismatch between the data composition and task requirements. These scores act as a general tool for capturing uncommon and/or noisy data; the appropriate response is determined by the state of the dataset and model performance. Footnote 2: There are multiple approaches to density estimation. For simplicity’s sake, we only describe GMMs, but suggest [72; 81] for exploring alternatives. We demonstrate designing data's effectiveness by auditing our own data collection and modeling task: a human activity recognition (HAR) task using inertial measurement units (IMU) data--time series data representing X, Y, and Z positioning--to classify hand position while texting, similar to a prior experiment by Goel et al. [26]. We collected IMU data from participants using our iOS data collection app, along with rich metadata describing behavior and participant characteristics. We reflect on our use of the ML Collect dashboard for our task in Section 6, including multiple instances of retargeted data collection _before_ modeling. Our task selection was inspired by the unique challenges faced in encouraging data diversity in IMU data [3; 67], but as designing data considers both pre- and post-training needs for building diverse datasets, it generalizes well across many data types; we chose a notably human-specific task in response to the myriad fairness issues faced when deploying ML; however, each step can be treated as domain-agnostic. Our experimental evaluation mimics a data collection and modeling task: first, we collected data iteratively, using our dashboard to build a more diverse set of data. We then trained convolutional neural networks (CNNs) using a leave-\(P\)-groups-out cross-validation. We find models trained on highly diverse data outperform models trained on less diverse data. Then, we compare models trained with data curated in response to familiarity scoring to those without. This includes two experimental comparisons: first, models trained on data wherein familiarity scores were used to highlight and replace noisy data are compared to those without this intervention. Second, we compare the models trained on data designed with and without data familiarity to models trained on randomly selected datasets. In general, we find that models trained on more diverse data and with familiarity-based interventions outperform others. Through our work, we argue that data cannot simply be collected but must be _designed_, intentionally curated for the sake of better models and technology. Designing data does this by optimizing for diversity early in data collection to ground designed data as fundamental to less biased, generalizable technology. It considers both pre- and post-training needs for building diverse, representative datasets for better models. ## 2. Related Work ### Reflexivity and Self-Reflection Outside computer science, qualitative research and statistics communities have managed representative data collection in a variety of ways, from expert panels to standards in population survey techniques, yet these methods face their own complications and do not necessarily translate to the needs of machine learning teams. 
However, several methodologies remain underexplored by ML and ML-adjacent communities. Of these, _reflexivity_ is particularly relevant to bias mitigation. Social scientists practice reflexivity to externalize implicit subjectivity present in data collection and interpretation (Zhou et al., 2017; Zhang et al., 2018). Reflexivity entails deliberately examining practitioners' own assumptions, practices, and belief systems, then _contrasting_ them with alternative perspectives. This acknowledges positionality--how differences in social position and power shape identities and access in society. While reflexivity is typically practiced retrospectively, Soefforg and Glas (Soefforg and Glas, 2018) outlined how reflexivity could be an active process through _"ongoing reflection about our own social location and [...] our assumptions regarding others' perceptions."_ This approach includes recording assumptions of positionality; routinizing reflexivity; including other actors in the process; and communicating reflexive outcomes with data. Separate from yet related to reflexivity is recent data visualization work prompting viewers to reflect on their individual beliefs of data (Shou et al., 2018; Zhang et al., 2018). Examples of this include The New York Times "You Draw It" visualizations, where readers draw a projected trendline of what they think the data looks like and then compare that projection with the real data (Bahdan et al., 2018; Zhang et al., 2018; Zhang et al., 2018), and an MIT Tech Review article illustrating the complexity of building fair recidivism models wherein readers change different hyperparameters then contrast their outcomes to existing models (Shou et al., 2018; Zhang et al., 2018). Such visualizations act as powerful tools ensuring self-reflection--an important factor in building representative datasets. When building datasets for ML, it is difficult to know what a population or phenomena looks like before collection. But because models learn what is encoded in data, data must be _designed_ to reflect what a practitioners wants a model or user experience to be, not simply what is currently reflected in the world today. Our work leverages active reflexivity and visual prompts described above to force practitioners to consider not only what dimension of data they want to collect, but also what they want (or expect) their distributions to look like. These are represented in our ML Collect dashboard, the details of which are described in Section 4. Asking these questions upfront requires practitioners to consider concerns in their data instead of having them be an after-the-fact problem. ### ML Documentation Perhaps the closest alternative to _designing data_ is model and dataset documentation, such as Model Cards (Sandel et al., 2018) or Datasheets for Datasets (Datasheets, 2018). Universally, such work details what to include within said documentation for transparency in downstream model and dataset use (Datta et al., 2018; Zhang et al., 2018; Zhang et al., 2018). While important, these guidelines are not integrated within development tools and typically do not provide steps to improve model fairness. Further, the resulting documentation must be manually authored and updated separately from development. Model Cards, for example, are a retrospective documentation of a model's intended use. The simplicity of the prompts has led to widespread adoption (Sandel et al., 2018), a response to the lack of best practices in ML development. 
Model Cards do not encourage reconciliation of model weaknesses nor actively direct iteration for increased fairness--they are intended for transparency. Despite these limitations, the Model Cards framework has been used as a corrective measure in pursuit of fair ML. While Model Cards advocate for evaluation of model performance across different subpopulations, including demographic or cultural groups, we extend this key contribution to intersectional groups and embed it within our designing data process and dashboard. Through reflexivity and distribution prompts, we require users to actively surface and engage with their priors--and later guide evaluation for specific subpopulations and intersectional groups. ### Bias Mitigation and Diversity Metrics While examples of algorithmic bias are often highlighted by media outlets, this frequency belies the difficulty of initial discovery--failures are hard to uncover during development. Bias mitigation strategies can be divided into three stages of a ML model development: pre-training (e.g., sample weighting or dataset balancing), in-training (e.g., adding specific constraints in the function that is being optimized) and post-training (e.g., by tweaking the prediction to meet some fairness metric) (Zhou et al., 2018). Pre-training can be further divided into collection and post-collection. Despite the fundamental nature of data collection, technical approaches to bias mitigation typically focus on post-collection efforts--e.g., modification of the dataset, reweighting and fine-tuning of hyperparameters, filtering output, or some combination therein--which act largely as stopgaps, are sensitive to underlying data (Shou et al., 2018), and ultimately may not resolve underlying issues at hand. For example, diversity metrics have been used to direct bias mitigation efforts post-collection (Datta et al., 2018). While several methods of measuring diversity exist (e.g., distance-based, combinatorial, utility and ranking, and coverage metrics), the overall intention is to measure nuance of data based on variability of their constituent elements. Instances of this can be found in subset selection, where declarations of "diversity constraints" define expected frequencies for sensitive values that the data must satisfy (Datta et al., 2018; Zhang et al., 2018; Zhang et al., 2018), or to measure relative coverage within a dataset (Datta et al., 2018; Zhang et al., 2018; Zhang et al., 2018). Uniformly, these methods act as stopgaps to biased and/or homogeneous data (Datta et al., 2018). Producing diverse subsets using diversity metrics does not guarantee fairness across samples in the form of appropriate representation of sensitive attributes (Datta et al., 2018). Partially, this is because their effectiveness relies on the extent of the dataset. But it is also because fairness has multiple measures (Zhou et al., 2018). Fair treatment across social groups may also require _different things for different contexts_. For instance, consider a dataset in which each data point has a gender. One notion of group fairness, useful for ensuring that the ground truth is not distorted, is proportional representation, i.e., the distribution of sensitive characteristics in the output set should be identical to that of the input dataset. Another notion of fairness, argued necessary for reversing the effects of historical biases, could be equal representation where the representation of sensitive characteristics should be equal independent of the ratio in the input dataset. 
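To make the two notions concrete, the following small sketch (with an illustrative sensitive attribute such as gender) contrasts proportional and equal representation targets for a selected subset; the function and variable names are hypothetical.

```python
from collections import Counter

def representation_gaps(input_groups, output_groups):
    """Contrast proportional vs. equal representation for a sensitive attribute.

    input_groups / output_groups: lists of group values (e.g., gender) for the
    full input dataset and for a selected output subset.
    """
    groups = sorted(set(input_groups))
    in_freq = Counter(input_groups)
    out_freq = Counter(output_groups)
    report = {}
    for g in groups:
        observed = out_freq[g] / len(output_groups)
        proportional_target = in_freq[g] / len(input_groups)  # mirror the input distribution
        equal_target = 1.0 / len(groups)                      # equal share per group
        report[g] = {
            "observed": observed,
            "proportional_gap": observed - proportional_target,
            "equal_gap": observed - equal_target,
        }
    return report
```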
Mitchell et al. (Mitchell et al., 2017) described diversity and fairness as related but distinct concepts; diversity refers to the inherent variability of a dataset, while fairness describes equitable treatment across various attributes--aligning most closely with _group fairness_ definitions. When working with data about people, we note that that it is exceptionally difficult to recognize what confers an "unfair" biasing effect, and that the consequences for ineffectively representing features within a dataset are generally negative. Thus, _designing data_ adopts a diversity framing in pursuit of overall improvement when evaluating collected data. ### Learnability and Familiarity Diversity metrics largely consider what data is used in training and evaluation, but miss a fundamental element of ML behavior: _what concepts a model actually learns_. Having diverse characteristics with sample size parity in data is one step towards mitigating bias but does not ensure equitable learning across classes (or demographics). Additional complications arise when working with data acquired independently, possibly through a process in which the data scientist has little or no control. This "found data" (Krause et al., 2016) introduces unique challenges to ensuring data coverage for scientists and engineers. For both found and big data contexts, post-collection approaches such as subset selection with diversity constraints and class imbalance corrections are introduced to counter bias and skew (Krause et al., 2016) yet can obfuscate information about appearance frequency in a dataset: if there are limited examples of \(X\) and thus we oversample, those oversampled instances of \(X\) may cause the model to infer incorrect characteristics of a class, affecting accuracy metrics as well as production performance. Such approaches are useful but limited; for example, when improving a model without access to the original training dataset, balancing in the traditional sense is impossible. Regardless of the context, these data balancing approaches ignore data learnability: having an equal number of samples per class neglects the fact that some classes are inherently easier to learn than others (Krause et al., 2016; Krause et al., 2016; Krause et al., 2016). Some concepts present increased complexity to a given model, requiring more samples to be appropriately learned. Unfortunately, this complexity might not become apparent until after deployment. In order to build better systems for deployment, discovering what has and has not been learned by a model is critical. In the past, testing data acted as a proxy for this evaluation--but while test datasets offer excellent thresholds for performance expectations, these are not the same measurements: testing accuracy measures whether a model was correct in its classification, without consideration for why a model reached its conclusion. For this reason, models are often poorly calibrated in their outcome confidence (e.g. when a model is incorrect yet overly confident in its classification). Determining the learnability of a class or concept--whether it can be learned given a distribution of samples--is a space of ongoing research (Krause et al., 2016; Krause et al., 2016). Klawonn et al. (Klawonn et al., 2017) sought to exploit class learnability to increase generalizability. Using supervised training, they avoided harder classes, guiding their model to spend time on a class proportional to its learnability (Klawonn et al., 2017). 
While useful in some contexts, avoiding difficult classes may harmfully bias models. An alternative to such interventions is to incorporate the model itself as a guide for future iterations of data planning and collection. This can be accomplished by understanding the structure of a neural network, where each hidden layer within a network creates a new representation of the input data (Krause et al., 2016). These concepts semantically vary based on the type of data--images, for example, are roughly learned by early layers detecting edges whereas later layers learn more sophisticated shapes and objects. How a layer (or layers) in a model responds to a given data point can be measured, then used to estimate densities across an embedding space. These embeddings are vectorized representations of data, meaning similar points are closer in distance than others, according to a specific model. Points that fall into a densely populated region (meaning many similar samples have been learned in training) are considered _familiar_ to a model. Conversely, points that fall into scarcely populated areas are unfamiliar, or uncommon (Krause et al., 2016). This approach holds methodological similarities to detecting samples that are Out-of-Distribution (OOD) to the training dataset (Klawonn et al., 2017; Klawonn et al., 2017). Our implementation of density estimation--described at length in Section 9--is most similar to Lee et al. (Lee et al., 2017), who use density estimation for detecting anomalous samples _post_-training (e.g., adversarial attacks), as is common with OOD applications. These methods are typically used to highlight cases when a model will likely fail (such as to determine when to return control to drivers of autonomous vehicles). In contrast, we detect interesting data samples to contribute to our training optimization. We note that it is our application of density estimation (not the technique itself) which is the contribution of note, as it has not previously been incorporated into directing data work. Our use of density estimation is a response to two drawbacks we note from work incorporating diversity metrics to improve data representation and model performance: (1) diversity metrics require _prior awareness_ of what facets of the data are variable--for data that people are unaccustomed to, this might be impossible--and (2) measuring diversity within a given dataset does not account for what the model actually _learns_ or is robust to (Klawonn et al., 2017; Klawonn et al., 2017). Because it is _model_ outputs that we are ultimately concerned with, these are important weaknesses to counter. ## 3. Formative interviews and Data Collection Themes To understand ML practitioners' data collection needs, we conducted 24 semi-structured exploratory interviews with individuals possessing extensive machine learning and data collection experience within a large technology company. The interviewees ranged from ML research scientists and designers focused on ML experiences to engineering product managers. Interviews lasted on average one hour. As interviews progressed, common themes surfaced that directed our attention to issues of data coverage and representation. We describe those themes (T1-T3) below. #### 3.0.1. Critical dimensions of data are hard to know a priori (T1) A proactive approach to data collection requires knowing what axes are important for observation, particularly when focused on a given task. As one interviewee put it, _"how do we know who and what is missing?"_. 
While it is typically impossible to have a complete understanding of such critical dimensions of the data before starting data collection, there are some common characteristics for human-centric data collection based on existing knowledge of population statistics and power imbalances. This was a shared difficulty described in nearly all interviews. How useful that information is depends on the context of how the data is used--for example, a person's accent, language patterns, speech pathology, and history are all important factors in speech recognition, but not for creating a personalized wine recommendation. Unlike summary statistics, surfacing individual sample failures and uncovering what is missing in data subsets, essentially "debugging" data, requires significant effort. As another interviewee said, _"fairness analysis is useful to a point"_. Generic tools and dashboards for surfacing these nuanced limitations of data do not exist, but there is both a clear need and a desire from ML practitioners for them (Rapolha et al., 2017). Figure 2. (A) Jupyter notebook dashboard guides practitioners interactively through designing their datasets. (B) Before collection, the dashboard prompts documentation of expected data distribution. (C) These expectations are visualized as histograms in an interactive dashboard. (D) As data is collected, data distributions (pink) overlay expected distributions (blue), highlighting divergent patterns. #### 3.0.2. Difficulty of collection leads to compromises (T2) Data collection is a difficult process to launch, requiring significant tooling and systems building. This difficulty contributes to issues of representation in data, as the emphasis in early data collection is on _how_ to collect and structure data rather than building a complete picture of _what_ to collect for. Further, early data collection efforts tend to be prototypes, making convenience sampling canon. In cases of human-centric data, such as speech or movement, such requirements can encumber coverage of diverse participants across many axes (e.g., age, race, gender, and education). These circumstances have the potential to inhibit the generalizability of machine learning systems: while there is natural iteration in datasets stemming from dataset shift--as the world changes and training data is no longer relevant (Krishnan et al., 2017)--and from collectors' evolving understanding, it is a cycle full of (in some cases unnecessary) forking paths (Rapolha et al., 2017; Krishnan et al., 2017) and dead-ends. #### 3.0.3. Model failures are invisible without participation and iteration (T3) Real world failures are only visible when communicated. But that communication only comes from those invested enough in the tool, system, or research agenda to make the effort to bridge the communication gap. To quote an interviewee that described this dilemma: "_The people who had issues were invisible to the system because they didn't like using it_". While typical marketing and product development practices incorporate focus groups, product ratings, reviews, surveys, and customer service departments to direct decision making, software development tools like issue trackers are relied on to flag problems engineers and developers encounter (a particularly important function as a body of code grows with time). Similar methods of closely engaging people with creators exist in a variety of environments such as polling and design methodologies. 
However, there are no tools that address the gap between a deployed ML product (let alone is early prototype) and a user. Such gaps are widened by language and knowledge barriers, and the products themselves are inaccessible to many that do not align with the priors on which the system is built, an effect that arises from missing critical dimensions of data. As these limitations can have substantial downstream impacts, we sought to introduce early, comprehensive data & model checks to facilitate agile pivoting in our development of designing data. ## 4. Designing data Existing bias mitigation and model evaluation approaches attempt to address the themes uncovered in our formative interviews but are not comprehensive to data collection _and_ machine learning pipelines. In response to this disparity, we propose an iterative, bias mitigating approach to data collection and machine learning that we call _designing data_. Designing data responds to our themes by introducing interventions before, during, and after collection and training. Shown in Figure 1, each step ensures datasets are intentionally curated from both a bottom-up and top-down perspective--meaning from both the data and the model view. These steps are intended to complement existing bias mitigation strategies rather than replace them. Designing data incorporates interdisciplinary strategies drawn from ethnography, data visualization, and ML domains. We developed designing data for cases where data is collected from scratch (a common scenario outside academia), but individual steps are easily adapted to other contexts. We describe the steps below before instantiating them in a human activity recognition (HAR) task (Section 5). ### Pre-Collection Planning To facilitate broader consideration of potentially critical facets of data, data design requires explicit documentation of _what_ will be collected--including expected dimensions and distributions of data--_before_ collection begins. Our documentation process ensures developers pay close attention to data diversity and coverage early in an ML pipeline and creates reference points for reflection when new information is uncovered. This process aligns with prior work in heuristics, implicit bias, cognition, and fake news susceptibility: deliberation--_reflection_--can correct and expose intuitive mistakes, such as those seen in data collection decision making (Bradbury et al., 2015; Goyal et al., 2016; Goyal et al., 2017; Goyal et al., 2017). Our dashboard ensures close attention to data diversity through a series of open-text prompts, drop-down selections, and simple declarations for the users to complete. When the data relates to a human subject (i.e., images of faces, movement data, or voice recordings), the dashboard prompts self-reported demographic information about teams or individual users involved in developing the dataset. These answers are used to prompt consideration for communities not represented within the team--in explicitly referencing, for example, gender or age-related discrepancies, teams are encouraged to consult with individuals from these communities. While we focused our efforts on a human-centered task, this approach can be extended to other contexts. For example, when collecting mineralogical data, a geologist might be asked to document assumptions about a particular geographical and temporal region. Following this period of self-reflection, users are asked what data distributions they expect. 
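As a hypothetical illustration of this step, expected distributions might be declared as simple attribute-to-share mappings and later compared against the evolving dataset; the attributes, values, and helper below are illustrative, not the dashboard's actual schema.

```python
# Hypothetical pre-collection plan: expected distributions for the critical
# axes a team documents before any data is recorded (values are illustrative).
expected_distributions = {
    "handedness": {"left": 0.5, "right": 0.5},
    "posture": {"walking": 1/6, "sitting": 1/6, "standing": 1/6,
                "lying_back": 1/6, "lying_side": 1/6, "lying_front": 1/6},
    "age_group": {"18-29": 0.25, "30-44": 0.30, "45-64": 0.30, "65+": 0.15},
}

def distribution_gap(expected, collected_counts):
    """Compare a documented expectation with the attribute counts collected so far."""
    total = sum(collected_counts.values()) or 1
    return {value: collected_counts.get(value, 0) / total - share
            for value, share in expected.items()}
```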
Further descriptions of the dashboard can be found under Section 5.3. An example of the dimensions and related distributions set by users is shown in Figure 2. This reflexive process is simple yet uncommon to the ML community at large. In the case of inertial measurement unit (IMU) data, it appears relatively straightforward to collect data for an activity such as texting: simply record the movement that occurs during that activity. But a common pitfall in data collection is the absence of descriptive information on the recorded subject--in general, demographic information, contexts, and other potentially impactful factors are rarely considered until arguably too late in model development--after initial deployment, and even publication of datasets (Goyal et al., 2017). This extends to IMU data as well. The absence of such meta-information has major implications for whether a model generalizes well: how can you know the broad applicability of a model if you have not articulated the rudimentary limitations of the underlying data? While it is difficult to enumerate all critical factors, the first step in our designing data process provides scaffolding to start, responding to concerns listed in our first and second themes: _critical dimensions of data are hard to know a priori (T1)_, and _difficulty of collection leads to compromises (T2)_.

### Collection Monitoring

By stating expected distributions prior to collection, auditing the data against those distributions is straightforward, allowing readjustments to be made quickly when necessary. Our dashboard includes graphs highlighting distribution disparities that were perpetuated or introduced as the data was collected, as shown in Figure 2 (D). These charts benefit users by noting when the data collection process is either skewed (for example, through convenience sampling) or when previously stated expectations did not align with reality. The iterative nature of this step--collecting data, checking, then adapting data collection--shortens the response time to correct fundamental errors in data. It highlights limitations that might previously have remained unnoticed, as previously described in our third theme: _Model failures are invisible without participation and iteration (T3)_.

### Data Familiarity

The purpose of designing data is not to replace existing tools, but rather to encourage a holistic approach that incorporates existing ML techniques for bias mitigation. As such, for interventions _during training_, we refer to existing literature (Krizhevsky et al., 2014; Krizhevsky et al., 2014). After a model has been trained, however, understanding how well the model has learned the data (and the data that it has not) is critical prior to deployment. The value in this form of model auditing is significant: some data may prove more difficult to learn due to their inherent complexity, or from not having enough similar examples. Despite increased rigor in data collection, expected and actual distributions might not match the learning needs for the model, thus requiring a stopgap such as under or oversampling, generating synthetic data, or continued data collection. We incorporate _data familiarity_ values as a measure of how familiar, or well learned, individual data points are to our model. We describe familiarity scoring in detail in Section 5.
In this way, we ensure that low familiarity data points or "edge cases"--those that either do not have enough representation in the dataset or are particularly difficult for a model to learn--are caught early and responded to by reweighting or replacing data (as is the case when the data is mislabeled). Data familiarity offers a final check on the model and data, since critical axes in data are difficult to know a priori. While we use familiarity scores to uncover cases the model does not understand as well, this is useless if the dataset itself has insufficient coverage. Thus collecting better data that encompasses natural variability _and_ checking how a model responds to said data are both critical. In this way, we again respond to our second (T2) and third themes (T3).

## 5. Methodology

### Task Selection

We instantiated our designing data approach through a human activity recognition (HAR) task using _inertial measurement unit_ (IMU) data--time series data representing motion along X, Y, and Z axes. While designing data generally applies to all data collection and machine learning processes, our selection of data type and task were motivated by the unique challenges IMU data presents to building data diversity. First, IMU data inherently lacks the closeness of mapping (Han et al., 2015) that image and audio data have to human mental models of the world, making it more difficult to audit. Recognizing when IMU data coverage is incomplete can be difficult when compared to image data, as levels of abstraction act to obfuscate fundamental problems in a dataset (Krizhevsky et al., 2014; Krizhevsky et al., 2014). Second, IMU data requires contextualization to create meaning--real-time labeling, or additional information from audio and video--and is harder to collect in real world scenarios, unlike images and audio clips which are now ubiquitous (Krizhevsky et al., 2014). Yet IMU data still has the potential to bias ML models in key contexts. Thus, applying designing data in this context provides great added value and proves its merit under a typically challenging context. Our experimental task resembles work by Goel et al. (Goel et al., 2016), who improved mobile text entry by categorizing different hand positions when users typed. We simplified the task to binary handedness (typing with the left or right hand), but introduced a source of natural variability by prompting users to perform a series of actions in parallel while typing. This contrasts with Goel et al. (Goel et al., 2016), who collected data while users sat in a lab environment. Data was collected using an iOS app developed in Swift. Collected data from our participants was saved to an encrypted database. Throughout the case study, we used this data to populate our dashboard, train and evaluate our IMU classifiers, then refine our dataset according to familiarity evaluations. We describe each step in detail below.

### Data Collection

We built a custom iOS app for data collection using the Swift programming language with public frameworks including SwiftUI, CommonCrypto, and CoreMotion. The app collects right and left-handed texting data across different contexts people may type in, such as texting while walking or lying down. Example views of the app are shown in Figure 3. All data were collected with informed consent. We introduced 6 typing scenarios (walking, sitting, standing, lying on your back, lying on your side, and lying on your front) to represent a selection of possible contexts users might normally experience typing.
Participants were then asked to type an English phrase from MacKenzie and Soukoreff's phrase set (Koren et al., 2015) using their left or right hand for a total of 6 (positions) \(\times\) 4 (typing sessions) \(\times\) 2 (hands) \(=72\) trials. Entered phrases were recorded as a trial. The app disabled keyboard autocomplete and autocorrect. Beginning when people pressed "start", we collected data at a sampling rate of 200 Hz. When the task was finished, metadata, IMU data, and trial metadata (including scenario, phrase, keyboard recording, and session time) were pushed to our database. In total, we collected \(>3.88\) million measurements from \(1455\) sessions. Before collecting IMU data, participants were asked to sign an informed consent letter and provide demographic information and metadata. This information included race (as defined by the 2020 US Census), ethnicity, gender, sex, age, hand length (measured in millimeters) and nail length (measured in millimeters), hand dominance, phone version and phone size. Participants were automatically redirected to the iOS _Measure_ app to determine hand and nail length. We collected data from 33 participants recruited over three separate periods in response to data disparities highlighted by our ML Collect dashboard. Self-reported gender was 36% female, 64% male (with reported sex the same). Participants identified their race as Black or African American, Asian, Multiple/Other, and White. Ages ranged from 24-62. We later bin these ages in the following ranges: 18-24, 25-34, 35-44, 45-54, and 55-64.

Footnote 5: We recognize our data is limited to a subset of the broader population. This work is intended as a proof of concept rather than a production quality system, and thus perfectly encapsulates the very concerns we describe in developing early machine learning prototypes. By emphasizing data diversity _even_ in this setting, we intend to validate the use of interventional measures to simultaneously highlight next steps in data collection, and improve the data one can collect.

### Dashboard

We built our dashboard in JupyterLab using Altair (Sten et al., 2017), a declarative visualization library for Python, and Ipywidgets (Python, 2017). As shown in Figure 2 (A), the dashboard asks a series of questions to prompt reflexive consideration of the team's or individual's personal biases. Using drop-down selections, they are asked to describe their team's representational make-up--including race, accessibility needs, age, sex and gender identities. After filling out this information, users are told: "_The following groups and their subsequent intersections are not participating in your project development. To ensure an optimal result, take steps to consider how their experiences and views might differ from the currently represented ones._" This notice is followed by a list of demographic information not identified by the users. This is an important step: prior work has shown that simply recognizing such information (Zhou et al., 2017; Zhang et al., 2017), and introducing design frictions (Zhou et al., 2017; Zhang et al., 2017), can increase the quality and consideration that goes into data work.
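As a rough sketch of the mechanism behind that notice, the snippet below compares the groups a team reports for itself against a reference list and prints the difference. The group lists are illustrative and far from exhaustive; the actual dashboard collects them through drop-down selections.

```python
# Sketch of the "unrepresented groups" notice: compare the groups the team
# reports for itself against a reference list of groups. Lists are illustrative.
reference_groups = {
    "gender": {"female", "male", "non-binary"},
    "age_band": {"18-24", "25-34", "35-44", "45-54", "55-64", "65+"},
    "accessibility_needs": {"none reported", "motor", "visual", "hearing"},
}

team_groups = {
    "gender": {"male"},
    "age_band": {"25-34", "35-44"},
    "accessibility_needs": {"none reported"},
}

missing = {category: sorted(reference_groups[category] - team_groups.get(category, set()))
           for category in reference_groups}

print("The following groups and their intersections are not participating "
      "in your project development:")
for category, groups in missing.items():
    if groups:
        print(f"  {category}: {', '.join(groups)}")
```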
One limitation to this procedural approach is how scoped our questions are--they are not all-encompassing, but intended to start the process of reflection and inquiry early on. As a final step in this reflexive process, and to minimize this limitation, a series of open-ended questions that take free-form text asks, "What's missing, in the context of your project?" followed by some examples that expand on axes of diversity teams might need to consider. Users are then asked to enter expected dimensions and distributions of data before collecting data (Figure 2 B,C). When distributions have incorrect values (i.e., do not add up to 100%), the dashboard normalizes them. This active expression of expected data encourages users to acknowledge and document the specific limitations of their data, setting a precedent of conscious decision making from the beginning. It creates a simple provenance for early assumptions and a baseline to evaluate against during data collection. From these inputted dimensions, audits of population statistics, missing data, and undersampled subsets are presented to users, reflecting the _During Collection_ step. New dimensions can be added as needed. Different categories of the data, including demographic information and metadata (described in the participant subsection), task specifics, and class representation are then shown in visualizations (Figure 2). Similarly, intersectional categories (such as age _and_ hand size) are shown. This view reflects the data evolution as more data is collected, allowing real-time insight into what additional collection might be required. Following collection, the dashboard allows users to use a pre-trained model or train their own. Following training, saved states of a neural network and model architecture are loaded into our familiarity functions. Data is inputted to build out familiarity scores, the final step in our designing data process.

### Familiarity

To measure the familiarity of the data, we incorporate density estimates of layer activations across a neural network (NN). Different layers of a NN capture different features of the input data (Zhou et al., 2017). Familiarity scores can therefore be extracted from any layer. Earlier layers capture fundamental structure found in the input data, while deeper layers capture semantic content. In this paper we focus on the penultimate layer, the layer before the prediction softmax, as it is the final feature representation used to make a prediction. Passing \(N\) inputs through the network produces an activation matrix \(A(N)\in\mathbb{R}^{N\times M}\) for all \(M\) neurons in a subset of selected layers \(L^{\prime}\subseteq L\). We learn a Gaussian Mixture model on the activations of a layer as a representation of the whole training set, given by the following:

\[p(x\mid\lambda)=\sum_{i=1}^{M}w_{i}\,g(x\mid\mu_{i},\Sigma_{i}) \tag{1}\]

where \(x\) is a matrix of layer activations, \(w_{i},i=1,\ldots,M\) are the mixture weights, and \(g(x\mid\mu_{i},\Sigma_{i}),i=1,\ldots,M\) are the component Gaussian densities. Note that this mode of density estimation is interchangeable with other density estimation or out-of-distribution methods such as (Zhou et al., 2017; Zhang et al., 2017). For each data point sample in the current training set, we obtain the activations from layer \(l\) and compute a dimensionality reduction from the original space using PCA. This projection step serves two purposes: it reduces the "dispersion" of points that is typical of high dimensional spaces (a NN activation can easily have thousands of dimensions, hence, each point is likely to be isolated), and it makes the remainder of the computation more tractable.
In the projected space we then perform a Variational Bayesian estimation of a Gaussian mixture (Zhou et al., 2017). The fitted GMM is the tool that allows us to give a familiarity score to each new sample. Given a sample, we can then extract the activation from the same layer \(l\), apply the same dimensionality reduction and then evaluate the log-likelihood provided by the fitted GMM--our familiarity score. If the sample falls into a densely populated area, its log-likelihood will be high: from the perspective of the features extracted by the current state of layer \(l\), this sample appears as common or _familiar_. Conversely, if the sample falls into a sparsely populated area, its GMM log-likelihood will be low and we infer that the sample is rare in relation to the features extracted by layer \(l\). This familiarity measurement can be applied to new samples previously unseen by the model or on existing training samples. Unfamiliar samples are "edge cases"--those that either do not have enough representation in the dataset, are particularly difficult for a model to learn, or are erroneously collected data (e.g., noise). In early dataset development, familiarity scores act as useful checks for data with little signal--samples where we expect the model to perform well yet _may_ be noisy and thus require human inspection. Note that the same data shown to a different model is likely to obtain a different familiarity score. That is, each sample is tightly coupled to how a specific model perceives it. This analysis cannot be done on the dataset without the guidance of a trained model. Familiarity scores are presented in the dashboard through a series of graphs depicting their range and frequency, providing users insight into how familiar individual samples are to a given model.

Figure 3. Our IMU Data Collection App showing the primary screens of the study. Participants entered their demographic information such as hand dominance, hand and nail length, sex and gender, and age (left). The app instructs a participant to type on their phone using either their right or left hand in a physical configuration (middle). Participants then type the presented phrase (right).

### Modeling

We followed the generic _Activity Recognition Chain_, which includes pre-processing, segmentation, feature extraction, and classification steps (Krizhevsky et al., 2014). Instead of explicitly extracting features, we used 1D convolutional neural nets (CNN) for sequence classification. The CNN's architecture performs feature extraction through the convolution of the input signal with a kernel (or filter). Our architecture included: two 1D convolutional layers (standard for time series data), max-pooling layers, a dropout layer, a fully connected dense layer with ReLU activations, and a fully connected dense layer with softmax activations. To pre-process data, each session was segmented into \(200ms\) windows, with \(40ms\) overlap between segments. Session timing varied by how long it took participants to finish typing a given phrase. We corrected our IMU dataset to account for gravitational acceleration effects, then normalized (using a direct current blocker) and segmented IMU data in series to ensure all windows were of equal length. We discarded windows containing data from multiple sessions. Training batch sizes were \(256\) (batch)\(\times 200\) (ms)\(\times 3\) (IMU), where \(ms\) is the window of time, \(batch\) is the batch size, and IMU represents the three accelerometer data sources.
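To ground the modeling and familiarity descriptions above, the sketch below wires a small 1D CNN to PCA plus Bayesian Gaussian mixture scoring. It assumes PyTorch and scikit-learn (the paper does not name its deep learning framework), and the layer sizes, kernel widths, and window length are illustrative rather than the authors' exact configuration; only the 50-dimensional projection and the five mixture components mirror the settings reported later in Section 9.2.

```python
# Sketch of the 1D-CNN classifier and GMM-based familiarity scoring.
# Assumptions: PyTorch + scikit-learn; layer sizes and the 200-step window
# used in the toy example are illustrative only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.mixture import BayesianGaussianMixture

class HandednessCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Dropout(0.5), nn.Flatten(),
        )
        self.penultimate = nn.LazyLinear(128)   # fully connected layer with ReLU
        self.head = nn.Linear(128, n_classes)   # softmax is applied in the loss

    def forward(self, x, return_activations=False):
        z = torch.relu(self.penultimate(self.features(x)))
        return z if return_activations else self.head(z)

def fit_familiarity(model, train_windows, n_pca=50, n_components=5):
    """Fit PCA + Bayesian GMM on penultimate-layer activations of training data."""
    model.eval()
    with torch.no_grad():
        acts = model(train_windows, return_activations=True).numpy()
    pca = PCA(n_components=min(n_pca, acts.shape[1])).fit(acts)
    gmm = BayesianGaussianMixture(n_components=n_components).fit(pca.transform(acts))
    return pca, gmm

def familiarity_scores(model, windows, pca, gmm):
    """Per-sample log-likelihood under the fitted GMM: low means unfamiliar."""
    model.eval()
    with torch.no_grad():
        acts = model(windows, return_activations=True).numpy()
    return gmm.score_samples(pca.transform(acts))

# Example: score 1000 windows shaped (batch, 3 IMU axes, 200 time steps).
if __name__ == "__main__":
    x = torch.randn(1000, 3, 200)
    model = HandednessCNN()
    _ = model(x[:2])                      # initialize the lazy layer
    pca, gmm = fit_familiarity(model, x)
    scores = familiarity_scores(model, x, pca, gmm)
    print("least familiar index:", int(np.argmin(scores)))
```

In practice the model would be trained on labeled windows before scoring; the untrained model here is only meant to show how activations flow into the density estimate.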
## 6. Case Study: Reflecting on Our Data Collection Process

It was unclear how diverse IMU data would influence our modeling experiments, or if the categories developed through our reflexive prompts would meaningfully align with variation within the data. Prior work on human gestures has shown age (Steinteiner et al., 2016), emotion (Steiner et al., 2016), and health (Steiner et al., 2016) influence movement and gesture presentation. Hand position during typing is similarly distinguishable (Steiner et al., 2016), yet IMU data collection rarely includes meta information about participants. Similarly, context is often not documented for image data (e.g. the proverbial question of _what's outside the frame?_). We incorporated our dashboard's _Pre-Collection_ suite of prompts in our own IMU data collection to determine what characteristics to collect for. It was through these prompts that we realized a need to measure additional information, as typical demographics did not capture how people held their phones. We noted that Phone Size, Handedness, and Nail Length (particularly in the case of acrylic nails) may play a role in how people text, despite not being variables typically considered in such tasks. It was also through this process of reflecting on impactful features that we realized the advantages of asking participants to act out various behaviors during their typing tasks. This meaningfully shaped our task. Other measurements that were noted during this process would have added substantial complexity to our collection procedure. Most prominent of these were hand strength and dexterity. These features are impactful to how individuals type--a person with carpal tunnel or arthritis will type differently compared to someone without these conditions--but required additional tooling to accurately capture. Instead, we noted strength and dexterity for future evaluation, to be completed prior to public deployment.

The results of _Collection Monitoring_ led to three instances of additional, retargeted data collection efforts based on unexpected or previously unnoticed skew. During our initial data collection, there were multiple categories which did not match our previously described expected distributions. We call out several in Figure 4 to demonstrate the outcomes of our data collection re-targeting. While all participants were employees from a large tech company, we did not anticipate our initial wave of data collection to be so skewed towards individuals between the ages of 21 and 40. In truth, we initially had no participants over the age of 38 and were concerned that this absence would limit representation within the dataset of alternative typing patterns seen in older populations. As a result, we emphasized diversity in participant age moving forward, with some success--each consecutive wave improved the data coverage. Similarly, we noted that the majority of our participants self-described as White, but that there were no Black or Indigenous participants whatsoever. Despite our best attempts, this was only partially amended; in optimizing across intersectional groups, we were not able to perfectly match our expected or updated distributions (at least within the context of this set of iterations).
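The retargeting just described hinges on comparing each wave of collected data against the documented expectations. A minimal sketch of such a check is below; the category names, expected shares, counts, and tolerance are illustrative only.

```python
# Sketch of a collection-monitoring check: compare observed category counts
# against the expected distribution documented before collection.
# The expected percentages and example counts below are illustrative only.
from collections import Counter

expected = {"18-24": 0.15, "25-34": 0.30, "35-44": 0.25, "45-54": 0.20, "55-64": 0.10}

def audit(observed_labels, expected, tolerance=0.05):
    """Flag categories whose observed share deviates from expectations."""
    counts = Counter(observed_labels)
    total = sum(counts.values())
    flags = {}
    for category, expected_share in expected.items():
        observed_share = counts.get(category, 0) / total if total else 0.0
        gap = observed_share - expected_share
        if abs(gap) > tolerance:
            flags[category] = round(gap, 3)
    return flags

# Example: an early wave skewed toward younger participants.
wave_1 = ["25-34"] * 14 + ["18-24"] * 9 + ["35-44"] * 2
print(audit(wave_1, expected))   # every band is flagged, e.g. {'45-54': -0.2, ...}
```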
In contrast, our initially collected Handedness distribution perfectly matched our expectations. This prompted a discussion of whether this distribution was actually reasonable--despite matching the ratio of left versus right handedness in US populations, we believed that right and left handed individuals would type in dramatically different ways, and thus may require sample size parity to be appropriately learned by the model. Collection Monitoring also provided us with the impetus to explore intersectional groups early on. We noted there was no representation of Black and Female participants with Small phone sizes (or people who identified as female or non-binary in general). Finally, we noted an unexpected flaw in our data collection methodology through Collection Monitoring. Initial scene data _should_ have been equally represented across all scenes, however there were slightly fewer instances of standing_right compared to other scenes. We realized this was due to a small error in how the scenes were randomly presented to participants and fixed this moving forward. Because we were tracking our distributions _while_ collecting data, we were able to ameliorate data coverage and capture unexpected bugs early without slowing downstream development. Finally, while we discuss our evaluation of familiarity at length in Section 9, we note that familiarity redirected our collection efforts in several instances. For one, it led us to seek out additional data for an intersectional subset of Female participants with Large phones and Small hands, as several of these samples were counted among the least familiar. We later realized that a number of data points were noisy, thus the additional data replaced noisy data in the intersectional subset rather than augmenting it.

## 7. Modeling Experiments

The ability to produce diverse subsets using diversity metrics does not guarantee appropriate representation of sensitive attributes. Partially, this is because fairness has multiple measures. Fair treatment across social groups may require different things for different contexts. Consider a dataset in which each data point has a gender. One notion of group fairness is proportional representation, i.e., the distribution of sensitive characteristics in the output set should be identical to that of the input dataset. While useful for ensuring ground truth is not distorted, it may not preclude biased outcomes. Another notion of fairness (argued as necessary for reversing the effects of historical inequality) argues that truly equitable representation within datasets requires that sensitive characteristics (e.g., gender or race) be represented equally, independent of the true ratio present in the input dataset. Increasingly, fairness research has considered not the dataset distributions themselves but rather a model's error rates across demographics (e.g., (Krause et al., 2017)). In this paper, our approach to encouraging fair outcomes considers the amalgamation of both data and model. The following experiments adapt the former definition of fairness--rather than seek a high _average_ accuracy across classes, we look at nuanced performance--accuracy, loss, and misclassification--between intersectional groups. To this end, we consider several questions as part of our designing data evaluation: (1) **Does auditing across our documented categories to increase data diversity improve model generalizability?** This has two-fold implications. We do not know if the categories developed during our Pre-Collection step describe context that is _meaningful_ to a model.
These are not features explicitly learned by the model but rather metadata describing the contexts of the time series data. We anticipate datasets audited for diversity will perform better across intersectional groups. (2) **Is data familiarity useful in auditing model + data?** Similar to the prior question, there are two motivations here. First, familiarity can be used to _debug_ a dataset--to uncover noisy examples and remove or replace them. Second, familiarity can be used to _direct_ data iteration--to recognize what samples are out of distribution (unfamiliar) to the model, therefore highlighting what additional collection must be undertaken before a model is deployed. We expect using data familiarity measures to audit or restructure our training datasets will improve overall model accuracy or ensure similar performance across groups (the form of fairness we are adopting here). We approach our evaluations as if we were attempting to _deploy a model_, as that is our intended application. We evaluate our designing data interventions through a series of modeling experiments, described below. First, we show that diverse data _does_ lead to better performance. Then, we use familiarity scores to uncover noisy data failures within the dataset and describe how removing these samples impacts performance. Finally, we show that supplementing the dataset with unfamiliar samples improves model accuracy and calibrates confidence scores.

Figure 4. Selected views of data distributions across three different waves of data collection. Each new iteration of data collection was the result of evolving understanding about the data. For _Age_, initially there were no participants over the age of 38. _Handedness_ initially met expected distributions, however these did not match downstream needs. In _Race_, there was substantial skew for White participants. An error in data collection led to a skew in which scenes were presented to participants.

## 8. Diverse Data

We sought to answer Q1 through our first set of experiments. We compare "diverse" models to what we expect to be "less diverse" models. We do so for two reasons: first, it may not be clear to practitioners that collecting diverse data early in development is critical to building functional tools. Second, despite best efforts to curate a list of meaningful characteristics, we did not know if these additional data dimensions had any true effect on the classification task. For example, did hand size actually increase the difficulty the model faced classifying left and right handed typing? Both diverse and less diverse models are trained using the same number of training samples and are evaluated on the same test data. Less diverse models are trained on data where one group (e.g., small handed) was left out. In this way, we perform leave-one-out cross-validation _and_ consider the specific effects of a given demographic group. Paying close attention to intersectional groups, we expect to see greater performance stratification in less diverse models; less diverse models will not perform as well generally as diverse models. We describe our experimental protocols below. First, shuffle then randomly select train and test datasets such that no typing trials are split across datasets. We save the test dataset to evaluate every model trained with the current train/test split. Then, we compare the full dataset and train/test distributions using a visual check and earth mover's distance (EMD). If the difference in distributions is significant, repeat the first step.
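A sketch of this split-and-check step is below, assuming scipy's wasserstein_distance and a pandas DataFrame with a trial identifier column; the column names, threshold, and retry loop are illustrative choices, not the paper's exact procedure.

```python
# Sketch of the trial-level split and distribution check described above.
# `df` is assumed to be a pandas DataFrame with a "trial_id" column plus the
# metadata columns to audit; the EMD threshold is an illustrative choice.
import numpy as np
from scipy.stats import wasserstein_distance

def split_by_trial(df, test_frac=0.2, seed=0):
    """Split so that no typing trial is shared between train and test."""
    rng = np.random.default_rng(seed)
    trials = df["trial_id"].unique()
    rng.shuffle(trials)
    test_trials = set(trials[: int(len(trials) * test_frac)])
    test_mask = df["trial_id"].isin(test_trials)
    return df[~test_mask], df[test_mask]

def distribution_gap(full, subset, column):
    """EMD between category distributions, using integer codes for categories."""
    codes = {c: i for i, c in enumerate(sorted(full[column].unique()))}
    return wasserstein_distance(full[column].map(codes), subset[column].map(codes))

def acceptable_split(df, columns, threshold=0.1, max_tries=50):
    """Retry random trial-level splits until train and test both match the full data."""
    for seed in range(max_tries):
        train, test = split_by_trial(df, seed=seed)
        if all(distribution_gap(df, part, c) <= threshold
               for part in (train, test) for c in columns):
            return train, test
    raise RuntimeError("no acceptable split found; revisit threshold or data")
```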
We group the data by category (e.g., _Sex_) then by type (e.g., _Female_). For each category, we compare the number of samples within a group and record the smallest size. Next, randomly downsample each group such that each subset is of equal size to one another. We then compare the distribution of data sampled out to the downsampled group data, and repeat sampling in the case of skew and save the sampled data. Lastly, we repeat the prior step but do so from the complete training set: this is the "diverse" dataset for the category. For each group within a category, we append all other groups together and then leave the current group out such that new training groups were \(Female\cup Intersex\) (for example). For each set within a category, train a new model using our previously described 1D CNN. We monitor for overfitting by setting early stopping based on loss with a patience of 4. We save every model, their weights, training accuracy and loss curves, and training dataset, keeping track of model version. We then generate predictions from the original test set. We repeat \(r=10\) times for each set, keeping the random seed the same. We repeat \(r\) times again, changing the random seed for downsampling to ensure our results are stable. We compare models across overall, group-specific, and intersectional accuracies. We hypothesize that some categories are more meaningful to performance than others. Performance disparity--such as lower accuracy--across the categories _despite_ equal numbers of examples would support this hypothesis.

### Evaluation

When compared to models trained on less diverse data, we found that models with diverse data had an overall higher accuracy (Figure 5) _and_ performed better across intersectional groups, as shown in Figure 6.

Figure 5. The performance of every model on the same test set, split across categories. For each category, a model's label indicates which subgroup was held out from its training set. To ensure fair comparison, within each category each model was trained on the same number of instances. Notice for each category that the "diverse" model (highlighted with a darker color), i.e., the model with no subgroup held out, almost exclusively performs the best, despite having the same number of data instances as the other models. Note that each model was trained 10 x 10 times, keeping the random seed stable for \(r\) times, then repeating with a different seed. Weights from the models with highest accuracy across the trials were kept for later experiments.

For example, Figure 6 B shows a significant dip in performance for the model trained without data from the **right-side-left** condition compared to the diverse model (Figure 6 C). This is as we anticipated, and we see that for data where participants were typing in the **right-side-left** condition, the less-diverse model actually performs _worse_ than random chance. This pattern is repeatable--across \(k\) models where we intentionally left out one group (Figure 6 A), we see a correlated diagonal of lighter color indicating lower testing accuracy, supporting our hypothesis that the extensive characteristics we collected data for _do_ affect IMU performance. In contrast, diverse models show less variation in performance, instead performing better across different demographics--matching a measure of fairness called Equality of Odds, where false negative and false positive rates across groups are similar. Finally, the influence of these demographics on performance varied.
In general, removing activity and age group subsets was more harmful to models than hand size, gender, or race subsets, but there were exceptions. For example, all models performed well on **standing-left** and **standing-right** test data even when subsets were left out of training, while performance on **21-25** testing data varied dramatically. Critically, we found that some intersectional subgroups performed drastically differently compared to their overall group performance, yet we would not have caught these instances if not for careful evaluation and visualizations across these subgroups. In the case of IMU data, which we have argued lacks the semantic meaning necessary to contextualize data, we found that having descriptive information is necessary for evaluating model robustness. This information lays the foundation for debugging, which retrospective documentation--such as what is described for datasheets--does not necessarily provide, since it neither informs nor directs data collection as it occurs. While these records hold substantial value, they are disjoint from data collection _processes_.

Figure 6. Performance for each subgroup on the same testing dataset (y-axis) per model (x-axis), split across categories. The large square matrix (left) shows accuracy per subgroup of models where a subgroup was left out of their training set, whereas the rectangular matrix (right) shows performance for models trained on diverse data. The small gray bar chart to the right of the rectangular matrix indicates how many instances in the test set belong to each subgroup of data. (A) The arrow highlights the diagonal of the matrix: subgroups of data that perform worse than others since this corresponds to a model trained without this particular subgroup. (B) For example, taking the left_side_right model from the matrix (a model whose dataset is missing people typing with their right hand while lying on their left side), we see it performs poorly on the left_side_right subgroup. (C) In comparison, models trained on a more diverse, same sized dataset show no large dip in subgroup accuracy.

## 9. Familiarity

We evaluate familiarity's efficacy in directing dataset iteration (Q2) through a simple series of experiments. First, we look to improve our best IMU classifier from the prior experiment through directly modifying the training data. We do this using only "self familiarity" scores (i.e., the familiarity of the training data themselves)--measuring only what the model has been exposed to in training so as to not overfit.

### Familiarity for Debugging

Our dataset was collected "in the wild" without additional annotation from participants. While participants were given clear instructions, we still anticipated that a small number of samples would show significant noise or distortion--participants might drop their phones as they type, or switch hands part way through a session. This is often the case for any data collection process, but left uncaught, such samples may introduce unwanted effects downstream. Thus, a method to surface potentially noisy samples was necessary. Inspecting time series plots for each session was untenable due to the size of data collected. Familiarity offers a possible solution to uncovering these noisy samples. To use familiarity in this way, we must at least partially train a model on the current dataset, then find the most unfamiliar samples, as described in Section 5.
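A minimal sketch of that step, reusing the familiarity_scores helper from the earlier sketch: surface the least familiar fraction of the training data for inspection. The 0.1% fraction mirrors the setting reported below; the review queue itself is illustrative.

```python
# Sketch of surfacing the least familiar samples for human review, given an
# array of familiarity scores (one per training window). The 0.1% fraction
# mirrors the paper's setting; everything else is illustrative.
import numpy as np

def least_familiar_indices(scores, fraction=0.001):
    """Return indices of the lowest-scoring (least familiar) samples."""
    n = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[:n]

# Example: flag windows for manual inspection rather than silent deletion.
# review_queue = [training_windows[i] for i in least_familiar_indices(scores)]
```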
This approach does not discriminate between noise and conceptually difficult samples, requiring human review to determine whether said samples should be removed; however, it _greatly_ reduces the number of samples to check compared to the entire dataset. In this way, data familiarity offers an alternative to other work such as (Sandel, 2017; Sandel et al., 2018), which incorporate loss as a metric for capturing noisy samples. We hypothesize that the least familiar samples within a noisy dataset will include instances of noisy data which may pose the greatest harm to the model. We explored how to incorporate familiarity as a tool for debugging first through an automated approach to removing data, then by incorporating human review. In initial experiments, we replaced familiar data with noisy data, rather than the OOD data we had expected. As a result, we discovered our dataset was far noisier than modeling and review initially indicated. Our protocol is as follows: first, we train an initial model on all available training data. We apply self-familiarity to the training and testing set, selecting only 0.1% of the data, which corresponds to the least familiar samples. This data is either removed from the dataset, or visualized and manually reviewed per sample to evaluate whether the samples are truly noisy or simply uncommon. We removed the same number of samples with manual review of truly noisy data as with the automated removal, using the sampling methods described in Section 9.2. We compared outcomes of both automated removal and removal through human review, but found that the deleted portion of least familiar data included both noisy data and critical outliers. In the following experiments, we evaluate the results of these interventions against a baseline case where no data is removed. In all cases, models are trained following Section 8's protocol.

#### 9.1.1. Evaluation

In practice, we found that familiarity worked well as a tool to encourage diversity _and_ for debugging. Before removing noisy data from the complete dataset, we found that a large percentage of the least familiar samples showed significant distortion, despite our efforts to normalize the data. Because of the presence of these noisy data, running our familiarity experiments with data that was not cleaned did not show the same levels of general improvement. In this context, matching metadata to noisy samples was not the correct comparison--these samples were not exemplars of the subpopulation. In manually evaluating our unfamiliar data before removal, we found there were cases where unfamiliar samples were not noisy, but rather underrepresented intersectional groups. For instance, a person identifying as an **Asian Female** with **Small** hands and **Large** phone was as unfamiliar to the model as incredibly noisy examples (Footnote 4).

Figure 7. Distinct examples of data characterized as most "unfamiliar" to a model. (A) is an example of data we would consider out of distribution, (B) presents a case of sensor failure--the sensor stopped recording part way through the task--and (C) shows a particularly noisy sample, likely where someone dropped their phone mid-typing.

Noisy data--such as from someone dropping their phone--has different implications for the model than a sample that's simply unusual because the angle at which a phone was held is different. We found there were relatively few instances of these characteristics presented together within the dataset (from only a few individuals).
For this reason, simply removing 0.1% of the least familiar data prior to experiments did not improve performance to the extent we see when manual review was implemented. That is to say, familiarity cannot distinguish between distorted noise and underrepresented or out of distribution data. Therefore, for datasets where significant noise may be present, _human review_ is necessary to evaluate the quality of the data.

Footnote 4: We show examples of distinct, least familiar samples to an example model in Figure 7.

### Familiarity for Diverse Data Coverage

Then, we compare familiarity scores across the different descriptive groups, as described in Section 8. These scores can be used to determine next steps for additional data collection, augmentation, or modified (over/under) sampling. Here, we sample _out_ a percentage of the _most_ familiar data, and _add_ data that matches the intersectional characteristics for the same percentage of _least_ familiar data. Substituted data was held out from training in Section 8, and thus is new to the models. We match samples based on the metadata characteristics, using combinatorial optimization to find the most similar instances to unfamiliar data. For each experiment, we look exclusively at familiarity for the final dense layer of our model--a common choice for research exploring model embeddings as it is "most aligned" with what is human recognizable. Using PCA, we project down to 50 dimensions, then fit 5 GMMs to the last dense layer for each 1D CNN trained in Section 8. We call this model \(M_{1}\). Scoring our training data, we save familiarity scores and model weights. We vary the range of familiarity scores to sample from, the percentage of data, and two sampling methods--top \(k\) and random selection from a least familiar data range--compared to a random baseline, and compare model performance across each. We then select a window range from the data distribution to sample the most familiar data from. A key question in developing our experimental protocol was which sampling method to incorporate and how to determine the optimal sampling sizes. Given that our intention was to _encourage_ diversity in our dataset, an ineffective sampling strategy might exacerbate edge case failures. Thus we explored three general approaches: (1) Replace the \(k\) most familiar samples with the \(k\) least familiar samples; (2) Distributed sampling across a window of \(k+i\) most familiar samples with \(k\) least familiar samples, where \(i\) represents a percentage of the overall training set; and (3) Distributed sampling across a window of \(k+i\) most familiar samples with \(k+i\) least familiar samples, where \(i\) represents a percentage of the overall training set. Each sampling mechanism was compared across multiple \(k\) and \(k+i\) values to determine the relative "sweet spot" for our sampling strategy given a particular training dataset. We randomly select \(X\) percent of best and worst scores, varying the percentage between 0.01% and 0.5%. We train model(s) on each variation of window size and sampling percentage. We repeat the previous steps \(k\) times to ensure a multifold validation, then compare the intersectional performances of \(M_{1}\) to the new model \(M_{t}\) trained on the familiarity-informed dataset. This is repeated per data cleaning scenario described in Section 9.1.
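The sketch below corresponds roughly to approach (1): drop the k most familiar training windows and add held-out windows whose metadata matches the k least familiar ones. It assumes precomputed familiarity scores, and a simple exact-metadata match stands in for the paper's combinatorial optimization.

```python
# Sketch of familiarity-guided resampling: drop the k most familiar training
# windows and add held-out windows whose metadata matches the k least familiar
# ones. Assumes precomputed `scores`; exact-metadata matching stands in for the
# paper's combinatorial optimization.
import numpy as np

def familiarity_swap(train_meta, holdout_meta, scores, fraction=0.001):
    """Return (indices to drop from train, indices to add from holdout)."""
    k = max(1, int(len(scores) * fraction))
    order = np.argsort(scores)
    least_familiar, most_familiar = order[:k], order[-k:]

    to_add, used = [], set()
    for i in least_familiar:
        # Find an unused held-out sample with the same metadata profile.
        for j, meta in enumerate(holdout_meta):
            if j not in used and meta == train_meta[i]:
                to_add.append(j)
                used.add(j)
                break
    return list(most_familiar[: len(to_add)]), to_add

# Example metadata profiles (illustrative): (sex, hand size, phone size).
train_meta = [("F", "small", "large"), ("M", "large", "small")] * 50
holdout_meta = [("F", "small", "large")] * 10
scores = np.random.default_rng(0).normal(size=len(train_meta))
drop, add = familiarity_swap(train_meta, holdout_meta, scores, fraction=0.02)
```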
We structure these experiments--varying window size and sampling percentage--to uncover a sweet spot: too much data removed may harm performance of currently familiar groups, and too large a window might impinge on less familiar data.

#### 9.2.1. Evaluation

Self-familiarity scores create a distinct curve, with unfamiliar data falling into the long tail. We find accuracy scores are far more striated prior to familiarity interventions, showing some concepts are learned better than others, and that following familiarity interventions, models do show targeted improvement. Models performed more poorly in regions with high numbers of low familiarity data (an example of which is shown in Figure 9). Of note, the models did not necessarily improve overall--although this was frequently the case--but instead showed improvement in areas of low performance and regression in those with high performance. This aligns with a working definition of fairness we described previously. In Figure 9, we show two model performances on intersectional groups: (A) is our highest performing model trained on the **diverse-data-scene** dataset with no familiarity intervention. We can see lower accuracies in **back-bed-left** and **right-side-left** compared to other subgroups. In contrast, **sitting-left** and **back-bed-right** had much higher accuracy compared to other subgroups. Figure 9 (B) shows the difference between this model and a model trained on the same data with familiarity interventions. Here, regions that previously showed relatively poor performance showed dramatic improvement, while those with higher relative accuracy showed some regression. Finally, we found that selecting the 0.1% least familiar data was optimal for our experiments. This percentage will depend on the distribution and shape of the underlying dataset, and when incorporating familiarity in future experiments a similar exploratory set of experiments should be conducted initially. Overall, we noted that _familiarity captured which samples needed increased representation within the dataset_. So long as the data was cleaned, we found this trend was consistent across all models.

## 10. Discussion and Future Work

It is clear that we need processes which integrate data _and_ models in systematic, transparent ways. Instances of unrepresentative data remain unidentified in research and product development (Zhu et al., 2017), impacting user experiences. Our goal for _designing data_ is to support more equitable and inclusive machine learning data collection and modeling development. Data will always be shaped by the perspective of the observer; our work highlights how reflection and systematic processes may curtail harmful biasing effects earlier in development, before they are difficult to disentangle from the dataset. _Designing data_ ensures that intentional and critical reflection is given to the what and how of data collection, even when there is only a small number of people influencing it. While we do not suggest _designing data_ replaces valuable tools such as expert panels and focus groups, these resources are often inaccessible or prohibitively expensive, and offer only a small portion of the solution to bias mitigation. It was developed in response to existing challenges in data collection and model deployment that are currently unmet holistically (see Section 3). While each step can be incorporated in isolation, the interventions are complementary, compensating for the cascading effects of both upstream and downstream missteps.
While broadly applicable, we believe our approach is particularly useful for software engineers and data scientists building tools for real world applications.

### Limitations and Future Extensions of Familiarity

_Distinguishing between rare and noisy data._ A weakness of familiarity is that we have no current method of distinguishing noisy samples from out of distribution data. While an unfamiliar sample might stand out to the model, in many cases, human review is necessary to evaluate its implications. For this reason, _in very noisy datasets_, familiarity may be a tool best used for debugging. Future research might seek to incorporate algorithmic methods of distinguishing sources of uncertainty; however, there is currently little work on the topic. Existing research either relies on the learning rate as a proxy for discriminating types of uncertainty as aleatoric or epistemic [22], for example, or builds on Bayesian networks [44]. Both face various weaknesses, and there remains great need for additional techniques and evaluations.

_Familiarity for new data._ While we explored familiarity largely from the perspective of "self-familiarity"--that is, what the model has already been exposed to--it also introduces a mechanism by which we can understand how a model responds to data it has previously not seen. This may offer a mechanism of transparency through which future users could evaluate how a model responds to new data. In this work, we computed familiarity from a single layer. In future work, we will explore how familiarity computed at different layers can be leveraged. Given that each layer captures distinct features within the data, aggregating information across depths of the model may lead to more holistic identification of unfamiliar data _and_ of which features, specifically, are unfamiliar to the model. One way to do this is through a Product of Experts (PoE) paradigm [32] where each layer is considered an "expert".

_Comparisons against active learning._ On the surface, familiarity appears similar to active learning (AL). AL requires practitioners to choose which data to use given a large collection. In our scenario, we must understand which data to _collect_ or gather when there is no additional data readily available to run the AL algorithm on. One way to circumvent this difference is to apply AL to the training set, and then extract statistics on the metadata that the AL algorithm indicates as most useful. For example, one could use the entropy of the logits: high entropy on a data point might be an indication that the model is still uncertain about that type of data. An issue with such an approach is that it implicitly assumes models are well calibrated, which is not always the case.

Figure 8. Comparison of intersectional groups of a less diverse model (A) to a diverse model (B). Striated accuracy across populations--as described by the metadata descriptions--performed worse when groups were left out, indicating that these characteristics were aligned with meaningful diversity in the data.

_Interactive systems for designing data._ Designing data advances how we account for the interplay between data and model (Srivastava et al., 2016), considering both within the deployment cycle to compensate for missteps in either. In our case study, we use data visualizations (e.g., Figure 2) to compare practitioners' expectations against collected data distributions, then visualize familiarity to explore rare or noisy samples.
The visualizations and interfaces used in this work are largely static; however, we see a great opportunity to build the designing data process into future interactive systems and tools for better data work and model evaluation. From the HCI and visualization communities, there are a number of interactive systems that have helped ML practitioners explore their data (Bummer et al., 2016; Srivastava et al., 2016; Srivastava et al., 2016) and evaluate their models (Bummer et al., 2016; Srivastava et al., 2016; Srivastava et al., 2016); for an in-depth survey on visual analytics for ML see (Srivastava et al., 2016). Directions for future interactive systems might include tools to help practitioners reflect on their data collection practices (e.g., digging into their expectations, as discussed in Section 6), or tools to direct familiarity analyses (shown in Section 9).

### Conclusion

Through an interdisciplinary meeting of human-centered interventions and algorithmic evaluation, designing data emphasizes planning over rapid implementation to ensure prototype datasets are conscientiously designed before deployment. We argue that the earlier data designers think about diversity, the more they can reduce future technical debt. Much like in refactoring code, prototype data must be refactored regularly prior to production, else early data design decisions may lead to unforeseen downstream effects.

###### Acknowledgements.

We thank our colleagues at Apple for their time and effort participating in our research. We especially thank Kayur Patel, Donghao Ren, and Halden Lin for their help and guidance.
2305.05585
Improving Implicit Feedback-Based Recommendation through Multi-Behavior Alignment
Recommender systems that learn from implicit feedback often use large volumes of a single type of implicit user feedback, such as clicks, to enhance the prediction of sparse target behavior such as purchases. Using multiple types of implicit user feedback for such target behavior prediction purposes is still an open question. Existing studies that attempted to learn from multiple types of user behavior often fail to: (i) learn universal and accurate user preferences from different behavioral data distributions, and (ii) overcome the noise and bias in observed implicit user feedback. To address the above problems, we propose multi-behavior alignment (MBA), a novel recommendation framework that learns from implicit feedback by using multiple types of behavioral data. We conjecture that multiple types of behavior from the same user (e.g., clicks and purchases) should reflect similar preferences of that user. To this end, we regard the underlying universal user preferences as a latent variable. The variable is inferred by maximizing the likelihood of multiple observed behavioral data distributions and, at the same time, minimizing the Kullback-Leibler divergence (KL-divergence) between user models learned from auxiliary behavior (such as clicks or views) and the target behavior separately. MBA infers universal user preferences from multi-behavior data and performs data denoising to enable effective knowledge transfer. We conduct experiments on three datasets, including a dataset collected from an operational e-commerce platform. Empirical results demonstrate the effectiveness of our proposed method in utilizing multiple types of behavioral data to enhance the prediction of the target behavior.
Xin Xin, Xiangyuan Liu, Hanbing Wang, Pengjie Ren, Zhumin Chen, Jiahuan Lei, Xinlei Shi, Hengliang Luo, Joemon Jose, Maarten de Rijke, Zhaochun Ren
2023-05-09T16:19:07Z
http://arxiv.org/abs/2305.05585v1
# Improving Implicit Feedback-Based Recommendation through Multi-Behavior Alignment ###### Abstract. Recommender systems that learn from implicit feedback often use large volumes of a single type of implicit user feedback, such as clicks, to enhance the prediction of sparse target behavior such as purchases. Using multiple types of implicit user feedback for such target behavior prediction purposes is still an open question. Existing studies that attempted to learn from multiple types of user behavior often fail to: (i) learn universal and accurate user preferences from different behavioral data distributions, and (ii) overcome the noise and bias in observed implicit user feedback. To address the above problems, we propose **multi**-behavior alignment (MBA), a novel recommendation framework that learns from implicit feedback by using multiple types of behavioral data. We conjecture that multiple types of behavior from the same user (e.g., clicks and purchases) should reflect similar preferences of that user. To this end, we regard the underlying universal user preferences as a latent variable. The variable is inferred by maximizing the likelihood of multiple observed behavioral data distributions and, at the same time, minimizing the Kullback-Leibler divergence (KL-divergence) between user models learned from auxiliary behavior (such as clicks or views) and the target behavior separately. MBA infers universal user preferences from multi-behavior data and performs data denoising to enable effective knowledge transfer. We conduct experiments on three datasets, including a dataset collected from an operational e-commerce platform. Empirical results demonstrate the effectiveness of our proposed method in utilizing multiple types of behavioral data to enhance the prediction of the target behavior. Implicit feedback recommendation, Multi-behavior recommendation, Recommendation denoising, Transfer learning + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. 
+ Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. + Footnote †: Equal contribution. 
## 1. Introduction

Recommender systems that learn from implicit user feedback are typically trained on a single type of implicit user behavior, such as clicks.
However, in real-world scenarios, multiple types of user behavior are logged when a user interacts with a recommender system. For example, users may click, add to a cart, and purchase items on an e-commerce platform [31]. Simply learning recommenders from a single type of behavioral data such as clicks can lead to a misunderstanding of a user's real user preferences since the click data is noisy and can easily be corrupted due to bias [5], and thus lead to suboptimal target behavior (e.g., purchases) predictions. Meanwhile, only considering purchase data tends to lead to severe cold-start problems [26; 41; 48] and data sparsity problems [23; 27]. **Using multiple types of behavioral data.** How can we use multiple types of _auxiliary_ behavioral data (such as clicks) to enhance the prediction of sparse _target_ user behavior (such as purchases) and thereby improve recommendation performance? Some prior work [2; 12] has used multi-task learning to train recommender systems on both target behavior and multiple types of auxiliary behavior. Building on recent advances in graph neural networks, Jin et al. [18] encode target behavior and multiple types of auxiliary behavior into a heterogeneous graph and perform convolution operations on the constructed graph for recommendation. In addition, recent research tries to integrate the micro-behavior of user-item interactions into representation learning in the sequential and session-based recommendation [25; 44; 46]. These publications focus on mining user preferences from user-item interactions, which is different from our task of predicting target behavior from multiple types of user behavior. **Limitations of current approaches.** Prior work on using multiple types of behavioral data to improve the prediction of the target behavior in a recommendation setting has two main limitations. The first limitation concerns the gap between data distributions of different types of behavior. This gap impacts the learning of universal and effective user preferences. For example, users may have clicked on but not purchased items, resulting in different positive and negative instance distributions across auxiliary and target behaviors. Existing work typically learns separate user preferences for different types of behavior and then combines those preferences to obtain an aggregate user representation. We argue that: (i) user preferences learned separately based on different types of behavior may not consistently lead to the true user preferences, and (ii) multiple types of user behavior should reflect similar user preferences; in other words, there should be an underlying universal set of user preferences under different types of behavior of the same user. The second limitation concerns the presence of noise and bias in auxiliary behavioral data, which impacts knowledge extraction and transfer. A basic assumption of recommendations based on implicit feedback is that observed interactions between users and items reflect positive user preferences, while unobserved interactions are considered negative training instances. However, this assumption seldom holds in reality. A click may be triggered by popularity bias [5], which does not reflect a positive preference. And an unobserved interaction may be attributed to a lack of exposure [6]. Hence, simply incorporating noisy or biased behavioral data may lead to sub-optimal recommendation performance. 
**Motivation.** Our assumption is that multiple types of behavior from the same user (e.g., clicks and purchases) should reflect similar preferences of that user. To illustrate this assumption, consider Figure 1, which shows distributions of items that two users (\(u_{1}\) and \(u_{2}\)) interacted with (clicks \(c\) and purchases \(p\)), in the Beibei and Taobao datasets (described in Section 4.2 below). For both users, the items they clicked or purchased are relatively close. These observations motivate our hypothesis that multiple types of user behavior reflect similar user preferences, which is vital to improve the recommendation performance further. **Proposed method.** To address the problem of learning from multiple types of auxiliary behavioral data and improve the prediction of the target behavior (and hence recommendation performance), we propose a training framework called **multi-behavior alignment** (MBA). MBA aligns user preferences learned from different types of behavior. The key assumption behind MBA is that multiple types of behavior from the same user reflect similar underlying user preferences. To address the data distribution limitation mentioned above, we utilize KL-divergence to measure the discrepancy between user models learned from multiple types of auxiliary behavior and target behavior, and then conduct knowledge transfer by minimizing this discrepancy to improve the recommendation performance. For the second limitation mentioned above (concerning noise and bias in behavioral data), MBA regards the underlying universal user preferences as a latent variable. The variable is then inferred by maximizing the likelihood of multiple types of observed behavioral data while minimizing the discrepancy between models trained on different types of behavioral data. In this manner, MBA denoises multiple types of behavioral data and enables more effective knowledge transfer across multiple types of user behavior. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on two open benchmark datasets and one dataset collected from an operational e-commerce platform. Figure 1. Distributions of items interacted with by two users in the Beibei and Taobao datasets (described in §4.2). Item representations are obtained by a matrix factorization model trained on the purchase behavior data. \(u_{i\ell}\) (\(u_{i\ell}\)) represents the distribution of items clicked (purchased) by user \(u_{i}\). Experimental results show that the proposed MBA framework outperforms related state-of-the-art baselines. **Main contributions.** Our main contributions are as follows: * We argue that multiple types of auxiliary and target behavior should reflect similar user preferences, and we propose to infer the true user preferences from multiple types of behavioral data. * We propose a learning framework MBA to jointly perform data denoising and knowledge transfer across multiple types of behavioral data to enhance target behavior prediction and hence improve the recommendation performance. * We conduct experiments on three datasets to demonstrate the effectiveness of the MBA method. One of these datasets is collected from an operational e-commerce platform, and includes clicks and purchase behavior data. Experimental results show state-of-the-art recommendation performance of the proposed MBA method. ## 2. Related Work We review prior work on multi-behavior recommendation and on denoising methods for recommendation from implicit feedback. 
### Multi-behavior recommendation Unlike conventional implicit feedback recommendation models (Kang et al., 2018; Liu et al., 2019; Liu et al., 2019), which train a recommender on a single type of user behavior (e.g., clicks), multi-behavior recommendation models use multiple types of auxiliary behavior data to enhance the recommendation performance on target behavior (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). Recent studies use multi-task learning to perform joint optimization on learning auxiliary behavior and target behavior. For example, Gao et al. (2018) propose a multi-task learning framework to learn user preferences from multi-behavior data based on a pre-defined relationship between different behavior. Since different behavioral interactions between users and items can form a heterogeneous graph, recent studies also focus on using graph neural network (GNN) to mine the correlations among different types of behavior. For example, Wang et al. (2019) uses the auxiliary behavior data to build global item-to-item relations and further improve the recommendation performance of target behavior. Jin et al. (2019) propose a graph convolutional network (GCN) based model on capturing the diverse influence of different types of behavior and the various semantics of different types of behavior. Xia et al. (2019) incorporate multi-behavior signals through graph-based meta-learning. Chen et al. (2019) regard the multi-behavior recommendation task as a multi-relationship prediction task and train the recommender with an efficient non-sampling method. Additionally, some studies apply contrastive learning or a variational autoencoder (VAE) to improve the multi-behavior recommender. Xuan et al. (2019) propose a knowledge graph enhanced contrastive learning framework to capture multi-behavioral dependencies better and solve the data sparsity problem of the target behavior, and Ma et al. (2019) propose a VAE-based model to conduct multi-behavior recommendation. Another related research field is based on micro-behaviors (Xuan et al., 2019; Liu et al., 2019; Liu et al., 2019), which utilize the micro-operation sequence in the process of user-item interactions to capture user preferences and predict the next item. For example, Yuan et al. (2019) focus on "sequential patterns" and "dyadic relational patterns" in micro-behaviors, and then use an extended self-attention network to mine the relationship between micro-behavior and user preferences. This work focuses on mining user preferences from the micro-operation sequence. However, existing studies still neglect the different data distributions across multiple types of user behavior, and thus fail to learn accurate and universal user preferences. Besides, prior work does not consider the noisy signals of user implicit feedback data, resulting in ineffective knowledge extraction and transfers. ### Recommendation denoising Existing recommender systems are usually trained with implicit feedback since it is much easier to collect than explicit ratings (Xuan et al., 2019). Recently, some research (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) has pointed out that implicit feedback can easily be corrupted by different factors, such as various kinds of bias (Chen et al., 2019) or users' mistaken clicks. Therefore, there have been efforts aimed at alleviating the noisy problem of implicit recommendation. 
These efforts include sample selection methods (Xuan et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), re-weighting methods (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), methods using additional information (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), and methods designing specific denoising architectures (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). Sample selection methods aim to design more effective samplers for model training. For example, Gantner et al. (Gantner et al., 2019) consider popular but un-interacted items as items that are highly likely to be negative ones, while Ding et al. (Ding et al., 2020) consider clicked but not purchased items as likely to be negative samples. Re-weighting methods typically identify noisy samples as instances with higher loss values and then assign lower weights to them. For example, Wang et al. (Wang et al., 2019) discard the large-loss samples with a dynamic threshold in each iteration. Wang et al. (Wang et al., 2019) utilize the differences between model predictions as the denoising signals. Additional information such as dwell time (Liu et al., 2019), gaze pattern (Liu et al., 2019) and auxiliary item features (Liu et al., 2019) can also be used to denoise implicit feedback. Methods designing specific denoising architectures improve the robustness of recommender systems by designing special modules. Wu et al. (Wu et al., 2019) use self-supervised learning on user-item interaction graphs to improve the robustness of graph-based recommendation models. Gao et al. (2018) utilize the self-labeled memorized data as denoising signals to improve the robustness of recommendation models. Unlike the work listed above, which does not consider multiple types of user behavior, in this work, we focus on extracting underlying user preferences from (potentially) corrupted multi-behavior data and then conducting knowledge transfer to improve the recommendation performance. ## 3. Method In this section, we detail our proposed MBA framework for multi-behavior recommendation. We first introduce notations and the problem formulation in Section 3.1. After that, we describe how to perform multi-behavior alignment on noisy implicit feedback in Section 3.2. Finally, training details are given in Section 3.3. ### Notation and problem formulation We write \(u\in\mathcal{U}\) and \(i\in\mathcal{I}\) for a user and an item, where \(\mathcal{U}\) and \(\mathcal{I}\) indicate the user set and the item set, respectively. Without loss of generality, we regard click behavior as the auxiliary behavior and purchase behavior as the target behavior. We write \(\mathbf{R}_{f}\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\) for the observed purchase behavior data. Specifically, each item \(r^{f}_{u,i}\in\mathbf{R}_{f}\) is set to 1 if there is a purchase behavior between user and item \(i\); otherwise \(r^{f}_{u,i}\) is set as 0. Similarly, we denote \(\mathbf{R}_{g}\in\mathbb{R}^{|\mathcal{U}|\times|I}\) as the observed click behavior data, where each \(r^{g}_{u,i}\in\mathbf{R}_{g}\) is set as 1 if there is a click behavior between user \(u\) and item \(i\); otherwise \(r^{g}_{u,i}=0\). We use \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{g})\) to denote the user preference distribution learned from \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\), respectively. 
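To make the notation concrete, the binary matrices \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\) can be assembled from logged purchase and click events as in the following minimal sketch; the log format and helper names here are assumptions made for illustration, not part of the paper's released code.

```
import numpy as np

def build_interaction_matrix(events, num_users, num_items):
    """Build a binary user-item matrix from (user, item) event pairs.

    An entry is 1 if at least one interaction of this behavior type was
    observed between the user and the item, and 0 otherwise.
    """
    R = np.zeros((num_users, num_items), dtype=np.float32)
    for u, i in events:
        R[u, i] = 1.0
    return R

# Hypothetical logs: lists of (user_id, item_id) pairs.
purchase_events = [(0, 2), (1, 5)]                    # target behavior
click_events = [(0, 2), (0, 3), (1, 5), (1, 7)]       # auxiliary behavior

R_f = build_interaction_matrix(purchase_events, num_users=2, num_items=10)
R_g = build_interaction_matrix(click_events, num_users=2, num_items=10)
```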
We assume that there is an underlying latent true user preference matrix \(\mathbf{R}_{t}\) with \(r^{t}_{u,i}\in\mathbf{R}_{t}\) as the true preference of user \(u\) over item \(i\). The probabilistic distribution of \(\mathbf{R}_{t}\) is denoted as \(P(\mathbf{R}_{t})\). Both the observation of \(\mathbf{R}_{f}\) and that of \(\mathbf{R}_{g}\) are motivated by the latent universal true user preference distribution \(P(\mathbf{R}_{t})\) plus different kinds of noise or bias. Formally, we assume that \(P(\mathbf{R}_{t})\) follows a Bernoulli distribution and can be approximated by a target recommender model \(t_{\theta}\) with \(\theta\) as the parameters: \[r^{t}_{u,i}\sim\text{Bernoulli}(t_{\theta}(u,i)). \tag{1}\] Since the true user preferences \(r^{t}_{u,i}\) are intractable, we need to introduce the learning signals from the observed \(r^{f}_{u,i}\) and \(r^{g}_{u,i}\) to infer \(r^{t}_{u,i}\). As a result, we introduce the following models to depict the correlations between the observed user implicit feedback (i.e., \(r^{f}_{u,i}\) and \(r^{g}_{u,i}\)) and the latent true user preferences \(r^{t}_{u,i}\): \[\begin{split} r^{f}_{u,i}\mid r^{t}_{u,i}&=0\sim\text{Bernoulli}(h^{f}_{\phi}(u,i))\\ r^{f}_{u,i}\mid r^{t}_{u,i}&=1\sim\text{Bernoulli}(h^{f}_{\varphi}(u,i))\\ r^{g}_{u,i}\mid r^{t}_{u,i}&=0\sim\text{Bernoulli}(h^{g}_{\phi^{\prime}}(u,i))\\ r^{g}_{u,i}\mid r^{t}_{u,i}&=1\sim\text{Bernoulli}(h^{g}_{\varphi^{\prime}}(u,i)),\end{split} \tag{2}\] where \(h^{f}_{\phi}(u,i)\) and \(h^{f}_{\varphi}(u,i)\) are parameterized by \(\phi\) and \(\varphi\) in the observed purchase behavior data, respectively, while \(h^{g}_{\phi^{\prime}}(u,i)\) and \(h^{g}_{\varphi^{\prime}}(u,i)\) are parameterized by \(\phi^{\prime}\) and \(\varphi^{\prime}\) in the observed click behavior data, respectively. The target of our task is formulated as follows: given the observed multi-behavior user implicit feedback, i.e., \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\), we aim to train the latent true user preference model \(t_{\theta}\), and then use \(t_{\theta}\) to improve the prediction performance on the target behavior. More precisely, during model inference, we introduce both \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{t})\) to perform the target behavior recommendation and use a hyperparameter \(\beta\) to balance \(P(\mathbf{R}_{t})\) and \(P(\mathbf{R}_{f})\), which is formulated as: \[\text{score}=\beta P(\mathbf{R}_{t})+(1-\beta)P(\mathbf{R}_{f}). \tag{3}\] We select the items with the highest scores as the target behavior recommendation results.

### Multi-behavior alignment on noisy data

The key motivation for MBA is that multiple types of user behavior should reflect similar user preferences. Hence, Eq. 4 is expected to hold once the training models have converged: \[P(\mathbf{R}_{f})\approx P(\mathbf{R}_{g})\approx P(\mathbf{R}_{t}). \tag{4}\] Therefore, \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{t})\) should have a relatively small KL-divergence, which is formulated as follows: \[KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]=E_{P(\mathbf{R}_{f})}[\log P(\mathbf{R}_{f})-\log P(\mathbf{R}_{t})]. \tag{5}\] Similarly, we also have the KL-divergence between \(P(\mathbf{R}_{g})\) and \(P(\mathbf{R}_{t})\): \[KL[P(\mathbf{R}_{g})\|P(\mathbf{R}_{t})]=E_{P(\mathbf{R}_{g})}[\log P(\mathbf{R}_{g})-\log P(\mathbf{R}_{t})]. \tag{6}\] However, naively minimizing the above KL-divergences is not feasible, since doing so overlooks the data distribution gaps and the correlations between multiple types of behavior.
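As a concrete reading of Eqs. 5 and 6, the element-wise KL-divergence between two Bernoulli-parameterized predictions over the same (user, item) pairs can be written as in this sketch (our illustration, not the authors' code); the derivation that follows explains why minimizing only these terms is not enough.

```
import torch

def bernoulli_kl(p, q, eps=1e-8):
    """KL( Bernoulli(p) || Bernoulli(q) ) computed element-wise.

    p and q are predicted interaction probabilities for the same batch of
    (user, item) pairs, e.g. p from the model trained on purchases and q
    from the latent preference model t_theta.
    """
    p = p.clamp(eps, 1 - eps)
    q = q.clamp(eps, 1 - eps)
    return p * (p / q).log() + (1 - p) * ((1 - p) / (1 - q)).log()

# Example: predictions of two recommenders on the same (u, i) batch.
p_f = torch.tensor([0.9, 0.2, 0.6])  # learned from purchase data
p_t = torch.tensor([0.8, 0.3, 0.5])  # latent preference model
kl_ft = bernoulli_kl(p_f, p_t).mean()
```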
To address this issue, we use Bayes' theorem to rewrite \(P(\mathbf{R}_{t})\) as follows: \[P(\mathbf{R}_{t})=\frac{P(\mathbf{R}_{f})P(\mathbf{R}_{t}\mid\mathbf{R}_{f})}{P(\mathbf{R}_{f}\mid\mathbf{R}_{t})}=\frac{P(\mathbf{R}_{g})P(\mathbf{R}_{t}\mid\mathbf{R}_{g})}{P(\mathbf{R}_{g}\mid\mathbf{R}_{t})}. \tag{7}\] By substituting the right part of Eq. 7 into Eq. 5 and rearranging terms, we obtain the following equation: \[E_{P(\mathbf{R}_{f})}[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})]-KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]=\log P(\mathbf{R}_{g})-KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t}\mid\mathbf{R}_{g})]. \tag{8}\] Since \(KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t}\mid\mathbf{R}_{g})]\geq 0\), the left side of Eq. 8 is an approximate lower bound of the logarithm \(\log P(\mathbf{R}_{g})\). The bound is satisfied if, and only if, \(P(\mathbf{R}_{f})\) perfectly recovers \(P(\mathbf{R}_{t}\mid\mathbf{R}_{g})\), which means that \(P(\mathbf{R}_{f})\), trained on the observed target behavior, can perfectly approximate the true user preference distribution captured from the auxiliary behavior data. The above condition is in line with the main motivation of MBA, i.e., different behavior data should reflect similar user preferences. We see that the left side of Eq. 8 is based on the expectation over \(P(\mathbf{R}_{f})\), which means that we are trying to train \(P(\mathbf{R}_{f})\) with the given corrupted auxiliary behavior data \(\mathbf{R}_{g}\) (i.e., the term \(E_{P(\mathbf{R}_{f})}[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})]\)) and then to transmit the information from \(P(\mathbf{R}_{f})\) to \(P(\mathbf{R}_{t})\) via the term \(KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]\). Such a learning process is ineffective for learning the true user preference distribution \(P(\mathbf{R}_{t})\) and the target recommender model \(t_{\theta}\). To overcome the above issue, note that according to Eq. 4, when the training process has converged, the preference distributions \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{t})\) will be close to each other. As a result, we can change the expectation over \(P(\mathbf{R}_{f})\) to the expectation over \(P(\mathbf{R}_{t})\) to learn \(P(\mathbf{R}_{t})\) more effectively. So we modify the left side of Eq. 8 as \[E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})]-KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]\approx\log P(\mathbf{R}_{g})-KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t}\mid\mathbf{R}_{g})]. \tag{9}\] Similarly, if we substitute the middle part of Eq. 7 into Eq. 6 and perform similar derivations, we can obtain: \[E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t})]-KL[P(\mathbf{R}_{g})\|P(\mathbf{R}_{t})]\approx\log P(\mathbf{R}_{f})-KL[P(\mathbf{R}_{g})\|P(\mathbf{R}_{t}\mid\mathbf{R}_{f})]. \tag{10}\] The left side of Eq. 10 is an approximate lower bound of \(\log P(\mathbf{R}_{f})\). The bound is satisfied only if \(P(\mathbf{R}_{g})\) perfectly recovers \(P(\mathbf{R}_{t}\mid\mathbf{R}_{f})\), which means that \(P(\mathbf{R}_{g})\), trained on the observed auxiliary behavior, can perfectly approximate the true user preference distribution captured from the target behavior data. Such a condition further verifies the soundness of MBA, i.e., multiple types of user behavior are motivated by similar underlying user preferences. Combining the left side of both Eq. 9 and Eq.
10 we obtain the loss function as: \[L =-E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})] +KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]\] \[-E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t})] +KL[P(\mathbf{R}_{g})\|P(\mathbf{R}_{t})]. \tag{11}\] We can see that the loss function aims to maximize the likelihood of data observation (i.e., \(P(\mathbf{R}_{g}\mid\mathbf{R}_{t})\) and \(P(\mathbf{R}_{f}\mid\mathbf{R}_{f})\)) and minimize the KL-divergence between distributions learned from different user behavior data. The learning process of MBA serves as a filter to simultaneously denoise multiple types of user behavior and conduct beneficial knowledge transfers to infer the true user preferences to enhance the prediction of the target behavior. ### Training details As described in Section 3.1, we learn the user preference distributions \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{g})\) from \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\), respectively. In order to enhance the learning stability, we pre-train \(P(\mathbf{R}_{f})\) and \(P(\mathbf{R}_{g})\) in \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\), respectively. We use the same model structures of our target recommender \(t_{\theta}\) as the pre-training model. As the training converges, the KL-divergence will gradually approach 0. In order to enhance the role of the KL-divergence in conveying information, we set a hyperparameter \(\alpha\) to enhance the effectiveness of the KL-divergence. Then we obtain the following training loss function: \[\begin{split} L_{MBA}&=-E_{P(\mathbf{R}_{f})}[ \log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})]+\alpha KL[P(\mathbf{R}_{f})\|P( \mathbf{R}_{t})]\\ &-E_{P(\mathbf{R}_{f})}[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t}) ]+\alpha KL[P(\mathbf{R}_{g})\|P(\mathbf{R}_{t})].\end{split} \tag{12}\] #### 3.3.1. Expectation derivation As described in Section 3.1, both \(\mathbf{R}_{f}\) and \(\mathbf{R}_{g}\) contain various kinds of noise and bias. In order to infer the latent true user preferences from the corrupted multi-behavior data, we use \(h^{f}_{\phi}(u,i)\) and \(h^{g}_{\varphi}(u,i)\) to capture the correlations between the true user preferences and the observed purchase data. Similarly, \(h^{g}_{\phi^{\prime}}(u,i)\) and \(h^{g}_{\varphi^{\prime}}(u,i)\) are used to capture the correlations between the true user preferences and the observed click data, as shown in Eq. 2. 
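Before the expectation terms are expanded below, the overall objective in Eq. 12 can be assembled from its four ingredients as in the following sketch (an illustration with toy values, not the released implementation).

```
import torch

def mba_loss(exp_ll_click, exp_ll_purchase, kl_f_t, kl_g_t, alpha):
    # Eq. 12: negated expected log-likelihoods of the observed click and
    # purchase data under the latent preference model, plus the two
    # alpha-weighted KL terms that align P(R_f) and P(R_g) with P(R_t).
    return -exp_ll_click - exp_ll_purchase + alpha * (kl_f_t + kl_g_t)

# Toy values standing in for the terms computed over one training batch.
loss = mba_loss(torch.tensor(-105.2), torch.tensor(-98.7),
                torch.tensor(0.42), torch.tensor(0.37), alpha=10.0)
```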
Specifically, we expand \(E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})]\) as: \[\begin{split}& E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid \mathbf{R}_{t})]=\sum_{(u,i)}E_{r^{\prime}_{u,i}\sim P(\mathbf{R}_{t})}[\log P (r^{g}_{u,i}\mid r^{t}_{u,i})]\\ &=\sum_{(u,i)\mid r^{g}_{u,i}=1}\left[\log h^{g}_{\varphi^{\prime }}(u,i)t_{\theta}(u,i)+\right.\\ &\left.\log h^{g}_{\phi^{\prime}}(u,i)(1-t_{\theta}(u,i))\right] +\\ &\left.\sum_{(u,i)\mid r^{g}_{u,i}=0}\left[\log(1-h^{g}_{\varphi^{ \prime}}(u,i))t_{\theta}(u,i)+\right.\right.\\ &\left.\left.\log(1-h^{g}_{\phi^{\prime}}(u,i))(1-t_{\theta}(u,i ))\right]\right..\end{split} \tag{13}\] Similarly, the term \(E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t})]\) can be expanded as: \[\begin{split}& E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{f}\mid \mathbf{R}_{t})]=\sum_{(u,i)}E_{r^{\prime}_{u,i}\sim P(\mathbf{R}_{t})}[\log P (r^{f}_{u,i}\mid r^{t}_{u,i})]\\ &=\sum_{(u,i)\mid r^{f}_{u,i}=1}\left[\log h^{f}_{\phi}(u,i)t_{ \theta}(u,i)+\right.\\ &\left.\left.\log h^{f}_{\phi}(u,i)(1-t_{\theta}(u,i))\right]+ \right.\\ &\left.\sum_{(u,i)\mid r^{f}_{u,i}=0}\left[\log(1-h^{f}_{\phi}(u, i))t_{\theta}(u,i)+\right.\right.\\ &\left.\log(1-h^{f}_{\phi}(u,i))(1-t_{\theta}(u,i))\right]. \end{split} \tag{14}\] By aligning and denoising the observed target behavior and auxiliary behavior data simultaneously, the target recommender \(t_{\theta}\) is trained to learn the universal true user preference distribution. #### 3.3.2. Alternative model training In the learning stage, we find that directly training \(t_{\theta}\) with Eq. 12-Eq. 14 does not yield satisfactory results, which is caused by the simultaneous update of five models (i.e., \(h^{g}_{\phi^{\prime}}\), \(h^{g}_{\varphi^{\prime}}\), \(h^{f}_{\phi}\), \(h^{f}_{\phi}\) and \(t_{\theta}\)) in such an optimization process. These five models may interfere with each other and prevent \(t_{\theta}\) from learning well. To address this problem, we set two alternative training steps to train the involved models iteratively. In the first training step, we assume that a user tends to not click or purchase items that the user dislikes. That is to say, given \(r^{t}_{u,i}=0\) we have \(r^{f}_{u,i}\approx 0\) and \(r^{g}_{u,i}\approx 0\), so we have \(h^{f}_{\phi}\approx 0\) and \(h^{g}_{\phi^{\prime}}\approx 0\) according to Eq. 2. Thus in this step, only the models \(h^{f}_{\phi}\), \(h^{g}_{\phi^{\prime}}\) and \(t_{\theta}\) are trained. Then Eq. 13 can be reformulated as: \[\begin{split}& E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid \mathbf{R}_{t})]=L_{CN}+L_{CP},\end{split} \tag{15}\] where \[\begin{split}& L_{CN}=\sum_{(u,i)\mid r^{g}_{u,i}=0}\log(1-h^{g}_{ \varphi^{\prime}}(u,i))\cdot t_{\theta}(u,i),\\ & L_{CP}=\sum_{(u,i)\mid r^{f}_{u,i}=1}\log h^{g}_{\varphi^{ \prime}}(u,i)\cdot t_{\theta}(u,i)-C_{1}\cdot(1-t_{\theta}(u,i)).\end{split}\] Meanwhile, Eq. 14 can be reformulated as: \[\begin{split}& E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{f}\mid \mathbf{R}_{t})]=L_{PN}+L_{PP},\end{split} \tag{16}\] where \[\begin{split}& L_{PN}=\sum_{(u,i)\mid r^{f}_{u,i}=0}\log(1-h^{f}_{ \varphi}(u,i))\cdot t_{\theta}(u,i),\\ & L_{PP}=\sum_{(u,i)\mid r^{f}_{u,i}=1}\log h^{f}_{\varphi}(u,i) \cdot t_{\theta}(u,i)-C_{1}\cdot(1-t_{\theta}(u,i)).\end{split}\] Here, we denote \(C_{1}\) as a large positive hyperparameter to replace \(-\log h^{g}_{\phi^{\prime}}(u,i)\) and \(-\log h^{f}_{\phi}(u,i)\). 
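A minimal PyTorch-style sketch of the four step-one terms in Eqs. 15 and 16 is given below; the tensor layout and the use of clicked/purchased indicators to restrict each sum are assumptions made for illustration, with the click positive loss taken over clicked pairs and the purchase positive loss over purchased pairs.

```
import torch

def step_one_losses(t_theta, h_click, h_purchase, clicked, purchased, C1):
    """Step-one terms of Eqs. 15-16 for a batch of (user, item) pairs.

    t_theta, h_click, h_purchase: predicted probabilities in (0, 1),
    playing the roles of t_theta, h^g_varphi' and h^f_varphi.
    clicked, purchased: binary {0, 1} indicators for r^g and r^f.
    C1: large positive constant replacing -log h^g_phi' and -log h^f_phi.
    """
    eps = 1e-8
    # Click negative loss L_CN (unclicked pairs).
    l_cn = ((1 - clicked) * torch.log(1 - h_click + eps) * t_theta).sum()
    # Click positive loss L_CP (clicked pairs).
    l_cp = (clicked * (torch.log(h_click + eps) * t_theta
                       - C1 * (1 - t_theta))).sum()
    # Purchase negative loss L_PN (non-purchased pairs).
    l_pn = ((1 - purchased) * torch.log(1 - h_purchase + eps) * t_theta).sum()
    # Purchase positive loss L_PP (purchased pairs).
    l_pp = (purchased * (torch.log(h_purchase + eps) * t_theta
                         - C1 * (1 - t_theta))).sum()
    return l_cn, l_cp, l_pn, l_pp
```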
In the second training step, we assume that a user tends to click and purchase the items that the user likes. That is to say, given \(r^{t}_{u,i}=1\) we have \(r^{f}_{u,i}\approx 1\) and \(r^{g}_{u,i}\approx 1\), so we have \(h^{f}_{\varphi}\approx 1\) and \(h^{g}_{\varphi^{\prime}}\approx 1\) according to Eq. 2. Thus in this step, only the models \(h^{f}_{\phi^{\prime}}\) and \(t_{\theta}\) will be updated. Then Eq. 13 can be reformulated as: \[\begin{split}& E_{P(\mathbf{R}_{t})}[\log P(\mathbf{R}_{g}\mid \mathbf{R}_{t})]=L^{\prime}_{CP}+L^{\prime}_{CN},\end{split} \tag{17}\] where \[\begin{split}& L^{\prime}_{CP}=\sum_{(u,i)\mid r^{f}_{u,i}=1}\log h^{f}_{ \phi}(u,i)(1-t_{\theta}(u,i)),\end{split} \tag{18}\] \[L^{\prime}_{PN}=\sum_{(u,i)|r^{f}_{u,i}=0}C_{2}t_{\theta}(u,i)+\log(1-h^{f}_{\phi}( u,i))(1-t_{\theta}(u,i)).\] \(C_{2}\) is a large positive hyperparameter to replace \(-\log(1-h^{g}_{\varphi^{\prime}}(u,i))\) and \(-\log(1-h^{f}_{\varphi}(u,i))\). #### 3.3.3. Training procedure In order to facilitate the description of sampling and training process, we divide \(E_{P(\mathbf{R}_{q})}\left[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})\right]\) and \(E_{P(\mathbf{R}_{t})}\left[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t})\right]\) into four parts (see Eq. 15 to Eq. 18), namely click positive loss (\(L_{CP}\) and \(L^{\prime}_{CP}\)), click negative loss (\(L_{CN}\) and \(L^{\prime}_{CN}\)), purchase positive loss (\(L_{PP}\) and \(L^{\prime}_{PP}\)), and purchase negative loss (\(L_{PN}\) and \(L^{\prime}_{PN}\)). Each sample in the training set can be categorized into one of three situations: (i) clicked and purchased, (ii) clicked but not purchased, and (iii) not clicked and not purchased. The three situations involve different terms in \(E_{P(\mathbf{R}_{q})}\left[\log P(\mathbf{R}_{g}\mid\mathbf{R}_{t})\right]\) and \(E_{P(\mathbf{R}_{t})}\left[\log P(\mathbf{R}_{f}\mid\mathbf{R}_{t})\right]\). In situation (i), each sample involves the \(L_{CP}\) and \(L_{PP}\) (or \(L^{\prime}_{CP}\) and \(L^{\prime}_{PP}\) in the alternative training step). In situation (ii), each sample involves the \(L_{CP}\) and \(L_{PN}\) (or \(L^{\prime}_{CP}\) and \(L^{\prime}_{PN}\) in the alternative training step). In situation (iii), each sample involves the \(L_{CN}\) and \(L_{PN}\) (or \(L^{\prime}_{CN}\) and \(L^{\prime}_{PN}\) in the alternative training step). We then train MBA according to the observed multiple types of user behavior data in situations (i) and (ii), and use the samples in situation (iii) as our negative samples. Details of the training process for MBA are provided in Algorithm 1. 
``` Input: The observed multi-behavior data \(\mathcal{D}\), hyperparameter settings: Output: All model parameters \(\varphi\), \(\varphi^{\prime}\), \(\phi\),\(\phi^{\prime}\),\(\theta\); 1whilenot coveragedo 2 Sample (\(u\), \(i\)) from \(\mathcal{D}\) ; 3 flag = 0 ; 4\(L_{KL}=\alpha KL[P(\mathbf{R}_{f})\|P(\mathbf{R}_{t})]+\alpha KL[P(\mathbf{R}_ {g})\|P(\mathbf{R}_{t})]\) ; 5if flag=0then 6if\(r^{f}_{u,i}=1\)and\(r^{g}_{u,i}=1\)then 7 Compute \(L_{MBA}=L_{KL}-(L_{CP}+L_{PP})\) ; 8elseif\(r^{f}_{u,i}=0\)and\(r^{g}_{u,i}=1\)then 9 Compute \(L_{MBA}=L_{KL}-(L_{CP}+L_{PN})\) ; 10elseif\(r^{f}_{u,i}=0\)and\(r^{g}_{u,i}=0\)then 11 Compute \(L_{MBA}=L_{KL}-(L_{CN}+L_{PN})\) ; 12 13 end if 14 Update \(\varphi\), \(\varphi^{\prime}\), and \(\theta\) through \(L_{MBA}\) ; 15 flag = 1 ; 16 17else 18 Compute \(L_{MBA}\) similar to line 6-line 12 using \(L_{KL},L^{\prime}_{PP},L^{\prime}_{CP},L^{\prime}_{PN},L^{\prime}_{CN}\); 19 Update \(\phi\), \(\phi^{\prime}\), and \(\theta\) through \(L_{MBA}\) ; 20 21 flag = 0 ; 22 23 end if 24 25 end while ``` **Algorithm 1**Training Process of MBA ## 4. Experimental Settings ### Experimental questions Our experiments are conducted to answer the following research questions: **(RQ1)** How do the proposed methods perform compared with state-of-the-art recommendation baselines on different datasets? **(RQ2)** How do the proposed methods perform compared with other denoising frameworks? **(RQ3)** Can MBA help to learn universal user preferences from users' multiple types of behavior? **(RQ4)** How do the components and the hyperparameter settings affect the recommendation performance of MBA? ### Datasets To evaluate the effectiveness of our method, we conduct a series of experiments on three real-world benchmark datasets, including Beibei1(Beibei et al., 2017), Taobao2(Tao et al., 2018), and MBD (**multi-behavior dataset**), a dataset we collected from an operational e-commerce platform. The details are as follows: (i) The Beibei dataset is an open dataset collected from Beibei, the largest infant product e-commerce platform in China, which includes three types of behavior, _click_, _add-to-cart_ and _purchase_. This work uses two kinds of behavioral data, clicks and purchases. (ii) The Taobao dataset is an open dataset collected from Taobao, the largest e-commerce platform in China, which includes three types of behavior, _click_, _add to cart_ and _purchase_. In this work, we use clicks and purchases of this dataset. (ii) The MBD dataset is collected from an operational e-commerce platform, and includes two types of behavior, _click_ and _purchase_. For each dataset, we ensure that users have interactions on both types of behavior, and we set click data as auxiliary behavior data and purchase data as target behavior data. Table 1 shows the statistics of our datasets. Footnote 1: [https://www.beibei.com/](https://www.beibei.com/) Footnote 2: [https://ianchi.aliyun.com/dataset/dataDetail?dataId=649](https://ianchi.aliyun.com/dataset/dataDetail?dataId=649) ### Evaluation protocols We divide the datasets into training and test sets with a ratio of 4:1. We adopt two widely used metrics Recall@\(k\) and NDCG@\(k\). Recall@\(k\) represents the coverage of true positive items that appear in the final top-\(k\) ranked list. NDCG@\(k\) measures the ranking quality of the final recommended items. In our experiments, we use the setting of \(k=10,20\). For our method and the baselines, the reported results are the average values over all users. 
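For reference, per-user Recall@\(k\) and NDCG@\(k\) can be computed as in the following sketch (our illustration; details such as the candidate set and tie handling are assumptions).

```
import numpy as np

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of the user's held-out items that appear in the top-k list."""
    if not relevant_items:
        return 0.0
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k):
    """Binary-relevance NDCG for one user."""
    relevant = set(relevant_items)
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Example for one user: a ranked list and the held-out purchased items.
ranked = [4, 9, 1, 7, 3]
held_out = [9, 3, 8]
print(recall_at_k(ranked, held_out, k=5), ndcg_at_k(ranked, held_out, k=5))
```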
For every result, we conduct the experiments three times and report the average values. ### Baselines To demonstrate the effectiveness of our method, we compare MBA with several state-of-the-art methods. The methods used for comparison include single-behavior models, multi-behavior models, and recommendation denoising methods. The single-behavior models that we consider are: **(i) MF-BPR**(Wang et al., 2018), which uses bayesian personalized ranking (BPR) \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & Users & Items & Purchases & Clicks \\ \hline Beibei & 21,716 & 7,977 & 243,661 & 1,930,069 \\ Taobao & 48,658 & 39,395 & 208,905 & 1,238,659 \\ MBD & 102,556 & 20,237 & 230,958 & 659,914 \\ \hline \hline \end{tabular} \end{table} Table 1. Statistics of the datasets. loss to optimize matrix factorization. (ii) **NGCF**(Nagumura et al., 2019), which encodes collaborative signals into the embedding process through multiple graph convolutional layers and models higher-order connectivity in user-item graphs. (iii) **LightGCN**(Nagumura et al., 2019), which simplifies graph convolution by removing the matrix transformation and non-linear activation. We use the BPR loss to optimize LightGCN. The multi-behavior models that we consider are: (i) **MB-GCN**(Nagumura et al., 2019), which constructs a multi-behavior heterogeneous graph and uses GCN to perform behavior-aware embedding propagation. (ii) **MB- GMN**(Nagumura et al., 2019), which incorporates multi-behavior pattern modeling with the meta-learning paradigm. (iii) **CML**(Nagumura et al., 2019), which uses a new multi-behavior contrastive learning paradigm to capture the transferable user-item relationships from multi-behavior data. To verify that the proposed method improves performance by denoising implicit feedback, we also introduce the following denoising frameworks: (i) **WBPR**(Nagumura et al., 2019), which is a re-sampling-based method which considers popular, but un-interacted items are highly likely to be negative. (ii) **T-CE**(Nagumura et al., 2019), which is a re-weighting based method which discards the large-loss samples with a dynamic threshold in each iteration. (iii) **DeCA**(Nagumura et al., 2019), which is a newly proposed denoising method that utilizes the agreement predictions on clean examples across different models and minimizes the KL-divergence between the real user preference parameterized by two recommendation models. (iv) **SGDL**(Nagumura et al., 2019), which is a new denoising paradigm that utilizes self-labeled memorized data as denoising signals to improve the robustness of recommendation models. ### Implementation details We implement our method with PyTorch.3 Without special mention, we set MF as our base model \(t_{\theta}\) since MF is still one of the best models for capturing user preferences for recommendations (Zhou et al., 2018). The model is optimized by Adam (Kingmaa et al., 2014) optimizer with a learning rate of 0.001, where the batch size is set as 2048. The embedding size is set to 32. The hyperparameters \(\alpha\), \(C_{1}\) and \(C_{2}\) are search from { 1, 10, 100, 1000}. \(\beta\) is search from { 0.7, 0.8, 1}. To avoid over-fitting, \(L_{2}\) normalization is searched in { 10\({}^{-6}\), 10\({}^{-5}\),..., 1}. Each training step is formed by one interacted example, and one randomly sampled negative example for efficient computation. We use Recall@20 on the test set for early stopping if the value does not increase after 20 epochs. 
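As an illustration of the stated setup (MF base model \(t_{\theta}\), 32-dimensional embeddings, Adam with learning rate 0.001), a minimal base model could be defined as follows; this sketch is ours and is not the code released with the paper.

```
import torch
import torch.nn as nn

class MF(nn.Module):
    """Matrix factorization that outputs an interaction probability."""

    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        nn.init.normal_(self.user_emb.weight, std=0.01)
        nn.init.normal_(self.item_emb.weight, std=0.01)

    def forward(self, users, items):
        scores = (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)
        return torch.sigmoid(scores)  # t_theta(u, i) in (0, 1)

model = MF(num_users=21716, num_items=7977)  # e.g. the Beibei sizes in Table 1
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```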
Footnote 3: Our code is available at [http://github.com/LiuXiangYuan/MBA](http://github.com/LiuXiangYuan/MBA). For the hyperparameters of all recommendation baselines, we use the values suggested by the original papers with carefully fine-tuning on the three datasets. For all graph-based methods, the number of graph-based message propagation layers is fixed at 3. ## 5. Experimental Results ### Performance comparison (RQ1) To answer RQ1, we conduct experiments on the Beibei, Taobao and MBD datasets. The performance comparisons are reported in Table 2. From the table, we have the following observations. First, the proposed MBA method achieves the best performance and consistently outperforms all baselines across all datasets. For instance, the average improvement of MBA over the strongest baseline is approximately 6.3% on the Beibei dataset, 6.6% on the Taobao dataset and 1.5% on the MBD dataset. These improvements demonstrate the effectiveness of MBA. We contribute the significant performance improvement to the following two reasons: (i) we align the user preferences based on two types of two behavior, transferring useful information from the auxiliary behavior data to enhance the performance of the target behavior predictions; (ii) noisy interactions are reduced through preference alignment, which helps to improve the learning of the latent universal true user preferences. Second, except CML the multi-behavior models outperform the single-behavior models by a large margin. This reflects the fact that adding auxiliary behavior information can improve the recommendation performance of the target behavior. We conjecture that CML cannot achieve satisfactory performance because it incorporates the knowledge contained in auxiliary behavior through contrastive meta-learning, which introduces more noisy signals. Furthermore, we compare MBA with the best single-behavior model (NGCF on the Beibei and MBD datasets, LightGCN on the Taobao dataset), and see that MBA achieves an average improvement of 12.4% on the Beibei dataset, 26.8% on the Taobao dataset and 15.3% on the MBD dataset. To conclude, the proposed MBA approach consistently and significantly outperforms related single-behavior and multi-behavior recommendation baselines on the purchase prediction task. ### Denoising (RQ2) Table 3 reports on a performance comparison with existing denoising frameworks on the Beibei, Taobao and MBD datasets. The results demonstrate that MBA can provide more robust recommendations and improve overall performance than competing approaches. Most of the denoising baselines do not obtain satisfactory results, even after carefully tuning their hyperparameters. Only WBPR can outperform normal training in some cases. However, MBA consistently outperforms normal training and other denoising frameworks. We think the reasons for this are as follows: (i) In MBA, we use the alignment between multi-behavior data as the denoising signal and then transmit information from the multi-behavior distribution to the latent universal true user preference distribution. This learning process facilitates knowledge transfer across multiple types of user behavior and filters out noisy signals. (ii) In the original papers of the compared denoising baselines, testing is conducted based on explicit user-item ratings. However, our method does not use any explicit information like ratings, only implicit interaction data is considered. 
To further explore the generalization capability of MBA, we also adopt LightGCN as our base model (i.e., using LightGCN as \(t_{\theta}\)). The results are also shown in Table 3. We see that MBA is still more effective than the baseline methods. We find that LightGCN-based MBA does not perform as well as MF-based MBA on the Beibei and Taobao datasets. We think the possible reasons are as follows: (i) LightGCN is more complex than MF, making MBA more difficult to train; (ii) LightGCN may be more sensitive to noisy signals due to the aggregation of neighbourhoods, resulting in a decline in the MBA performance compared to using MF as the base model. To conclude, the proposed MBA can generate more accurate recommendation compared with existing denoising frameworks.
2310.01555
The Lie superalgebra of transpositions
We consider the group algebra of the symmetric group as a superalgebra, and describe its Lie subsuperalgebra generated by the transpositions. The updated version corrects some of the arguments made in Sections 4.5 - 4.7. The statements of the main results are unaffected.
Christopher M. Drupieski, Jonathan R. Kujawa
2023-10-02T18:50:07Z
http://arxiv.org/abs/2310.01555v2
# The Lie superalgebra of transpositions

###### Abstract.

We consider the group algebra of the symmetric group as a superalgebra, and describe its Lie subsuperalgebra generated by the transpositions.

2020 Mathematics Subject Classification: Primary 17B10. Secondary 20B30.

CMD was supported in part by Simons Collaboration Grant for Mathematicians No. 426905. JRK was supported in part by Simons Collaboration Grant for Mathematicians No. 525043.

## 1. Introduction

We consider the group algebra \(\mathbb{C}S_{n}\) of the symmetric group as a superalgebra and describe the Lie subsuperalgebra generated by the transpositions; the description is given by a direct sum decomposition indexed by partitions \(\lambda\in\overline{\mathcal{P}}(n)\), with the summands built from the modules \(W^{\lambda}\) and separated according to whether \(\lambda=\lambda^{\prime}\) or \(\lambda\neq\lambda^{\prime}\). As far as we are aware, the graded representation theory of the braid group is rather neglected. While this paper focuses on the questions raised by WunderNatur, it does suggest that the graded representation theory of the braid group should be notably different from the classical setting and is worth further study. For example, if one considers the algebra \(\mathcal{A}=\mathbb{C}[q,q^{-1}]\) as a superalgebra where \(q\) is declared to be of odd superdegree (i.e., if we consider \(\mathcal{A}\) as a 'graded field'), then the Iwahori-Hecke algebra defined over \(\mathcal{A}\), \(H_{d}(q)_{\mathcal{A}}\), is a superalgebra when the generators are taken to be of odd superdegree. There is a surjective superalgebra homomorphism from \(\mathcal{A}B_{n}\) to \(H_{d}(q)_{\mathcal{A}}\) and it would be interesting to study the supermodules for the braid group afforded by this map.

## 2. Preliminaries

### Conventions

Set \(\mathbb{Z}_{2}=\mathbb{Z}/2\mathbb{Z}=\{\overline{0},\overline{1}\}\). Following the literature, we use the prefix 'super' to indicate that an object is \(\mathbb{Z}_{2}\)-graded. We denote the decomposition of a vector superspace into its \(\mathbb{Z}_{2}\)-homogeneous components by \(V=V_{\overline{0}}\oplus V_{\overline{1}}\), calling \(V_{\overline{0}}\) and \(V_{\overline{1}}\) the even and odd subspaces of \(V\), respectively, and writing \(\overline{v}\in\mathbb{Z}_{2}\) to denote the superdegree of a homogeneous element \(v\in V_{\overline{0}}\cup V_{\overline{1}}\).
If we state a formula in which homogeneous degrees of elements are specified, we mean that the formula is true as written for homogeneous elements, and that it extends by linearity to non-homogeneous elements. When written without additional adornment, we consider the field \(\mathbb{C}\) to be a superspace concentrated in even superdegree. All superspaces are assumed to be vector spaces over the field \(\mathbb{C}\), all linear maps are \(\mathbb{C}\)-linear, and except when indicated by a modifier (e.g., 'Lie'), all superalgebras are assumed to be associative and unital. Given a superspace \(V\), let \(\dim(V)=\dim_{\mathbb{C}}(V)\) be the ordinary dimension of \(V\) as a \(\mathbb{C}\)-vector space. A linear map between superspaces is _even_ if it preserves homogeneous degrees, and is _odd_ if it reverses homogeneous degrees. Given superspaces \(V\) and \(W\), let \(\operatorname{Hom}(V,W)=\operatorname{Hom}_{\mathbb{C}}(V,W)\) be the superspace of all \(\mathbb{C}\)-linear maps \(\phi:V\to W\), and let \(\operatorname{End}(V)=\operatorname{Hom}_{\mathbb{C}}(V,V)\). Let \(V^{*}=\operatorname{Hom}(V,\mathbb{C})\) be the usual linear dual of \(V\). In general, isomorphisms between superspaces will be denoted using the symbol '\(\cong\)' and, except when stated otherwise, should be understood as arising via an even linear map. We write '\(\simeq\)' rather than '\(\cong\)' to emphasize when an isomorphism arises via an odd linear map. Given a superspace \(V\), let \(\Pi(V)=\{v^{\pi}:v\in V\}\) be its parity shift. As a superspace, \(\Pi(V)_{\overline{0}}=V_{\overline{1}}\) and \(\Pi(V)_{\overline{1}}=V_{\overline{0}}\), with \(\overline{v^{\pi}}=\overline{v}+\overline{1}\). Then the map \((-)^{\pi}:v\mapsto(-1)^{\overline{v}}v^{\pi}\) defines an odd isomorphism \(V\simeq\Pi(V)\). Given a superalgebra \(A\) and (left) \(A\)-supermodules \(M\) and \(N\), we say that a linear map \(f:M\to N\) is an \(A\)-supermodule homomorphism if \(f(a.m)=(-1)^{\overline{a}\cdot\overline{f}}a.f(m)\) for all \(a\in A\) and \(m\in M\), and we write \(\operatorname{Hom}_{A}(M,N)\) for the set of all \(A\)-supermodule homomorphisms from \(M\) to \(N\). The parity shift \(\Pi(M)\) of an \(A\)-supermodule is again an \(A\)-supermodule, with action defined by \(a.m^{\pi}=(a.m)^{\pi}\). Then the function \((-)^{\pi}:m\mapsto(-1)^{\overline{m}}m^{\pi}\) is an odd \(A\)-supermodule isomorphism \(M\simeq\Pi(M)\). Let \(\mathbb{N}=\{0,1,2,3,\ldots\}\) be the set of non-negative integers.

### Semisimple superalgebras

Most of the material in this section comes from [2, §2] and [4, §3.1]. For the authors' benefit, we write out some of the details that were left to the reader in [2, 4]. As in [2, 4], we make the standing assumption that each superalgebra is finite-dimensional. A superalgebra \(A\) is _simple_ if it has no nontrivial superideals. **Example 2.2.1** (Type M simple superalgebras).: Given a finite-dimensional superspace \(V\), the endomorphism algebra \(\operatorname{End}(V)\) is a simple superalgebra.
Fixing a homogeneous basis for \(V\), and making the identification \(V\cong\mathbb{C}^{m|n}:=\mathbb{C}^{m}\oplus\Pi(\mathbb{C}^{n})\) for some \(m,n\in\mathbb{N}\) via this choice of basis, \(\operatorname{End}(V)\) identifies with the matrix superalgebra \[M(m|n):=\left\{\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]:A\in M_{m}(\mathbb{C}),B\in M_{m\times n}(\mathbb{C} ),C\in M_{n\times m}(\mathbb{C}),D\in M_{n}(\mathbb{C})\right\}.\] As an ungraded associative algebra, \(M(m|n)=M_{m+n}(\mathbb{C})\). **Example 2.2.2** (Type Q simple superalgebras).: Let \(V\) be a finite-dimensional vector superspace equipped with an odd involution \(J:V\to V\); i.e., an odd linear map such that \(J\circ J=\operatorname{id}_{V}\). Then the set \[Q(V)=Q(V,J)=\{\theta\in\operatorname{End}(V):J\circ\theta=\theta\circ J\} \tag{2.2.1}\] is a simple subsuperalgebra of \(\operatorname{End}(V)\). Fix a basis \(\{v_{1},\ldots,v_{n}\}\) for \(V_{\overline{0}}\), and set \(v_{i}^{\prime}=J(v_{i})\) for \(1\leq i\leq n\), so that \(\{v_{1}^{\prime},\ldots,v_{n}^{\prime}\}\) is a basis for \(V_{\overline{1}}\). Via this choice of homogeneous basis, one has \(V\cong\mathbb{C}^{n|n}\) and \(Q(V)\) identifies with the set of supermatrices \[Q(n):=\left\{\left[\begin{array}{c|c}A&B\\ \hline B&A\end{array}\right]:A\in M_{n}(\mathbb{C}),B\in M_{n}(\mathbb{C}) \right\}. \tag{2.2.2}\] As an ungraded associative algebra, \(Q(n)\cong M_{n}(\mathbb{C})\oplus M_{n}(\mathbb{C})\) via the map \([\begin{smallmatrix}A&B\\ B&A\end{smallmatrix}]\mapsto(A+B,A-B)\). **Remark 2.2.3**.: In the literature, the definition (2.2.1) is frequently stated with the requirement that the graded commutator \(J\circ\theta-(-1)^{\overline{\theta}}\cdot\theta\circ J\) be equal to \(0\), rather than the requirement that the ordinary commutator \(J\circ\theta-\theta\circ J\) be equal to \(0\). We find it more convenient to use the version stated here. Through appropriate choices of homogeneous bases, both versions admit the matrix realization (2.2.2). For related discussion, see [4, SS1.1.4]. Given an associative superalgebra \(A\), let \(|A|\) denote the underlying associative algebra obtained by forgetting the superspace structure on \(A\). Let \[Z(A)=\left\{a\in A:ab=(-1)^{\overline{a}\cdot\overline{b}}ba\text{ for all }b\in A\right\}\] be the graded center of \(A\) (i.e., the center in the sense of superalgebras), and let \[Z(|A|)=\{a\in A:ab=ba\text{ for all }b\in A\}\] be the ungraded center of \(A\) (i.e., the center in the ordinary, non-super sense). Then both \(Z(A)\) and \(Z(|A|)\) are subsuperspaces of \(A\). That is, \(Z(B)=Z(B)_{\overline{0}}\oplus Z(B)_{\overline{1}}\) for \(B\in\{A,|A|\}\). Also note that \(Z(A)_{\overline{0}}=Z(|A|)_{\overline{0}}\). **Example 2.2.4**.: Let \(m,n\in\mathbb{N}\). 1. \(Z(M(m|n))=Z(M(m|n))_{\overline{0}}=Z(|M(m|n)|)\), spanned by the identity matrix \(I_{m|n}\). 2. \(Z(Q(n))_{\overline{0}}\) is spanned by the identity matrix \(I_{n|n}\). 3. \(Z(Q(n))_{\overline{1}}=0\), but \(Z(|Q(n)|)_{\overline{1}}\) is nonzero, spanned by the 'odd identity matrix' \([\begin{smallmatrix}0&I_{n}\\ I_{n}&0\end{smallmatrix}]\). **Theorem 2.2.5** ([4, Theorem 3.1]).: _Let \(A\) be a finite-dimensional simple associative superalgebra._ 1. _If_ \(Z(|A|)_{\overline{1}}=0\)_, then_ \(A\cong M(m|n)\) _for some_ \(m,n\in\mathbb{N}\)_._ 2. 
_If_ \(Z(|A|)_{\overline{1}}\neq 0\)_, then_ \(A\cong Q(n)\) _for some_ \(n\in\mathbb{N}\)_._ If \(V\) is an irreducible \(A\)-supermodule, then either \(V\) is irreducible as an \(|A|\)-module, in which case \(V\) is said to be _absolutely irreducible_ (or of Type M), or else \(V\) is reducible as an \(|A|\)-module, in which case \(V\) is said to be _self-associate_ (or of Type Q). Given a superspace \(V\), let \(\pi_{V}:V\to V\) be the parity automorphism, defined by \[\pi_{V}(v)=(-1)^{\overline{v}}v.\] In particular, \(\pi_{A}:A\to A\) is a superalgebra automorphism. A subspace \(U\) of a vector superspace \(V\) is a subs_super_space of \(V\) if and only if \(\pi_{V}(U)=U\). Given an \(|A|\)-module \(U\), let \(\pi_{A}^{*}(U)\) be the \(|A|\)-module obtained by pulling back the module structure along \(\pi_{A}\). If \(U\) is an \(|A|\)-submodule of an \(A\)-supermodule \(V\), then \(\pi_{V}(U)\) is also an \(|A|\)-submodule of \(V\), and the map \(\pi_{V}(U)\to\pi_{A}^{*}(U)\), \(\pi_{V}(u)\mapsto\pi_{A}^{*}(u)\), is an \(|A|\)-module isomorphism. In particular, for each \(A\)-supermodule \(V\), one has \(V=\pi_{V}(V)\cong\pi_{A}^{*}(V)\) as \(A\)-supermodules. **Lemma 2.2.6** ([2, Lemma 2.3]).: _Let \(V\) be a finite-dimensional self-associate irreducible \(A\)-supermodule, and let \(U\) be an irreducible \(|A|\)-submodule of \(V\). Then as an \(|A|\)-module,_ \[V=U\oplus\pi_{V}(U)\cong U\oplus\pi_{A}^{*}(U),\] _with \(U\not\cong\pi_{V}(U)\) as \(|A|\)-modules, and the homogeneous subspaces of \(V\) are_ \[V_{\overline{0}}=\{u+\pi_{V}(u):u\in U\}\quad\text{and}\quad V_{\overline{1}}= \{u-\pi_{V}(u):u\in U\}\,.\] _In particular, if \(u_{1},\dots,u_{n}\) is a basis for \(U\), then_ \[\{u_{1}+\pi_{V}(u_{1}),\dots,u_{n}+\pi_{V}(u_{n})\}\quad\text{and}\quad\{u_{1 }-\pi_{V}(u_{1}),\dots,u_{n}-\pi_{V}(u_{n})\}\] _are bases for \(V_{\overline{0}}\) and \(V_{\overline{1}}\), respectively._ _The linear map \(J=J_{V}:V\to V\), defined for \(u\in U\) by \(J(u\pm\pi_{V}(u))=u\mp\pi_{V}(u)\), is an \(|A|\)-module homomorphism. Considered as a function \(J:V\to\Pi(V)\), \(u\pm\pi_{V}(u)\mapsto[u\mp\pi_{V}(u)]^{\pi}\), the map \(J\) is an even \(A\)-supermodule isomorphism \(V\cong\Pi(V)\)._ Proof.: Most of the details of the proof are given in [2], though one point that is not explicitly explained is the fact that \(U\not\cong\pi_{V}(U)\). Here is a justification for this statement. Let \(\pi=\pi_{V}\). Suppose for the sake of argument that there exists an \(|A|\)-module isomorphism \(\psi:U\to\pi(U)\). Let \(\phi=\pi\circ\psi:U\to U\). Then for all \(u\in U\) one has \(\psi(u)=\pi(\phi(u))\), and \(\phi\) is a linear bijection such that for all \(a\in A\) one has \(\phi(a\cdot u)=(-1)^{\overline{a}}a\cdot\phi(u)\). Consequently, \(\phi^{2}:U\to U\) is an \(|A|\)-module isomorphism, so by Schur's Lemma it is a nonzero scalar multiple of the identity. Rescaling \(\phi\) if necessary, we may assume that \(\phi^{2}=\operatorname{id}_{U}\). 
Now since \(V=U\oplus\pi(U)\) and \(V=V_{\overline{0}}\oplus V_{\overline{1}}\), it follows that also \(V=U^{+}\oplus U^{-}\), where \[U^{+}=\{\phi(u)+\pi(u):u\in U\}=\{u+\pi(\phi(u)):u\in U\}\,,\quad\text{and}\] \[U^{-}=\{\phi(u)-\pi(u):u\in U\}=\{u-\pi(\phi(u)):u\in U\}\,;\] here the second description of each set is obtained by replacing \(u\) with \(\phi(u)\) and using \(\phi^{2}=\operatorname{id}_{U}\). For \(u\in U\), the decomposition of \(\phi(u)+\pi(u)\) into its even and odd components is \[\phi(u)+\pi(u)=\Big{(}\tfrac{1}{2}[\phi(u)+\pi(\phi(u))]+\tfrac{1}{2}[\phi(u)-\pi(\phi(u))]\Big{)}+\Big{(}\tfrac{1}{2}[u+\pi(u)]-\tfrac{1}{2}[u-\pi(u)]\Big{)}\] \[=\tfrac{1}{2}\Big{(}[\phi(u)+\pi(\phi(u))]+[u+\pi(u)]\Big{)}+\tfrac{1}{2}\Big{(}[\phi(u)-\pi(\phi(u))]-[u-\pi(u)]\Big{)}\] \[=\tfrac{1}{2}\Big{(}[\phi(u)+\pi(u)]+[u+\pi(\phi(u))]\Big{)}+\tfrac{1}{2}\Big{(}[\phi(u)+\pi(u)]-[u+\pi(\phi(u))]\Big{)}.\] After the second and third equals signs, the expressions within the big parentheses are homogeneous of even and odd superdegree, respectively. Since \(\phi(u)+\pi(u)\) and \(u+\pi(\phi(u))\) both lie in \(U^{+}\), so do the even and odd components of \(\phi(u)+\pi(u)\). This shows that \(U^{+}\) is a subsuperspace of \(V\). Finally, for \(a\in A\) one has \[a\cdot[\phi(u)+\pi(u)]=(-1)^{\overline{a}}\cdot[\phi(a\cdot u)+\pi(a\cdot u)],\] so \(U^{+}\) is a proper \(A\)-subsupermodule of \(V\). In an entirely similar fashion, one can also show that \(U^{-}\) is a proper \(A\)-subsupermodule of \(V\). Then the irreducible \(A\)-supermodule \(V\) is a direct sum of two proper subsupermodules, a contradiction. **Remark 2.2.7**.: The decomposition of a self-associate irreducible \(A\)-supermodule into a direct sum of non-isomorphic \(|A|\)-modules is canonical, by the uniqueness of isotypical components. **Lemma 2.2.8** (Super Schur Lemma).: _Let \(V\) be a finite-dimensional irreducible \(A\)-supermodule. Then_ \[\operatorname{End}_{|A|}(V)=\begin{cases}\operatorname{span}\left\{\operatorname{id}_{V}\right\}&\text{if $V$ is absolutely irreducible,}\\ \operatorname{span}\left\{\operatorname{id}_{V},J_{V}\right\}&\text{if $V$ is self-associate,}\end{cases}\] _where \(J_{V}\) is defined as in Lemma 2.2.6. In particular, if \(V\) is self-associate, then \(J_{V}\) is the unique \(|A|\)-module homomorphism (up to scalar multiples) that is homogeneous of odd superdegree._ Proof.: If \(V\) is absolutely irreducible, the lemma is true by the classical Schur's Lemma. If \(V\) is self-associate, the classical Schur's Lemma gives \(\operatorname{End}_{|A|}(V)=\operatorname{span}\left\{\operatorname{id}_{U},\operatorname{id}_{\pi(U)}\right\}\), with notation as in Lemma 2.2.6. Since \(\operatorname{id}_{V}=\operatorname{id}_{U}+\operatorname{id}_{\pi(U)}\) and \(J_{V}=\operatorname{id}_{U}-\operatorname{id}_{\pi(U)}\), the result follows. **Remark 2.2.9**.: Henceforward, if \(V\) is a finite-dimensional self-associate irreducible \(A\)-supermodule, we will write \(Q(V)\) to denote \(Q(V,J_{V})\). An \(A\)-supermodule \(V\) is _semisimple_ if every subsupermodule of \(V\) is a direct summand, or equivalently, if \(V\) is a (direct) sum of irreducible \(A\)-supermodules. **Theorem 2.2.10** (Super Artin-Wedderburn Theorem [4, Theorem 3.3]).: _The following statements are equivalent for a finite-dimensional associative superalgebra \(A\):_ 1. _Every_ \(A\)_-supermodule is semisimple._ 2. _The left regular_ \(A\)_-module is a direct sum of minimal left superideals._ 3. _The superalgebra_ \(A\) _is a direct sum of simple superalgebras._
_Specifically, if_ \(\{V_{1},\dots,V_{n}\}\) _is a complete, irredundant set of irreducible_ \(A\)_-supermodules (up to homogeneous isomorphism), such that_ \(V_{1},\dots,V_{m}\) _are absolutely irreducible and_ \(V_{m+1},\dots,V_{n}\) _are self-associate, then the natural maps_ \(A\to\operatorname{End}(V_{i})\)_, arising from the_ \(A\)_-supermodule structures on the_ \(V_{i}\)_, induce a superalgebra isomorphism_ \[A\cong\left(\bigoplus_{i=1}^{m}\operatorname{End}(V_{i})\right)\oplus\left(\bigoplus_{i=m+1}^{n}Q(V_{i})\right).\] _A superalgebra that satisfies these conditions is called semisimple._ **Lemma 2.2.11**.: _Let \(A\) be a finite-dimensional superalgebra. Then \(A\) is semisimple (as a superalgebra) if and only if \(|A|\) is semisimple (as an ordinary algebra)._ Proof.: If \(A\) is a direct sum of simple superalgebras, then \(|A|\) is a direct sum of simple algebras, and hence is semisimple, by Examples 2.2.1 and 2.2.2. Conversely, suppose \(|A|\) is semisimple. Let \(I_{1},\dots,I_{2m},I_{2m+1},\dots,I_{n}\) be a complete set of pairwise non-isomorphic irreducible \(|A|\)-modules, ordered so that \(\pi_{A}(I_{2j})\cong I_{2j-1}\) for \(1\leq j\leq m\), and \(\pi_{A}(I_{i})\cong I_{i}\) for \(2m<i\leq n\). For \(1\leq i\leq n\), let \(A^{I_{i}}\) be the sum of all minimal left ideals in \(|A|\) that are isomorphic to \(I_{i}\) as left \(|A|\)-modules. Then \(|A|=\bigoplus_{i=1}^{n}A^{I_{i}}\), and one has \(\pi_{A}(A^{I_{2j}})=A^{I_{2j-1}}\) for \(1\leq j\leq m\), and \(\pi_{A}(A^{I_{i}})=A^{I_{i}}\) for \(2m<i\leq n\). This implies for \(1\leq j\leq m\) and \(2m<i\leq n\) that \(A^{I_{2j-1}}\oplus A^{I_{2j}}\) and \(A^{I_{i}}\) are each subsupermodules of the left regular representation of \(A\). Given \(1\leq j\leq m\), fix a decomposition \(A^{I_{2j}}=\bigoplus_{i=1}^{t}U_{i}\) of \(A^{I_{2j}}\) into a direct sum of copies of \(I_{2j}\). Then \(A^{I_{2j-1}}\oplus A^{I_{2j}}=\bigoplus_{i=1}^{t}[\pi_{A}(U_{i})\oplus U_{i}]\) is a direct sum decomposition of \(A^{I_{2j-1}}\oplus A^{I_{2j}}\) into (self-associate) irreducible \(A\)-supermodules. Now fix an integer \(2m<i\leq n\), and set \(I=I_{i}\). We will show that \(A^{I}\) is a sum--hence a direct sum--of (absolutely) irreducible \(A\)-supermodules. First, \(A^{I}\) is a sum of minimal left ideals \(U\) such that \(U\cong I\cong\pi_{A}(I)\) as \(|A|\)-modules, and for each of these ideals \(U\) one has \(U+\pi_{A}(U)\subseteq A^{I}\) because \(\pi_{A}(A^{I})=A^{I}\). Then \(U+\pi_{A}(U)\) is an \(A\)-subsupermodule of \(A^{I}\). Since \(U\) is irreducible as an \(|A|\)-module, one has either \(U=\pi_{A}(U)\), in which case \(U\) is an irreducible \(A\)-supermodule, or the sum \(U+\pi_{A}(U)\) is a direct sum. In the latter case, one can argue exactly as in the proof of Lemma 2.2.6 (but now, without reaching a contradiction) to show that \(U+\pi_{A}(U)\) is a direct sum of two \(A\)-subsupermodules \(U^{+}\) and \(U^{-}\), each isomorphic to \(U\) as an \(|A|\)-module. Given a superalgebra \(A\), one can check that \(\operatorname{Ann}_{A}(\pi_{A}^{*}(M))=\pi_{A}(\operatorname{Ann}_{A}(M))\) for each \(|A|\)-module \(M\). This implies that the Jacobson radical of \(|A|\) is closed under the parity map \(\pi_{A}\), and hence is a superideal in \(A\). Then the next lemma follows from Lemma 2.2.11. **Lemma 2.2.12** ([2, Lemma 2.6]).: _Let \(A\) be a finite-dimensional superalgebra, and let \(J=\operatorname{rad}(|A|)\) be the Jacobson radical of \(|A|\)._
_Then \(J\) is the unique smallest superideal of \(A\) such that \(A/J\) is a semisimple superalgebra._ Finally, since each irreducible \(A\)-supermodule \(M\) is a sum of irreducible \(|A|\)-modules, one gets \(\operatorname{rad}(|A|)\subseteq\operatorname{Ann}_{A}(M)\), which implies that the superalgebras \(A\) and \(A/\operatorname{rad}(|A|)\) have the same irreducibles. Then the next lemma follows by passing to the quotient \(A/\operatorname{rad}(|A|)\), considering the left regular representations of \(A\) and \(|A|\), and applying the Super Artin-Wedderburn Theorem. **Lemma 2.2.13** ([2, Corollary 2.8]).: _Let \(A\) be a finite-dimensional superalgebra, and let \(\{V_{1},\dots,V_{n}\}\) be a complete, irredundant set of irreducible \(A\)-supermodules (up to homogeneous isomorphism) such that \(V_{1},\dots,V_{m}\) are absolutely irreducible and \(V_{m+1},\dots,V_{n}\) are self-associate. For \(m+1\leq i\leq n\), write \(V_{i}=V_{i}^{+}\oplus V_{i}^{-}\) as a direct sum of irreducible \(|A|\)-modules. Then_ \[\big{\{}V_{1},\dots,V_{m},V_{m+1}^{\pm},\dots,V_{n}^{\pm}\big{\}} \tag{2.2.3}\] _is a complete set of pairwise non-isomorphic irreducible \(|A|\)-modules._ ### Finite supergroups In this section, let \(G\) be a finite group, and suppose \(G\) contains a normal subgroup \(H\) of index \(2\). Let \(\operatorname{sgn}:G\to G/H\cong\{\pm 1\}\) be the quotient homomorphism, considered also as a representation of \(G\). Define a \(\mathbb{Z}_{2}\)-grading on \(G\) by \(G_{\overline{0}}=H=\ker(\operatorname{sgn})\) and \(G_{\overline{1}}=G\backslash H\). This grading is multiplicative and it makes \(G\) into a _supergroup_. The \(\mathbb{Z}_{2}\)-grading on \(G\) extends by linearity to a \(\mathbb{Z}_{2}\)-grading on the group algebra \(\mathbb{C}G\), making \(\mathbb{C}G\) into a superalgebra that we call the _group superalgebra_ of \(G\). Since \(\mathbb{C}G\) is semisimple as an ordinary algebra by Maschke's Theorem, it is semisimple as a superalgebra by Lemma 2.2.11. Given a \(\mathbb{C}H\)-module \(W\) and an element \(t\in G_{\overline{1}}\), let \({}^{t}W=\{^{t}w:w\in W\}\) be the _conjugate_ representation in which the action of an element \(h\in H\) is defined by \(h.{}^{t}w={}^{t}[(tht^{-1}).w]\). Up to isomorphism, the conjugate representation does not depend on the particular choice of element in \(G_{\overline{1}}\). We say that two \(\mathbb{C}H\)-modules \(W\) and \(W^{\prime}\) are conjugate if \(W^{\prime}\cong{}^{t}W\) for some \(t\in G_{\overline{1}}\). If \(V\) is a \(\mathbb{C}G\)-module, we write \(\operatorname{Res}_{H}^{G}(V)\) for the \(\mathbb{C}H\)-module obtained by restriction, and if \(U\) is a \(\mathbb{C}H\)-module, we denote the induced \(\mathbb{C}G\)-module \(\mathbb{C}G\otimes_{\mathbb{C}H}U\) by \(\operatorname{Ind}_{H}^{G}(U)\). **Proposition 2.3.1** ([5, Proposition 5.1]).: _Let \(V\) be an irreducible \(\mathbb{C}G\)-module. Then exactly one of the following holds:_ 1. \(V\ncong V\otimes\operatorname{sgn}\) _as_ \(\mathbb{C}G\)_-modules,_ \(\operatorname{Res}^{G}_{H}(V)\) _is irreducible and isomorphic to its conjugate, and_ \(\operatorname{Ind}^{G}_{H}(\operatorname{Res}^{G}_{H}(V))\cong V\oplus(V\otimes\operatorname{sgn})\)_._ 2.
\(V\cong V\otimes\operatorname{sgn}\) _as_ \(\mathbb{C}G\)_-modules,_ \(\operatorname{Res}^{G}_{H}(V)=U^{\prime}\oplus U^{\prime\prime}\) _for_ \(\mathbb{C}H\)_-submodules_ \(U^{\prime}\) _and_ \(U^{\prime\prime}\) _that are irreducible and conjugate but not isomorphic, and_ \(\operatorname{Ind}^{G}_{H}(U^{\prime})\cong\operatorname{Ind}^{G}_{H}(U^{\prime\prime})\cong V\)_._ _Each irreducible \(\mathbb{C}H\)-module arises uniquely in this way, noting that in case (1) the irreducible \(\mathbb{C}G\)-modules \(V\) and \(V\otimes\operatorname{sgn}\) each determine the same \(\mathbb{C}H\)-module._ **Remark 2.3.2**.: Given a \(\mathbb{C}G\)-(super)module \(V\), it is immediate from the definitions that \(V\otimes\operatorname{sgn}\cong\pi_{\mathbb{C}G}^{*}(V)\) as \(\mathbb{C}G\)-(super)modules. We emphasize however that the sign representation is _not_ a \(\mathbb{C}G\)-_super_module, nor is the one-dimensional trivial \(\mathbb{C}G\)-module. However, their direct sum is naturally a self-associate simple \(\mathbb{C}G\)-supermodule. ### Example: The group superalgebra of the dihedral group In this section, fix a positive integer \(n\geq 3\) and let \(D_{n}\) be the corresponding dihedral group of order \(2n\). Write \[D_{n}=\langle r,s:r^{n}=s^{2}=(sr)^{2}=1\rangle=\{1,r,r^{2},\ldots,r^{n-1},s,sr,\ldots,sr^{n-1}\}\] and let \(R_{n}=\{1,r,r^{2},\ldots,r^{n-1}\}\) be the subgroup of rotations in \(D_{n}\). Then \(R_{n}\) is a normal subgroup of index \(2\) in \(D_{n}\), so \(\mathbb{C}D_{n}\) is a superalgebra with \((\mathbb{C}D_{n})_{\overline{0}}=\mathbb{C}R_{n}\). The irreducible complex representations of the group \(D_{n}\) are given as follows: * Let \(\zeta=e^{2\pi\mathrm{i}/n}\in\mathbb{C}\). Given an integer \(k\), define \(\rho_{k}:D_{n}\to GL_{2}(\mathbb{C})\) by \[\rho_{k}(r)=\begin{pmatrix}\zeta^{k}&0\\ 0&\zeta^{-k}\end{pmatrix},\qquad\rho_{k}(s)=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] These representations are irreducible and pairwise non-isomorphic provided that \(1\leq k<\frac{n}{2}\). * The trivial representation \(\rho_{0}:D_{n}\to GL_{1}(\mathbb{C})\), defined by \(\rho_{0}(r)=\begin{pmatrix}1\end{pmatrix}\) and \(\rho_{0}(s)=\begin{pmatrix}1\end{pmatrix}\). * The sign representation \(\operatorname{sgn}:D_{n}\to GL_{1}(\mathbb{C})\), defined by \(\operatorname{sgn}(r)=\begin{pmatrix}1\end{pmatrix}\) and \(\operatorname{sgn}(s)=\begin{pmatrix}-1\end{pmatrix}\). * If \(n\) is even, then there are two additional \(1\)-dimensional representations of \(D_{n}\): * \(\rho_{0}^{-}:D_{n}\to GL_{1}(\mathbb{C})\), defined by \(\rho_{0}^{-}(r)=\begin{pmatrix}-1\end{pmatrix}\) and \(\rho_{0}^{-}(s)=\begin{pmatrix}1\end{pmatrix}\). * \(\operatorname{sgn}^{-}:D_{n}\to GL_{1}(\mathbb{C})\), defined by \(\operatorname{sgn}^{-}(r)=\begin{pmatrix}-1\end{pmatrix}\) and \(\operatorname{sgn}^{-}(s)=\begin{pmatrix}-1\end{pmatrix}\).
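As a quick numerical sanity check of this list (an illustration of our own, not needed for the arguments below; the helper name `rho_k` is ours), one can verify that the matrices defining \(\rho_{k}\) respect the defining relations of \(D_{n}\):

```python
import numpy as np

def rho_k(n, k):
    """The 2x2 matrices rho_k(r), rho_k(s) listed above (sanity check only)."""
    zeta = np.exp(2j * np.pi / n)
    r = np.diag([zeta ** k, zeta ** (-k)])
    s = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    return r, s

n = 7  # any n >= 3 works here
for k in range(1, (n + 1) // 2):
    r, s = rho_k(n, k)
    I = np.eye(2)
    assert np.allclose(np.linalg.matrix_power(r, n), I)  # r^n = 1
    assert np.allclose(s @ s, I)                          # s^2 = 1
    assert np.allclose(s @ r @ s @ r, I)                  # (sr)^2 = 1
print("rho_k satisfies r^n = s^2 = (sr)^2 = 1 for n =", n)
```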
Now define subspaces of \(\mathbb{C}D_{n}\) as follows: * Given an integer \(k\), let \(\lambda=e^{2\pi ik/n}\in\mathbb{C}\), and let \(V_{k}\) be the subspace of \(\mathbb{C}D_{n}\) spanned by \[\sum_{i=0}^{n-1}\lambda^{-i}\cdot r^{i}\quad\text{and}\quad\sum_{j=0}^{n-1}\lambda^{-j}\cdot sr^{j}.\] * Let \(V_{0}\) be the subspace of \(\mathbb{C}D_{n}\) spanned by \[\left(\sum_{i=0}^{n-1}r^{i}\right)+\left(\sum_{i=0}^{n-1}sr^{i}\right)\quad\text{and}\quad\left(\sum_{i=0}^{n-1}r^{i}\right)-\left(\sum_{i=0}^{n-1}sr^{i}\right).\] Then it is straightforward to check the following statements: * For all integers \(k\), \(V_{k}\) is a subsuperspace of \(\mathbb{C}D_{n}\) depending only on \(k\) modulo \(n\), and \[\mathbb{C}D_{n}=V_{0}\oplus V_{1}\oplus\cdots\oplus V_{n-1},\] with \(V_{n-k}\cong\Pi(V_{k})\) as \(\mathbb{C}D_{n}\)-supermodules for \(1\leq k<\frac{n}{2}\); thus, up to homogeneous isomorphism, the distinct summands are \(V_{0},V_{1},\ldots,V_{n/2}\) if \(n\) is even and \(V_{0},V_{1},\ldots,V_{\lfloor n/2\rfloor}\) if \(n\) is odd. * For each integer \(1\leq k<\frac{n}{2}\), \(V_{k}\) is an absolutely irreducible \(\mathbb{C}D_{n}\)-supermodule affording the representation \(\rho_{k}\) of \(D_{n}\). * \(V_{0}\) is a self-associate irreducible \(\mathbb{C}D_{n}\)-supermodule, whose restriction to \(|\mathbb{C}D_{n}|\) is \(\rho_{0}\oplus\operatorname{sgn}\). * If \(n\) is even, then \(V_{n/2}\) is a self-associate irreducible \(\mathbb{C}D_{n}\)-supermodule, whose restriction to \(|\mathbb{C}D_{n}|\) is the direct sum of \(\rho_{0}^{-}\) (spanned by \(\sum_{i=0}^{n-1}(-1)^{i}r^{i}+\sum_{i=0}^{n-1}(-1)^{i}sr^{i}\)) and \(\operatorname{sgn}^{-}\) (spanned by \(\sum_{i=0}^{n-1}(-1)^{i}r^{i}-\sum_{i=0}^{n-1}(-1)^{i}sr^{i}\)). As a consequence of these observations and Theorem 2.2.10, we deduce the existence of a superalgebra isomorphism \[\mathbb{C}D_{n}\cong\begin{cases}M(1|1)^{\oplus(\frac{n}{2}-1)}\oplus Q(1)^{\oplus 2}&\text{if $n$ is even},\\ M(1|1)^{\oplus\lfloor n/2\rfloor}\oplus Q(1)&\text{if $n$ is odd}.\end{cases}\] ## 3. The symmetric group as a supergroup In this section, fix an integer \(n\geq 2\) and let \(S_{n}\) be the symmetric group on \(n\) letters. The sign representation \(\operatorname{sgn}:S_{n}\to\{\pm 1\}\), \(\sigma\mapsto(-1)^{\sigma}\), makes \(S_{n}\) into a supergroup such that \((S_{n})_{\overline{0}}=A_{n}\), the alternating group on \(n\) letters, and \((S_{n})_{\overline{1}}=S_{n}\backslash A_{n}\) is the set of odd permutations. Then the group algebra \(\mathbb{C}S_{n}\) becomes a superalgebra with \((\mathbb{C}S_{n})_{\overline{0}}=\mathbb{C}A_{n}\), the group algebra of \(A_{n}\). ### The irreducible supermodules of the symmetric group Write \(\lambda\vdash n\) to denote that \(\lambda\) is a partition of \(n\), and let \(\mathcal{P}(n)=\{\lambda:\lambda\vdash n\}\) be the set of all partitions of \(n\). Given \(\lambda\in\mathcal{P}(n)\), write \(\lambda^{\prime}\) for the partition that is conjugate (or transpose) to \(\lambda\), and let \(\sim\) be the equivalence relation on \(\mathcal{P}(n)\) with equivalence classes \(\{\{\lambda,\lambda^{\prime}\}:\lambda\in\mathcal{P}(n)\}\). **Definition 3.1.1**.: Let \(\overline{\mathcal{P}}(n)\) be any fixed set of representatives for the distinct equivalence classes in \(\mathcal{P}(n)\) under the relation \(\sim\). Then \(\overline{\mathcal{P}}(n)\) is a disjoint union of sets \(E_{n}\) and \(F_{n}\), where \[E_{n}=\{\lambda\in\overline{\mathcal{P}}(n):\lambda\neq\lambda^{\prime}\}\quad\text{and}\quad F_{n}=\{\lambda\in\overline{\mathcal{P}}(n):\lambda=\lambda^{\prime}\}.\] For \(\lambda\vdash n\), let \(S^{\lambda}\) be the corresponding Specht module.
Then the set \(\{S^{\lambda}:\lambda\vdash n\}\) is a complete set of pairwise non-isomorphic irreducible \(\mathbb{C}S_{n}\)-modules. It is well-known that \(S^{\lambda}\otimes\operatorname{sgn}\cong S^{\lambda^{\prime}}\); see [7, Theorems 4.12 and 6.7]. If \(\lambda\neq\lambda^{\prime}\), then Proposition 2.3.1 implies that \(S^{\lambda}\) and \(S^{\lambda^{\prime}}\) are irreducible (and isomorphic) as \(\mathbb{C}A_{n}\)-modules, while for \(\lambda=\lambda^{\prime}\) one gets that \(\operatorname{Res}_{A_{n}}^{S_{n}}(S^{\lambda})=S^{\lambda^{+}}\oplus S^{\lambda^{-}}\) for two irreducible, conjugate, non-isomorphic \(\mathbb{C}A_{n}\)-modules \(S^{\lambda^{+}}\) and \(S^{\lambda^{-}}\). In particular, if \(\tau\in S_{n}\) is any odd permutation, then multiplication by \(\tau\) defines a linear isomorphism \(S^{\lambda^{+}}\to S^{\lambda^{-}}\). Since \(S^{\lambda^{\prime}}\otimes\operatorname{sgn}\cong S^{\lambda}\), Schur's Lemma implies that \(\operatorname{Hom}_{\mathbb{C}S_{n}}(S^{\lambda},S^{\lambda^{\prime}}\otimes\operatorname{sgn})\cong\mathbb{C}\). For each \(\lambda\vdash n\), choose a nonzero element \(\phi^{\lambda}\) of this space, and interpret it as a linear isomorphism \(\phi^{\lambda}:S^{\lambda}\to S^{\lambda^{\prime}}\) such that \[\phi^{\lambda}(\sigma\cdot v)=(-1)^{\sigma}\sigma\cdot\phi^{\lambda}(v)\quad\text{for all}\quad v\in S^{\lambda}\text{ and }\sigma\in S_{n}. \tag{3.1.1}\] Then \(\phi^{\lambda^{\prime}}\circ\phi^{\lambda}\in\operatorname{Hom}_{\mathbb{C}S_{n}}(S^{\lambda},S^{\lambda})=\mathbb{C}\cdot\operatorname{id}_{S^{\lambda}}\). Rescaling our choice of \(\phi^{\lambda}\) if necessary, we may assume that \(\phi^{\lambda^{\prime}}\circ\phi^{\lambda}=\operatorname{id}_{S^{\lambda}}\). This implies that \(\phi^{\lambda}\circ\phi^{\lambda^{\prime}}=\operatorname{id}_{S^{\lambda^{\prime}}}\), as well. Now for \(\lambda=\lambda^{\prime}\), we deduce that \(\phi^{\lambda}\) is the unique (up to sign) self-inverse linear map satisfying (3.1.1), while for \(\lambda\neq\lambda^{\prime}\) we deduce that up to mutual rescalings of the form \((\phi^{\lambda},\phi^{\lambda^{\prime}})\mapsto(c\cdot\phi^{\lambda},\frac{1}{c}\cdot\phi^{\lambda^{\prime}})\), \(\phi^{\lambda}\) and \(\phi^{\lambda^{\prime}}\) are the unique mutually-inverse linear maps each satisfying (3.1.1). Now for each symmetric partition \(\lambda\), one has \((\phi^{\lambda})^{2}=\operatorname{id}_{S^{\lambda}}\), and hence \(S^{\lambda}\) decomposes into \(+1\) and \(-1\) eigenspaces for \(\phi^{\lambda}\). These eigenspaces are \(A_{n}\)-stable (because \(\phi^{\lambda}\) is a \(\mathbb{C}A_{n}\)-homomorphism), and hence are \(\mathbb{C}A_{n}\)-submodules of \(S^{\lambda}\). Moreover, neither eigenspace is equal to all of \(S^{\lambda}\), since otherwise (3.1.1) would imply for all \(v\in S^{\lambda}\) that \(\sigma.v=0\) for all odd permutations (which is false). Combining these observations with those made two paragraphs ago, and using the uniqueness of isotypical components, one deduces that the \(\pm 1\) eigenspaces of \(\phi^{\lambda}\) are the irreducible \(\mathbb{C}A_{n}\)-constituents of \(\operatorname{Res}_{A_{n}}^{S_{n}}(S^{\lambda})\). One can take \(S^{\lambda^{+}}\) and \(S^{\lambda^{-}}\) to be the \(+1\) and \(-1\) eigenspaces of \(\phi^{\lambda}\), respectively. **Lemma 3.1.2**.: _Let \(n>1\), and let \(W\) be an irreducible \(\mathbb{C}S_{n}\)-supermodule._ 1.
_If_ \(W\) _is absolutely irreducible, then_ \(W\cong S^{\lambda}\) _as a_ \(|\mathbb{C}S_{n}|\)_-module, for some symmetric partition_ \(\lambda\vdash n\)_. Under this identification, the homogeneous subspaces of_ \(W\) _are_ \(S^{\lambda^{+}}\) _and_ \(S^{\lambda^{-}}\)_._ 2. _If_ \(W\) _is self-associate, then_ \(W\cong S^{\lambda}\oplus S^{\lambda^{\prime}}\) _as a_ \(|\mathbb{C}S_{n}|\)_-module, for some non-symmetric partition_ \(\lambda\vdash n\)_. Under this identification, the homogeneous subspaces of_ \(W\) _are_ \[W_{\overline{0}}=\{u+\phi^{\lambda}(u):u\in S^{\lambda}\}\quad\text{and}\quad W _{\overline{1}}=\{u-\phi^{\lambda}(u):u\in S^{\lambda}\}.\] Proof.: First suppose \(W\) is an absolutely irreducible \(\mathbb{C}S_{n}\)-supermodule. Then as a \(|\mathbb{C}S_{n}|\)-module, \(W\cong S^{\lambda}\) for some \(\lambda\vdash n\). Since \[W=\pi_{W}(W)\cong\pi_{\mathbb{C}S_{n}}^{*}(W)\cong W\otimes\text{sgn}\cong S^{ \lambda}\otimes\text{sgn}\cong S^{\lambda^{\prime}}\] as \(|\mathbb{C}S_{n}|\)-modules, this implies that \(\lambda=\lambda^{\prime}\). Next, since the odd permutations in \(S_{n}\) do not annihilate \(S^{\lambda}\), \(W\) cannot be simply a purely even or a purely odd superspace. Then \(W_{\overline{0}}\) and \(W_{\overline{1}}\) are nonzero \(\mathbb{C}A_{n}\)-submodules of \(W\). Since \(S^{\lambda}=S^{\lambda^{+}}\oplus S^{\lambda^{-}}\) as a \(\mathbb{C}A_{n}\)-module, the uniqueness of isotypic components implies that \(\{S^{\lambda^{+}},S^{\lambda^{-}}\}=\{W_{\overline{0}},W_{\overline{1}}\}\). Now suppose \(W\) is self-associate as a \(\mathbb{C}S_{n}\)-supermodule. Then by Lemma 2.2.6, there exists \(\lambda\vdash n\) such that, as a \(|\mathbb{C}S_{n}|\)-module, \[W=S^{\lambda}\oplus\pi_{W}(S^{\lambda})\cong S^{\lambda}\oplus\pi_{\mathbb{C} S_{n}}^{*}(S^{\lambda})\cong S^{\lambda}\oplus S^{\lambda^{\prime}},\] and \(S^{\lambda}\not\cong S^{\lambda^{\prime}}\) as \(|\mathbb{C}S_{n}|\)-modules. Then \(\lambda\neq\lambda^{\prime}\). Making the identification \(\pi_{W}(S^{\lambda})=S^{\lambda^{\prime}}\), the parity map \(\pi=\pi_{W}:W\to W\) restricts to mutually-inverse linear maps \(\pi^{\lambda}:S^{\lambda}\to S^{\lambda^{\prime}}\) and \(\pi^{\lambda^{\prime}}:S^{\lambda^{\prime}}\to S^{\lambda}\) satisfying (3.1.1). Then by uniqueness (up to mutual rescaling) of \(\phi^{\lambda}\) and \(\phi^{\lambda^{\prime}}\), we may assume that \(\pi^{\lambda}=\phi^{\lambda}\) and \(\pi^{\lambda^{\prime}}=\phi^{\lambda^{\prime}}\). Now the identification of \(W_{\overline{0}}\) and \(W_{\overline{1}}\) follows from Lemma 2.2.6. **Proposition 3.1.3**.: _Let \(n>1\)._ 1. _For each_ \(\lambda\in E_{n}\)_, there exists a self-associate irreducible_ \(\mathbb{C}S_{n}\)_-supermodule_ \(W^{\lambda}\) _such that_ \(W^{\lambda}\cong S^{\lambda}\oplus S^{\lambda^{\prime}}\) _as a_ \(|\mathbb{C}S_{n}|\)_-module, with_ \[W^{\lambda}_{\overline{0}}=\{u+\phi^{\lambda}(u):u\in S^{\lambda}\}\quad \text{and}\quad W^{\lambda}_{\overline{1}}=\{u-\phi^{\lambda}(u):u\in S^{ \lambda}\}.\] _The_ \(|\mathbb{C}S_{n}|\)_-module decomposition_ \(W^{\lambda}\cong S^{\lambda}\oplus S^{\lambda^{\prime}}\) _is unique. We denote by_ \(J^{\lambda}:W^{\lambda}\to W^{\lambda}\) _the odd involution defined for_ \(u\in S^{\lambda}\) _by_ \(J^{\lambda}(u\pm\phi^{\lambda}(u))=u\mp\phi^{\lambda}(u)\)_._ 2. 
_For each_ \(\lambda\in F_{n}\)_, there exists an absolutely irreducible_ \(\mathbb{C}S_{n}\)_-supermodule_ \(W^{\lambda}\) _such that_ \(W^{\lambda}\cong S^{\lambda}\) _as a_ \(|\mathbb{C}S_{n}|\)_-module, with_ \(W^{\lambda}_{\overline{0}}=S^{\lambda^{+}}\) _and_ \(W^{\lambda}_{\overline{1}}=S^{\lambda^{-}}\)_._ _The set \(\{W^{\lambda}:\lambda\in\overline{\mathcal{P}}(n)\}\) is a complete set of pairwise non-isomorphic irreducible \(\mathbb{C}S_{n}\)-supermodules._ Proof.: Using Lemma 2.2.13, Lemma 3.1.2, and the classification of the irreducible \(|\mathbb{C}S_{n}|\)-modules, one deduces for each \(\lambda\in\overline{\mathcal{P}}(n)\) that there exists an irreducible \(\mathbb{C}S_{n}\)-supermodule \(W^{\lambda}\) with the given restriction to \(|\mathbb{C}S_{n}|\). In particular, if \(\lambda\in E_{n}\), and if \(W\) and \(W^{\prime}\) are self-associate irreducible \(\mathbb{C}S_{n}\)-supermodules that are both isomorphic as \(|\mathbb{C}S_{n}|\)-modules to \(S^{\lambda}\oplus S^{\lambda^{\prime}}\), then \(W\cong W^{\prime}\), so the notation \(W^{\lambda}\) does not depend on the choice of representative for the equivalence class \(\{\lambda,\lambda^{\prime}\}\). For \(\lambda\in E_{n}\), the decomposition \(W^{\lambda}\cong S^{\lambda}\oplus S^{\lambda^{\prime}}\) is unique by the uniqueness of isotypic components and the fact that \(S^{\lambda}\not\cong S^{\lambda^{\prime}}\) as \(|\mathbb{C}S_{n}|\)-modules. For \(\lambda\in F_{n}\), one can replace \(W^{\lambda}\) with its parity shift if necessary (to which \(W^{\lambda}\) is odd-isomorphic) to ensure that \(W^{\lambda}_{\overline{0}}=S^{\lambda^{+}}\) and \(W^{\lambda}_{\overline{1}}=S^{\lambda^{-}}\). **Remark 3.1.4**.: It is well-known that the irreducible complex representations of \(S_{n}\) are self-dual. From this and Proposition 3.1.3, it follows for all \(\lambda\vdash n\) that \((W^{\lambda})^{*}\) is isomorphic (via an even supermodule homomorphism) to either \(W^{\lambda}\) or \(\Pi(W^{\lambda})\). For \(\lambda\in E_{n}\), one always has \((W^{\lambda})^{*}\cong W^{\lambda}\), while for \(\lambda\in F_{n}\), one has \((W^{\lambda})^{*}\cong W^{\lambda}\) if and only if the \(\mathbb{C}A_{n}\)-modules \(S^{\lambda^{+}}\) and \(S^{\lambda^{-}}\) are each self-dual. The next result is an immediate consequence of [8, Theorem 2.4.10]. **Lemma 3.1.5**.: _Let \(n\geq 2\), and let \(\lambda\in\overline{\mathcal{P}}(n)\)._ 1. _Suppose_ \(\lambda\in E_{n}\)_. If_ \(\lambda=(n)\) _or_ \(\lambda=(1^{n})\)_, then_ \(\dim(W^{\lambda})=2\)_. Otherwise,_ \(\dim(W^{\lambda})\geq 2n-2\)_._ 2. _Suppose_ \(\lambda\in F_{n}\)_. If_ \(n=3\) _and_ \(\lambda=(2,1)\)_, or if_ \(n=4\) _and_ \(\lambda=(2,2)\)_, then_ \(\dim(W^{\lambda})=2\)_. If_ \(n=5\) _and_ \(\lambda=(3,1,1)\)_, then_ \(\dim(W^{\lambda})=6=n+1\)_. Otherwise,_ \(\dim(W^{\lambda})\geq n+3\)_._ Given a partition \(\lambda\vdash n\) and a permutation \(\sigma\in S_{n}\), let \(S^{\lambda}(\sigma)\in\operatorname{End}(S^{\lambda})\) and \(W^{\lambda}(\sigma)\in\operatorname{End}(W^{\lambda})\) denote the corresponding linear maps \(u\mapsto\sigma.u\). For \(\sigma\in A_{n}\), let \(S^{\lambda^{+}}(\sigma)\in\operatorname{End}(S^{\lambda^{+}})\) and \(S^{\lambda^{-}}(\sigma)\in\operatorname{End}(S^{\lambda^{-}})\) be defined similarly.
By abuse of notation, we will also write \(S^{\lambda}(\sigma)\), \(W^{\lambda}(\sigma)\), etc., for the corresponding matrices when bases for the underlying modules are fixed, and we extend the notation \(S^{\lambda}(\sigma)\) to arbitrary elements \(\sigma\in\mathbb{C}S_{n}\) by linearity. The next result is an immediate consequence of Theorem 2.2.10 and Proposition 3.1.3. **Corollary 3.1.6**.: _Let \(n\geq 2\). The map \(\mathbb{C}S_{n}\to\bigoplus_{\lambda\in\overline{\mathcal{P}}(n)}\operatorname{End}(W^{\lambda})\), \(\sigma\mapsto\bigoplus_{\lambda\in\overline{\mathcal{P}}(n)}W^{\lambda}(\sigma)\), induces a superalgebra isomorphism_ \[\mathbb{C}S_{n}\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}}Q\left(W^{\lambda}\right)\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}}\operatorname{End}\left(W^{\lambda}\right)\Bigg{]}\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}}Q\left(f^{\lambda}\right)\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}}M\left(\tfrac{1}{2}f^{\lambda},\tfrac{1}{2}f^{\lambda}\right)\Bigg{]},\] _where \(f^{\lambda}=\dim(S^{\lambda})\)._ Let \(\lambda\in E_{n}\). For \(u\in S^{\lambda}\), the expression \(u\pm\phi^{\lambda}(u)\in W^{\lambda}\) is linear in \(u\), and one has \[\sigma.\big{(}u\pm\phi^{\lambda}(u)\big{)}=(\sigma.u)\pm(-1)^{\sigma}\phi^{\lambda}(\sigma.u) \tag{3.1.2}\] for all \(\sigma\in S_{n}\). Then making the identification \(Q(W^{\lambda})=Q(f^{\lambda})\) via a choice of homogeneous basis as in Lemma 2.2.6, the identity (3.1.2) implies that \[W^{\lambda}(\sigma)=\begin{cases}\left[\begin{array}{c|c}S^{\lambda}(\sigma)&0\\ \hline 0&S^{\lambda}(\sigma)\end{array}\right]&\text{if $\sigma$ is an even permutation},\\ \left[\begin{array}{c|c}0&S^{\lambda}(\sigma)\\ \hline S^{\lambda}(\sigma)&0\end{array}\right]&\text{if $\sigma$ is an odd permutation}.\end{cases} \tag{3.1.3}\] On the other hand, let \(\lambda\in F_{n}\). Choose a basis \(\{u_{1},\ldots,u_{m}\}\) for \(W^{\lambda}_{\overline{0}}=S^{\lambda^{+}}\), and let \(\tau\in S_{n}\) be a fixed odd permutation. Then \(\{\tau.u_{1},\ldots,\tau.u_{m}\}\) is a basis for \(W^{\lambda}_{\overline{1}}=S^{\lambda^{-}}\). Now identifying \(\operatorname{End}(W^{\lambda})\) with \(M(\tfrac{1}{2}f^{\lambda},\tfrac{1}{2}f^{\lambda})\) via this choice of homogeneous basis, one gets \[W^{\lambda}(\sigma)=\begin{cases}\left[\begin{array}{c|c}S^{\lambda^{+}}(\sigma)&0\\ \hline 0&S^{\lambda^{+}}(\tau^{-1}\sigma\tau)\end{array}\right]&\text{if $\sigma$ is an even permutation},\\ \left[\begin{array}{c|c}0&S^{\lambda^{+}}(\sigma\tau)\\ \hline S^{\lambda^{+}}(\tau^{-1}\sigma)&0\end{array}\right]&\text{if $\sigma$ is an odd permutation}.\end{cases} \tag{3.1.4}\] ### Weight space decompositions of Specht modules Our primary references for this section are [9, §2] and [3, §3]. The Jucys-Murphy elements \(L_{1},\ldots,L_{n}\in\mathbb{C}S_{n}\) are defined by \(L_{j}=\sum_{1\leq i<j}(i,j)\). In particular, \(L_{1}=0\). The elements \(L_{1},\ldots,L_{n}\) generate a commutative, semisimple subalgebra of \(\mathbb{C}S_{n}\). Since this subalgebra is semisimple, each finite-dimensional \(\mathbb{C}S_{n}\)-module \(V\) decomposes into a direct sum of simultaneous eigenspaces for \(L_{1},\ldots,L_{n}\). Given \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{C}^{n}\), the \(\alpha\)-weight space of \(V\) is defined by \[V_{\alpha}=\left\{v\in V:L_{i}\cdot v=\alpha_{i}v\text{ for all }1\leq i\leq n\right\}.\] Given \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\), we may write \(\alpha(L_{i})=\alpha_{i}\). The nonzero elements of \(V_{\alpha}\) are called _weight vectors_. If \(V_{\alpha}\neq 0\), then we say that \(\alpha\) is a weight of \(V\).
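The commutativity of the \(L_{j}\) can be confirmed by brute force for small \(n\); the following Python sketch (our own illustration, not used in the arguments; all helper names are ours) realizes \(L_{2},\ldots,L_{4}\) as matrices in the left regular representation of \(S_{4}\) and checks that they commute pairwise:

```python
import itertools
import numpy as np

n = 4
G = list(itertools.permutations(range(1, n + 1)))  # elements of S_n in one-line notation
index = {g: i for i, g in enumerate(G)}

def compose(g, h):
    """(g h)(x) = g(h(x))."""
    return tuple(g[h[x - 1] - 1] for x in range(1, n + 1))

def transposition(i, j):
    t = list(range(1, n + 1))
    t[i - 1], t[j - 1] = j, i
    return tuple(t)

def left_mult_matrix(g):
    """Matrix of left multiplication by g on the group algebra C[S_n]."""
    M = np.zeros((len(G), len(G)))
    for h in G:
        M[index[compose(g, h)], index[h]] = 1.0
    return M

# Jucys-Murphy elements L_j = sum_{i<j} (i,j), realized in the regular representation.
L = {j: sum(left_mult_matrix(transposition(i, j)) for i in range(1, j))
     for j in range(2, n + 1)}

for a in L:
    for b in L:
        assert np.allclose(L[a] @ L[b], L[b] @ L[a])  # the L_j commute pairwise
print("L_2, ..., L_%d commute in the regular representation of S_%d" % (n, n))
```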
Let \[\mathcal{W}(\lambda)=\{\alpha\in\mathbb{C}^{n}:\alpha\text{ is a weight of }S^{\lambda}\},\] and let \(\mathcal{W}(n)=\bigcup_{\lambda\vdash n}\mathcal{W}(\lambda)\). Fix a partition \(\lambda\vdash n\), and let \(\mathbb{T}(\lambda)\) be the set of all standard \(\lambda\)-tableaux. The nonzero weight spaces of the irreducible \(\mathbb{C}S_{n}\)-module \(S^{\lambda}\) are each one-dimensional, spanned by vectors \(v_{T}\) for \(T\in\mathbb{T}(\lambda)\). Given \(T\in\mathbb{T}(\lambda)\) and an integer \(1\leq i\leq n\), let \(T_{i}\) be the box in \(T\) that is occupied by \(i\), and let \(\operatorname{cont}(T_{i})\) be the content (or residue) of the box \(T_{i}\). Then \(v_{T}\) is of weight \[\alpha(T):=(\operatorname{cont}(T_{1}),\ldots,\operatorname{cont}(T_{n})).\] In particular, \(\mathcal{W}(\lambda)\subseteq\mathbb{Z}^{n}\). For example, if \(n=7\), \(\lambda=(4,2,1)\), and \[T=\begin{array}{|c|c|c|c|}\hline 1&2&4&5\\ \hline 3&7&\\ \hline 6&\\ \hline\end{array},\] then \(\alpha(T)=(0,1,-1,2,3,-2,0)\). This description implies that the union \(\mathcal{W}(n)=\bigcup_{\lambda\vdash n}\mathcal{W}(\lambda)\) is disjoint, and for \(\alpha\in\mathcal{W}(n)\) one has \(\alpha=-\alpha\) only if \(n=1\), which we have excluded by assumption. We may denote a weight vector in \(S^{\lambda}\) by \(v_{T}\), for a standard \(\lambda\)-tableau \(T\), or by \(v_{\alpha}\), where \(\alpha=\alpha(T)\) is the corresponding weight. Conversely, if \(\alpha\in\mathcal{W}(\lambda)\) is specified, let \(T(\alpha)\) be the corresponding standard \(\lambda\)-tableau. Then \(v_{\alpha(T)}=v_{T}\) for all \(T\in\mathbb{T}(\lambda)\), and \(v_{T(\alpha)}=v_{\alpha}\) for all \(\alpha\in\mathcal{W}(\lambda)\). Given a (standard) \(\lambda\)-tableau \(T\), let \(T^{\prime}\) be its transpose, which is then a (standard) \(\lambda^{\prime}\)-tableau. Then for all \(T\in\mathbb{T}(\lambda)\), one has \(\alpha(T^{\prime})=-\alpha(T)\), and for all \(\alpha\in\mathcal{W}(\lambda)\), one has \(T(-\alpha)=T(\alpha)^{\prime}\). **Proposition 3.2.1** ([9, Corollary 2.2.3]).: _Let \(\alpha\in\mathcal{W}(\lambda)\). Given \(1\leq i<n\), let \(s_{i}\) be the transposition \((i,i+1)\in S_{n}\), and let \(\beta=s_{i}.\alpha=(\alpha_{1},\ldots,\alpha_{i-1},\alpha_{i+1},\alpha_{i}, \alpha_{i+2},\ldots,\alpha_{n})\). Then:_ 1. \(\alpha_{i}\neq\alpha_{i+1}\)_._ 2. _If_ \(\alpha_{i+1}=\alpha_{i}\pm 1\)_, then_ \(s_{i}\cdot v_{\alpha}=\pm v_{\alpha}\) _and_ \(\beta\notin\mathcal{W}(\lambda)\)_._ 3. _Suppose_ \(\alpha_{i+1}\neq\alpha_{i}\pm 1\)_, and let_ \(c_{i}=(\alpha_{i+1}-\alpha_{i})^{-1}\)_. Then_ \(\beta\in\mathcal{W}(\lambda)\) _and_ \(w_{\beta}:=(s_{i}-c_{i})\cdot v_{\alpha}\) _is a nonzero scalar multiple of_ \(v_{\beta}\)_; the elements_ \(L_{i}\)_,_ \(L_{i+1}\)_, and_ \(s_{i}\) _leave_ \(S^{\lambda}_{\alpha}\oplus S^{\lambda}_{\beta}\) _invariant; and they act in the basis_ \(\{v_{\alpha},w_{\beta}\}\) _of_ \(S^{\lambda}_{\alpha}\oplus S^{\lambda}_{\beta}\) _via the matrices_ \[L_{i}=\begin{bmatrix}\alpha_{i}&0\\ 0&\alpha_{i+1}\end{bmatrix},\qquad L_{i+1}=\begin{bmatrix}\alpha_{i+1}&0\\ 0&\alpha_{i}\end{bmatrix},\qquad s_{i}=\begin{bmatrix}c_{i}&1-c_{i}^{2}\\ 1&-c_{i}\end{bmatrix}.\] ### Weight space decompositions of irreducible supermodules In this section we describe the actions of the odd operators \(L_{1},\ldots,L_{n}\) and the transpositions \(s_{1},\ldots,s_{n-1}\) on the irreducible \(\mathbb{C}S_{n}\)-supermodules in terms of the weight vectors described in Section 3.2. 
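The \(2\times 2\) matrices in Proposition 3.2.1(3) can also be verified symbolically. The following sympy sketch (our own check; it is not needed for what follows) confirms that they satisfy \(s_{i}^{2}=1\) and the relation \(s_{i}L_{i}s_{i}+s_{i}=L_{i+1}\), which holds in \(\mathbb{C}S_{n}\) because \(s_{i}L_{i}s_{i}=\sum_{j<i}(j,i+1)\):

```python
import sympy as sp

a_i, a_j = sp.symbols('alpha_i alpha_iplus1')
c = 1 / (a_j - a_i)                         # c_i = (alpha_{i+1} - alpha_i)^(-1)

L_i  = sp.Matrix([[a_i, 0], [0, a_j]])      # action of L_i     on span{v_alpha, w_beta}
L_i1 = sp.Matrix([[a_j, 0], [0, a_i]])      # action of L_{i+1} on span{v_alpha, w_beta}
s    = sp.Matrix([[c, 1 - c**2], [1, -c]])  # action of s_i

zero = sp.zeros(2, 2)
assert (s * s - sp.eye(2)).applyfunc(sp.simplify) == zero       # s_i^2 = 1
assert (s * L_i * s + s - L_i1).applyfunc(sp.simplify) == zero  # s_i L_i s_i + s_i = L_{i+1}
print("Proposition 3.2.1 matrices: s_i^2 = 1 and s_i L_i s_i + s_i = L_{i+1} verified.")
```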
Given \(\lambda\vdash n\) and \(\alpha\in\mathcal{W}(\lambda)\), it follows from the intertwining condition (3.1.1) that the function \(\phi^{\lambda}:S^{\lambda}\to S^{\lambda^{\prime}}\) specified in Section 3.1 defines a linear isomorphism \(\phi^{\lambda}:S^{\lambda}_{\alpha}\xrightarrow{\cong}S^{\lambda^{\prime}}_{-\alpha}\). We will assume that the spanning vectors \(v_{\alpha}\in S^{\lambda}_{\alpha}\) and \(v_{-\alpha}\in S^{\lambda^{\prime}}_{-\alpha}\) are chosen so that \[v_{-\alpha}=\phi^{\lambda}(v_{\alpha}). \tag{3.3.1}\] This can be done for all \(\lambda\vdash n\) and \(\alpha\in\mathcal{W}(\lambda)\) because the union \(\mathcal{W}(n)=\bigcup_{\lambda\vdash n}\mathcal{W}(\lambda)\) is disjoint, because \(\alpha\neq-\alpha\) for all \(\alpha\in\mathcal{W}(n)\) by the assumption that \(n>1\), and because \(\phi^{\lambda^{\prime}}\circ\phi^{\lambda}=\operatorname{id}_{S^{\lambda}}\) and \(\phi^{\lambda}\circ\phi^{\lambda^{\prime}}=\operatorname{id}_{S^{\lambda^{ \prime}}}\). In terms of standard tableaux, one has \(v_{T^{\prime}}=\phi^{\lambda}(v_{T})\) for all \(T\in\mathbb{T}(\lambda)\). The preceding discussion implies that the elements of \(\mathcal{W}(\lambda)\cup\mathcal{W}(\lambda^{\prime})\) occur in \(\pm\) pairs. Let \[\overline{\mathcal{W}}(\lambda)=[\mathcal{W}(\lambda)\cup\mathcal{W}(\lambda^ {\prime})]/\pm\] be the set of all such pairs. For \(\lambda\in\overline{\mathcal{P}}(n)=E_{n}\cup F_{n}\), we will write \(\pm\alpha\) to denote an element of \(\overline{\mathcal{W}}(\lambda)\). Note that this notation implies a fixed choice for the 'positive' element \(\alpha\) of the pair \(\pm\alpha\). If \(\lambda\in E_{n}\), we will assume that \(\alpha\in\mathcal{W}(\lambda)\); if \(\lambda\in F_{n}\), we will assume that \(\alpha_{n}\geq 0\). This uniquely determines the choice of the positive element \(\alpha\), except when \(\lambda\in F_{n}\) and \(\alpha_{n}=0\). Now given \(\lambda\in\overline{\mathcal{P}}(n)\), we will describe bases for \(W^{\lambda}_{\overline{0}}\) and \(W^{\lambda}_{\overline{1}}\) that are indexed by \(\overline{\mathcal{W}}(\lambda)\). First let \(\lambda\in E_{n}\), so that \(W^{\lambda}\cong S^{\lambda}\oplus S^{\lambda^{\prime}}\) as a \(|\mathbb{C}S_{n}|\)-module. Given a pair \(\pm\alpha\in\overline{\mathcal{W}}(\lambda)\), set \[v^{+}_{\alpha}=\tfrac{1}{2}(v_{\alpha}+v_{-\alpha})=\tfrac{1}{2}\big{(}v_{ \alpha}+\phi^{\lambda}(v_{\alpha})\big{)},\qquad\qquad v^{-}_{\alpha}=\tfrac{1 }{2}(v_{\alpha}-v_{-\alpha})=\tfrac{1}{2}\big{(}v_{\alpha}-\phi^{\lambda}(v_{ \alpha})\big{)}.\] Then by Lemma 2.2.6, the sets \(\{v^{+}_{\alpha}:\pm\alpha\in\overline{\mathcal{W}}(\lambda)\}\) and \(\{v^{-}_{\alpha}:\pm\alpha\in\overline{\mathcal{W}}(\lambda)\}\) are bases for \(W^{\lambda}_{\overline{0}}\) and \(W^{\lambda}_{\overline{1}}\), respectively. One has \(v_{\alpha}=v^{+}_{\alpha}+v^{-}_{\alpha}\) and \(v_{-\alpha}=v^{+}_{\alpha}-v^{-}_{\alpha}\). Next let \(\lambda\in F_{n}\), so that \(W^{\lambda}\cong S^{\lambda}\) as a \(|\mathbb{C}S_{n}|\)-module. Then \(W^{\lambda}_{\overline{0}}=S^{\lambda^{+}}\) and \(W^{\lambda}_{\overline{1}}=S^{\lambda^{-}}\) are the \(+1\) and \(-1\) eigenspaces, respectively, for the function \(\phi^{\lambda}:S^{\lambda}\to S^{\lambda}\). For each pair \(\pm\alpha\in\overline{\mathcal{W}}(\lambda)\), write \(v_{\alpha}=v^{+}_{\alpha}+v^{-}_{\alpha}\), with \(v^{+}_{\alpha}\in S^{\lambda^{+}}\) and \(v^{-}_{\alpha}\in S^{\lambda^{-}}\). Then \(v_{-\alpha}=\phi^{\lambda}(v_{\alpha})=v^{+}_{\alpha}-v^{-}_{\alpha}\). 
Now \[v^{+}_{\alpha}=\tfrac{1}{2}(v_{\alpha}+v_{-\alpha})=\tfrac{1}{2}\big{(}v_{\alpha}+\phi^{\lambda}(v_{\alpha})\big{)},\qquad\qquad v^{-}_{\alpha}=\tfrac{1}{2}(v_{\alpha}-v_{-\alpha})=\tfrac{1}{2}\big{(}v_{\alpha}-\phi^{\lambda}(v_{\alpha})\big{)},\] and the sets \(\{v^{+}_{\alpha}:\pm\alpha\in\overline{\mathcal{W}}(\lambda)\}\) and \(\{v^{-}_{\alpha}:\pm\alpha\in\overline{\mathcal{W}}(\lambda)\}\) are bases for \(W^{\lambda}_{\overline{0}}\) and \(W^{\lambda}_{\overline{1}}\), respectively. With notation as above, one gets \(L_{i}\cdot v^{+}_{\alpha}=\alpha_{i}v^{-}_{\alpha}\) and \(L_{i}\cdot v^{-}_{\alpha}=\alpha_{i}v^{+}_{\alpha}\) for each \(1\leq i\leq n\). For \(\pm\alpha\in\overline{\mathcal{W}}(\lambda)\), set \[W^{\lambda}_{\pm\alpha}=\operatorname{span}\big{\{}v^{+}_{\alpha},v^{-}_{\alpha}\big{\}}=\operatorname{span}\left\{v_{\alpha},v_{-\alpha}\right\}.\] Then \(W^{\lambda}=\bigoplus_{\pm\alpha\in\overline{\mathcal{W}}(\lambda)}W^{\lambda}_{\pm\alpha}\). We may refer to \(W^{\lambda}_{\pm\alpha}\) as the \(\pm\alpha\)-weight space of \(W^{\lambda}\). The next result follows directly from Proposition 3.2.1. **Proposition 3.3.1**.: _Let \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathcal{W}(\lambda)\). Let \(1\leq i<n\), and set \(\beta=s_{i}.\alpha\)._ 1. _If_ \(\alpha_{i+1}=\alpha_{i}\pm 1\)_, then the transposition_ \(s_{i}\) _leaves the superspace_ \(W^{\lambda}_{\pm\alpha}\) _invariant, and it acts in the homogeneous basis_ \(\{v^{+}_{\alpha},v^{-}_{\alpha}\}\) _of_ \(W^{\lambda}_{\pm\alpha}\) _via the matrix_ \[\pm\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\] _where the_ \(\pm\) _sign is the same as in Proposition_ 3.2.1_(2)._ 2. _Suppose_ \(\alpha_{i+1}\neq\alpha_{i}\pm 1\)_. Let_ \(c_{i}=(\alpha_{i+1}-\alpha_{i})^{-1}\)_, and set_ \[w_{\beta}=(s_{i}-c_{i})\cdot v_{\alpha},\] \[w_{-\beta}=(s_{i}+c_{i})\cdot v_{-\alpha}=-\phi^{\lambda}(w_{\beta}),\] \[w^{+}_{\beta}=\tfrac{1}{2}(w_{\beta}+\phi^{\lambda}(w_{\beta}))=\tfrac{1}{2}(w_{\beta}-w_{-\beta}),\] \[w^{-}_{\beta}=\tfrac{1}{2}(w_{\beta}-\phi^{\lambda}(w_{\beta}))=\tfrac{1}{2}(w_{\beta}+w_{-\beta}).\] _Then_ \(\{w^{+}_{\beta},w^{-}_{\beta}\}\) _is a homogeneous basis for_ \(W^{\lambda}_{\pm\beta}\)_, the elements_ \(L_{i}\)_,_ \(L_{i+1}\)_, and_ \(s_{i}\) _leave the space_ \(W^{\lambda}_{\pm\alpha}\oplus W^{\lambda}_{\pm\beta}\) _invariant, and they act in the homogeneous basis_ \(\{v^{+}_{\alpha},w^{+}_{\beta},v^{-}_{\alpha},w^{-}_{\beta}\}\) _of this space via the following supermatrices:_ \[L_{i}=\left[\begin{array}{cc|cc}0&0&\alpha_{i}&0\\ 0&0&0&\alpha_{i+1}\\ \hline\alpha_{i}&0&0&0\\ 0&\alpha_{i+1}&0&0\end{array}\right],\qquad L_{i+1}=\left[\begin{array}{cc|cc}0&0&\alpha_{i+1}&0\\ 0&0&0&\alpha_{i}\\ \hline\alpha_{i+1}&0&0&0\\ 0&\alpha_{i}&0&0\end{array}\right],\] _and_ \[s_{i}=\left[\begin{array}{cc|cc}0&0&c_{i}&1-c_{i}^{2}\\ 0&0&1&-c_{i}\\ \hline c_{i}&1-c_{i}^{2}&0&0\\ 1&-c_{i}&0&0\end{array}\right].\] ### Restriction of irreducible supermodules Given partitions \(\lambda\vdash n\) and \(\mu\vdash(n-1)\), write \(\mu\prec\lambda\) if the Young diagram of \(\mu\) is obtained by removing a box from the Young diagram of \(\lambda\). In this case, let \(\operatorname{cont}(\lambda/\mu)\) denote the content of the box that is removed from \(\lambda\) to obtain \(\mu\). Let \(\operatorname{cont}(\lambda)\) denote the sum of the contents of all the boxes in the Young diagram for \(\lambda\). Identify \(S_{n-1}\) with the subgroup of \(S_{n}\) consisting of all permutations that leave the integer \(n\) fixed.
Then \(\sigma\cdot L_{n}\cdot\sigma^{-1}=L_{n}\) for each \(\sigma\in S_{n-1}\), and hence \(L_{n}\) commutes (in the ordinary, non-super sense) with each element of \(S_{n-1}\). This implies for each partition \(\lambda\vdash n\) that \(\operatorname{Res}^{S_{n}}_{S_{n-1}}(S^{\lambda})\) decomposes into eigenspaces for the action of \(L_{n}\). In fact, one has \[\operatorname{Res}^{S_{n}}_{S_{n-1}}(S^{\lambda})=\bigoplus_{\mu\prec\lambda}\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}S^{\lambda}_{\alpha}\Big{]}, \tag{3.4.1}\] and the summand indexed by \(\mu\) is isomorphic as a \(\mathbb{C}S_{n-1}\)-module to \(S^{\mu}\). Making the \(\mathbb{C}S_{n-1}\)-module identifications \(S^{\lambda}=\bigoplus_{\mu\prec\lambda}S^{\mu}\) and \(S^{\lambda^{\prime}}=\bigoplus_{\mu^{\prime}\prec\lambda^{\prime}}S^{\mu^{\prime}}\), and using the fact that \(\operatorname{Hom}_{S_{n-1}}(S^{\mu},S^{\nu})=0\) unless \(\mu=\nu\), one can show that the functions \(\phi^{\lambda}\) and \(\phi^{\lambda^{\prime}}\) must restrict for each \(\mu\prec\lambda\) to linear isomorphisms \(S^{\mu}\to S^{\mu^{\prime}}\) and \(S^{\mu^{\prime}}\to S^{\mu}\) that satisfy the intertwining relation (3.1.1) for all \(\sigma\in S_{n-1}\), and whose composites are the respective identity functions. Then by uniqueness (up to mutual rescaling), we may assume that \(\phi^{\lambda}|_{S^{\mu}}=\phi^{\mu}\) and \(\phi^{\lambda^{\prime}}|_{S^{\mu^{\prime}}}=\phi^{\mu^{\prime}}\). Now let \(\lambda\in\overline{\mathcal{P}}(n)=E_{n}\cup F_{n}\). As a superspace, one has \[W^{\lambda}=\bigoplus_{k\in\mathbb{Z}}W^{\lambda}_{k},\quad\text{where}\quad W^{\lambda}_{k}=\bigoplus_{\begin{subarray}{c}\pm\alpha\in\overline{\mathcal{W}}(\lambda)\\ \alpha_{n}=k\end{subarray}}W^{\lambda}_{\pm\alpha}. \tag{3.4.2}\] By our conventions for the choice of the 'positive' weight \(\alpha\) from each pair \(\pm\alpha\in\overline{\mathcal{W}}(\lambda)\), if \(\lambda\in F_{n}\) and \(W^{\lambda}_{k}\neq 0\), then \(k\geq 0\). In general, if \(W^{\lambda}_{k}\neq 0\), then there exists a unique partition \(\mu\vdash(n-1)\) such that \(\mu\prec\lambda\) and \(\operatorname{cont}(\lambda/\mu)=k\). Specifically, \(\mu\) is the partition obtained by removing a box of content \(k\) from the outer edge of the Young diagram of \(\lambda\). Indeed, a box of content \(k\) can be removed from the outer edge of the Young diagram of \(\lambda\) to produce a new partition \(\mu\) if and only if there exists a weight \(\alpha\in\mathcal{W}(\lambda)\) with \(\alpha_{n}=k\), and for any given \(k\) there is at most one removable box of content \(k\) in the Young diagram of \(\lambda\). For any \(\lambda\vdash n\), the boxes in the Young diagram of \(\lambda\) have contents bounded between \(-(n-1)\) and \(n-1\), so in (3.4.2) one has \(W^{\lambda}_{k}\neq 0\) only if \(-n<k<n\). Since \(L_{n}\) commutes with \(S_{n-1}\), it follows that \(W^{\lambda}_{k}\) is a \(\mathbb{C}S_{n-1}\)-subsupermodule of \(W^{\lambda}\). **Proposition 3.4.1**.: _Let \(\lambda\in\overline{\mathcal{P}}(n)\), let \(k\in\mathbb{Z}\) such that \(W^{\lambda}_{k}\neq 0\), and let \(\mu\vdash(n-1)\) be the unique partition such that \(\mu\prec\lambda\) and \(\operatorname{cont}(\lambda/\mu)=k\)._
_Then as a \(\mathbb{C}S_{n-1}\)-supermodule,_ \[W^{\lambda}_{k}\cong\begin{cases}W^{\mu}\oplus\Pi(W^{\mu})&\text{if $\lambda\in E_{n}$ and $\mu=\mu^{\prime}$,}\\ W^{\mu}&\text{otherwise.}\end{cases}\] Proof.: First suppose \(\lambda\in E_{n}\). Then \[W^{\lambda}_{k}=\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}\left[S^{\lambda}_{\alpha}\oplus S^{\lambda^{\prime}}_{-\alpha}\right]=\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}S^{\lambda}_{\alpha}\Big{]}\oplus\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda^{\prime})\\ \alpha_{n}=\operatorname{cont}(\lambda^{\prime}/\mu^{\prime})\end{subarray}}S^{\lambda^{\prime}}_{\alpha}\Big{]}.\] By (3.4.1), this is isomorphic as a \(|\mathbb{C}S_{n-1}|\)-module to \(S^{\mu}\oplus S^{\mu^{\prime}}\). For \(\mu\neq\mu^{\prime}\), this implies that \(W^{\lambda}_{k}\cong W^{\mu}\) as a \(\mathbb{C}S_{n-1}\)-supermodule, so suppose that \(\mu=\mu^{\prime}\). Making the \(|\mathbb{C}S_{n-1}|\)-module identification \[S^{\mu}=\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}S^{\lambda}_{\alpha},\] one sees that \(W^{\lambda}_{k}\) decomposes into the direct sum of two \(\mathbb{C}S_{n-1}\)-supermodules, \[\begin{split}W^{\mu}&\cong\{u+\phi^{\lambda}(u):u\in S^{\mu^{+}}\}\oplus\{w-\phi^{\lambda}(w):w\in S^{\mu^{-}}\},\quad\text{and}\\ \Pi(W^{\mu})&\cong\{u-\phi^{\lambda}(u):u\in S^{\mu^{+}}\}\oplus\{w+\phi^{\lambda}(w):w\in S^{\mu^{-}}\}.\end{split} \tag{3.4.3}\] In particular, \(W^{\mu}\) and \(\Pi(W^{\mu})\) are interchanged by the odd involution \(J^{\lambda}:W^{\lambda}\to W^{\lambda}\). Now suppose \(\lambda\in F_{n}\). If \(k>0\), then the partition \(\mu\) is non-symmetric, \(\operatorname{cont}(\lambda/\mu^{\prime})=-k\), and \[W^{\lambda}_{k}=\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}S^{\lambda}_{\alpha}\Big{]}\oplus\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu^{\prime})\end{subarray}}S^{\lambda}_{\alpha}\Big{]}\cong S^{\mu}\oplus S^{\mu^{\prime}}\] as \(|\mathbb{C}S_{n-1}|\)-modules. This implies that \(W^{\lambda}_{k}\cong W^{\mu}\) as a \(\mathbb{C}S_{n-1}\)-supermodule. On the other hand, if \(k=0\), then \(\mu\) is symmetric, and \[W^{\lambda}_{k}=\Big{[}\bigoplus_{\begin{subarray}{c}\alpha\in\mathcal{W}(\lambda)\\ \alpha_{n}=\operatorname{cont}(\lambda/\mu)\end{subarray}}S^{\lambda}_{\alpha}\Big{]}\cong S^{\mu}\] as a \(|\mathbb{C}S_{n-1}|\)-module. Since \(\phi^{\lambda}:S^{\lambda}\to S^{\lambda}\) restricts to \(\phi^{\mu}:S^{\mu}\to S^{\mu}\) via this identification, one deduces that the \(+1\)-eigenspace of \(\phi^{\mu}\) is contained in the \(+1\)-eigenspace of \(\phi^{\lambda}\). Then \(S^{\mu^{+}}\) is concentrated in even superdegree, so \(W^{\lambda}_{k}\cong W^{\mu}\) as a \(\mathbb{C}S_{n-1}\)-supermodule. Let \(W^{\lambda}\!\!\downarrow_{\mathbb{C}S_{n-1}}\) denote the restriction of \(W^{\lambda}\) to the subalgebra \(\mathbb{C}S_{n-1}\) of \(\mathbb{C}S_{n}\). **Corollary 3.4.2**.: _Let \(\lambda\in\overline{\mathcal{P}}(n)\)._
Then_ \[W^{\lambda}\!\!\downarrow_{\mathbb{C}S_{n-1}}\cong\begin{cases}\Bigg{[}\bigoplus _{\begin{subarray}{c}\mu\prec\lambda\\ \mu\neq\mu^{\prime}\end{subarray}}W^{\mu}\Bigg{]}\oplus\Bigg{[}\bigoplus_{ \begin{subarray}{c}\mu\prec\lambda\\ \mu=\mu^{\prime}\end{subarray}}W^{\mu}\oplus\Pi(W^{\mu})\Bigg{]}&\text{if $\lambda\in E_{n}$,}\\ \bigoplus_{\begin{subarray}{c}\mu\prec\lambda\\ \operatorname{cont}(\lambda/\mu)\geq 0\end{subarray}}W^{\mu}&\text{if $\lambda\in F_{n}$.}\end{cases}\] **Remark 3.4.3**.: The corollary implies that, if one allows only even supermodule homomorphisms (so that a supermodule and its parity shift are not necessarily isomorphic), then the restriction \(W^{\lambda}\!\!\downarrow_{\mathbb{C}S_{n-1}}\) is multiplicity free, just as in the classical (non-super) situation for Specht modules. If, on the other hand, one allows odd isomorphisms as well (so that a supermodule and its parity shift are _odd_ isomorphic), then the restriction \(W^{\lambda}\!\!\downarrow_{\mathbb{C}S_{n-1}}\) is multiplicity free if \(\lambda\in F_{n}\), but may have a (unique) repeated composition factor if \(\lambda\in E_{n}\). ## 4. The Lie superalgebra generated by transpositions ### The setup Recall the associative superalgebras defined in Example 2.2.1 and Example 2.2.2. Given a vector superspace \(V\cong\mathbb{C}^{m|n}\), write \(\mathfrak{gl}(V)\) and \(\mathfrak{gl}(m|n)\) for the sets \(\operatorname{End}(V)\) and \(M(m|n)\), respectively, considered as Lie superalgebras via the super commutator \[[x,y]=xy-(-1)^{\overline{x}\cdot\overline{y}}yx.\] If \(V\) is a vector superspace equipped with an odd involution \(J:V\to V\) (so in particular, the even and odd subspaces of \(V\) must be of the same dimension), write \(\mathfrak{q}(V)\) and \(\mathfrak{q}(n)\) for the sets \(Q(V)\) and \(Q(n)\), respectively, considered as Lie superalgebras via the super commutator. For an arbitrary Lie superalgebra \(\mathfrak{g}\), we denote its derived subalgebra \([\mathfrak{g},\mathfrak{g}]\) by \(\mathfrak{D}(\mathfrak{g})\). Then \[\mathfrak{D}(\mathfrak{gl}(m|n))=\mathfrak{sl}(m|n):=\left\{\left[\begin{array} []{c|c}A&B\\ \hline C&D\end{array}\right]\in\mathfrak{gl}(m|n):\operatorname{tr}(A)- \operatorname{tr}(D)=0\right\}.\] Let \(V\) be a vector superspace equipped with an odd involution \(J:V\to V\), let \(\theta\in\mathfrak{q}(V)\), and let \(\theta=\theta_{\overline{0}}+\theta_{\overline{1}}\) be the decomposition of \(\theta\) into its even and odd components. Then \(J\circ\theta_{\overline{1}}=\theta_{\overline{1}}\circ J\) restricts to an even linear map \((J\circ\theta_{\overline{1}})|_{V_{\overline{0}}}:V_{\overline{0}}\to V_{ \overline{0}}\). Identifying \(V_{\overline{0}}\) and \(V_{\overline{1}}\) via \(J\), this is equal to the even linear map \((\theta_{\overline{1}}\circ J)|_{V_{\overline{1}}}:V_{\overline{1}}\to V_{ \overline{1}}\). Now define the _odd trace_ of \(\theta\), denoted \(\operatorname{otr}(\theta)\), by \[\operatorname{otr}(\theta)=\operatorname{tr}\big{(}(J\circ\theta_{\overline {1}})|_{V_{\overline{0}}}\big{)}=\operatorname{tr}\big{(}(\theta_{\overline {1}}\circ J)|_{V_{\overline{1}}}\big{)}, \tag{4.1.1}\] and define the subsuperspace \(\mathfrak{sq}(V)\subseteq\mathfrak{q}(V)\) by \[\mathfrak{sq}(V)=\left\{\theta\in\mathfrak{q}(V):\operatorname{otr}(\theta)= 0\right\}.\] Then one can show that \(\mathfrak{D}(\mathfrak{q}(V))=\mathfrak{sq}(V)\). 
Fixing a basis for \(V\) as in Example 2.2.2, one has \[\mathfrak{D}(\mathfrak{q}(n))=\mathfrak{sq}(n):=\left\{\left[\begin{array}[] {c|c}A&B\\ \hline B&A\end{array}\right]\in\mathfrak{q}(n):\operatorname{tr}(B)=0\right\}. \tag{4.1.2}\] **Remark 4.1.1**.: One can check that the Lie superalgebra \(\mathfrak{sl}(m|m)\) is generated by its odd subspace \(\mathfrak{sl}(m|m)_{\overline{1}}\) if \(m\geq 2\), and that \(\mathfrak{sq}(m)\) is generated by its odd subspace \(\mathfrak{sq}(m)_{\overline{1}}\) if \(m\geq 3\). The super commutator endows the superalgebra \(\mathbb{C}S_{n}\) with the structure of a Lie superalgebra. Corollary 3.1.6 then gives the Lie superalgebra isomorphism \[\mathbb{C}S_{n}\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}}\mathfrak{q}(W^{ \lambda})\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}}\mathfrak{gl}(W^{ \lambda})\Bigg{]}\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}}\mathfrak{q}(f^{ \lambda})\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}}\mathfrak{gl}( \tfrac{1}{2}f^{\lambda},\tfrac{1}{2}f^{\lambda})\Bigg{]}, \tag{4.1.3}\] where \(f^{\lambda}=\dim(S^{\lambda})\). Taking derived subalgebras, one has \[\mathfrak{D}(\mathbb{C}S_{n})\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}} \mathfrak{sq}(W^{\lambda})\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}} \mathfrak{sl}(W^{\lambda})\Bigg{]}\cong\Bigg{[}\bigoplus_{\lambda\in E_{n}} \mathfrak{sq}(f^{\lambda})\Bigg{]}\oplus\Bigg{[}\bigoplus_{\lambda\in F_{n}} \mathfrak{sl}(\tfrac{1}{2}f^{\lambda},\tfrac{1}{2}f^{\lambda})\Bigg{]}. \tag{4.1.4}\] From this one sees that \[\dim(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{0}})=\dim((\mathbb{C}S_{n})_{ \overline{0}})-|F_{n}|=\tfrac{n!}{2}-|F_{n}|,\quad\text{and}\] \[\dim(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}})=\dim((\mathbb{C}S_{n})_{\overline {1}})-|E_{n}|=\tfrac{n!}{2}-|E_{n}|.\] In total, \(\dim(\mathfrak{D}(\mathbb{C}S_{n}))=n!-|E_{n}\cup F_{n}|\). Let \(T_{n}=\sum_{j=1}^{n}L_{j}=\sum_{j=1}^{n}\sum_{i=1}^{j-1}(i,j)\), the sum in \(\mathbb{C}S_{n}\) of all transpositions \(\tau\in S_{n}\). Let \(\lambda\vdash n\). Since all transpositions are conjugate in \(S_{n}\), the trace of the map \(S^{\lambda}(\tau):S^{\lambda}\to S^{\lambda}\) is independent of \(\tau\). This implies for each transposition \(\tau\) that \(\operatorname{tr}(S^{\lambda}(T_{n}))=\binom{n}{2}\cdot\operatorname{tr}(S^ {\lambda}(\tau))\), and hence \(\tau-\frac{2}{n(n-1)}\cdot T_{n}\) is a traceless operator on \(S^{\lambda}\). Then (3.1.3) implies for \(\lambda\in E_{n}\) that \[W^{\lambda}\left(\tau-\tfrac{2}{n(n-1)}\cdot T_{n}\right)\in\mathfrak{sq}(W^{ \lambda})=\mathfrak{D}(\mathfrak{q}(W^{\lambda})),\] while for \(\lambda\in F_{n}\) one gets \[W^{\lambda}\left(\tau-\tfrac{2}{n(n-1)}\cdot T_{n}\right)\in\mathfrak{sl}(W^{ \lambda})=\mathfrak{D}(\mathfrak{gl}(W^{\lambda})),\] because for any finite-dimensional superspace \(V\) one has \(\mathfrak{gl}(V)_{\overline{1}}=\mathfrak{sl}(V)_{\overline{1}}=\mathfrak{D}( \mathfrak{gl}(V))_{\overline{1}}\). Combining these observations, one gets \[\left\{\tau-\tfrac{2}{n(n-1)}\cdot T_{n}:\tau\text{ is a transposition in }S_{n}\right\}\subseteq\mathfrak{D}(\mathbb{C}S_{n}). \tag{4.1.5}\] On the other hand, for any partition \(\lambda\vdash n\), \(T_{n}\) acts on \(S^{\lambda}\) as scalar multiplication by \(\operatorname{cont}(\lambda)\), the sum of the contents of the boxes in the Young diagram of \(\lambda\). 
For \(n>1\), there are non-symmetric partitions for which this scalar is nonzero, so this implies by (3.1.3) that \(T_{n}\notin\mathfrak{D}(\mathbb{C}S_{n})\), and hence \(\tau\notin\mathfrak{D}(\mathbb{C}S_{n})\), as well, for each transposition \(\tau\). Since \(T_{n}\) acts on \(S^{\lambda}\) as scalar multiplication by \(\operatorname{cont}(\lambda)\), it follows that \[W^{\lambda}(T_{n})=\begin{cases}\operatorname{cont}(\lambda)\cdot J^{\lambda}& \text{if }\lambda\in E_{n},\\ 0&\text{if }\lambda\in F_{n},\end{cases} \tag{4.1.6}\] where \(J^{\lambda}:W^{\lambda}\to W^{\lambda}\) is the odd involution defined in Proposition 3.1.3(1). **Remark 4.1.2**.: If \(\operatorname{cont}(\lambda)\neq 0\), then \(\lambda\neq\lambda^{\prime}\). The converse of this statement is false. For example, if \(\lambda=(5,5,5,3,1,1)\), then \(\lambda\neq\lambda^{\prime}\) but \(\operatorname{cont}(\lambda)=0\). **Definition 4.1.3**.: Let \(\mathfrak{g}_{n}\subseteq\mathbb{C}S_{n}\) be the Lie superalgebra generated by all transpositions in \(S_{n}\). Evidently, \(T_{n}\in\mathfrak{g}_{n}\). Then (4.1.5) implies that \[\mathfrak{g}_{n}\subseteq\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n}. \tag{4.1.7}\] Our goal by the end of the paper is to show that (4.1.7) is an equality for all \(n\geq 2\). **Lemma 4.1.4**.: _If \(n\in\{2,3,4,5\}\), then \(\mathfrak{g}_{n}=\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n}\)._ Proof.: Since \(T_{n}\notin\mathfrak{D}(\mathbb{C}S_{n})\), the sum \(\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n}\) is direct, and hence \[\dim(\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n})=\dim(\mathfrak{D}( \mathbb{C}S_{n}))+1=n!-|E_{n}\cup F_{n}|+1.\] For \(n\leq 5\), we have verified that \(\dim(\mathfrak{g}_{n})\geq n!-|E_{n}\cup F_{n}|+1\), and hence (4.1.7) is an equality, via calculations in GAP [6]. **Remark 4.1.5**.: It is straightforward, if somewhat tedious, to check by hand for \(n\leq 4\) that \(\dim(\mathfrak{g}_{n})\geq n!-|E_{n}\cup F_{n}|+1\). Later in Section 4.4, we will find it convenient to assume that (4.1.7) is an equality for \(n=5\), as well, to help avoid certain annoying special cases. In fact, we have verified that Lemma 4.1.4 is also true for \(n=6\) and \(n=7\) using GAP, but there is nothing to be gained in our induction argument by taking these cases for granted. **Lemma 4.1.6**.: _Let \(n\geq 2\). Set \(\mathfrak{g}=\mathfrak{g}_{n}\)._ 1. \(Z(\mathbb{C}S_{n})=Z(\mathbb{C}S_{n})_{\overline{0}}\)_._ 2. \(Z(\mathfrak{g})\subseteq Z(\mathbb{C}S_{n})\)_. In particular,_ \(Z(\mathfrak{g})\subseteq\mathfrak{g}_{\overline{0}}\)_._ 3. _If_ \(n\geq 5\)_, then_ \(Z(\mathfrak{g}_{\overline{0}})=\mathfrak{g}_{\overline{0}}\cap Z(\mathbb{C}A _{n})\)_, and the projection map_ \(p:\mathbb{C}A_{n}\to Z(\mathbb{C}A_{n})\)_,_ \[p(z)=\frac{2}{n!}\sum_{\sigma\in A_{n}}\sigma z\sigma^{-1},\] _restricts to a projection map_ \(p:\mathfrak{g}_{\overline{0}}\to Z(\mathfrak{g}_{\overline{0}})\)_._ Proof.: Under the superalgebra isomorphism (4.1.3), one can deduce that the only homogeneous elements of \(\mathbb{C}S_{n}\) that commute (in the super sense) with all other elements of \(\mathbb{C}S_{n}\) correspond to linear combinations of the identity elements from the various matrix factors. In particular, \(Z(\mathbb{C}S_{n})\) is a purely even superspace. Next, for each \(z\in\mathbb{C}S_{n}\), the map \(\operatorname{ad}_{z}:x\mapsto[z,x]=zx-(-1)^{\overline{z}\cdot\overline{x}}xz\) is a superalgebra derivation on \(\mathbb{C}S_{n}\). 
If \(z\in Z(\mathfrak{g})\), then \(\operatorname{ad}_{z}(x)=0\) for each transposition \(x\), since those elements generate \(\mathfrak{g}\) as a Lie superalgebra. But the transpositions also generate \(\mathbb{C}S_{n}\) as an associative superalgebra, so this implies that \(\operatorname{ad}_{z}:\mathbb{C}S_{n}\to\mathbb{C}S_{n}\) is the zero map, and hence \(z\in Z(\mathbb{C}S_{n})\). Now suppose \(n\geq 5\). In this case, it is well-known that \(A_{n}\) is generated as a group by the set \[\{(i,j)(k,\ell):i,j,k,\ell\text{ distinct}\} \tag{4.1.8}\] of all products of two disjoint transpositions. These are all elements of \(\mathfrak{g}_{\overline{0}}\) because \[[(i,j),(k,\ell)]=(i,j)(k,\ell)+(k,\ell)(i,j)=2(i,j)(k,\ell) \tag{4.1.9}\] whenever \(i,j,k,\ell\) are distinct. Then reasoning as in the previous paragraph, it follows for \(z\in\mathfrak{g}_{\overline{0}}\) that \(z\in Z(\mathfrak{g}_{\overline{0}})\) if and only if \(z\in Z(\mathbb{C}A_{n})\). Finally, since the set of transpositions in \(S_{n}\) is closed under conjugation by arbitrary elements of \(S_{n}\), it follows that \(\mathfrak{g}\) is closed under conjugation. Conjugation is an even linear map, so \(\mathfrak{g}_{\overline{0}}\) is also closed under conjugation. Then the projection map must sends elements of \(\mathfrak{g}_{\overline{0}}\) to elements of \(\mathfrak{g}_{\overline{0}}\cap Z(\mathbb{C}A_{n})=Z(\mathfrak{g}_{\overline {0}})\). ### Image of \(\mathfrak{g}_{n}\) in \(\operatorname{End}(W^{\lambda})\) Given \(\lambda\in\overline{\mathcal{P}}(n)\), let \(W^{\lambda}(\mathfrak{g}_{n})\) denote the image of \(\mathfrak{g}_{n}\) under the supermodule structure map \(\mathbb{C}S_{n}\to\operatorname{End}(W^{\lambda})\). Our goal in Sections 4.3 and 4.4 is to establish Theorem 4.2.1, stated below. As described in the introduction, we prove the results in Sections 4.2-4.7 by induction on \(n\). First, for the base case of induction, observe that Theorem 4.7.3 is true for \(n\in\{2,3,4,5\}\), by Lemma 4.1.4. This implies that Theorem 4.2.1 is true for \(n\in\{2,3,4,5\}\) by (4.1.4) and (4.1.6). Hence Corollary 4.5.1 is true for \(n\) in this range as well. In the case \(n=5\), one can then work sequentially through Sections 4.6 and 4.7 to deduce that all subsequent results in the paper leading up to Theorem 4.7.3 are also true for \(n=5\). Now for the general inductive step of this argument we make the following assumptions: * \(n\geq 6\), and * all results in Sections 4.2-4.7 are true as stated for the value \(n-1\). The inductive step is then completed by working sequentially through Sections 4.2-4.7, starting with Theorem 4.2.1, to establish that each result is true as stated for the value \(n\). **Theorem 4.2.1**.: _Let \(n\geq 2\), and let \(\lambda\in\overline{\mathcal{P}}(n)\). Then_ \[W^{\lambda}(\mathfrak{g}_{n})=\begin{cases}\mathfrak{s}\mathfrak{q}(W^{\lambda })+\mathbb{C}\cdot(\operatorname{cont}(\lambda)\cdot J^{\lambda})&\text{if $\lambda\in E_{n}$,}\\ \mathfrak{s}\mathfrak{l}(W^{\lambda})&\text{if $\lambda\in F_{n}$.}\end{cases} \tag{4.2.1}\] The "\(\subseteq\)" direction of (4.2.1) follows from (4.1.7), (4.1.4), and (4.1.6). If \(\lambda\in E_{n}\), then (4.1.6) also implies that \(\operatorname{cont}(\lambda)\cdot J^{\lambda}\in W^{\lambda}(\mathfrak{g}_{n})\). 
Next, if \(\tau\in S_{n}\) is any transposition, then \[\operatorname{id}_{W^{\lambda}}=W^{\lambda}(1_{\mathbb{C}S_{n}})=W^{\lambda}\left(\tfrac{1}{2}[\tau,\tau]\right)\in W^{\lambda}(\mathfrak{g}_{n}).\] For \(\lambda\in\{(n),(1^{n})\}\), one has \(\mathfrak{sq}(W^{\lambda})=\mathbb{C}\cdot\operatorname{id}_{W^{\lambda}}\), so the theorem is true in this case. For all other partitions \(\lambda\in\overline{\mathcal{P}}(n)\), Lemma 3.1.5 implies (by the assumption \(n\geq 6\), and the fact that all \(W^{\lambda}\) are even-dimensional) that \(\dim(W^{\lambda})\geq 10\). Then to finish proving the "\(\supseteq\)" direction of (4.2.1), it will suffice by Remark 4.1.1 to show that \[W^{\lambda}(\mathfrak{g}_{n})\supseteq\begin{cases}\mathfrak{sq}(W^{\lambda})_{\overline{1}}&\text{if $\lambda\in E_{n}$},\\ \mathfrak{sl}(W^{\lambda})_{\overline{1}}&\text{if $\lambda\in F_{n}$}.\end{cases} \tag{4.2.2}\] In the notation of Section 3.4, one has \(W^{\lambda}=\bigoplus_{k\in\mathbb{Z}}W^{\lambda}_{k}\), where \(W^{\lambda}_{k}\neq 0\) if and only if there exists a (unique) partition \(\mu_{k}\vdash(n-1)\) such that \(\mu_{k}\prec\lambda\) and \(\operatorname{cont}(\lambda/\mu_{k})=k\). To simplify notation, for the rest of this section we will fix a partition \(\lambda\in\overline{\mathcal{P}}(n)\), and we will write \[W^{k}=W^{\lambda}_{k}.\] By Proposition 3.4.1, if \(W^{k}\neq 0\), then \(W^{k}\) identifies as a \(\mathbb{C}S_{n-1}\)-supermodule with either the irreducible supermodule \(W^{\mu_{k}}\), or the direct sum of \(W^{\mu_{k}}\) and its parity shift \(\Pi(W^{\mu_{k}})\). In any case, if \(k\neq\ell\), then \(W^{k}\) and \(W^{\ell}\) have no irreducible \(\mathbb{C}S_{n-1}\)-constituents in common. This implies by the analogue of (4.1.4) for \(\mathbb{C}S_{n-1}\) and the induction hypothesis for \(\mathfrak{g}^{\prime}_{n-1}:=\mathfrak{D}(\mathfrak{g}_{n-1})\) that \[W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})=\bigoplus_{k\in\mathbb{Z}}W^{k}(\mathfrak{g}^{\prime}_{n-1}), \tag{4.2.3}\] where \(W^{k}(\mathfrak{g}^{\prime}_{n-1})\) denotes the image of \(\mathfrak{g}^{\prime}_{n-1}\) in \(\operatorname{End}(W^{k})\). Conceptually, our strategy for the proof of Theorem 4.2.1 runs roughly as follows. First, we show that \(W^{\lambda}(\mathfrak{g}_{n})\) contains a large semisimple Lie subalgebra \(\mathfrak{h}\), specifically a direct sum of special linear Lie algebras, over which \[\operatorname{End}(W^{\lambda})_{\overline{1}}=\bigoplus_{k,\ell\in\mathbb{Z}}\operatorname{Hom}(W^{k},W^{\ell})_{\overline{1}} \tag{4.2.4}\] \[=\bigoplus_{k,\ell\in\mathbb{Z}}\left[\operatorname{Hom}(W^{k}_{\overline{0}},W^{\ell}_{\overline{1}})\oplus\operatorname{Hom}(W^{k}_{\overline{1}},W^{\ell}_{\overline{0}})\right] \tag{4.2.5}\] is a semisimple \(\mathfrak{h}\)-module. Next, the transposition \(s_{n-1}=(n-1,n)\) defines an element \(W^{\lambda}(s_{n-1})\in\operatorname{End}(W^{\lambda})_{\overline{1}}\) that has nonzero components in various irreducible \(\mathfrak{h}\)-module summands of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\). Using the semisimplicity of \(\mathfrak{h}\), we deduce that certain \(\mathfrak{h}\)-module summands of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\) must be contained in the Lie superalgebra generated by \(W^{\lambda}(\mathfrak{g}_{n-1})\) and \(W^{\lambda}(s_{n-1})\), and hence must be contained in \(W^{\lambda}(\mathfrak{g}_{n})\). These summands in turn generate a large enough Lie superalgebra for us to deduce the inclusion (4.2.2). 
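Since the argument below repeatedly passes between \(\lambda\) and the partitions \(\mu_{k}\prec\lambda\) obtained by removing a box of content \(k\), here is a small illustrative sketch (our own; the helper `removable_boxes` is not from the paper) listing the removable boxes of a partition together with their contents. By the statement above, these contents are exactly the indices \(k\) for which \(W^{k}=W^{\lambda}_{k}\neq 0\).

```python
def removable_boxes(lam):
    # return (row i, column j, content j - i), 0-indexed, for each box whose
    # removal from the Young diagram of lam again leaves a partition
    boxes = []
    for i, row in enumerate(lam):
        below = lam[i + 1] if i + 1 < len(lam) else 0
        if row > below:            # the last box of row i is removable
            boxes.append((i, row - 1, row - 1 - i))
    return boxes

print(removable_boxes([4, 3, 1]))  # [(0, 3, 3), (1, 2, 1), (2, 0, -2)], so k runs over {3, 1, -2}
```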
### Proof of Theorem 4.2.1: the case \(\lambda\in F_{n}\) #### 4.3.1. First suppose \(\lambda\in F_{n}\). In this case one has \(W^{k}\neq 0\) only if \(k\geq 0\), and the decomposition (4.2.3) takes the form \[W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})=\bigoplus_{k=0}^{n-1}W^{k}(\mathfrak{ g}^{\prime}_{n-1})=\mathfrak{sl}(W^{0})\oplus\mathfrak{sq}(W^{1})\oplus \mathfrak{sq}(W^{2})\oplus\cdots\oplus\mathfrak{sq}(W^{n-1}). \tag{4.3.1}\] By convention, \(\mathfrak{sl}(W^{k})=0\) whenever \(W^{k}=0\). For example, since \(n\geq 6\) by hypothesis, and since \(\lambda\in F_{n}\) is a symmetric partition, the Young diagram of \(\lambda\) cannot have a removable box of content \(n-1\), so the summand \(\mathfrak{sq}(W^{n-1})\) must be zero. Furthermore, Lemma 3.1.5 implies that if \(W^{k}\neq 0\), then \[\dim(W^{k})\geq\min\left\{2(n-1)-2,(n-1)+1\right\}\geq 6. \tag{4.3.2}\] #### 4.3.2. If \(\mathfrak{sl}(W^{0})\) is the only nonzero summand in (4.3.1), then \(W^{\lambda}=W^{0}\), and hence \[\mathfrak{sl}(W^{\lambda})=\mathfrak{sl}(W^{0})=W^{\lambda}(\mathfrak{g}^{ \prime}_{n-1})\subseteq W^{\lambda}(\mathfrak{g}_{n}),\] establishing (4.2.2). So assume that \(W^{k}\neq 0\) for at least one value \(k>0\). For such \(k\), \(W^{k}\cong W^{\mu_{k}}\) is a self-associate irreducible \(\mathbb{C}S_{n-1}\)-supermodule, and the even and odd subspaces \(W^{k}_{\overline{0}}\) and \(W^{k}_{\overline{1}}\) of \(W^{k}\) can be identified via the odd involution \(J^{\mu_{k}}:W^{\mu_{k}}\to W^{\mu_{k}}\). Making the identification \(W^{k}_{\overline{0}}\simeq W^{k}_{\overline{1}}\) via \(J^{\mu_{k}}\), and writing \(W_{k}\) for this new common space (considered just as an ordinary vector space, without any superspace structure), the diagonal maps \[\mathfrak{gl}(W_{k})\to\mathfrak{gl}(W^{k}_{\overline{0}})\oplus\mathfrak{gl}( W^{k}_{\overline{1}})\quad\text{and}\quad\mathfrak{sl}(W_{k})\to\operatorname{Hom}(W^{k}_{ \overline{0}},W^{k}_{\overline{1}})\oplus\operatorname{Hom}(W^{k}_{\overline{ 1}},W^{k}_{\overline{0}}) \tag{4.3.3}\] induce vector space isomorphisms \(\mathfrak{gl}(W_{k})\cong\mathfrak{sq}(W^{k})_{\overline{0}}\) and \(\mathfrak{sl}(W_{k})\cong\mathfrak{sq}(W^{k})_{\overline{1}}\) that are compatible with the adjoint action. At the risk of confusing the reader, we will immediately change the meaning of our notation and will write \(\mathfrak{sl}(W_{k})\) to mean the evident Lie subalgebra of \(\mathfrak{gl}(W_{k})\cong\mathfrak{sq}(W^{k})_{\overline{0}}\). With this notation, we see that \[\mathfrak{f}:=[\mathfrak{sl}(W^{0}_{\overline{0}})\oplus\mathfrak{sl}(W^{0}_{ \overline{1}})]\oplus\mathfrak{sl}(W_{1})\oplus\mathfrak{sl}(W_{2})\oplus \cdots\oplus\mathfrak{sl}(W_{n-1})\] naturally identifies with a semisimple Lie subalgebra of \(W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})_{\overline{0}}\subseteq W^{\lambda} (\mathfrak{g}_{n})\).1 Further, (4.2.4) and (4.2.5) give \(\mathfrak{f}\)-module decompositions of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\) under the adjoint action. Footnote 1: Since \(\dim(W^{k})\geq 6\) whenever \(W^{k}\neq 0\), the nonzero summands in \(\mathfrak{f}\) are each of the form \(\mathfrak{sl}(m)\) for some \(m\geq 3\). #### 4.3.3. 
We will write elements of \(\operatorname{End}(W^{k})\) in the supermatrix block form \[\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right] \tag{4.3.4}\] where \(A\in\operatorname{Hom}(W^{k}_{\overline{0}},W^{k}_{\overline{0}})\), \(B\in\operatorname{Hom}(W^{k}_{\overline{1}},W^{k}_{\overline{0}})\), \(C\in\operatorname{Hom}(W^{k}_{\overline{0}},W^{k}_{\overline{1}})\), and \(D\in\operatorname{Hom}(W^{k}_{\overline{1}},W^{k}_{\overline{1}})\). In this notation, the inclusion \(\mathfrak{sq}(W^{k})\subseteq W^{\lambda}(\mathfrak{g}_{n})\) for \(k\geq 1\) translates into the statement that \(W^{\lambda}(\mathfrak{g}_{n})\) contains all supermatrices such that \(A=D\), \(B=C\), and \(\operatorname{tr}(B)=0\), while the summand \(\mathfrak{sl}(W_{k})\) of the algebra \(\mathfrak{f}\) identifies with those supermatrices such that \(A=D\), \(B=C=0\), and \(\operatorname{tr}(A)=0\). For each \(k\geq 1\), one sees that, as an \(\mathfrak{f}\)-module, \(\operatorname{End}(W^{k})_{\overline{1}}\) is the direct sum of: 1. a two-dimensional trivial \(\mathfrak{f}\)-submodule, spanned by the odd supermatrices2 of the form (4.3.4) such that \(B\) and \(C\) are arbitrary scalar matrices; and Footnote 2: A supermatrix of the form (4.3.4) is even if \(B=C=0\), and is odd if \(A=D=0\). 2. two nontrivial isomorphic irreducible \(\mathfrak{f}\)-modules \(\mathfrak{sl}(W_{k})^{+}\) and \(\mathfrak{sl}(W_{k})^{-}\), spanned by the odd supermatrices of the form (4.3.4) such that \(\operatorname{tr}(B)=0\) and \(C=0\), and such that \(B=0\) and \(\operatorname{tr}(C)=0\), respectively. Via the projection \(\mathfrak{f}\twoheadrightarrow\mathfrak{sl}(W_{k})\), these irreducibles each identify with the adjoint representation of \(\mathfrak{sl}(W_{k})\). Moreover, these two nontrivial irreducibles do not occur in any other summands in (4.2.4). #### 4.3.4. Our first main goal is to show that \(W^{\lambda}(\mathfrak{g}_{n})\) contains the semisimple Lie algebra \[\mathfrak{h}:=[\mathfrak{sl}(W^{0}_{\overline{0}})\oplus\mathfrak{sl}(W^{0}_{ \overline{1}})]\oplus[\mathfrak{sl}(W^{1}_{\overline{0}})\oplus\mathfrak{sl}(W ^{1}_{\overline{1}})]\oplus\cdots\oplus[\mathfrak{sl}(W^{n-1}_{\overline{0}}) \oplus\mathfrak{sl}(W^{n-1}_{\overline{1}})]. \tag{4.3.7}\] Given \(k\geq 1\) such that \(W^{k}\neq 0\), we will show that \([\mathfrak{sl}(W^{k}_{\overline{0}})\oplus\mathfrak{sl}(W^{k}_{\overline{1}})] \subseteq W^{\lambda}(\mathfrak{g}_{n})\) by considering the component of the map \(W^{\lambda}(s_{n-1})\) that lies in the summand \(\operatorname{End}(W^{k})_{\overline{1}}\) of (4.2.4). By definition, \(W^{k}\) is spanned by the vectors \(v^{+}_{\alpha}\) and \(v^{-}_{\alpha}\) for \(\alpha\in\overline{\mathcal{W}}(\lambda)\) of the form \(\alpha=(\cdots,j,k)\). If \(|k-j|=1\), then \(W^{\lambda}(s_{n-1})\) acts on the vectors \(v^{+}_{\alpha}\) and \(v^{-}_{\alpha}\) via the matrix given in Proposition 3.3.1(1). If \(|k-j|\geq 2\) and \(j\neq-k\), then \(W^{\lambda}(s_{n-1})\) maps the vectors \(v^{+}_{\alpha}\) and \(v^{-}_{\alpha}\) into the subspace \(W^{|j|}=W^{\lambda}_{|j|}\), and \(W^{|j|}\neq W^{k}\). Thus, if \(|k-j|\geq 2\) and \(j\neq-k\), then the action of \(W^{\lambda}(s_{n-1})\) on \(v^{+}_{\alpha}\) and \(v^{-}_{\alpha}\) does not arise from a map in \(\operatorname{End}(W^{k})_{\overline{1}}\). Next consider a weight in \(\overline{\mathcal{W}}(\lambda)\) of the form \(\alpha=(\alpha^{\prime\prime},-k,k)=(\alpha_{1},\ldots,\alpha_{n-2},-k,k)\). 
Weights of this form do exist: If the Young diagram of \(\lambda\) has a removable box \(B_{k}\) of content \(k\geq 1\), then by symmetry of \(\lambda\) it also has a removable box \(B_{-k}\) of content \(-k\), and the two boxes can be removed in either order to produce a partition \(\lambda^{\prime\prime}\) of \(n-2\). Let \(T^{\prime\prime}\) be any standard \(\lambda^{\prime\prime}\)-tableau, and let \(\alpha^{\prime\prime}=\alpha(T^{\prime\prime})\in\mathcal{W}(\lambda^{\prime\prime})\) be the weight of \(T^{\prime\prime}\). We can extend \(T^{\prime\prime}\) to a standard \(\lambda\)-tableau in two ways: by putting \(n-1\) in box \(B_{-k}\) and \(n\) in box \(B_{k}\), to get a standard \(\lambda\)-tableau \(T_{k}\) of weight \((\alpha^{\prime\prime},-k,k)\), or by putting \(n-1\) in box \(B_{k}\) and \(n\) in box \(B_{-k}\), to get a standard \(\lambda\)-tableau \(T_{-k}\) of weight \((\alpha^{\prime\prime},k,-k)\). Every weight in \(\overline{\mathcal{W}}(\lambda)\) of the form \((\alpha^{\prime\prime},-k,k)\) arises in this way. Now if \(\alpha=(\alpha^{\prime\prime},-k,k)\) is a weight in \(\overline{\mathcal{W}}(\lambda)\), then \(\gamma:=(-\alpha^{\prime\prime},-k,k)\) is also a weight in \(\overline{\mathcal{W}}(\lambda)\), and \(\alpha,-\alpha,\gamma,-\gamma\) are four distinct weights in \(\mathcal{W}(\lambda)\). Let \(\beta=-\gamma\), let \(c=(k-(-k))^{-1}=1/(2k)\), and let \(w_{\beta}=(s_{n-1}-c)\cdot v_{\alpha}\) and \(w_{-\beta}=(s_{n-1}+c)\cdot v_{-\alpha}\) be defined as in Proposition 3.3.1(2). Then \(w_{\beta}=a\cdot v_{-\gamma}\) for some \(0\neq a\in\mathbb{C}\), hence \(w_{-\beta}=-\phi^{\lambda}(w_{\beta})=-a\cdot v_{\gamma}\), \[w^{+}_{\beta}=\tfrac{1}{2}(w_{\beta}-w_{-\beta})=a\cdot\tfrac{1}{2}(v_{-\gamma}+v_{\gamma})=a\cdot v^{+}_{\gamma},\quad\text{and}\] \[w^{-}_{\beta}=\tfrac{1}{2}(w_{\beta}+w_{-\beta})=a\cdot\tfrac{1}{2}(v_{-\gamma}-v_{\gamma})=-a\cdot v^{-}_{\gamma}.\] By Proposition 3.3.1(2), \(W^{\lambda}(s_{n-1})\) leaves invariant the span of \(v^{+}_{\alpha},v^{+}_{\gamma},v^{-}_{\alpha},v^{-}_{\gamma}\), and acts in this homogeneous basis via the supermatrix \[\left[\begin{array}{cc|cc}0&0&c&-(1-c^{2})/a\\ 0&0&a&c\\ \hline c&(1-c^{2})/a&0&0\\ -a&c&0&0\end{array}\right].\] Combined with the observations two paragraphs ago, this implies that the component of \(W^{\lambda}(s_{n-1})\) in \(\operatorname{End}(W^{k})_{\overline{1}}\) can be written as an odd supermatrix of the form (4.3.4) such that \(B\neq C\), but each pair of corresponding diagonal entries in \(B\) and \(C\) are equal. Since \(k\geq 1\), the partition \(\mu_{k}\prec\lambda\) is not symmetric. Then by (4.1.6), \(W^{k}(T_{n-1})=\operatorname{cont}(\mu_{k})J^{\mu_{k}}\), and \(\operatorname{cont}(\mu_{k})=\operatorname{cont}(\lambda)-k=-k\neq 0\). Now it follows that for some scalar \(r\), the component in \(\operatorname{End}(W^{k})_{\overline{1}}\) of the operator \(W^{k}(s_{n-1}-r\cdot T_{n-1})=W^{k}(s_{n-1})-r\cdot W^{k}(T_{n-1})\) has the form \[\left[\begin{array}{c|c}0&B^{\prime}\\ \hline C^{\prime}&0\end{array}\right]\] where \(B^{\prime}\) and \(C^{\prime}\) are nonzero traceless matrices. Recall that \(\mathfrak{sq}(W^{k})_{\overline{1}}\subseteq W^{\lambda}(\mathfrak{g}_{n})\) consists of all supermatrices \([\begin{smallmatrix}0&X\\ X&0\end{smallmatrix}]\) such that \(\operatorname{tr}(X)=0\). 
Then \(\phi:=W^{k}(s_{n-1}-r\cdot T_{n-1})-[\begin{smallmatrix}0&C^{\prime}\\ C^{\prime}&0\end{smallmatrix}]\) is an odd element of \(W^{\lambda}(\mathfrak{g}_{n})\) whose component in \(\operatorname{End}(W^{k})_{\overline{1}}\) is a nonzero element of the \(\mathfrak{f}\)-module \(\mathfrak{sl}(W_{k})^{+}\) described in (4.3.6). Since \(\mathfrak{sl}(W_{k})^{+}\) does not occur in any other summand of (4.2.4), the semisimplicity of the Lie algebra \(\mathfrak{f}\) implies that the entire module \(\mathfrak{sl}(W_{k})^{+}\) must be contained in the \(\mathfrak{f}\)-submodule of \(W^{\lambda}(\mathfrak{g}_{n})\) generated by \(\phi\), i.e., \(\mathfrak{sl}(W_{k})^{+}\subseteq W^{\lambda}(\mathfrak{g}_{n})\). By similar reasoning, one gets \(\mathfrak{sl}(W_{k})^{-}\subseteq W^{\lambda}(\mathfrak{g}_{n})\). Now using the observation just after (4.3.2) that \(\dim(W^{k})\geq 6\), and hence \(\dim(W^{k}_{\overline{0}})=\dim(W^{k}_{\overline{1}})\geq 3\), one can show that the Lie superalgebra generated by the subspaces \(\mathfrak{sl}(W_{k})^{+}\) and \(\mathfrak{sl}(W_{k})^{-}\) of \(\operatorname{End}(W^{k})\) must contain nonzero elements \(x\in\mathfrak{sl}(W^{k}_{\overline{0}})\) and \(y\in\mathfrak{sl}(W^{k}_{\overline{1}})\). Then the Lie algebra generated by \(x\), \(y\), and \(\mathfrak{gl}(W_{k})=\mathfrak{sq}(W^{k})_{\overline{0}}\) must contain \(\mathfrak{sl}(W^{k}_{\overline{0}})\oplus\mathfrak{sl}(W^{k}_{\overline{1}})\). Thus, \([\mathfrak{sl}(W^{k}_{\overline{0}})\oplus\mathfrak{sl}(W^{k}_{\overline{1}} )]\subseteq W^{\lambda}(\mathfrak{g}_{n})\). #### 4.3.5. We have shown that the semisimple Lie algebra \(\mathfrak{h}\) of (4.3.7) is contained in \(W^{\lambda}(\mathfrak{g}_{n})\). Next observe that (4.2.5) is a multiplicity-free decomposition of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\) into irreducible \(\mathfrak{h}\)-modules. In particular, each irreducible summand is equal to its own isotypical component. Together with the semisimplicity of \(\mathfrak{h}\), this implies that if \(\psi\in\operatorname{End}(W^{\lambda})_{\overline{1}}\), and if \(\psi\) has a nonzero component in some irreducible \(\mathfrak{h}\)-module summand of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\), then the entire summand in question must be contained in the \(\mathfrak{h}\)-submodule of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\) generated by \(\psi\). By induction, we already know that \(\operatorname{End}(W^{0})_{\overline{1}}\subseteq W^{\lambda}(\mathfrak{g}_{n})\). Then to show that \(\operatorname{End}(W^{\lambda})_{\overline{1}}\subseteq W^{\lambda}( \mathfrak{g}_{n})\), it suffices to show for all \(k,\ell\in\mathbb{Z}\) for which \(W^{k}\neq 0\) and \(W^{\ell}\neq 0\), and for which at least one of \(k\) or \(\ell\) is nonzero, that \(W^{\lambda}(s_{n-1})\) has nonzero components in both \(\operatorname{Hom}(W^{k}_{\overline{0}},W^{\ell}_{\overline{1}})\) and \(\operatorname{Hom}(W^{k}_{\overline{1}},W^{\ell}_{\overline{0}})\). We have already established this is true when \(k=\ell\geq 1\), so we may assume that \(k\neq\ell\). If \(W^{k}\neq 0\) and \(W^{\ell}\neq 0\), then the Young diagram of \(\lambda\) has removable boxes \(B_{k}\) and \(B_{\ell}\) of contents \(k\) and \(\ell\), respectively, and these boxes can be removed in either order. Moreover, since \(B_{k}\) and \(B_{\ell}\) are both removable, it must be the case that \(|k-\ell|\geq 2\). 
Now reasoning as we did earlier, one can deduce that \(\overline{W}(\lambda)\) contains a pair of weights of the forms \(\alpha=(\alpha^{\prime\prime},\ell,k)\) and \(\beta=(\alpha^{\prime\prime},k,\ell)\). Finally, applying Proposition 3.3.1(2), one sees that \(W^{\lambda}(s_{n-1})\) has nonzero components in each of \(\operatorname{Hom}(W^{k}_{\overline{0}},W^{\ell}_{\overline{1}})\), \(\operatorname{Hom}(W^{k}_{\overline{1}},W^{\ell}_{\overline{0}})\), \(\operatorname{Hom}(W^{\ell}_{\overline{0}},W^{k}_{\overline{1}})\), and \(\operatorname{Hom}(W^{\ell}_{\overline{1}},W^{k}_{\overline{0}})\). ### Proof of Theorem 4.2.1: the case \(\lambda\in E_{n}\) #### 4.4.1. Now suppose \(\lambda\in E_{n}\). In this case one has \(W^{k}\neq 0\) only if \(|k|<n\). If there is only one summand in the decomposition \(W^{\lambda}=\bigoplus_{k\in\mathbb{Z}}W^{k}\), i.e., if \(\lambda\) has only one removable box, say \(B_{\ell}\) of content \(\ell\), then the Young diagram of \(\lambda\) must be a (non-symmetric) rectangle with \(B_{\ell}\) in its outer corner, and hence the partition \(\mu_{\ell}\) obtained by removing \(B_{\ell}\) must also be non-symmetric. Then by induction, \(W^{\lambda}\cong W^{\mu_{\ell}}\) as a \(\mathbb{C}S_{n-1}\)-supermodule, and \[\mathfrak{sq}(W^{\lambda})=\mathfrak{sq}(W^{\mu_{\ell}})=W^{\mu_{\ell}}( \mathfrak{g}^{\prime}_{n-1})=W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})\subseteq W ^{\lambda}(\mathfrak{g}_{n}),\] establishing (4.2.2). So assume that \(W^{k}\neq 0\) for more than one value of \(k\). #### 4.4.2. Since \(\lambda\) is not symmetric, there is at most one value of \(k\) such that the partition \(\mu_{k}\prec\lambda\) is symmetric; call this value \(s\) (if it exists). If \(s\) exists, then \(s\neq 0\), because \(\lambda\) is not symmetric. By the induction hypothesis, \[W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})=W^{s}(\mathfrak{g}^{\prime}_{n-1}) \oplus\bigoplus_{k\neq s}\mathfrak{sq}(W^{k}), \tag{4.4.1}\] where, by definition, \(W^{s}=0\) and the first summand is omitted if a symmetric partition \(\mu_{s}\prec\lambda\) does not exist. The supermodule \(W^{\lambda}\) is equipped with the odd involution \(J=J^{\lambda}:W^{\lambda}\to W^{\lambda}\), which restricts for each \(k\in\mathbb{Z}\) to an odd involution \(J^{k}:W^{k}\to W^{k}\). As in Section 4.3, we make the identification \(W^{k}_{\overline{0}}\simeq W^{k}_{\overline{1}}\) via \(J^{k}\) and write \(W_{k}\) for the common identified space. Then for \(k\neq s\) one has \(\mathfrak{gl}(W_{k})\cong\mathfrak{sq}(W^{k})_{\overline{0}}\) as in (4.3.3). For \(k=s\), we see from (3.4.3) that \(J^{s}\) defines an odd isomorphism \(W^{\mu_{s}}\simeq\Pi(W^{\mu_{s}})\). Then conjugation by \(J^{s}\), \(\phi\mapsto J^{s}\circ\phi\circ J^{s}\), defines an even isomorphism \(\operatorname{End}(W^{\mu_{s}})\cong\operatorname{End}(\Pi(W^{\mu_{s}}))\), and \(W^{s}(\mathfrak{g}^{\prime}_{n-1})\) is the image in \(\operatorname{End}(W^{s})\) of the diagonal map \[\mathfrak{sl}(W^{\mu_{s}})\to\operatorname{End}(W^{\mu_{s}})\oplus \operatorname{End}(\Pi(W^{\mu_{s}})). \tag{4.4.2}\] Make the identifications \(W^{\mu_{s}}\simeq\Pi(W^{\mu_{s}})\), \(W^{\mu_{s}}_{\overline{0}}\simeq\Pi(W^{\mu_{s}}_{\overline{0}})\), and \(W^{\mu_{s}}_{\overline{1}}\simeq\Pi(W^{\mu_{s}}_{\overline{1}})\) via \(J^{s}\), and write \(W_{\mu_{s}}\), \(W_{\mu_{s},\overline{0}}\), and \(W_{\mu_{s},\overline{1}}\) for the common identified spaces, respectively (considered just as ordinary vector spaces, without any superspace structures). 
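The following small sketch (ours; `symmetric_children` is a hypothetical helper, not from the paper) makes the claim at the start of this subsection concrete: for a non-symmetric \(\lambda\), it lists every child \(\mu\prec\lambda\) that is symmetric, together with the content \(s\) of the removed box. On examples one finds at most one such child, and its content \(s\) is nonzero.

```python
def conjugate(lam):
    return [sum(1 for part in lam if part > i) for i in range(lam[0])]

def symmetric_children(lam):
    # remove each removable box in turn; keep (content s, mu_s) whenever mu_s is self-conjugate
    out = []
    for i, row in enumerate(lam):
        below = lam[i + 1] if i + 1 < len(lam) else 0
        if row > below:
            child = [r - (j == i) for j, r in enumerate(lam) if r - (j == i) > 0]
            if child == conjugate(child):
                out.append((row - 1 - i, child))
    return out

print(symmetric_children([3, 2, 1, 1]))  # [(-3, [3, 2, 1])]: the unique symmetric child, with s = -3
```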
**Remark 4.4.1**.: It could happen that 1. \(W_{\mu_{s},\overline{0}}\cong W_{\mu_{s},\overline{1}}\neq 0\), but \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\cong\mathfrak{sl}(W_{\mu_{s}, \overline{1}})=0\); or that 2. \(W_{k}\neq 0\) for some \(k\neq s\), but \(\mathfrak{sl}(W_{k})=0\). These situations occur if and only if \(\dim(W^{\mu_{s}})=2\) or \(\dim(W^{\mu_{k}})=2\), respectively. Lemma 3.1.5 implies for \(n\geq 6\) that situation (1) cannot occur, and implies that up to the equivalence \(\lambda\sim\lambda^{\prime}\), situation (2) occurs only if \(\lambda=(n-1,1)\). In this case, \(W^{(n-1,1)}=W^{(n-1,1)}_{n-2}\oplus W^{(n-1,1)}_{-1}\), and one has \(\mathbb{C}S_{n-1}\)-supermodule isomorphisms \[W^{(n-1,1)}_{n-2} \cong W^{(n-2,1)}, \dim(W^{(n-2,1)})=2(n-2),\] \[W^{(n-1,1)}_{-1} \cong W^{(n-1)}, \dim(W^{(n-1)})=2.\] In any event, \(\operatorname{Hom}(W_{k},W_{\ell})\) remains an irreducible \(\mathfrak{sl}(W_{k})\oplus\mathfrak{sl}(W_{\ell})\)-module even if one of \(W_{k}\) or \(W_{\ell}\) is one-dimensional (hence even if one of \(\mathfrak{sl}(W_{k})\) or \(\mathfrak{sl}(W_{\ell})\) is zero). #### 4.4.3. Now from (4.4.1), we see that the semisimple Lie algebra \[\mathfrak{h}:=\left[\mathfrak{sl}(W_{\mu_{s},\overline{0}})\oplus\mathfrak{ sl}(W_{\mu_{s},\overline{1}})\right]\oplus\bigoplus_{k\neq s}\mathfrak{ sl}(W_{k})\] identifies with a subalgebra of \(W^{\lambda}(\mathfrak{g}^{\prime}_{n-1})_{\overline{0}}\subseteq W^{\lambda}( \mathfrak{g}_{n})\). Further, (4.2.4) and (4.2.5) give \(\mathfrak{h}\)-module decompositions of \(\operatorname{End}(W^{\lambda})_{\overline{1}}\) under the adjoint action. The set \(W^{\lambda}(\mathfrak{g}_{n})\) is contained in \[\mathfrak{sq}(W^{\lambda})=\operatorname{End}(W^{\lambda})^{J}:=\{\theta\in \operatorname{End}(W^{\lambda}):J\circ\theta\circ J=\theta\},\] and the decomposition (4.2.4) gives rise to the corresponding decomposition of \(J\)-invariants \[\operatorname{End}(W^{\lambda})^{J}_{\overline{1}}=\bigoplus_{k,\ell\in \mathbb{Z}}\operatorname{Hom}(W^{k},W^{\ell})^{J}_{\overline{1}}. \tag{4.4.3}\] For \(s\notin\{k,\ell\}\), one sees that the diagonal map \[\operatorname{Hom}(W_{k},W_{\ell})\to\operatorname{Hom}(W^{k}_{\overline{0}},W^{\ell}_{\overline{1}})\oplus\operatorname{Hom}(W^{k}_{\overline{1}},W^{ \ell}_{\overline{0}})\] induces an \(\mathfrak{h}\)-module isomorphism \(\operatorname{Hom}(W_{k},W_{\ell})\cong\operatorname{Hom}(W^{k},W^{\ell})^{J}_ {\overline{1}}\). Similarly, for \(k\neq s\) one sees that the diagonal maps \[\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{k})\to\operatorname{Hom}(W^{ \mu_{s}}_{\overline{0}},W^{k}_{\overline{1}})\oplus\operatorname{Hom}(\Pi(W^{ \mu_{s}}_{\overline{0}}),W^{k}_{\overline{0}}),\quad\text{and}\] \[\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{k})\to\operatorname{Hom}(W^{ \mu_{s}}_{\overline{1}},W^{k}_{\overline{0}})\oplus\operatorname{Hom}(\Pi(W^{ \mu_{s}}_{\overline{1}}),W^{k}_{\overline{1}})\] induce an \(\mathfrak{h}\)-module isomorphism \[\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{k})\oplus\operatorname{Hom}(W_ {\mu_{s},\overline{1}},W_{k})\cong\operatorname{Hom}(W^{s},W^{k})^{J}_{ \overline{1}}.\] An analogous description holds for \(\operatorname{Hom}(W^{k},W^{s})^{J}_{\overline{1}}\). 
Finally, as an \(\mathfrak{h}\)-module, \[\operatorname{Hom}(W^{s},W^{s})^{J}_{\overline{1}}\cong\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{1}})\oplus\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\oplus\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{0}})\oplus\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{1}}),\] where the summands on the right side of the isomorphism are identified with the images of the corresponding diagonal maps \[\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{1}})\to\operatorname{Hom}(W^{\mu_{s}}_{\overline{0}},W^{\mu_{s}}_{\overline{1}})\oplus\operatorname{Hom}(\Pi(W^{\mu_{s}}_{\overline{0}}),\Pi(W^{\mu_{s}}_{\overline{1}})), \tag{4.4.4}\] \[\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\to\operatorname{Hom}(W^{\mu_{s}}_{\overline{1}},W^{\mu_{s}}_{\overline{0}})\oplus\operatorname{Hom}(\Pi(W^{\mu_{s}}_{\overline{1}}),\Pi(W^{\mu_{s}}_{\overline{0}})), \tag{4.4.5}\] \[\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{0}})\to\operatorname{Hom}(W^{\mu_{s}}_{\overline{0}},\Pi(W^{\mu_{s}}_{\overline{0}}))\oplus\operatorname{Hom}(\Pi(W^{\mu_{s}}_{\overline{0}}),W^{\mu_{s}}_{\overline{0}}), \tag{4.4.6}\] \[\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{1}})\to\operatorname{Hom}(W^{\mu_{s}}_{\overline{1}},\Pi(W^{\mu_{s}}_{\overline{1}}))\oplus\operatorname{Hom}(\Pi(W^{\mu_{s}}_{\overline{1}}),W^{\mu_{s}}_{\overline{1}}). \tag{4.4.7}\] #### 4.4.4. Irreducible constituents Altogether, \(\operatorname{End}(W^{\lambda})^{J}_{\overline{1}}\) admits the \(\mathfrak{h}\)-module decomposition \[\begin{split}\operatorname{End}(W^{\lambda})^{J}_{\overline{1}}\cong{}&\Big[\bigoplus_{s\notin\{k,\ell\}}\operatorname{Hom}(W_{k},W_{\ell})\Big]\\ &\oplus\Big[\bigoplus_{k\neq s}\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{k})\oplus\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{k})\oplus\operatorname{Hom}(W_{k},W_{\mu_{s},\overline{0}})\oplus\operatorname{Hom}(W_{k},W_{\mu_{s},\overline{1}})\Big]\\ &\oplus\Big[\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{1}})\oplus\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\oplus\operatorname{End}(W_{\mu_{s},\overline{0}})\oplus\operatorname{End}(W_{\mu_{s},\overline{1}})\Big].\end{split} \tag{4.4.8}\] For \(k\neq s\), the term \(\operatorname{End}(W_{k})\) in (4.4.8) is either simply a one-dimensional trivial \(\mathfrak{h}\)-module, if \(\dim(W_{k})=1\), or else is the direct sum of a one-dimensional trivial \(\mathfrak{h}\)-module and a copy of the adjoint module for \(\mathfrak{sl}(W_{k})\), the latter of which is contained in \(W^{\lambda}(\mathfrak{g}_{n-1}^{\prime})_{\overline{1}}\subseteq W^{\lambda}(\mathfrak{g}_{n})\) by (4.4.1). By (4.4.2), the summands \(\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{\mu_{s},\overline{1}})\) and \(\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\) are also contained in \(W^{\lambda}(\mathfrak{g}_{n})\). The summands \(\operatorname{End}(W_{\mu_{s},\overline{0}})\) and \(\operatorname{End}(W_{\mu_{s},\overline{1}})\) in (4.4.8) are each direct sums of a one-dimensional trivial \(\mathfrak{h}\)-module and a copy of the adjoint representation for \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\) and \(\mathfrak{sl}(W_{\mu_{s},\overline{1}})\), respectively. The remaining nonzero summands in (4.4.8) are each nontrivial irreducible \(\mathfrak{h}\)-modules. 
Overall, the non-trivial irreducible \(\mathfrak{h}\)-modules that occur in \(\operatorname{End}(W^{\lambda})_{\overline{1}}^{J}\) each do so with multiplicity one. #### 4.4.5. If \(k\), \(\ell\), and \(s\) are distinct, and if \(W^{k}\) and \(W^{\ell}\) are both nonzero, then one can argue as in the last two paragraphs of Section 4.3 to show first that \(W^{\lambda}(s_{n-1})\) has nonzero components in the irreducible \(\mathfrak{h}\)-module summands \(\operatorname{Hom}(W^{k},W^{\ell})_{\overline{1}}^{J}\) and \(\operatorname{Hom}(W^{\ell},W^{k})_{\overline{1}}^{J}\) of \(\operatorname{End}(W^{\lambda})_{\overline{1}}^{J}\), and then to deduce that these summands must both be contained in \(W^{\lambda}(\mathfrak{g}_{n})\). #### 4.4.6. The case \(W^{s}=0\) If \(W^{s}=0\), then the previous paragraph together with our observations in Section 4.4.4 imply that each non-trivial \(\mathfrak{h}\)-module constituent of \(\operatorname{End}(W^{\lambda})_{\overline{1}}^{J}\) is contained in \(W^{\lambda}(\mathfrak{g}_{n})\). Identifying \(\mathfrak{sq}(W^{\lambda})\) with supermatrices as in (4.1.2), it implies that \[\left\{\left[\begin{array}{c|c}0&B\\ \hline B&0\end{array}\right]:\text{the diagonal entries of $B$ are all zero}\right\} \tag{4.4.9}\] is contained in \(W^{\lambda}(\mathfrak{g}_{n})\). More precisely, the direct sum decomposition (4.4.8) induces a block decomposition of the matrix \(B\) such that the diagonal blocks correspond to \(\oplus_{k\in\mathbb{Z}}\operatorname{End}(W_{k})\), and we deduce that \(W^{\lambda}(\mathfrak{g}_{n})\) contains the (larger) set of all matrices of the form \([\begin{smallmatrix}0&B\\ B&0\end{smallmatrix}]\) such that these diagonal blocks each individually have trace zero. By Lemma 3.1.5 and the assumption that \(W^{k}\neq 0\) for more than one value of \(k\), we have \(\dim(W^{\lambda})\geq 2n-2\geq 10\). Then the inclusion \(\mathfrak{sq}(W^{\lambda})\subseteq W^{\lambda}(\mathfrak{g}_{n})\) in the case where \(W^{s}=0\) is obtained from the following lemma: **Lemma 4.4.2**.: _If \(m\geq 5\), then \(\mathfrak{sq}(m)\) is generated as a Lie superalgebra by the identity matrix \(I_{m|m}\) and the set (4.4.9)._ Proof.: Let \(\mathfrak{g}\subseteq\mathfrak{sq}(m)\) be the Lie superalgebra generated by the identity matrix \(I_{m|m}\) and the set (4.4.9). First show that all simple root vectors in \(\mathfrak{sl}(m)\subseteq\mathfrak{gl}(m)\cong\mathfrak{sq}(m)_{\overline{0}}\) are elements of \(\mathfrak{g}\) by taking Lie brackets between root vectors in the set (4.4.9). The simple root vectors generate \(\mathfrak{sl}(m)\) as a Lie algebra, and together with the identity matrix they generate all of \(\mathfrak{gl}(m)\cong\mathfrak{sq}(m)_{\overline{0}}\). Then \(\mathfrak{sq}(m)_{\overline{0}}\subseteq\mathfrak{g}\). Now since \(\mathfrak{sq}(m)_{\overline{1}}\) is irreducible under the adjoint action of \(\mathfrak{sq}(m)_{\overline{0}}\), one deduces that \(\mathfrak{sq}(m)_{\overline{1}}\subseteq\mathfrak{g}\) as well, and hence \(\mathfrak{g}=\mathfrak{sq}(m)\). #### 4.4.7. The case \(W^{s}\neq 0\) Now suppose that \(k\neq s\) and that \(W^{k}\) and \(W^{s}\) are both nonzero. Let \(B_{k}\) and \(B_{s}\) be the removable boxes of contents \(k\) and \(s\) in the Young diagram of \(\lambda\). Recall that \(s\neq 0\), because \(\lambda\) is not symmetric. Since \(B_{k}\) and \(B_{s}\) are both removable, we can remove \(B_{s}\) and then \(B_{k}\), showing that \(\mu_{s}\) has removable box of content \(k\). 
Then by symmetry, \(\mu_{s}\) must have a removable box \(B_{-k}\) of content \(-k\). (It may happen that \(k=0\), in which case \(B_{-k}=B_{k}\).) This implies that \(-k\neq s\), because a new box of content \(s\) would be removable from \(\mu_{s}\) only if there had originally been boxes both immediately above and immediately to the left of \(B_{s}\) in the Young diagram of \(\lambda\), and if both of those boxes had already been removed. In other words, it requires at least two intermediate steps to remove two boxes of the same content from \(\lambda\). Thus the Young diagram of \(\lambda\) must have removable boxes of contents \(s\), \(k\), and \(-k\) (the latter two being the same box, if \(k=0\)), implying that \(|s-k|\geq 2\) and \(|s+k|\geq 2\), and implying that \(W^{s}\), \(W^{k}\), and \(W^{-k}\) are nonzero. We want to show that \(W^{\lambda}(s_{n-1})\) has nonzero components in each of the terms in the second line of (4.4.8). We will do this simultaneously for the indices \(k\) and \(-k\). Since \(\lambda\) has removable boxes of contents \(s\), \(k\), and \(-k\), we can argue as before to deduce that \(\mathcal{W}(\lambda)\) contains four (distinct) weights of the form \[\alpha=(\alpha^{\prime\prime},k,s),\qquad\alpha^{\prime}=(-\alpha^{\prime\prime},-k,s),\] \[\beta=(\alpha^{\prime\prime},s,k),\qquad\beta^{\prime}=(-\alpha^{\prime\prime},s,-k).\] Then \(\gamma:=(\alpha^{\prime\prime},k)\) and \(-\gamma=(-\alpha^{\prime\prime},-k)\) are elements of \(\mathcal{W}(\mu_{s})\). We may assume that \(k\geq 0\) and that \(\gamma\in\overline{\mathcal{W}}(\mu_{s})\), i.e., \(\gamma\) is the 'positive' element of the pair \(\pm\gamma\). As in (3.4.1), \(S^{\mu_{s}}\) is a \(\mathbb{C}S_{n-1}\)-module summand of \(S^{\lambda}\), and under this identification the weight vectors \(v_{\alpha},v_{\alpha^{\prime}}\in S^{\lambda}\) restrict to a pair of weight vectors \(u_{\gamma},u_{-\gamma}\in S^{\mu_{s}}\). (We use the letter \(u\) rather than \(v\) to indicate when we are considering a vector's restriction to the \(\mathbb{C}S_{n-1}\)-module \(S^{\mu_{s}}\).) Rescaling the vector \(v_{\alpha^{\prime}}=u_{-\gamma}\) if necessary, we may assume that \(u_{-\gamma}=\phi^{\mu}(u_{\gamma})\) as in (3.3.1). 
Then in the decomposition \(S^{\mu_{s}}=S^{\mu_{s}^{+}}\oplus S^{\mu_{s}^{-}}\), one has \(u_{\gamma}=u_{\gamma}^{+}+u_{\gamma}^{-}\) and \(u_{-\gamma}=u_{\gamma}^{+}-u_{\gamma}^{-}\), where \[u_{\gamma}^{+}=\tfrac{1}{2}\left(u_{\gamma}+u_{-\gamma}\right)=\tfrac{1}{2}\left(v_{\alpha}+v_{\alpha^{\prime}}\right)\quad\text{and}\quad u_{\gamma}^{-}=\tfrac{1}{2}\left(u_{\gamma}-u_{-\gamma}\right)=\tfrac{1}{2}\left(v_{\alpha}-v_{\alpha^{\prime}}\right).\] Then under the \(\mathbb{C}S_{n-1}\)-supermodule decomposition \(W^{s}\cong W^{\mu}\oplus\Pi(W^{\mu})\), one has \[W_{\overline{0}}^{\mu}\ni u_{\gamma}^{+}+\phi^{\lambda}(u_{\gamma}^{+})=\tfrac{1}{2}\left(v_{\alpha}+v_{\alpha^{\prime}}\right)+\tfrac{1}{2}\left(v_{-\alpha}+v_{-\alpha^{\prime}}\right)=v_{\alpha}^{+}+v_{\alpha^{\prime}}^{+},\] \[W_{\overline{1}}^{\mu}\ni u_{\gamma}^{-}-\phi^{\lambda}(u_{\gamma}^{-})=\tfrac{1}{2}\left(v_{\alpha}-v_{\alpha^{\prime}}\right)-\tfrac{1}{2}\left(v_{-\alpha}-v_{-\alpha^{\prime}}\right)=v_{\alpha}^{-}-v_{\alpha^{\prime}}^{-},\] \[\Pi(W_{\overline{0}}^{\mu})\ni u_{\gamma}^{+}-\phi^{\lambda}(u_{\gamma}^{+})=\tfrac{1}{2}\left(v_{\alpha}+v_{\alpha^{\prime}}\right)-\tfrac{1}{2}\left(v_{-\alpha}+v_{-\alpha^{\prime}}\right)=v_{\alpha}^{-}+v_{\alpha^{\prime}}^{-},\] \[\Pi(W_{\overline{1}}^{\mu})\ni u_{\gamma}^{-}+\phi^{\lambda}(u_{\gamma}^{-})=\tfrac{1}{2}\left(v_{\alpha}-v_{\alpha^{\prime}}\right)+\tfrac{1}{2}\left(v_{-\alpha}-v_{-\alpha^{\prime}}\right)=v_{\alpha}^{+}-v_{\alpha^{\prime}}^{+}.\] Set \(c=(s-k)^{-1}\) and \(d=(s+k)^{-1}\). Let \(w_{\beta}=(s_{n-1}-c)\cdot v_{\alpha}\), let \(w_{\beta^{\prime}}=(s_{n-1}-d)\cdot v_{\alpha^{\prime}}\), and let the auxiliary vectors \(w_{\beta}^{+},w_{\beta}^{-}\in W^{k}\) and \(w_{\beta^{\prime}}^{+},w_{\beta^{\prime}}^{-}\in W^{-k}\) be defined as in Proposition 3.3.1(2). Then \(W^{\lambda}(s_{n-1})\) leaves invariant the span of the homogeneous vectors \[w_{\beta}^{+},\ w_{\beta^{\prime}}^{+},\ u_{\gamma}^{+}+\phi^{\lambda}(u_{\gamma}^{+}),\ u_{\gamma}^{-}+\phi^{\lambda}(u_{\gamma}^{-}),\ w_{\beta}^{-},\ w_{\beta^{\prime}}^{-},\ u_{\gamma}^{+}-\phi^{\lambda}(u_{\gamma}^{+}),\ u_{\gamma}^{-}-\phi^{\lambda}(u_{\gamma}^{-}),\] and acts in this homogeneous basis via the matrix \[\left[\begin{array}{cccc|cccc}0&0&0&0&-c&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&0&0&0&-d&\frac{1}{2}&-\frac{1}{2}\\ 0&0&0&0&1-c^{2}&1-d^{2}&\frac{c+d}{2}&\frac{c-d}{2}\\ 0&0&0&0&1-c^{2}&d^{2}-1&\frac{c-d}{2}&\frac{c+d}{2}\\ \hline-c&0&\frac{1}{2}&\frac{1}{2}&0&0&0&0\\ 0&-d&\frac{1}{2}&-\frac{1}{2}&0&0&0&0\\ 1-c^{2}&1-d^{2}&\frac{c+d}{2}&\frac{c-d}{2}&0&0&0&0\\ 1-c^{2}&d^{2}-1&\frac{c-d}{2}&\frac{c+d}{2}&0&0&0&0\end{array}\right]. \tag{4.4.10}\] This shows (for both \(k\) and \(-k\)) that \(W^{\lambda}(s_{n-1})\) has nonzero components in each of the terms in the second line of (4.4.8). Then by the semisimplicity of \(\mathfrak{h}\), and by the fact that all nontrivial irreducible \(\mathfrak{h}\)-module summands in \(\operatorname{End}(W^{\lambda})^{J}_{\overline{1}}\) occur with multiplicity one, we conclude that each of the summands in the second line of (4.4.8) must be contained in \(W^{\lambda}(\mathfrak{g}_{n})\). #### 4.4.8. 
The case \(W^{s}\neq 0\), concluded. Now the only nontrivial \(\mathfrak{h}\)-module constituents of (4.4.8) that we have not yet shown are contained in \(W^{\lambda}(\mathfrak{g}_{n})\) are the copies of the adjoint representations of \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\) and \(\mathfrak{sl}(W_{\mu_{s},\overline{1}})\) in \(\operatorname{End}(W_{\mu_{s},\overline{0}})\) and \(\operatorname{End}(W_{\mu_{s},\overline{1}})\), respectively. Once we show that these irreducible constituents are contained in \(W^{\lambda}(\mathfrak{g}_{n})\), we can then argue as in Section 4.4.6, using Lemma 4.4.2, to conclude that \(\mathfrak{sq}(W^{\lambda})\subseteq W^{\lambda}(\mathfrak{g}_{n})\). We know that \(W^{\lambda}(\mathfrak{g}_{n})\) contains the terms \(\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{k})\) and \(\operatorname{Hom}(W_{k},W_{\mu_{s},\overline{1}})\) from (4.4.8). Like all of the terms in (4.4.8), these two terms are concentrated in odd superdegree. We also know from (4.4.2) that \(W^{s}(\mathfrak{g}_{n-1}^{\prime})\subset W^{\lambda}(\mathfrak{g}_{n})\) contains a copy of \(\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\), also concentrated in odd superdegree, equal to the image of the diagonal map \[\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}})\to\operatorname{Hom}(W_{\overline{1}}^{\mu_{s}},W_{\overline{0}}^{\mu_{s}})\oplus\operatorname{Hom}(\Pi(W_{\overline{1}}^{\mu_{s}}),\Pi(W_{\overline{0}}^{\mu_{s}})).\] By Remark 4.4.1 and Lemma 3.1.5, we know that \(W_{k}\), \(W_{\mu_{s},\overline{0}}\), and \(W_{\mu_{s},\overline{1}}\) are each at least \(3\)-dimensional. Now one can choose appropriate 'matrix units' \[x\in\operatorname{Hom}(W_{\mu_{s},\overline{1}},W_{\mu_{s},\overline{0}}),\qquad y\in\operatorname{Hom}(W_{k},W_{\mu_{s},\overline{1}}),\qquad z\in\operatorname{Hom}(W_{\mu_{s},\overline{0}},W_{k})\] such that the Lie bracket \([x,[y,z]]\) is a nonzero element of \(W^{\lambda}(\mathfrak{g}_{n})\) lying in the subspace \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\subset\operatorname{End}(W_{\mu_{s},\overline{0}})\subset\operatorname{End}(W^{s})^{J}_{\overline{1}}\). Then by the irreducibility of the adjoint representation \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\), the \(\mathfrak{h}\)-submodule of \(W^{\lambda}(\mathfrak{g}_{n})\) generated by \([x,[y,z]]\) must be equal to all of \(\mathfrak{sl}(W_{\mu_{s},\overline{0}})\). Similarly, one can show that \(W^{\lambda}(\mathfrak{g}_{n})\) contains the subspace \(\mathfrak{sl}(W_{\mu_{s},\overline{1}})\subset\operatorname{End}(W_{\mu_{s},\overline{1}})\subset\operatorname{End}(W^{s})^{J}_{\overline{1}}\). ### First consequence of Theorem 4.2.1 **Corollary 4.5.1**.: _Let \(n\geq 2\), and set \(\mathfrak{g}=\mathfrak{g}_{n}\)._ 1. _For each_ \(\lambda\in E_{n}\cup F_{n}\)_, the supermodule_ \(W^{\lambda}\) _is semisimple as a_ \(\mathfrak{g}_{\overline{0}}\)_-module._ 2. 
\(\mathfrak{g}_{\overline{0}}\) _is a reductive Lie algebra._ _In particular, \(\mathfrak{g}_{\overline{0}}=Z(\mathfrak{g}_{\overline{0}})\oplus\mathfrak{D}( \mathfrak{g}_{\overline{0}})\), where \(Z(\mathfrak{g}_{\overline{0}})\) is the center of \(\mathfrak{g}_{\overline{0}}\), and \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\) is semisimple._ Proof.: By Theorem 4.2.1, \[W^{\lambda}(\mathfrak{g}_{\overline{0}})=W^{\lambda}(\mathfrak{g})_{\overline{0} }=\begin{cases}\mathfrak{sq}(W^{\lambda})_{\overline{0}}&\text{if }\lambda\in E_{n},\\ \mathfrak{sl}(W^{\lambda})_{\overline{0}}&\text{if }\lambda\in F_{n}.\end{cases}\] In either case, this implies that \(W^{\lambda}_{\overline{0}}\) and \(W^{\lambda}_{\overline{1}}\) are irreducible \(\mathfrak{g}_{\overline{0}}\)-modules, and hence \(W^{\lambda}\) is semisimple. Now \(\bigoplus_{\lambda\in E_{n}\cup F_{n}}W^{\lambda}\) is a faithful, finite-dimensional, semisimple \(\mathfrak{g}_{\overline{0}}\)-module. Then \(\mathfrak{g}_{\overline{0}}\) is reductive, \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\) is semisimple, and \(\mathfrak{g}_{\overline{0}}=Z(\mathfrak{g}_{\overline{0}})\oplus\mathfrak{D} (\mathfrak{g}_{\overline{0}})\), by Proposition 5 of [1, Chapter I, SS6, no. 4]. **Remark 4.5.2**.: \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\neq\mathfrak{D}(\mathfrak{g})_{ \overline{0}}\)_._ ### Detecting isomorphisms of supermodules **Lemma 4.6.1**.: _Let \(n\geq 5\), set \(\mathfrak{g}=\mathfrak{g}_{n}\), and let \(W\) be a finite-dimensional \(\mathbb{C}A_{n}\)-module. Then the following are equivalent for a subspace \(V\subseteq W\):_ 1. \(V\) _is a_ \(\mathbb{C}A_{n}\)_-submodule._ 2. \(V\) _is a submodule for the action of the Lie subalgebra_ \(\mathfrak{g}_{\overline{0}}\subseteq\mathbb{C}A_{n}\)_._ 3. \(V\) _is a submodule for the action of the Lie subalgebra_ \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\subseteq\mathbb{C}A_{n}\)_._ Proof.: The fact that (1) and (2) are equivalent is immediate from the observation in the proof of Lemma 4.1.6 that for \(n\geq 5\), the Lie algebra \(\mathfrak{g}_{\overline{0}}\) contains a set of associative algebra generators for \(\mathbb{C}A_{n}\). The fact that (2) implies (3) is evident. We will show that (3) implies (2). First, since \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\) is semisimple by Corollary 4.5.1, the \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\)-submodule \(V\) can be written as a direct sum of irreducible \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\)-submodules. Without loss of generality, we may thus assume that \(V\) itself is irreducible. Then by Schur's Lemma, each element \(z\in Z(\mathfrak{g}_{\overline{0}})\) acts on \(V\) as a scalar multiple of the identity. Since \(\mathfrak{g}_{\overline{0}}=Z(\mathfrak{g}_{\overline{0}})\oplus\mathfrak{D} (\mathfrak{g}_{\overline{0}})\), this implies that \(V\) is closed under the action of \(\mathfrak{g}_{\overline{0}}\). **Proposition 4.6.2**.: _Let \(n\geq 5\), set \(\mathfrak{g}=\mathfrak{g}_{n}\), and let \(V_{1}\) and \(V_{2}\) be two irreducible \(\mathbb{C}A_{n}\)-modules. Then the following are equivalent:_ 1. \(V_{1}\) _and_ \(V_{2}\) _are isomorphic as_ \(\mathbb{C}A_{n}\)_-modules._ 2. \(V_{1}\) _and_ \(V_{2}\) _are isomorphic as modules over the Lie subalgebra_ \(\mathfrak{g}_{\overline{0}}\subseteq\mathbb{C}A_{n}\)_._ 3. 
\(V_{1}\) _and_ \(V_{2}\) _are isomorphic as modules over the Lie subalgebra_ \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\subseteq\mathbb{C}A_{n}\)_._ Proof.: Our argument is an adaptation of the proof of [11, Proposition 2]. The fact that (1) implies (2), and that (2) implies (3), is evident. We will show that (3) implies (1). For \(n\geq 5\), the trivial module is the unique one-dimensional \(\mathbb{C}A_{n}\)-module (cf. [8, Theorem 2.5.15]), so we may assume that \(V_{1}\) and \(V_{2}\) are each of dimension at least \(2\). Suppose \(\phi:V_{1}\to V_{2}\) is an isomorphism of \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\)-modules, and let \(\rho_{1}:\mathbb{C}A_{n}\to\operatorname{End}(V_{1})\) and \(\rho_{2}:\mathbb{C}A_{n}\to\operatorname{End}(V_{2})\) be the structure maps for \(V_{1}\) and \(V_{2}\), respectively. Then for all \(x\in\mathfrak{D}(\mathfrak{g}_{\overline{0}})\), one has \(\phi\circ\rho_{1}(x)=\rho_{2}(x)\circ\phi\), or equivalently, \(\rho_{2}(x)=\phi\circ\rho_{1}(x)\circ\phi^{-1}\). Let \(s=(i,j)(k,\ell)\) be a generator of \(A_{n}\) from the set (4.1.8), and set \(T=p(s)=\frac{2}{n!}\sum_{\sigma\in A_{n}}\sigma s\sigma^{-1}\). The elements of the set (4.1.8) form a single conjugacy class in \(A_{n}\) (because the cycle type does not consist of distinct odd integers), so \(T\) is independent of the particular choice of \(s\). Since \(T\) is central in \(\mathbb{C}A_{n}\), Schur's Lemma implies that \(\rho_{1}(T)=c_{1}\operatorname{id}_{V_{1}}\) and \(\rho_{2}(T)=c_{2}\operatorname{id}_{V_{2}}\) for some scalars \(c_{1},c_{2}\in\mathbb{C}\), which also do not depend on the choice of \(s\). We have \(p(s-T)=p(s)-p(T)=T-T=0\), so \(s-T\in\mathfrak{D}(\mathfrak{g}_{\overline{0}})\) by Corollary 4.5.1 and Lemma 4.1.6. Then \[\rho_{2}(s)-c_{2}\operatorname{id}_{V_{2}}=\rho_{2}(s-T)=\phi\circ\rho_{1}(s-T)\circ\phi^{-1}=\phi\circ\rho_{1}(s)\circ\phi^{-1}-\phi\circ(c_{1}\operatorname{id}_{V_{1}})\circ\phi^{-1}=\phi\circ\rho_{1}(s)\circ\phi^{-1}-c_{1}\operatorname{id}_{V_{2}},\] or equivalently, \[\rho_{2}(s)=\phi\circ\rho_{1}(s)\circ\phi^{-1}+\omega\cdot\operatorname{id}_{V_{2}}, \tag{4.6.1}\] where \(\omega=c_{2}-c_{1}\). Squaring both sides of (4.6.1), and using the fact that \(s^{2}=1\), we get \[\operatorname{id}_{V_{2}}=\operatorname{id}_{V_{2}}+2\omega\cdot\phi\circ\rho_{1}(s)\circ\phi^{-1}+\omega^{2}\cdot\operatorname{id}_{V_{2}},\] or equivalently, \(\omega^{2}\cdot\operatorname{id}_{V_{2}}=-2\omega\cdot\phi\circ\rho_{1}(s)\circ\phi^{-1}\). The scalar \(\omega\) does not depend on the choice of \(s\), so if \(\omega\neq 0\), we would deduce first for all \(s\) in the set (4.1.8), and then for all \(s\in A_{n}\) by multiplicativity, that \(\rho_{1}(s)\) is equal to a nonzero scalar multiple of \(\operatorname{id}_{V_{1}}\). Since \(\dim(V_{1})\geq 2\) by assumption, this would contradict the irreducibility of \(V_{1}\). Then \(\omega=0\), and (4.6.1) implies first for all \(s\) in the set (4.1.8), and then for all \(s\in A_{n}\) by multiplicativity, that \(\phi\circ\rho_{1}(s)=\rho_{2}(s)\circ\phi\); that is, \(\phi\) is an isomorphism of \(\mathbb{C}A_{n}\)-modules. **Corollary 4.6.3**.: _Let \(n\geq 5\), set \(\mathfrak{g}=\mathfrak{g}_{n}\), and let \(V_{1}\) and \(V_{2}\) be two finite-dimensional \(\mathbb{C}S_{n}\)-supermodules. Then the following statements (in which 'isomorphic' is taken to mean 'isomorphic via a homogeneous homomorphism') are equivalent:_ 1. 
\(V_{1}\) _and_ \(V_{2}\) _are isomorphic as_ \(\mathbb{C}S_{n}\)_-supermodules._ 2. \(V_{1}\) _and_ \(V_{2}\) _are isomorphic as_ \(\mathbb{C}A_{n}\)_-supermodules._ 3. \(V_{1}\) _and_ \(V_{2}\) _are isomorphic as supermodules over the Lie subalgebra_ \(\mathfrak{g}_{\overline{0}}\subseteq\mathbb{C}A_{n}\)_._ 4. \(V_{1}\) _and_ \(V_{2}\) _are isomorphic as supermodules over the Lie subalgebra_ \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\subseteq\mathbb{C}A_{n}\)_._ Proof.: By semisimplicity of the superalgebra \(\mathbb{C}S_{n}\), each finite-dimensional \(\mathbb{C}S_{n}\)-supermodule is a direct sum of irreducible supermodules, and two finite-dimensional supermodules are isomorphic if and only if their collections of irreducible factors are the same (up to overall parity change). The classification of the irreducible \(\mathbb{C}S_{n}\)-supermodules in Section 3.1 shows that each irreducible supermodule \(W^{\lambda}\) is determined by its restriction to \(\mathbb{C}A_{n}\) (and for the absolutely irreducible modules, by the homogeneous degrees in which the irreducible \(|\mathbb{C}A_{n}|\)-constituents are concentrated). Thus (1) and (2) are equivalent, while (2) evidently implies (3), and (3) evidently implies (4). To show that (4) implies (2), one can first reduce to the even (resp. odd) subspaces of \(V_{1}\) and \(V_{2}\), which are submodules for each of \(\mathbb{C}A_{n}\) and \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\). Then apply the semisimplicity of \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\), Lemma 4.6.1, and Proposition 4.6.2. ### Structure of \(\mathfrak{g}_{n}\) In this section let \(n\geq 2\) and let \(\mathfrak{g}=\mathfrak{g}_{n}\). By Corollary 4.5.1, the Lie algebra \(\mathfrak{g}_{\overline{0}}\) is reductive and its derived subalgebra \(\mathfrak{D}:=\mathfrak{D}(\mathfrak{g}_{\overline{0}})\) is semisimple. For \(\lambda\in E_{n}\), make the identification \(W^{\lambda}_{\overline{0}}\simeq W^{\lambda}_{\overline{1}}\) via the odd involution \(J^{\lambda}:W^{\lambda}\to W^{\lambda}\) and write \(W_{\lambda}\) for the common identified space, as in Section 4.3.2. Then by Theorem 4.2.1, one has \[W^{\lambda}(\mathfrak{D})=\mathfrak{D}(W^{\lambda}(\mathfrak{g})_{\overline {0}})=\begin{cases}\mathfrak{sl}(W_{\lambda})&\text{if }\lambda\in E_{n},\\ \mathfrak{sl}(W^{\lambda}_{\overline{0}})\oplus\mathfrak{sl}(W^{\lambda}_{ \overline{1}})&\text{if }\lambda\in F_{n},\end{cases} \tag{4.7.1}\] where \(\mathfrak{sl}(W_{\lambda})\) denotes the diagonally embedded copy of \(\mathfrak{sl}(W_{\lambda})\) in \(\mathfrak{gl}(W^{\lambda}_{\overline{0}})\oplus\mathfrak{gl}(W^{\lambda}_{ \overline{1}})\), as in (4.3.3). The homogeneous subspaces of \(W^{\lambda}\) are submodules for the action of \(\mathbb{C}A_{n}\), and hence also for the action of \(\mathfrak{D}\); if \(\lambda\in E_{n}\) these submodules are both isomorphic to the irreducible \(|\mathbb{C}A_{n}|\)-module \(S^{\lambda}\), while if \(\lambda\in F_{n}\) they are isomorphic to the irreducible \(|\mathbb{C}A_{n}|\)-modules \(S^{\lambda^{+}}\) and \(S^{\lambda^{-}}\). For \(\lambda\in\overline{\mathcal{P}}(n)\), let \(\rho_{\lambda}:\mathfrak{D}\to\operatorname{End}(W^{\lambda})_{\overline{0}}\) be the \(\mathfrak{D}\)-module structure map, let \(\mathfrak{D}_{\lambda}=\ker(\rho_{\lambda})\), and let \(\mathfrak{D}^{\lambda}\) be the orthogonal complement of \(\mathfrak{D}_{\lambda}\) with respect to the Killing form on \(\mathfrak{D}\). 
Then \(\mathfrak{D}^{\lambda}\) is an ideal in \(\mathfrak{D}\), \(\mathfrak{D}=\mathfrak{D}^{\lambda}\oplus\mathfrak{D}_{\lambda}\) as a Lie algebra, and \(\rho_{\lambda}\) induces a Lie algebra isomorphism \(\mathfrak{D}^{\lambda}\cong W^{\lambda}(\mathfrak{D})\). Thus \(\mathfrak{D}^{\lambda}\) is a simple ideal in \(\mathfrak{D}\) (if \(\lambda\in E_{n}\)), or is uniquely expressible as a direct sum of two simple ideals in \(\mathfrak{D}\) (if \(\lambda\in F_{n}\)). **Lemma 4.7.1**.: _Let \(n\geq 5\), and let \(\lambda,\mu\in\overline{\mathcal{P}}(n)\). If \(\mathfrak{D}^{\lambda}\cap\mathfrak{D}^{\mu}\neq 0\), then \(\lambda=\mu\)._ Proof.: Let \(\mathfrak{a}=\mathfrak{D}^{\lambda}\cap\mathfrak{D}^{\mu}\), and suppose \(\mathfrak{a}\neq 0\). There are several cases to consider: 1. \(\lambda,\mu\in E_{n}\). Then \(\mathfrak{D}^{\lambda}=\mathfrak{a}=\mathfrak{D}^{\mu}\), and the maps \(\rho_{\lambda}\) and \(\rho_{\mu}\) induce isomorphisms \(\mathfrak{a}\cong W^{\lambda}(\mathfrak{D})\) and \(\mathfrak{a}\cong W^{\mu}(\mathfrak{D})\). 2. \(\lambda\in E_{n}\) and \(\mu\in F_{n}\). Then \(\mathfrak{D}^{\lambda}=\mathfrak{a}\), and \(\mathfrak{a}\) is one of the two simple ideals that comprise \(\mathfrak{D}^{\mu}\). The map \(\rho_{\lambda}\) induces an isomorphism \(\mathfrak{a}\cong W^{\lambda}(\mathfrak{D})\), while the map \(\rho_{\mu}\) maps \(\mathfrak{a}\) isomorphically into (precisely) one of the summands \(\mathfrak{sl}(W^{\mu}_{\overline{0}})\) or \(\mathfrak{sl}(W^{\mu}_{\overline{1}})\) of \(W^{\mu}(\mathfrak{D})\). (There is also the symmetric case \(\lambda\in F_{n}\) and \(\mu\in E_{n}\), which we omit.) 3. \(\lambda,\mu\in F_{n}\) and \(\mathfrak{a}\) is a simple ideal. Then \(\mathfrak{a}\) is one of the two simple ideals that comprise \(\mathfrak{D}^{\lambda}\), and also for \(\mathfrak{D}^{\mu}\). The map \(\rho_{\lambda}\) sends \(\mathfrak{a}\) isomorphically into (precisely) one of the summands \(\mathfrak{sl}(W^{\lambda}_{\overline{0}})\) or \(\mathfrak{sl}(W^{\lambda}_{\overline{1}})\) of \(W^{\lambda}(\mathfrak{D})\), and similarly for \(\rho_{\mu}\). 4. \(\lambda,\mu\in F_{n}\) and \(\mathfrak{D}^{\lambda}=\mathfrak{a}=\mathfrak{D}^{\mu}\). Then the maps \(\rho_{\lambda}\) and \(\rho_{\mu}\) induce isomorphisms \(\mathfrak{a}\cong W^{\lambda}(\mathfrak{D})\) and \(\mathfrak{a}\cong W^{\mu}(\mathfrak{D})\). In each of the four cases, one must have \(\dim(W^{\lambda})=\dim(W^{\mu})\). Then fixing homogeneous bases for \(W^{\lambda}\) and \(W^{\mu}\), we can make the identifications \(W^{\lambda}=\mathbb{C}^{m|m}=W^{\mu}\) for some \(m\in\mathbb{N}\), and we can interpret \(\rho_{\lambda}\) and \(\rho_{\mu}\) as Lie algebra homomorphisms \[\mathfrak{D}\to\operatorname{End}(\mathbb{C}^{m|m})_{\overline{0}}= \operatorname{End}(\mathbb{C}^{m|0})\oplus\operatorname{End}(\mathbb{C}^{0|m })=\operatorname{End}(\mathbb{C}^{m})\oplus\operatorname{End}(\mathbb{C}^{m}).\] First consider case (1), in which \(\lambda,\mu\in E_{n}\). Then the images of \(\rho_{\lambda}\) and \(\rho_{\mu}\) are each equal to the diagonal copy of \(\mathfrak{sl}(\mathbb{C}^{m})\) in \(\operatorname{End}(\mathbb{C}^{m|m})_{\overline{0}}\), and the composite induced map \[\mathfrak{sl}(\mathbb{C}^{m})\stackrel{{\rho_{\lambda}^{-1}}}{{ \longrightarrow}}\mathfrak{a}\stackrel{{\rho_{\mu}}}{{ \longrightarrow}}\mathfrak{sl}(\mathbb{C}^{m})\] is a Lie algebra automorphism. Lie algebra automorphisms of \(\mathfrak{sl}(\mathbb{C}^{m})\) come in two forms: 1. 
\(X\mapsto gXg^{-1}\) for some \(g\in GL(\mathbb{C}^{m})\), or 2. \(X\mapsto-(gX^{t}g^{-1})\) for some \(g\in GL(\mathbb{C}^{m})\), where \(X^{t}\) denotes the transpose of \(X\). If \(\rho_{\mu}\circ\rho_{\lambda}^{-1}\) is of the first form, then the \(\mathfrak{D}\)-module structures on \(\mathbb{C}^{m|m}\) afforded by \(\rho_{\lambda}\) and \(\rho_{\mu}\) are isomorphic, i.e., \(W^{\lambda}\cong W^{\mu}\) as \(\mathfrak{D}\)-modules. If \(\rho_{\mu}\circ\rho_{\lambda}^{-1}\) is of the second form, then the \(\mathfrak{D}\)-module structure on \(\mathbb{C}^{m|m}\) afforded by \(\rho_{\lambda}\) is isomorphic to the dual of the \(\mathfrak{D}\)-module structure afforded by \(\rho_{\mu}\), i.e., \(W^{\lambda}\cong(W^{\mu})^{*}\) as \(\mathfrak{D}\)-modules. Applying Corollary 4.6.3 and Remark 3.1.4, this implies that \(W^{\lambda}\cong W^{\mu}\) as \(\mathbb{C}S_{n}\)-supermodules, and hence \(\lambda=\mu\). The reasoning for the other cases proceeds similarly. For example, in cases (2) and (3), one deduces that one of the \(\mathfrak{D}\)-module composition factors in \(W^{\mu}\) is isomorphic to one (resp. both, if \(\lambda\in E_{n}\)) of the \(\mathfrak{D}\)-module composition factors (or their duals) in \(W^{\lambda}\). In case (4), one deduces that (both of) the \(\mathfrak{D}\)-module composition factors \(W^{\mu}\) are isomorphic to the \(\mathfrak{D}\)-module composition factors in \(W^{\lambda}\), perhaps up to duals and parity change. In any case, Lemma 4.6.1 and Proposition 4.6.2 then imply that as \(|\mathbb{C}A_{n}|\)-modules, \(W^{\lambda}\) and \(W^{\mu}\) have irreducible constituents in common, which is only possible if \(\lambda=\mu\). (In particular, cases (2) and (3) are impossible.) **Proposition 4.7.2**.: _Let \(n\geq 5\). Then \(\mathfrak{D}=\bigoplus_{\lambda\in\overline{\mathcal{P}}(n)}\mathfrak{D}^{\lambda}\)._ Proof.: The sum \(\sum_{\lambda\in\overline{\mathcal{P}}(n)}\mathfrak{D}^{\lambda}\) is a direct sum as a consequence of Lemma 4.7.1, and the sum is equal to all of \(\mathfrak{D}\) as a consequence of the module structure map \(\mathfrak{D}\to\bigoplus_{\lambda\in\overline{\mathcal{P}}(n)}\operatorname{ End}(W^{\lambda})\), \(\sigma\mapsto\bigoplus_{\lambda\in\overline{\mathcal{P}}(n)}W^{\lambda}(\sigma)\), being faithful. **Theorem 4.7.3**.: _Let \(n\geq 2\). Then \(\mathfrak{g}_{n}=\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n}\)._ Proof.: The theorem is true for \(n\in\{2,3,4,5\}\) by Lemma 4.1.4, so we may assume that \(n\geq 6\). We observed previously in (4.1.7) that \(\mathfrak{g}_{n}\subseteq\mathfrak{D}(\mathbb{C}S_{n})+\mathbb{C}\cdot T_{n}\) and that \(T_{n}\in\mathfrak{g}_{n}\), so we just need to show that \(\mathfrak{D}(\mathbb{C}S_{n})\subseteq\mathfrak{g}_{n}\). Henceforward in this proof, we will let \(\mathfrak{g}=\mathfrak{g}_{n}\), and we will identify \(\mathbb{C}S_{n}\) with its image under the superalgebra isomorphism of Corollary 3.1.6. First we will show that \(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}\subseteq\mathfrak{g}\). 
By (4.1.4), one has \[\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}=\Big{[}\bigoplus_{\lambda\in E_{ n}}\mathfrak{sq}(W^{\lambda})_{\overline{1}}\Big{]}\oplus\Big{[}\bigoplus_{ \lambda\in F_{n}}\mathfrak{sl}(W^{\lambda})_{\overline{1}}\Big{]},\] and by Proposition 4.7.2, one has, with notation as in (4.7.1), \[\mathfrak{D}(\mathfrak{g}_{\overline{0}})=\Big{[}\bigoplus_{\lambda\in E_{n}} \mathfrak{sl}(W_{\lambda})\Big{]}\oplus\Big{[}\bigoplus_{\lambda\in F_{n}} \mathfrak{sl}(W_{\overline{0}}^{\lambda})\oplus\mathfrak{sl}(W_{\overline{1} }^{\lambda})\Big{]}.\] Then \(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}\) is a direct sum of pairwise non-isomorphic irreducible \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\)-modules. Since \(\mathfrak{g}_{\overline{1}}\) is a \(\mathfrak{D}(\mathfrak{g}_{\overline{0}})\)-submodule of \(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}+\mathbb{C}\cdot T_{n}\), it must contain some subset of the irreducible summands in \(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}\). Using Theorem 4.2.1, we see that each of these summands is contained in the image of the corresponding projection map \(W^{\lambda}:\mathfrak{g}\to\operatorname{End}(W^{\lambda})\), and hence must have been contained in \(\mathfrak{g}_{\overline{1}}\). Thus \(\mathfrak{D}(\mathbb{C}S_{n})_{\overline{1}}\subseteq\mathfrak{g}\). Now applying Lemma 3.1.5 and Remark 4.1.1, we deduce that \(\mathfrak{sl}(W^{\lambda})\subseteq\mathfrak{g}\) for each \(\lambda\in F_{n}\), and we deduce that \(\mathfrak{sq}(W^{\lambda})\subseteq\mathfrak{g}\) for all \(\lambda\in E_{n}\) with the exception of \(\lambda\in\{(n),(1^{n})\}\). For the sake of definiteness, suppose \(\lambda=(n)\in E_{n}\). For this partition, one has \(\mathfrak{sq}(W^{(n)})=\mathbb{C}\operatorname{id}_{W^{(n)}}\). If \(\tau\in S_{n}\) is any transposition, then \(1_{\mathbb{C}S_{n}}=\frac{1}{2}[\tau,\tau]\in\mathfrak{g}\). But under the isomorphism of Corollary 3.1.6, one has \(1_{\mathbb{C}S_{n}}=\sum_{\lambda\in\overline{\mathcal{P}}(n)} \operatorname{id}_{W^{\lambda}}\), so \[\operatorname{id}_{W^{(n)}}=1_{\mathbb{C}S_{n}}-\Big{(}\sum_{\begin{subarray} {c}\lambda\in\overline{\mathcal{P}}(n)\\ \lambda\neq(n)\end{subarray}}\operatorname{id}_{W^{\lambda}}\Big{)}\in\mathfrak{ g}.\] Thus \(\mathfrak{sq}(W^{(n)})\subseteq\mathfrak{g}\), and hence \(\mathfrak{D}(\mathbb{C}S_{n})\subseteq\mathfrak{g}\).
2306.02962
Revisiting Lorentz invariance violation from GRB 221009A
As a potential consequence of Lorentz invariance violation~(LIV), threshold anomalies open a window to study LIV. Recently the Large High Altitude Air Shower Observatory~(LHAASO) reported that more than 5000 photons from GRB 221009A have been observed with energies above 500~GeV and up to $18~\text{TeV}$. In the literature, it is suggested that this observation may be in tension with the standard model result, because extragalactic background light~(EBL) can prevent photons around 18~TeV from reaching the earth, and that LIV induced threshold anomalies might be able to explain the observation. In this work we further study this proposal with more detailed numerical calculations for different LIV scales and redshifts of the sources. We find that GRB 221009A is a rather unique opportunity to search for LIV, and a LIV scale $E_\text{LIV} \lesssim E_\text{Planck}\approx 1.22\times 10^{19}~\text{GeV}$ is compatible with the observation of GRB 221009A on 9 October 2022.
Hao Li, Bo-Qiang Ma
2023-06-05T15:26:43Z
http://arxiv.org/abs/2306.02962v3
# Revisiting Lorentz invariance violation from GRB 221009A ###### Abstract As a potential consequence of Lorentz invariance violation (LIV), threshold anomalies open a window to study LIV. Recently the Large High Altitude Air Shower Observatory (LHAASO) reported that more than 5000 photons from GRB 221009A have been observed with energies above 500 GeV and up to 18 TeV. In the literature, it is suggested that this observation may be in tension with the standard model result, because extragalactic background light (EBL) can prevent photons around 18 TeV from reaching the earth, and that LIV induced threshold anomalies might be able to explain the observation. In this work we further study this proposal with more detailed numerical calculations for different LIV scales and redshifts of the sources. We find that GRB 221009A is a rather unique opportunity to search for LIV, and a LIV scale \(E_{\rm LIV}\lesssim E_{\rm Planck}\approx 1.22\times 10^{19}\) GeV is compatible with the observation of GRB 221009A on 9 October 2022. Lorentz invariance violation, extragalactic background light, threshold anomaly, gamma-ray burst ## I Introduction Lorentz symmetry, one of the cornerstones of modern physics, has been verified to very high accuracy, yet it has been continuously challenged since the proposal of Lorentz invariance violation (LIV) about thirty years ago [1, 2]. Several constructions of theories of quantum gravity (QG) suggest LIV phenomena as low-energy remnants of QG effects at the Planck scale \(E_{\rm Planck}\approx 1.22\times 10^{19}\) GeV [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Because of its tight relationship to our understanding of QG, finding evidence for LIV, or falsifying the proposal of its existence, is becoming more important and urgent. Recently a unique opportunity to search for LIV appeared: the extraordinarily bright gamma-ray burst (GRB) GRB 221009A [19, 20, 21, 22, 23, 24, 25], located at redshift \(z=0.1505\)[26, 27], was observed in the energy range from 500 GeV to 18 TeV by the Large High Altitude Air Shower Observatory (LHAASO), with more than 5000 photons recorded [28]. However, this result suggests that we may need new physics to interpret it, since photons around 18 TeV are expected to be hardly observable due to the attenuation by extragalactic background light (EBL). It immediately triggered studies of whether LIV could come to the rescue [29, 30, 31, 32]. The key point is rooted in the possibility of LIV induced threshold anomalies, which modify the attenuation of such energetic photons in the universe and thereby allow an excess beyond the standard results based on Lorentz invariant theories. In other words, energetic photons, which should be absorbed by background light in the universe and thus can hardly be observed, now have more chances to reach the earth as a consequence of LIV induced threshold anomalies. Although observing an excess of high energy photons cannot be conclusive evidence for LIV, since other explanations exist, such as axion-like particles (ALPs) [33, 34, 35, 30] and even explanations within the standard framework of physics [29], understanding GRB 221009A from the point of view of LIV is still of importance because it may shed light on searches of LIV combined with other approaches to LIV phenomenology [36].
In the following at first, we introduce the basic background of this kind of anomalies and its impact on photon attenuation by the background light in the universe. As for the photons reported by LHAASO it is sufficient to only consider the background light component dubbed extragalactic background light (EBL), other sources like cosmic microwave background (CMB) would not be taken into consideration in this work. ## II Theoretical background In this section, we briefly introduce the theoretical backgrounds for the analyses of this work, including threshold anomalies in LIV and EBL attenuation of high energy photons in special relativity (SR) and in the LIV framework. More details about LIV can be found in the reviews [37, 38, 36] and references therein. For EBL one can refer to Refs. [39, 40] and references therein. ### Modified dispersion relations and threshold anomalies in LIV An intensively studied feature of LIV is the modified dispersion relations (MDRs) that differ from the SR ones by terms depending on the ratio (\(E/E_{\rm Planck}\)), where \(E\) represents the particle energy. For photons, one can adopt a model-independent form of the MDR to the lead ing order of \((E/E_{\rm Planck})\)1: Footnote 1: For our purpose, it suffices to study only the leading order LIV deviation of the MDR, while the complete knowledge of the MDR is not quite clear in general, such as those in the doubly special relativity (DSR) models [14; 15; 16; 17; 18]. \[E^{2}=p^{2}\left(1-s\bigg{(}\frac{p}{E_{\rm LIV,n}}\bigg{)}^{n}\right), \tag{1}\] where \(s=\pm 1\) represents the possible helicity dependence, \(n=1,2,3,\ldots\), and \(E_{\rm LIV,n}\), the scale of the onset of LIV, is of or close to the order of the magnitude of \(E_{\rm Planck}\). It is clear from Eq. (1) that if \(s=+1\), the LIV term plays the role of an effective (running) mass, and if \(s=-1\) photons maybe superluminal but could decay rapidly through the process \(\gamma\to e^{-}e^{+}\)[41], so we only focus on the subluminal case \(s=+1\). If we further assume that the energy-momentum conservation law still holds, then the threshold of a process in which photons take part could be anomalous [42; 43; 44]. Indeed, let us consider two photons with four momenta \(p_{1}=(E,0,0,p)\), where \(E\) and \(p\) obey Eq. (1), and \(p_{2}=(\varepsilon_{b},0,0,-\varepsilon_{b})\) respectively, and we may imagine that the photon with \(p_{1}\) is the high energy one from GRB, while \(p_{2}\) is from EBL thus \(\varepsilon_{b}\ll E\) and we can neglect the LIV of this photon. This two photons may produce an electron and a positron, then the ordinary energy-momentum conservation law states that: \[4m_{e}^{2}=(p_{1}+p_{2})^{2}, \tag{2}\] and it is noteworthy that we do not add LIV terms for electrons (or positrons) for both experimental and theoretical considerations [see 8; 9; 45, for example]. It is straight forward to show that Eq. (2) leads to [44] \[\xi_{n}=\frac{4\varepsilon_{b}}{p^{n-1}}-\frac{4m_{e}^{2}}{p^{n}},\;{\rm for} \;p>0, \tag{3}\] to the leading order, where \(\xi_{n}^{-1}:=E_{\rm LIV,n}^{n}\) for convenience. One can easily verify Eq. (3) by restoring the SR threshold for photons by setting \(\xi_{n}=+\infty\), obtaining \[E_{\rm th}^{\rm SR}=\frac{m_{e}^{2}}{\varepsilon_{b}}, \tag{4}\] with which we could calculate the typical scales for CMB attenuation and EBL attenuation as about 411 TeV and 261 GeV respectively [44; 46], and it is obvious that our focus on EBL is reasonable from these results. There exists many analyses about Eq. 
(3) in the literature [42; 43; 44], and for this work we concentrate on the \(n=1\) case, so we write \(\xi=\xi_{1}\). The relevant results utilized in this work are then summarized as follows [44]: **Case I:**: If \(\xi>\xi_{c}:=16\times\varepsilon_{b}^{3}/(27m_{e}^{4})\), subluminal photons cannot be absorbed by background photons with energy \(\varepsilon_{b}\), because the process \(\gamma\gamma\to e^{-}e^{+}\) is kinematically forbidden. **Case II:**: If \(0<\xi<\xi_{c}\), a subluminal photon can be absorbed only when its energy falls into a certain closed interval with its lower bound greater than \(E_{\rm th}\). There is also an upper threshold, and the \(\varepsilon_{b}\) background is again transparent to photons with energy exceeding this upper threshold. Of course a non-\(\pi\) angle \(\theta\) between the two photons involved in the process can raise the threshold, but the reaction is still suppressed, because configurations of the pair-production process that are permissible in the standard case are not allowed to occur due to LIV, and therefore we may find excesses in the spectra of GRB photons. This picture provides the starting point of threshold anomaly studies, including this work. ### EBL attenuation without and with LIV The distribution of the EBL still needs to be constrained further; here we adopt a plausible model [39] which possesses most of the common properties of existing models, and extension to other models is straightforward. For very high energy photons, there are two major processes that contribute to the absorption by background light: the pair-production process \(\gamma\gamma\to e^{-}e^{+}\) and the double pair-production process \(\gamma\gamma\to e^{-}e^{-}e^{+}e^{+}\)[47]. The process dominating the attenuation of 500 GeV to 18 TeV photons is the first one, i.e., the pair-production process [47]. For the pair-production process, the cross-section is readily obtained from quantum electrodynamics (QED) [48; 49]: \[\sigma_{\gamma\gamma}(\beta_{0})=\frac{3\sigma_{T}}{16}(1-\beta_{0}^{2})\left[2\beta_{0}(\beta_{0}^{2}-2)+(3-\beta_{0}^{4})\ln\frac{1+\beta_{0}}{1-\beta_{0}}\right], \tag{5}\] where \(\sigma_{T}\) is the Thomson cross-section, and \[\beta_{0}=\sqrt{1-\frac{2m_{e}^{2}}{E\varepsilon}\frac{1}{1-\mu}}, \tag{6}\] with \(\cos\theta\) defined to be \(\mu\). In this work, the energies in Eq. (6) are the present-day measurements of a GRB photon and an EBL photon. It is also worth noting that Eq. (6) automatically manifests the threshold by requiring \(\beta_{0}\) to be real: \[E\geq\frac{m_{e}^{2}}{\varepsilon}\frac{2}{1-\cos\theta}. \tag{7}\] To use the cross-section (5) in the calculation of EBL attenuation, we should further take into consideration the expansion of the universe, and hence \(E\) and \(\varepsilon\) should be multiplied by \((1+z)\). Thus for convenience we define \[\beta(E,z,\varepsilon,\mu)=\sqrt{1-\frac{2m_{e}^{2}}{E\varepsilon}\frac{1}{1- \mu}\left(\frac{1}{1+z}\right)^{2}}, \tag{8}\] and clearly \(\beta_{0}=\beta(E,0,\varepsilon,\mu)\). With this cross-section we can calculate the optical depth2: Footnote 2: The integration over energy is understood to be performed above the threshold.
\[\tau_{\gamma\gamma}(E,z)=\int_{0}^{z}\mathrm{d}z^{\prime}\frac{\partial l}{ \partial z^{\prime}}(z^{\prime})\int_{0}^{\infty}\mathrm{d}\varepsilon\frac{ \partial n}{\partial\varepsilon}(\varepsilon,z^{\prime})\int_{-1}^{1}\mathrm{d }\mu\frac{1-\mu}{2}\sigma_{\gamma\gamma}\left[\beta(E,z^{\prime},\varepsilon, \mu)\right], \tag{9}\] where \(\partial n(\varepsilon,z^{\prime})/\partial\varepsilon\) is the number density of EBL photons of energy \(\varepsilon\) at redshift \(z^{\prime}\), and \[\frac{\partial l}{\partial z^{\prime}}=\frac{1}{H_{0}}\frac{1}{1+z^{\prime}} \frac{1}{\sqrt{\Omega_{\Lambda}+\Omega_{M}(1+z^{\prime})^{3}}} \tag{10}\] is obtained from the Friedmann-Robertson-Walker cosmology with the Hubble constant \(H_{0}=70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\) as well as the cosmological parameters \(\Omega_{\Lambda}=0.7\) and \(\Omega_{M}=0.3\). Then the intrinsic and observed fluxes (represented by \(F_{\mathrm{int}}\) and \(F_{\mathrm{obs}}\) respectively) can be related by \[F_{\mathrm{obs}}=F_{\mathrm{int}}\times e^{-\tau_{\gamma\gamma}}. \tag{11}\] However, once LIV takes effect in the pair-production process, the modified optical depth \(\tau_{\mathrm{LIV}}\) may differ from \(\tau_{\gamma\gamma}\). The deviation is at least two-fold. First, the topic of this work, the threshold anomaly of the pair-production process (3), changes the integration over \(\varepsilon\) in Eq. (9) [50; 51], i.e., the phase space of particle kinematics in the LIV case is different from that in the standard SR case. Second, it is natural to assume that the functional form of the cross-section of the pair-production process, \(\sigma_{\gamma\gamma}\), gets modified, for reasons like extra terms in the corresponding Lagrangian. We do not have enough knowledge of the modified cross-section, since this requires a complete understanding of the LIV dynamics, but we can expect that at least in the low energy limit \[\sigma_{\gamma\gamma}^{\mathrm{LIV}}(E,\cdots)=\sigma_{\gamma\gamma}(E,\cdots)\times\left(1+\vartheta\left(\frac{E}{E_{\mathrm{LIV}}}\right)+\cdots\right), \tag{12}\] where \(\vartheta\) is a constant. Of course, there may exist other kinds of corrections to the calculation of \(\tau_{\mathrm{LIV}}\), such as the modification of the cosmological model (the function \(\partial l/\partial z^{\prime}\)) and the understanding of the EBL (the density \(n_{\varepsilon}:=\partial n/\partial\varepsilon\)). In the following we first focus on the first element, i.e. the threshold anomaly of the pair-production process, while in Sec. IV we briefly go back to the second element and discuss the possible effects. Other sources of corrections (\(\partial l/\partial z^{\prime}\) and \(n_{\varepsilon}\)) are left for future studies. ## III Analyses In this section we perform the analyses of photon attenuation for photons of GRB 221009A observed by LHAASO [28], and we also extend our analyses to other cases: a GRB at \(z=0.5\), the typical redshift of a short burst, and a GRB at \(z=2.15\), the typical redshift of a long burst. To make the numerical calculation feasible, we need the information on \(n_{\varepsilon}\) and the exact form of \(\sigma_{\gamma\gamma}^{\mathrm{LIV}}\). In this work we only adopt the EBL model \(n_{\varepsilon}\) of Dominguez _et al._[39]. Indeed, replacement with other models is quite straightforward, and the difference in the results is expected to be mild, since the adopted model already captures the major features of existing EBL models.
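To make the role of the threshold anomaly in the optical depth concrete, the following minimal Python sketch evaluates Eqs. (5), (8) and (9) numerically. It is only an illustration, not the code behind the figures below (those rely on **ebltable** and the EBL model of Dominguez _et al._): the EBL density used here is a purely hypothetical placeholder, and the LIV effect enters only through the \(n=1\) no-absorption condition \(\xi>\xi_{c}\) of Case I, while the finite absorption window of Case II is ignored.

```python
import numpy as np
from scipy.integrate import quad

M_E = 0.511e-3        # electron mass [GeV]
SIGMA_T = 6.6524e-25  # Thomson cross-section [cm^2]
H0 = 2.268e-18        # 70 km/s/Mpc expressed in [1/s]
C_CM = 2.9979e10      # speed of light [cm/s]
OMEGA_L, OMEGA_M = 0.7, 0.3

def sigma_gg(beta):
    """Pair-production cross-section, Eq. (5)."""
    if beta <= 0.0 or beta >= 1.0:
        return 0.0
    return (3.0 * SIGMA_T / 16.0) * (1.0 - beta**2) * (
        2.0 * beta * (beta**2 - 2.0)
        + (3.0 - beta**4) * np.log((1.0 + beta) / (1.0 - beta)))

def beta_of(E, z, eps, mu):
    """Eq. (8); returns 0 below threshold, where beta would be imaginary."""
    b2 = 1.0 - 2.0 * M_E**2 / (E * eps * (1.0 - mu)) / (1.0 + z)**2
    return np.sqrt(b2) if b2 > 0.0 else 0.0

def dl_dz(z):
    """Line element per unit redshift, Eq. (10), in cm."""
    return C_CM / H0 / (1.0 + z) / np.sqrt(OMEGA_L + OMEGA_M * (1.0 + z)**3)

def toy_ebl_density(eps, z):
    """Hypothetical dn/d(eps) [photons cm^-3 GeV^-1]; NOT a realistic EBL model."""
    return 1.0e-3 * eps**-2.0 * np.exp(-eps / 1.0e-9) * (1.0 + z)**3

def absorption_allowed(eps_at_z, xi):
    """Case I gate for n = 1: no absorption at all if xi > xi_c = 16 eps^3 / (27 m_e^4),
    with eps the target-photon energy at the interaction redshift."""
    return xi <= 16.0 * eps_at_z**3 / (27.0 * M_E**4)

def optical_depth(E, z_src, xi=0.0, eps_lo=1e-10, eps_hi=1e-7):
    """Schematic version of Eq. (9); xi = 1/E_LIV in 1/GeV, and xi = 0 recovers SR."""
    def mu_integral(eps, z):
        f = lambda mu: 0.5 * (1.0 - mu) * sigma_gg(beta_of(E, z, eps, mu))
        return quad(f, -1.0, 1.0)[0]
    def eps_integral(z):
        def f(eps):
            if not absorption_allowed(eps * (1.0 + z), xi):
                return 0.0
            return toy_ebl_density(eps, z) * mu_integral(eps, z)
        return quad(f, eps_lo, eps_hi)[0]
    return quad(lambda z: dl_dz(z) * eps_integral(z), 0.0, z_src, limit=200)[0]

# e.g. survival probability of an 18 TeV photon from z = 0.1505, SR vs. xi = 1/E_Planck:
# np.exp(-optical_depth(18e3, 0.1505)), np.exp(-optical_depth(18e3, 0.1505, xi=1/1.22e19))
```

With a realistic EBL density in place of the toy one, raising \(\xi\) (i.e., lowering \(E_{\rm LIV}\)) removes the softest target photons from the integration first, which is why \(e^{-\tau}\) turns back towards one at the highest energies in the figures below.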
However, to know the exact form of \(\sigma_{\gamma\gamma}^{\mathrm{LIV}}\) we would need to understand the LIV dynamics, which is still lacking. Alternatively, in the calculation of the LIV cases we assume that \(\sigma_{\gamma\gamma}^{\mathrm{LIV}}=\sigma_{\gamma\gamma}\), and only consider the LIV effects brought by the change in the threshold. The possible consequence of the modification of the cross-section is discussed in Sec. IV. ### GRB 221009A GRB 221009A is located at \(z=0.1505\)[26; 27], therefore we first draw the \(E\)-\(e^{-\tau}\) plot for \(z=0.1505\) in Fig. 1 with different LIV scales, utilizing **ebltable**[52]. The vertical axis of this figure can also be understood as representing the survival probability of photons from the GRB after EBL absorption. In the figure we choose the LIV scale \(E_{\mathrm{LIV}}\) to be \(3.6\times 10^{17}\ \mathrm{GeV}=0.03\times E_{\mathrm{Planck}}\)3, \(E_{\mathrm{Planck}}/10\), \(E_{\mathrm{Planck}}\), \(10\times E_{\mathrm{Planck}}\), and the non-LIV case, which can also be considered as \(E_{\mathrm{LIV}}=+\infty\). From Fig. 1 we can immediately infer some useful information. First, from 500 GeV to around 10 TeV, the effects of different LIV scales are not distinguishable enough from the standard case. Second, if \(E_{\mathrm{LIV}}\gtrsim 10\times E_{\mathrm{Planck}}\), the LIV effect is not remarkable for \(z=0.1505\). Third, we can conclude that as long as \(E_{\mathrm{LIV}}\lesssim E_{\mathrm{Planck}}\) the LIV effects become dramatic enough. However, we should point out that, as we can see from Fig. 1, for the LIV scales we choose \(e^{-\tau}\) eventually tends to one. This is mainly because the pair-production process is dramatically reduced or even forbidden due to the threshold anomalies, so \(e^{-\tau}\) rises again at energies beyond the threshold of the incident photon, and it approaches one earlier for smaller LIV scales as the energy increases. Next we combine the observation of GRB 221009A reported by LHAASO [28] with the EBL attenuation results in Fig. 1 and calculate the potential effects of LIV numerically. To do this, we first assume that the number density spectrum of GRB 221009A can be written as \[\frac{\mathrm{d}N}{\mathrm{d}E}=A_{0}\times E^{-\alpha}, \tag{13}\] where the spectral index \(\alpha\) is chosen to be 2.5 in this work, and \(A_{0}\) is a constant to be normalized case by case. The spectrum (13) we adopted is only assumed to be valid from 500 GeV to 20 TeV4, which is the energy range observed by LHAASO [28]. Recalling that LHAASO observed more than 5000 photons5 from 500 GeV to 18(20) TeV, we normalize the constant \(A_{0}\) by the following strategy: multiplying \(\mathrm{d}N/\mathrm{d}E\) by \(e^{-\tau}\) first, where \(\tau\) is either \(\tau_{\gamma\gamma}\) or \(\tau_{\mathrm{LIV}}\) with the listed LIV scales, we then perform the integration below and equate the result to 5000, Footnote 4: The upper limit of the energy range we choose is not 18 TeV since we take into consideration the uncertainty in the energy reconstruction of LHAASO. Footnote 5: From now on we assume that there are 5000 photons observed in total. \[\int_{500\ \mathrm{GeV}}^{20\ \mathrm{TeV}}\mathrm{d}E\frac{\mathrm{d}N}{ \mathrm{d}E}(E)e^{-\tau(E,z=0.1505)}=5000, \tag{14}\] from which we can determine \(A_{0}\). With \(A_{0}\) determined for each case, we draw the spectra in Fig. 2.
Just as we can see from Fig. 1, the larger the LIV scale is, the smaller the difference between the standard case and the LIV case. To explain the 18 TeV photons observed by LHAASO, we solve the following equation: \[\int_{E_{\mathrm{low}}}^{20\ \mathrm{TeV}}\mathrm{d}E\frac{\mathrm{d}N}{ \mathrm{d}E}e^{-\tau(E,z)}=1\ (\mathrm{or}\ 10^{-6}) \tag{15}\] for different LIV scales, where \(E_{\mathrm{low}}\) is the variable to be determined. \(E_{\mathrm{low}}\) thus represents the highest energy above which one photon is still expected to be observed; in other words, observing photons above \(E_{\mathrm{low}}\) can serve as a possible signal of LIV. If we observe photons above the \(E_{\mathrm{low}}\) corresponding to \(10^{-6}\) photon, the signal is likely strong enough that it becomes necessary to consider new physics. The results for observing one photon are listed in Tab. 1. From this table we can see that when \(E_{\mathrm{LIV}}=\infty\), we expect to observe at least one photon above 7.34 TeV. In Fig. 2 we show these results schematically. In this figure, we also show the results of Tab. 1 by vertical lines. As we can see, in the standard case, observing one photon around 18 TeV seems to be unlikely, while with a proper LIV scale this is permissible. Indeed, to observe more than \(10^{-6}\) photon with \(E_{\mathrm{LIV}}=\infty\), _i.e._, within the framework of standard physics, we need \(E_{\mathrm{low}}\leq 16.3\ \mathrm{TeV}\). However, this may contradict the report of observing photons around 18 TeV by LHAASO, and to observe at least one photon around 18 TeV, new physics, a smaller index \(\alpha<2.5\), or an observed photon number of more than 50000 would be required. In summary, the LHAASO result suggests the necessity to consider novel mechanisms to explain it, and from our numerical results LIV is a feasible candidate. Figure 2: The hypothetical spectrum of GRB 221009A after EBL attenuation with different LIV scales. Figure 1: The EBL attenuation of \(z=0.1505\) with different LIV scales \(E_{\mathrm{LIV}}\)[52; 39]. ### General gamma-ray bursts The above result indicates the possibility of LIV with the observation of GRB 221009A, and in the following we discuss what we could learn from this for general GRBs, including short bursts and long bursts. Some preparations are needed in order to ensure comparability. First, we choose the GRBs with the _best guess_ values of the redshifts. To be specific, we assign \(z=0.5\) for a typical short burst, and \(z=2.15\) for a typical long burst [64]. Second, we let the spectrum adopted hereafter be the same as the one in Eq. (13), as if we put the same source at different locations in the universe. However, the energy range we aim to observe should be modified, since we want to fix as many variables as possible. Indeed, the energy range 500 GeV to 20 TeV corresponds to 575.25 GeV to 23.01 TeV at the source of GRB 221009A, as the observed energy \(E_{\rm obs}\) is related to the intrinsic one \(E_{\rm int}\) by \[E_{\rm int}=E_{\rm obs}\times(1+z). \tag{16}\] We thus restrict ourselves to analyzing only the corresponding energy ranges 383.5 GeV to 15.34 TeV and 182.6 GeV to 7.305 TeV for short bursts and long bursts respectively. Third, we assume that the GRB emits photons isotropically; since the effective area of the observatory is fixed and the sources are identical, we have to consider the influence of distance.
As a consequence, photon numbers should be approximately multiplied by \((622.1~{}{\rm Mpc}/1888.6~{}{\rm Mpc})^{2}\approx 0.109\) and \((622.1~{}{\rm Mpc}/5388.4~{}{\rm Mpc})^{2}\approx 0.013\) respectively6. Footnote 6: The corresponding distances of \(z=0.1505,0.5,2.15\) are calculated with the program at [https://www.astro.ucla.edu/~wright/CosmoCalc.html](https://www.astro.ucla.edu/~wright/CosmoCalc.html), see also Ref. [65]. Before analyzing GRBs with the _best guess_ redshift values, we first perform a more general discussion: we keep the aforementioned conditions fixed, while now we let the redshift \(z\) be a variable, and study above which energy we can observe one photon after EBL attenuation in the standard case, by performing the integration and solving an equation similar to Eq. (15) with the upper limit fixed but the lower limit varying. The result is shown in Fig. 3, with the error estimated to be 20%. From this figure, for example, we can see that if we observe a photon above 1 TeV from a GRB at \(z=1\), then it could be considered a starting point to seek a signal of LIV or other possibilities beyond the standard model. To summarize the procedure, we extend the energy band to the corresponding ones for short and long bursts, then perform the integrations as in Eqs. (14) and (15) for different LIV scales. Finally we multiply the results by 0.109 and 0.013 respectively. As we will see, the result shows the uniqueness of GRB 221009A. #### iii.2.1 General short bursts In Fig. 4, we present the \(E\)-\(e^{-\tau}\) curves (produced by **ebltable**[52]) for different LIV scales and the standard case, as in Fig. 1, with the redshift set to the _best guess_ value \(z=0.5\). In this case we focus on the energy band 383.5 GeV to 15.34 TeV, and it is shown in Fig. 4 that this band is affected by LIV as well, but the effects are relatively small compared to those in Fig. 1. Fig. 5 shows the spectra in this case. In the standard case, to observe one photon we have \(E_{\rm low}\approx 1.25~{}{\rm TeV}\) (the vertical line in Fig. 5), and to observe \(10^{-6}\) photon we have \(E_{\rm low}\approx 5.76~{}{\rm TeV}\). However, the corresponding LIV results are close to the above results, as can be seen from the vertical lines in Fig. 5. This implies that it is hard to distinguish between different cases due to experimental uncertainties. We will discuss this problem later. #### iii.2.2 General long bursts Similarly, using **ebltable**[52] we draw the \(E\)-\(e^{-\tau}\) diagram for a long GRB with \(z=2.15\) in Fig. 6, from which we can see that even if LIV is taken into consideration, the LIV effects are still too small to distinguish between the different cases, recalling that we focus on the energy band from 182.6 GeV to 7.305 TeV. In fact, in Fig. 7 we present the numerical results. To observe one photon in the standard case we obtain \(E_{\rm low}\approx 258\ {\rm GeV}\), while for \(10^{-6}\) photon to be observed, \(E_{\rm low}\approx 720\ {\rm GeV}\). \begin{table} \begin{tabular}{c c} \(E_{\rm LIV}\) (\(E_{\rm Planck}\)) & \(E_{\rm low}\) for observing one photon (TeV) \\ \hline 0.03 & 19.4 \\ 0.1 & 18.4 \\ 1 & 8.06 \\ 10 & 7.51 \\ \(\infty\) & 7.34 \\ \end{tabular} \end{table} Table 1: \(E_{\rm low}\) for observing one photon from GRB 221009A with different LIV scales. Figure 3: The lowest energies at which we can observe one photon (and \(10^{-6}\) photon) in the corresponding energy band from the same source as GRB 221009A at different redshifts.
As in the analysis for general short bursts, \(E_{\rm low}\) for observing one (or \(10^{-6}\)) photon in each LIV case is not distinguishable enough from the standard case. In other words, with the linear LIV term, a much smaller LIV scale \(E_{\rm LIV}\) (which would have obvious indications in other approaches to LIV, such as studies of the time of flight of light) would be needed to make the standard case and the LIV case distinguishable. However, this contradicts the present phenomenological results on LIV. Indeed, from Fig. 7 we may understand this problem easily. As we can see, the spectra in Fig. 7 all drop rapidly as the energy increases, so almost all photons fall into a narrow region from 182.6 GeV to about 300 GeV. Then a mild change in \(E_{\rm low}\) results in a dramatic change in the photon number, so the results for different cases are not distinguishable considering the experimental uncertainties of observatories. This fact is intriguing, since it provides an approach to distinguish between LIV and other novel mechanisms such as ALPs. This is because if there is a photon number excess and at the same time the \(E_{\rm low}\) of the observation is clearly larger than that of the standard case, then both the standard model and LIV are unlikely to be able to explain such an observation. So long GRBs are likely insensitive probes of LIV, and alternative interpretations should be sought. Furthermore, the uniqueness of GRB 221009A for LIV and possibly other new physics proposals is clear from the analyses of general short and long bursts. We find that for a reasonable LIV scale, there is little chance that we can distinguish an excess caused by LIV from the background or the standard results. Figure 5: The hypothetical spectrum of a short GRB at \(z=0.5\) after EBL attenuation with different LIV scales. Figure 6: The EBL attenuation of \(z=2.15\) with different LIV scales \(E_{\rm LIV}\)[39; 52]. Figure 7: The hypothetical spectrum of a long GRB at \(z=2.15\) after EBL attenuation with different LIV scales. Nevertheless, this observation suggests that we can focus more on sources like GRB 221009A, with a relatively small redshift and very high energy messengers (photons, neutrinos and electrons _etc._), since this kind of object is very unique for our studies of LIV and other new physics speculations. We expect that with observatories such as LHAASO [66; 67; 68] and the Cherenkov Telescope Array (CTA) [69; 70; 71; 72], more data will be available in the future. ## IV Discussions In this work we re-analyzed the possibility of (linear) Lorentz invariance violation (LIV) for photons as an explanation of the observation of very high energy photons from GRB 221009A reported by the Large High Altitude Air Shower Observatory (LHAASO) [28]. We also discussed, if a gamma-ray burst (GRB) identical to GRB 221009A were located at \(z=0.5\) or \(z=2.15\), the _best guess_ values of the redshifts of short bursts and long bursts respectively [64], to what extent our future observations would be influenced after fixing the other conditions as far as possible. We find that for GRB 221009A itself, a (linear) LIV scale \(E_{\rm LIV}\lesssim E_{\rm Planck}\) seems to be permissible with the observation by LHAASO. For a typical short or long burst, however, such a LIV scale is not enough: a mildly lower scale would be sensible for short bursts, but we may not be able to observe very high energy photons from distant GRBs even if LIV for photons does exist.
This shows the uniqueness of GRB 221009A, and we expect that observatories such as LHAASO and the Cherenkov Telescope Array (CTA) will provide more data on sources similar to GRB 221009A, so that we can then perform more detailed analyses of LIV. Meanwhile, it is noteworthy that although LIV seems to be a candidate to render the observation of GRB 221009A reasonable, there are also other options. For example, it is argued that the existence of axion-like particles (ALPs) could play the same role as LIV in this work [30; 33; 34; 35]. Besides, even explanations within the standard framework of physics are not completely excluded. This is because we can interpret the very high energy photons from GRB 221009A as the result of a coincidence in which an energetic photon from this GRB converts into a very high energy electron (or positron) that emits an energetic photon again, and this process may continue until we observe very high energy photons from this GRB [29]. The alternative possibility also exists that LHAASO identified a number of low energy photons, which arrived at the detector simultaneously, as a single ultra-high-energy photon in the reconstruction [73; 74]. This speculation, in which the ultra-high-energy (TeV or multi-TeV) photon events are due to Bose-Einstein gamma condensates, was rejected in the literature about Markarian 501 [74], but might still be permissible for GRB 221009A. So this work only suggests that the recent observation of GRB 221009A might be a possible signal of LIV. Future observations are needed to test the analyses in this work, and other approaches to LIV, such as studies of the time of flight of light, should be combined with the threshold anomaly studies to draw a more concrete conclusion. Before finishing this work, we should point out that we neglect the effects of the possible deviations in the cross-section (12). Indeed, the effects of the extra terms may be important. For example, if \(\vartheta<0\) then the cross-section gets smaller, and the linearity in \(E\) makes it contribute more to the observation of high energy photons; as a result a larger \(E_{\rm LIV}\) is permissible compared to the results in this work. Conversely, a positive \(\vartheta\) would imply a stronger LIV effect, thus requiring a smaller \(E_{\rm LIV}\) to obtain results explaining the recent observation of GRB 221009A. This may violate the available bounds on \(E_{\rm LIV}\) and thus render LIV less plausible as the only source of the recent observation of GRB 221009A, so that other interpretations are needed. But this work could still shed light on this question through further analyses in the future, since the energy dependences of the change of the threshold and of the extra terms in the cross-section are quite different, so that we can constrain them separately with more data. ###### Acknowledgements. This work is supported by National Natural Science Foundation of China (Grant No. 12075003).
2301.09890
Think before you shrink: Alternatives to default shrinkage methods can improve prediction accuracy, calibration and coverage
While shrinkage is essential in high-dimensional settings, its use for low-dimensional regression-based prediction has been debated. It reduces variance, often leading to improved prediction accuracy. However, it also inevitably introduces bias, which may harm two other measures of predictive performance: calibration and coverage of confidence intervals. Much of the criticism stems from the usage of standard shrinkage methods, such as lasso and ridge with a single, cross-validated penalty. Our aim is to show that readily available alternatives can strongly improve predictive performance, in terms of accuracy, calibration or coverage. For linear regression, we use small sample splits of a large, fairly typical epidemiological data set to illustrate this. We show that usage of differential ridge penalties for covariate groups may enhance prediction accuracy, while calibration and coverage benefit from additional shrinkage of the penalties. In the logistic setting, we apply an external simulation to demonstrate that local shrinkage improves calibration with respect to global shrinkage, while providing better prediction accuracy than other solutions, like Firth's correction. The benefits of the alternative shrinkage methods are easily accessible via example implementations using \texttt{mgcv} and \texttt{r-stan}, including the estimation of multiple penalties. A synthetic copy of the large data set is shared for reproducibility.
Mark A. van de Wiel, Gwenaël G. R. Leday, Jeroen Hoogland, Martijn W. Heymans, Erik W. van Zwet, Ailko H. Zwinderman
2023-01-24T09:54:52Z
http://arxiv.org/abs/2301.09890v1
###### Abstract While shrinkage is essential in high-dimensional settings, its use for low-dimensional regression-based prediction has been debated. It reduces variance, often leading to improved prediction accuracy. However, it also inevitably introduces bias, which may harm two other measures of predictive performance: calibration and coverage of confidence intervals. Much of the criticism stems from the usage of standard shrinkage methods, such as lasso and ridge with a single, cross-validated penalty. Our aim is to show that readily available alternatives can strongly improve predictive performance, in terms of accuracy, calibration or coverage. For linear regression, we use small sample splits of a large, fairly typical epidemiological data set to illustrate this. We show that usage of differential ridge penalties for covariate groups may enhance prediction accuracy, while calibration and coverage benefit from additional shrinkage of the penalties. In the logistic setting, we apply an external simulation to demonstrate that local shrinkage improves calibration with respect to global shrinkage, while providing better prediction accuracy than other solutions, like Firth's correction. The benefits of the alternative shrinkage methods are easily accessible via example implementations using mgcv and r-stan, including the estimation of multiple penalties. A synthetic copy of the large data set is shared for reproducibility. **Think before you shrink: Alternatives to default shrinkage methods can improve prediction accuracy, calibration and coverage** Mark A. van de Wiel\({}^{1}\), Gwenael G.R. Leday\({}^{2}\), Jeroen Hoogland\({}^{1}\), Martijn W. Heymans\({}^{1}\), Erik W. van Zwet\({}^{3}\), Ailko H. Zwinderman\({}^{1}\) \({}^{1}\)_Dept of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam University Medical Centers, Amsterdam, The Netherlands; \({}^{2}\)Biometris, Wageningen University and Research, Wageningen, The Netherlands; \({}^{3}\)Dept of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands_ ## 1 Introduction Shrinkage has become a standard technique in statistics to counter over-fitting in regression models. In particular in high-dimensional settings, with the number of variables \(p\) larger than sample size \(n\), application of shrinkage is necessary to obtain parameter estimates and predictions. In low-dimensional settings, the benefit of shrinkage depends on the \(n:p\) ratio, as well as the main purpose of the analysis, e.g. prediction, parameter estimation, variable selection or causal effect estimation. Here, we focus on prediction. In general, the consensus seems to be that, on average, shrinkage improves prediction accuracy on test data from the same population as the training data (Van Calster et al., 2020; Riley et al., 2021). This is no surprise as the penalty parameter is usually tuned, e.g. by cross-validation, to maximize out-of-bag prediction accuracy. Hence, it will adapt to the \(n:p\) ratio: if large, little shrinkage is necessary; if small, more shrinkage is required. The use of shrinkage, however, comes at a price. It inevitably biases the parameter estimates, which in turn may lead to bad calibration of the prediction (Van Calster et al., 2020). Moreover, the penalty parameter can be rather instable (Riley et al., 2021), and such instability is often not communicated when presenting resulting models.
Most empirical results that support these critiques on shrinkage are based on standard penalization techniques like lasso and ridge. While these issues are to some extent intrinsic to any shrinkage method, we show that they can be substantially alleviated by using alternatives that are only slightly more advanced. Therefore, our aim is to show a broad audience of applied statisticians and epidemiologists that some 'non-standard' shrinkage methods may be very useful for typical epidemiological studies that target multivariable prognostic models. In terms of shrinkage we focus mainly on several variants of ridge regression. We do briefly discuss comparison of prediction accuracy with classical lasso and stepwise selection (which could be regarded as an extremely bimodal type of shrinkage) for the main data example, showing the latter two are inferior to the ridge variations for this purpose. We emphasize that the latter two cater mainly for variable selection, rendering the comparison with ridge-type models somewhat unfair. Therefore, we focus on the latter. Our setting is regression, where shrinkage is either effectuated by using a penalty for the parameters or by using an informative prior on these parameters in a Bayesian setting. In the latter case, the scale parameter of the prior distribution acts effectively as a penalty parameter, as the mean of the prior is usually set to zero. First, we explore differential shrinkage by the use of different penalties for (groups of) variables. We argue that in many settings, it is in fact unreasonable to assume the same penalty for all variables. We show that both prediction accuracy and calibration may be greatly enhanced by using differential penalties. In addition, the use of differential penalties provides a more objective solution to down-weighting a set of variables than simply discarding it prior to the analysis, as is sometimes suggested as a (partial) solution for a low \(n:p\) ratio (Van Calster et al., 2020). Importantly, these penalties can be estimated automatically without (time-consuming) cross-validation in both the classical and full Bayesian setting, using R-packages mgcv (Wood, 2011) and R-Stan (Stan Development Team, 2022), respectively. Global shrinkage methods, such as ridge, penalize all parameters equally. If the primary aim is to optimize prediction accuracy, we show that these methods perform well, in particular when one allows for differential shrinkage. Calibration, however, is often bad, as the reduced variability is countered by increased bias. While this is partly inevitable, we show that local shrinkage (Gelman, 2006; Polson and Scott, 2012), which penalizes (groups of) variables differently, may improve calibration considerably. This comes at the cost of some loss in prediction accuracy as compared to global shrinkage, but local shrinkage still outperforms ordinary least squares (OLS) by a fair margin, while being competitive in terms of calibration. The reported instability of the penalty parameter(s) (Riley et al., 2021) may affect the uncertainty of the predictions. Unlike in settings with cross-validated or fixed penalty parameter(s), uncertainty propagation of the penalty parameter(s) is accomplished very naturally in a Bayesian hierarchical shrinkage model. Hereby, it may aid in communicating appropriate prediction uncertainties by improving (frequentist) coverage of their credibility intervals. 
Moreover, the additional regularization of the penalty parameter(s) may increase stability of their estimates, in particular in the presence of multiple penalties. In the frequentist paradigm penalty parameter uncertainty is not explicitly modeled. (Marra and Wood, 2012) show, however, that modifying the classical intervals such that the covariance matrix of the regression coefficients matches with the Bayesian one (which includes the additional prior uncertainty of those coefficients) improves coverage. Here, we also consider those 'unconditional' intervals, as implemented in mgcv. Our main strategy to compare various methods is to divide a large (\(N=21,570\)), fairly typical epidemiological study into many subsets of sizes \(n=50\) and \(n=200\). We study systolic blood pressure (SBP) as outcome, and use twelve variables to predict SBP. We add five random noise covariates to make sure that some covariates are not linked to the outcome at all, resulting in \(p=17\). The OLS results on the entire set are used as benchmark, as these estimates are unbiased and very precise due to the large sample size. Then, the subsets are analysed with various shrinkage methods and compared to the benchmark to evaluate prediction accuracy (using out-of-bag test samples), calibration and coverage of the prediction interval. We also analyse two simulated data sets: an external one for the logistic setting (Van Calster et al., 2020), which focuses on calibration; and a second one serving as the following introductory, motivating example. ### Introductory example Our motivating simulation example is loosely inspired by Model A in Riley et al. (2021), which relates seven covariates to SBP. Treatment (yes/no) is one of the covariates. The results of OLS in Riley et al. (2021) show a strong treatment effect as compared to the other six parameters. Such a scenario may be fairly common in clinical studies. Therefore, we simulate regression coefficients that adhere to this scenario: \(\beta_{1}=\beta_{2}=\beta_{3}=-0.05,\beta_{4}=\beta_{5}=\beta_{6}=0.05\), \(\beta_{\text{Treat}}=-0.25\) and error variance \(\sigma^{2}=1\). We simulate 1,000 data sets of size \(n=50\) from the linear model. To each of these, we apply standard ridge shrinkage, as in Riley et al. (2021), and two-penalty ridge shrinkage (ridge_2) allowing differential shrinkage for the treatment parameter. We argue the latter is reasonable, as treatment is an intervention and therefore a different type of covariate than the others. In the Bayesian setting (Bay_2) the square-roots of the two inverse penalties (i.e. standard deviations) are endowed with a standard \(\text{C}^{+}(0,1)\) (half-Cauchy) prior (Gelman, 2006). Table 1 shows the evaluation results in terms of mean squared error of the prediction (MSEp) and mean coverage of the 95% confidence intervals of the predictions, as evaluated on a large test set (\(n_{\text{test}}=1,000\)). The results clearly show the benefit of two-penalty shrinkage as compared to standard ridge: ridge_2 and Bay_2 decrease the mean MSEp by 17.7% and 29.2%, respectively. Most striking is the difference in interval coverage. While Bay_2 is somewhat conservative it is much closer to the target than ridge and ridge_2. We repeated the simulation for a setting that is more beneficial to global ridge shrinkage: all regression parameters equal \(\pm 0.1\). Then, prediction accuracies of all three methods are very similar, implying the cost of using one redundant penalty parameter, as for ridge_2 and Bay_2, is limited. 
The interval coverage of Bay_2 is still superior to that of ridge (ridge_2) as it averages to 95.7% versus 86.0% (87.8%). \begin{table} \begin{tabular}{|l||l|l||l||} \hline Method & MSEp, mean & MSEp, 90\% & Cover \\ \hline ridge & 0.096 & 0.157 & 0.844 \\ ridge\_2 & 0.079 & 0.150 & 0.898 \\ Bay\_2 & 0.068 & 0.131 & 0.969 \\ \hline \end{tabular} \end{table} Table 1: Results from simulation example. MSEp: mean squared error of the prediction; mean and 90% quantile across data sets; Cover: mean coverage of the 95% confidence intervals of the predictions (target: 0.95) Below we study various frequentist and Bayesian shrinkage methods in more detail for real data in the linear regression setting. We evaluate methods by considering prediction accuracy, coverage of the confidence intervals of the predictions, and calibration of the predictions. As the latter is particularly also a concern in binary prediction problems (Van Calster et al., 2020) we address this in an external simulation setting. To enhance reproducibility of our results we provide i) a synthetic copy of our primary data set, for which we show it renders qualitatively the same results as the real data; and ii) all code to run and evaluate the models, including the simulations. The latter should also assist researchers to use some of the 'non-standard' solutions we propose for their own data. We end with a discussion, which includes recommendations and several extensions. ### Data The main data we use throughout the manuscript is obtained from the Helius study (Snijder et al., 2017). In a linear regression context, we study response \(Y\): systolic blood pressure (SBP), as a function of covariates \(X\): age, gender, BMI, ethnicity (5 levels), smoking (binary), packyears, coffee (binary), glucose (log), cholesterol, rendering twelve covariates after dummy coding the nominal covariate. We apply minimal preprocessing to the data: only \(2.7\%\) of the samples had at least one missing value; these samples were removed, rendering a sample size \(N=21,570\). Continuous covariates were standardized, as this is common practice before applying shrinkage for the penalty to have the same effect on all corresponding regressing parameters. For binary covariates, the standardization itself may become unstable for small data sets, in particular when the classes are unbalanced. This may hamper generalization to test settings. Therefore, we opted for a default -1, 1 coding, as this standardizes a balanced binary covariate. For our data, we compared results with those from complete standardization. Differences were small, but marginally better for the proposed coding. Finally, we added five independent standard normal noise covariates. Hereby, we are sure to include some covariates completely unrelated to the outcome, SBP. Hence, the total number of covariates equals 17. We chose this data set for various reasons. First, it addresses a fairly standard, and well-known prediction problem with an interesting mix of covariates (binary, nominal and continuous). Second, its large sample size allows to a) use the OLS estimates from the entire set as the benchmark, because these have very small standard errors (typically \(\approx 0.01\)); and b) to split the set in many independent subsets with sample sizes as often encountered in clinical studies, such as \(n=50,200\). This enables us to evaluate various (shrinkage) methods on many real, relatively small sample data sets. 
Third, from applying OLS to the entire data set, we have \(R^{2}=0.34\), which is neither trivially low nor high. The data cannot be shared, but we provide a synthetic copy which qualitatively renders the same results as those presented here. ## 2 Methods All methods fit the linear model \(Y_{i}=\beta_{0}+X_{i}\boldsymbol{\beta}=\beta_{0}+\sum_{j=1}^{17}\beta_{j}X_{ij}+\epsilon_{i},\epsilon_{i}\sim N(0,\sigma^{2})\), with \(i=1,\ldots,n\), \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{17})\) and covariates \(X_{i}=(X_{i1},\ldots,X_{i17})\). Before discussing the methods, we state the evaluation criteria, which all focus on prediction. ### Evaluation criteria First, note that the 'true' value of \(\mathbf{\beta}\) is obtained from applying OLS to the entire data set, as this rendered estimates with extremely small standard errors. Training sets \(T^{s}\) and their complementary test sets \(T^{\prime s}\) are indexed by \(s\). The true value of the prediction for any (test) sample \(i\) with covariates \(X_{i}\) then equals \(\eta_{i}=\beta_{0}+X_{i}\mathbf{\beta}\). We evaluate the methods on three criteria: 1. _Prediction accuracy_. We use the mean squared error of the predictions (MSEp). For a given subset \(s\) this is defined as \[\text{MSEp}^{s}=1/|T^{\prime s}|\sum_{i\in T^{\prime s}}(\eta_{i}-\hat{\eta}_{i}^{s})^{2},\] (1) with, for test sample \(i\), \(\hat{\eta}_{i}^{s}=\hat{\beta}_{0}^{s}+X_{i}\hat{\mathbf{\beta}}^{s}\), where \(\hat{\beta}_{0}^{s}\) and \(\hat{\mathbf{\beta}}^{s}\) are estimated from samples in training set \(T^{s}\). As we have the 'true' predicted values \(\eta_{i}\) in this setting, we use those in (1) instead of \(y_{i}\), which is often used when \(\eta_{i}\) is not available, rendering the prediction squared error. Of note: we checked that the latter leads to the same conclusions as the former. 2. _Calibration of the predictions._ As we are in the setting of knowing the true values, we use the term 'calibration' as it is used in measurement technology: we quantify how close the predictions (which are derived from measurements) are to an accurate benchmark, in this case the true values. For this, we regress predictions \(\hat{\eta}_{i}^{s}\) on their true values \(\eta_{i}\) per subset \(s\), including an intercept as well. The slope of this regression, which we term 'cslope', serves as a calibration metric as it should be close to 1, and quantifies the possibly biasing effect of shrinkage across the range of predictions. Note that this deviates from what has become known as the 'calibration slope' in the literature, which results from regressing the observed values on the predictions. The latter provides an assessment of the overall agreement of observed values and predictions, but has been claimed to be a misnomer, partly because it is (also) a measure of spread of the predictions (Stevens and Poppe, 2020). 3. _Uncertainty of the predictions_. We evaluate this by the mean coverage of the confidence intervals of the predictions \(\mathbf{\eta}=(\eta_{1},\ldots,\eta_{N})\). In the classical penalized ridge regression setting we use the 95% Gaussian Wald intervals with standard errors corrected for prior uncertainty of \(\mathbf{\beta}\), as derived in Marra and Wood (2012). For Bayesian methods, we simply use the (2.5%, 97.5%) quantiles of the posteriors to create 95% credible intervals for the predictions \(\eta_{i}\).
Note that confidence intervals of predictions \(\mathbf{\eta}\) do not include the measurement error \(\epsilon_{ij}\) as prediction intervals do. For simplicity, we decided to focus on the former as these more directly relate to the shrinkage of regression coefficients \(\mathbf{\beta}\), and less so to how the error variance \(\sigma^{2}\) is estimated. ### Standard solutions We study OLS, stepwise selection, lasso and ridge as standard solutions with no or just one global penalty parameter. The intercept is not penalized. Note that step is also a shrinkage method, but of an extreme nature: it shrinks completely to 0 or not at all, hence somewhat similar to the use of a spike-and-slab prior in a Bayesian model. Here, step is the standard step implementation in R, using forward-backward selection and AIC to select a model. We use lasso as implemented in glmnet, using lambda.min resulting from 10-fold cross-validation (CV). For ridge, we estimated penalties using marginal likelihood optimization as implemented in mgcv(Wood, 2011), which, as opposed to glmnet, provides confidence intervals (Marra and Wood, 2012). Results from 10-fold CV were very similar, and hence not shown. ### Multi-penalty solutions Multi-penalty solutions allow the use of different penalties for groups of covariates. We focus on ridge-type solutions for three reasons: a) our focus lies on prediction and not on variable selection; b) standard software implementations such as mgcv allow efficient estimation of the penalties; and c) standard ridge performs relatively well (see Supp Fig 2). In some settings, the use of covariate groups is very natural, e.g. when some covariates represent similar entities (e.g. genes), or when main effects plus interactions are included. In other cases, like ours, choice of the covariate groups implies a level of subjectivity. Therefore, we also assess the performance of multi-penalty approaches when the covariate groups are chosen randomly; that is, when the prior information used to form the covariate groups is useless. Besides differential penalization another option is not to shrink one (group of) covariates, for example because there is substantial evidence that these effect the outcome. We will explore this option as well. We now define the covariate groups that were used: * Two groups, \(G=2\). The first 3 covariates (age, gender, BMI) are one group, supposedly because these are known to relate to SBP from previous studies. The other 14 covariates are in the second group. Two options are explored: ridge_2 penalizes both groups, separately, whereas ridge_2un leaves the first group unpenalized. * Three groups, \(G=3\), ridge_3. As above, but the nominal covariate, ethnicity, is a seperate group as this is a nominal covariate with five levels; here, 'Dutch' is chosen as the baseline level. * Two random groups. Three covariates are randomly picked to belong to group 1, the other 14 belong to group 2. Randomization is repeated for each training data set. Referred to as ridge_2r and ridge_2unr, the latter corresponding to leaving the smallest group of covariates unpenalized. Riley et al. (2021) and others have reported the instability of the penalty parameters for standard solutions like ridge. This instability is likely to increase when estimating multiple penalty parameters, as less data per penalty parameter is available then. Bayesian solutions may be a worthwhile alternative to shrink these penalties with an additional prior to counter such instability. 
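To illustrate the multi-penalty idea in a self-contained way, the sketch below fits a generalized ridge model with a separate penalty per covariate group via the closed-form solution. It is a minimal stand-in, not the mgcv-based implementation used here: the penalties are fixed by hand instead of being estimated by marginal likelihood, and all covariate values, coefficients and penalty values are arbitrary illustrative choices.

```python
import numpy as np

def grouped_ridge(X, y, groups, lambdas):
    """Generalized ridge: beta_hat = (X'X + Lambda)^{-1} X'y, with a
    group-specific penalty on the diagonal of Lambda.
    groups : group label (0, 1, ...) per column of X
    lambdas: one penalty per group, e.g. (lambda_1, lambda_2)"""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])                          # add intercept column
    pen = np.concatenate([[0.0], [lambdas[g] for g in groups]])    # intercept unpenalized
    beta = np.linalg.solve(Xd.T @ Xd + np.diag(pen), Xd.T @ y)
    return beta                                                    # beta[0] is the intercept

def msep(beta, X_test, eta_true):
    """Mean squared error of the predictions, as in Eq. (1), against the 'true' eta."""
    eta_hat = beta[0] + X_test @ beta[1:]
    return np.mean((eta_true - eta_hat) ** 2)

# Hypothetical usage: a strong group (first 3 covariates) and a weaker remainder.
rng = np.random.default_rng(1)
n, p = 50, 17
beta_true = np.r_[np.repeat(0.3, 3), np.repeat(0.05, 9), np.zeros(5)]
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)
groups = np.r_[np.zeros(3, dtype=int), np.ones(p - 3, dtype=int)]
b_single = grouped_ridge(X, y, groups, (5.0, 5.0))    # ordinary ridge: one common penalty
b_double = grouped_ridge(X, y, groups, (0.5, 10.0))   # differential penalties, ridge_2-like
X_test = rng.standard_normal((1000, p))
eta_true = X_test @ beta_true
print(msep(b_single, X_test, eta_true), msep(b_double, X_test, eta_true))
```

In the analyses below the penalties are not fixed but estimated, either by marginal likelihood (mgcv) or by placing priors on them in the Bayesian variants, and it is exactly there that their instability becomes relevant.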
Moreover, the extreme case: one penalty per parameter, referred to as local shrinkage, may benefit calibration, as illustrated further on. ### Bayesian solutions Shrinkage is a very natural concept in Bayesian statistics. Depending on the type of shrinkage, frequentist and Bayesian solutions may be very similar. For example, the frequentist ridge and lasso estimates of \(\mathbf{\beta}\) are equal to the posterior mode estimate of \(\mathbf{\beta}\) when using a Gaussian or Laplacian prior with a fixed precision parameter that relates proportionally to the penalty parameter for fixed sample size. Bayesian methodology, however, can model the variability of the penalty parameter, which holds several promises. First, it counters potential instability of estimates of penalty parameter(s) by using a (weakly) informative prior for it. Second, it propagates uncertainty of penalty parameter(s). This follows from the fact that for a single \(\beta\) and random penalty parameter \(\lambda\) we have \(V(\beta)=E_{\lambda}[V(\beta|\lambda)]+V_{\lambda}[E(\beta|\lambda)].\) Finally, it allows local shrinkage, either by group, or even per covariate. Note that, just as for OLS, an advantage of local shrinkage is that the results are less sensitive to (differences in) scale and effect sizes, as the local penalty can adapt. Extensions to Bayesian ridge regression differ in how they allow the penalty parameters to vary, a priori. Table 2.4 specifies the priors that we used. Here, \(g(k)\) denotes the map of covariates to groups: \(\{1,\ldots,17\}\rightarrow\{1,\ldots,G\}\), with \(G=2,3\). EB refers to 'empirical Bayes', meaning that the penalty parameter is considered fixed and estimated by maximizing marginal likelihood. IG is the conjugate inverse-gamma prior, and C\({}^{+}(0,1)\) is the half-Cauchy prior (see Supp Fig 1) for the standard deviation(s), which has fairly heavy tails and was advocated as a good default prior by Gelman (2006); Polson and Scott (2012). Moreover, \(\sigma^{2}\), the error variance in the linear model (set to 1 in above's prior for the logistic), is endowed with either a Jeffrey's prior or a vague inverse gamma (results very similar). It is standard practice to include \(\sigma^{2}\) in the prior of \(\beta_{k}\) in Bayesian linear regression. Since we observed that the global versions of Bay_EB and Bay_IG performed very similarly to their frequentist counterpart (ridge; see Supp Fig 3), we primarily focus on the C\({}^{+}\) prior for the grouped and local settings. ## 3 Results ### Prediction accuracy We first consider prediction accuracy, as measured by the mean squared error of the predictions, MSEp (1). In all figures, results are based on 400 training subsets of sizes \(n=50,200\), which due to the size of the entire data set show no or little overlap in samples. Predictions are evaluated on complementary test sets. Below, we discuss the results for several comparisons. 
\begin{table} \begin{tabular}{|l|l|l|l|} \hline Name & Prior \(\beta\) & Prior penalty \(\lambda\) & Type of shrinkage \\ \hline Bay\_EB & \(\beta_{k}\sim N(0,\sigma^{2}\lambda^{-1})\) & \(\lambda=\bar{\lambda}_{\text{EB}}\) & global, fixed \\ Bay\_IG & \(\beta_{k}\sim N(0,\sigma^{2}\lambda^{-1})\) & \(\lambda^{-1}\sim\text{IG}(0.001,0.001)\) & global, vague \\ Bay\_glo & \(\beta_{k}\sim N(0,\sigma^{2}\lambda^{-1})\) & \(\lambda^{-1/2}\sim\text{C}^{+}(0,1)\) & global, weakly informative \\ Bay\_2 (3) & \(\beta_{k}\sim N(0,\sigma^{2}\lambda^{-1}_{g(k)})\) & \(\lambda^{-1/2}_{g}\sim\text{C}^{+}(0,1)\) & grouped, weakly informative \\ Bay\_loc & \(\beta_{k}\sim N(0,\sigma^{2}\lambda^{-1}_{k})\) & \(\lambda^{-1/2}_{k}\sim\text{C}^{+}(0,1)\) & local, weakly informative \\ \hline \end{tabular} \end{table} Table 2: Priors

**Standard solutions**: We first compare the standard methods. Suppl. Fig. 2 depicts the performances. For \(n=50\), we observe that ridge performs better than lasso, which is substantially better than stepwise selection and OLS. For \(n=200\) the gap between ridge and lasso becomes much smaller, and so do the other gaps.

**Multi-penalty solutions**: Fig 1 shows that the use of multiple penalties can improve prediction accuracy (lower MSEp). For n=50, we observe a substantial improvement for \(G=2\) penalties, likely caused by the large difference in penalties for the two groups (Supp Fig 4; first and third boxplot). For n=200, the improvement for \(G=2\) is marginal, but the larger sample size benefits the estimation of one more penalty, rendering improved accuracy for \(G=3\). In this setting ridge_2un, which does not penalize the first three covariates, performs on par with ridge_2. This is reasonable, because these covariates are relatively strong, so need little penalization. The figure also shows, however, that when the two groups are assigned at random, ridge_2unr performs worse than ridge_2r (with suffix 'r' denoting the random group setting), as it cannot adapt when the unpenalized group is relatively weak and would hence have benefitted from penalization. Importantly, also observe that even when the groups are random, ridge_2r performs - on average - (nearly) on par with ridge, again due to the adaptive penalties. The improvement with the multi-penalty approach becomes very tangible in Fig. 3. It shows that OLS and ridge need many more samples to achieve the same median MSEp as ridge_2: approximately 135 and 85 samples, respectively, versus 50.

**Adding Bayesian solutions**: We first compare the standard (empirical) Bayesian ridge regression solutions, Bay_EB and Bay_IG, with their frequentist counterpart, ridge. Suppl. Fig. 3 shows very little difference in MSEp between those methods. This is expected, as for both methods the posterior mode estimate coincides with the frequentist estimate. Small differences are due to the use of the posterior mean instead of the posterior mode as a summary for the predictions. Fig. 2 compares each of the three Bayesian methods with C\({}^{+}\) priors to their frequentist counterparts. Here, Bay_loc is contrasted with OLS as both methods do not imply any grouping on the covariates. Likewise, Bay_2 is contrasted with ridge_2 (2 groups) and Bay_glo is contrasted with ridge (1 group). For the latter two comparisons we observe that differences in MSEp are minor. A more striking difference is observed for OLS vs Bay_loc. While the latter cannot compete with the global shrinkage methods (for \(n=50\)), it does perform much better than OLS.
Apparently, the default shrinkage of the normal-C\({}^{+}\) prior substantially stabilizes the \(\beta\) estimates in such small sample size settings. The effect of the local normal-C\({}^{+}\) prior for \(\beta\) is depicted in Supp. Fig. 5 for two \(n=50\) subsets with extreme OLS estimates of one of the most important coefficients, \(\beta_{\text{BMI}}\). We observe that the normal-C\({}^{+}\) prior of Bay_loc shrinks the extreme OLS estimates in the right direction.

### Calibration

Fig. 4 shows the 'cslopes' resulting from regressing test sample predictions \(\hat{\eta}_{i}^{s}\) against true \(\eta_{i}\) for all subsets \(s\) for OLS, Bay_loc, ridge, Bay_glo, ridge_2 and Bay_2. We observe that OLS and Bay_loc outperform the other methods on calibration. OLS and Bay_loc are competitive: the first is unbiased, but with larger variability of the slopes; in fact, the root MSEs of the slope estimates (as compared to 1) are comparable: 0.249 (OLS) vs 0.232 (Bay_loc). The bias of ridge and Bay_glo is evident; shrinkage compresses the slopes. Differential penalization using two covariate groups improves results as compared to global regularization, with a small edge for Bay_2 w.r.t. ridge_2 in terms of decreased variability, likely due to the extra regularization of the penalties. Differences in calibration performance are somewhat smaller for \(n=200\) (Supp. Fig. 6). To elaborate on the inferior calibration of ridge and Bay_glo compared to OLS and Bay_loc, consider the estimated coefficients of a strong covariate (in the entire study), BMI, from the \(n=50\) subsets. Supp. Fig. 7 clearly shows that ridge and Bay_glo over-shrink \(\beta_{\text{BMI}}\) due to the common shrinkage factor shared with the weaker covariates, whereas Bay_loc strikes a good balance between a small amount of bias and reducing variability compared to the OLS estimates.

### Quantifying uncertainty: confidence interval of predictions

Uncertainty of the predictions \(\boldsymbol{\eta}\) results from the variability of \(\boldsymbol{\beta}\), which includes two components: one conditional on fixed penalty parameter(s) \(\boldsymbol{\lambda}\) and one reflecting the variability of \(\boldsymbol{\lambda}\). Riley et al. (2021) show that the latter, the ridge penalties, may be very variable across small subsets. Indeed, the left-hand side of Figure 5 confirms this for our data: the penalty estimates may differ 2-3 natural logs in magnitude from one subset to another (\(n=50\)), for standard ridge and Bay_glo alike. Hence, the extra regularization of Bay_glo has little effect here. When multiple penalties are estimated, however, extra regularization can improve their stability, as illustrated by Supp. Fig. 4: Bay_2 compares favourably to ridge_2 for stability of \(\lambda_{2}\). Nevertheless, the between-subset variability of \(\boldsymbol{\lambda}\) remains considerably large. Therefore, we argue that it is important to propagate the uncertainty of \(\boldsymbol{\lambda}\) into the estimation of \(\boldsymbol{\beta}\) when analysing _one_ subset, provided that this reflects the _between_ subset variability, as the latter is usually not available. The right-hand side of Figure 5 shows this: when using Bay_glo the posterior variability of \(\boldsymbol{\lambda}\) for ten random subsets approximates the between-subset variability of the ridge penalty (left-hand side) fairly well, in particular in order of magnitude.
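The decomposition \(V(\beta)=E_{\lambda}[V(\beta|\lambda)]+V_{\lambda}[E(\beta|\lambda)]\) mentioned earlier makes this propagation tangible. The following toy Monte Carlo sketch for a single ridge-type coefficient uses invented numbers chosen only to mimic a penalty that varies a few natural logs between subsets; it does not correspond to any of the fitted models.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy conjugate setup for one coefficient:
# beta | lambda ~ N(m(lambda), v(lambda)) with
# m(lam) = xty / (xtx + lam) and v(lam) = sigma2 / (xtx + lam)
xtx, xty, sigma2 = 30.0, 12.0, 1.0
m = lambda lam: xty / (xtx + lam)
v = lambda lam: sigma2 / (xtx + lam)

# penalty uncertainty: a spread of a few natural logs, as in Figure 5
lam = np.exp(rng.normal(loc=np.log(5.0), scale=1.0, size=100_000))

total_var = np.mean(v(lam)) + np.var(m(lam))  # E_lambda[V] + V_lambda[E]
fixed_var = v(5.0)                            # variance at one fixed lambda

# the V_lambda[E(beta|lambda)] term is exactly what a fixed-penalty
# analysis omits when propagating uncertainty to the predictions
print(total_var, fixed_var)
```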
The penalties are usually not the primary estimands of interest; what matters most is to what extent their variability propagates towards \(\boldsymbol{\beta}\) and, eventually, the predictions \(\boldsymbol{\eta}\). Therefore, we finalize the comparison by evaluating coverages of the confidence intervals of \(\boldsymbol{\eta}\). As before, we focus on OLS, ridge and ridge_2 and contrast these with Bay_loc, Bay_glo and Bay_2 in Table 3. Both ridge and ridge_2 are used with uncertainty propagation of the penalties, as implemented in mgcv's predict.gam function with argument unconditional = TRUE. First, while we observe that OLS and Bay_loc are competitive in terms of coverage, the latter results in narrower, and hence slightly more useful, intervals. Second, unlike for the introductory example, ridge and Bay_glo are competitive in terms of coverage, and also in terms of mean width of the intervals. Finally, for the grouped penalties the extra shrinkage imposed by Bay_2 pays off as it maintains good coverage, unlike ridge_2, which seems to render too narrow intervals. Note that in the latter case, the 10% quantile of the coverage of the predictions (as computed from the test individuals) drops to only 0.772 for ridge_2, whereas it equals 0.925 for Bay_2. Note that the mean coverages of ridge and ridge_2 drop slightly (from 0.944 to 0.920 and from 0.896 to 0.864, respectively) when not adjusting the standard errors (unconditional = FALSE in predict.gam). For \(n=200\), results are qualitatively similar, although the intervals of OLS are now somewhat narrower than those of Bay_loc (Supp Table 4). ### Conclusions, linear case Below we list the conclusions from the linear regression case. 1. Shrinkage benefits prediction accuracy compared to OLS and stepwise selection. 2. Differential shrinkage for groups of covariates may substantially benefit prediction accuracy compared to global shrinkage. Therefore, the same prediction accuracy can be achieved with a smaller sample size. 3. Penalty estimates vary substantially between small subsets of samples. Hence, it is relevant to propagate this variability when quantifying uncertainty of the predictions. 4. Shrinkage may indeed lead to some level of miscalibration, although minimally for Bayesian shrinkage with a local C\({}^{+}\) prior. The latter is competitive to OLS on this matter and on coverage, while rendering better prediction accuracy. Prediction accuracy is worse, though, than that of the global regularization methods. 5. Bayesian shrinkage with a grouped C\({}^{+}\) prior outperforms its frequentist counterparts in terms of interval estimation. In case of global shrinkage, the two are very competitive, with the latter slightly less conservative. ## 4 Calibration for logistic regression: external simulation Here, we study the small sample logistic regression setting. This differs from the linear one in three essential ways: a) binary outcomes are much less information-rich than continuous ones; b) the logistic function flattens the impact of large \(\beta\)'s; and c) the maximum likelihood estimator (MLE) is biased (Firth, 1993). We focus on prediction accuracy and calibration, as standard shrinkage methods like lasso and ridge with cross-validated penalties generally improve the first, but may deteriorate the latter. This is intrinsic due to the introduced bias, but may be worsened by the conventional tuning of the penalty parameter: minimization of cross-validated prediction error, which may be far from optimal for calibration. 
Hence, we discuss some alternatives. Firth's solution to adjust bias (Firth, 1993) uses a fixed penalty, defined on the level of the information matrix, and was shown to perform favourably compared to the tuning methods (Van Calster et al., 2020). Sullivan and Greenland (2013) propose a similar, alternative solution, which also accounts for the aforementioned issues a) and b): simply fix the prior variance of the \(\beta\)'s to 0.5, equivalent to setting the ridge penalty \(\lambda=1/0.5=2\). Their argument is that for the small sample logistic setting one has to 'dare' to be fairly informative, as the outcomes are information-poor. Based on this, we propose a somewhat more objective alternative that does not require any tuning either: use Bay_glo or Bay_loc

\begin{table} \begin{tabular}{|c|r r||r r|} \hline & \multicolumn{2}{c||}{Coverage} & \multicolumn{2}{c|}{Width} \\ Methods & Classical & Bayes & Classical & Bayes \\ \hline OLS, Bay\_loc & 0.939 & 0.971 & 2.334 & 2.149 \\ ridge, Bay\_glo & 0.944 & 0.961 & 1.521 & 1.607 \\ ridge\_2, Bay\_2 & 0.896 & 0.966 & 1.216 & 1.470 \\ \hline \end{tabular} \end{table} Table 3: Mean coverage (target: 0.95) and width of confidence intervals of predictions for \(n=50\), contrasting each classical method with its Bayesian counterpart

as defined in Table 2, but with a \(C^{+}(0,\sqrt{0.5})\) instead of a \(C^{+}(0,1)\) prior for the standard deviation, as the former matches a median prior variance of 0.5. These priors are depicted in Supp. Fig. 1. So, we allow for some variation in the prior variances, thereby enabling adaptation to cases in which (some of the) \(\beta\)'s are more (or less) extreme. For evaluation we follow one of the simulation set-ups by Van Calster et al. (2020): \(n=50,\mathbf{\beta}=(\beta_{1},\ldots,\beta_{5})=(0.2,0.2,0.2,0.5,0.8)\) and intercept \(\beta_{0}\) tuned such that the average event probability equals 0.5, rendering a challenging setting of \(25/5=5\) events per covariate. Covariates \(X\) are simulated from a multivariate normal with means 0, variances 1, and correlations 0.5. Response \(Y\) is then simulated from a Bernoulli with success probabilities \(1/(1+\exp(-\beta_{0}-X\mathbf{\beta})).\) We refer to this setting as 'Moderate signal'. Additionally, to study how results adapt to the signal strength, we simulate the 'Weak and Strong signal' settings by using \(\mathbf{\beta}_{\text{weak}}=\mathbf{\beta}/3,\) and \(\mathbf{\beta}_{\text{strong}}=3\mathbf{\beta}\). Calibration is assessed by the 'cslope', which results from regressing the estimated linear predictors against the true ones (using the real \(\mathbf{\beta}\)) for a large test set. A slope close to 1 indicates good calibration. Note that Van Calster et al. (2020) report the classical calibration slope, which results from regressing observations on predictions. The latter in fact measures a mix of calibration and spread of the predictions (Stevens and Poppe, 2020), which is why we opt for the cslope. In our case, too strong shrinkage will render cslope \(\ll 1\). The MSE of the predictions (MSEp) is assessed by the average squared error of those predicted probabilities w.r.t. the true ones. Figure 6 displays the results based on 50 training sets for ML (maximum likelihood); Firth; ridgeCV: ridge with cross-validated penalty; ridge05: ridge with fixed penalty 1/0.5; Bay_glo05 and Bay_loc05: global and local regularization with \(C^{+}(0,\sqrt{0.5})\) priors for the standard deviation(s). Below we list the conclusions.
* We confirm that ridgeCV performs well in terms of prediction accuracy, but poorly on calibration, with cslopes \(\ll 1\). Firth calibrates better than both ML and ridgeCV.
* The solution by Sullivan and Greenland (2013), ridge05, is competitive to Firth in terms of calibration when the signal is weak or moderate, but overshrinks when the signal is strong. It is generally superior to ML and Firth for prediction.
* Bay_glo05 is generally fairly competitive to ridgeCV in terms of prediction accuracy. On average it calibrates better, but shows more variability. It calibrates less well than Firth and Bay_loc05.
* Bay_loc05 is competitive to Firth in terms of calibration, but is superior for prediction, in particular in the weak signal setting.
* Bay_loc05 is competitive to ridge05 in terms of calibration. It adapts better to the change in signal at the price of somewhat more variation. It is competitive in terms of prediction accuracy, taking into account that variation towards low MSEp is actually desirable.

Conclusions are similar when based on the classical calibration slope for the strong signal setting, but less so for the weak and moderate signal settings, as the classical calibration slopes are more affected by the variability of the predictions. All-in-all, we conclude that for calibration Firth, ridge05, and Bay_loc05 are competitive to one another and superior to ML and the global penalization methods with adaptive penalty, ridgeCV and Bay_glo05. The latter two, though, are superior in terms of prediction accuracy when the signal is weak. In that case Bay_loc05 outperforms Firth in terms of prediction accuracy. Unlike ridge05, Bay_loc05 adapts to the signal strength, but shows somewhat more variability due to the non-fixed penalty. Note that an important advantage of Bay_loc05 is that, unlike Firth and ridge05, it straightforwardly provides uncertainty estimation of the predictions, including the uncertainties of the penalties.

## 5 Software and reproducibility

### Software

Below, we list the software packages used for the various models.

1. OLS and stepwise selection: R's lm and step functions.
2. Linear ridge, global and grouped penalty: gam function in R-package mgcv (Wood, 2011). Unlike glmnet this allows for uncertainty computations.
3. Logistic ridge: glmnet, as this was also used by Van Calster et al. (2020). Firth's correction: logistf (Heinze and Schemper, 2002).
4. Lasso: glmnet (Friedman et al., 2010).
5. Bayes, linear: shrinkage software (Leday, 2022), which is optimized for computational efficiency, hence convenient for the repeated calculations on data splits. Results highly agree with those of r-stan.
6. Bayes, logistic: r-stan (Stan Development Team, 2022).

### Data sharing

As is the case for many cohort studies, our primary data source, the Helius data, cannot be shared publicly. This impedes methodological studies like this one, because such large \(N\) studies are very useful for checking the results of various methods on small subsets, as we did. Therefore, we share a synthetic copy of our data set which may be used to a) qualitatively reproduce our results; or b) evaluate other methods than those proposed here. We verified whether results from the synthetic copy agreed with those from the real data, which is the case (see Supp Fig 8 for MSEp). The imputation-based method to generate the synthetic data is described in the Supp Mat.

### Do it yourself

One may certainly argue that results and conclusions might differ somewhat for other data than those used here.
The following scheme may be useful to assess which shrinkage method works best for the data at hand. Prediction accuracy can be assessed by cross-validation principles. To evaluate calibration and coverage, one needs to know 'real' predictions in a setting that mimics one's own. For that, run an OLS or, preferably, the more stable Bay_loc on the data and obtain \(\hat{\mathbf{\beta}}\) as well as an estimate of the noise. Then simulate the response \(y\) many times from the regression model using the real design matrix \(X\) and those estimated parameters. Then, our scripts (or other software for the proposed models) can be used to assess the performances of the methods in one's own data-centric simulation setting.

### Availability

Annotated software scripts and R-markdown files to reproduce our results are available from github: [https://github.com/markvdwiel/ThinkBeforeShrink](https://github.com/markvdwiel/ThinkBeforeShrink). This repository also contains a synthetic copy of the Helius data and an example script to guide users.

## 6 Discussion

We observed that grouping of covariates to allow for differential penalties may improve predictive performance. Here, we only studied the low-dimensional linear regression setting, but we have observed similar results for the high-dimensional logistic ridge setting (Van de Wiel et al., 2016; van Nee et al., 2021). One could argue that the grouping of covariates is subjective. However, we demonstrated that the results are robust against misspecification of the groups. Moreover, it is much less subjective than a commonly used strategy, as stated in Van Calster et al. (2020): "Alternatively, a less complicated model can be considered, for example by discarding many predictors a priori.". The latter strategy basically assigns an infinite penalty to the excluded covariates a priori. We believe it is better to leave those in, but allow differential shrinkage for those as compared to the others. If calibration is key, global shrinkage methods with flexible penalties, both frequentist and Bayesian ones, are usually not very suitable. This is not surprising, as these penalty parameter(s) adapt to predict optimally in terms of accuracy, which is a different goal than calibration. We showed that Bayesian local shrinkage is a good alternative which competes with conventional methods that do not tune penalties, such as OLS, ML, Firth and ridge with a fixed penalty. Note that the importance of having calibrated predictions depends on a) the possibility of recalibrating predictions in a later stage of the study; and b) the need or desire to interpret the predictions on an absolute scale. For the latter, the design of the study is also very relevant; often recalibration will be needed anyhow when applying the predictor to a population with somewhat different characteristics than the one represented by the study, a very common case for medical studies. While the grouping of covariates can aid in improving prediction accuracy, it may deteriorate coverage of the confidence intervals of predictions in the classical setting. Here, the additional regularization by the \(C^{+}(0,1)\) prior as invoked by the Bayesian procedure helps to improve coverage, while maintaining competitiveness on prediction accuracy and calibration. The importance of quantifying uncertainty of the predictions differs from study to study. However, in any study it will be useful to know it, also to assess whether the sample size should be increased to lower this uncertainty to an acceptable level.
We discussed the linear and logistic case, which could be regarded as two extremes: the former being information-rich with unbiased ML (= OLS) estimates, the latter being information-poor with biased ML estimates. This means that in the latter case mild shrinkage is also beneficial for calibration. Moreover, it may be relatively more beneficial to use a subjective prior (or penalty) in the logistic setting. We discussed two examples that performed well: a prior variance equal to 1/2 (Sullivan and Greenland, 2013), or, less subjectively, a hyperprior for the variance with median equal to 1/2. A good alternative may be to use a historical prior, which is tuned to available data from similar studies (Neuenschwander et al., 2010; MacLehose and Hamra, 2014). Then, the solution with a hyperprior may be preferable over one that fixes the variance, as the former allows the current study to deviate somewhat more when it does not behave similarly. For our applications, computational time was not an issue at all: all methods ran within seconds for given data sets. Nevertheless, the classical methods fitted substantially faster than the Bayesian ones, giving the former an edge for analysing larger scale studies. Whether this balances against the demonstrated benefit of the higher-level shrinkage that Bayesian methods can provide will depend on the main aim(s) of the study (prediction accuracy / calibration / uncertainty quantification). In general, Bayesian methods gain popularity in epidemiological research (MacLehose and Hamra, 2014), which assists their acceptance for general use. Note that, on purpose, we did not tweak the Bayesian methods in terms of convergence checks or use of other hyper-priors, as these would imply further tuning, rendering the comparison with less flexible methods unfair. In terms of software, we recommend using either mgcv or R-stan with the \(C^{+}\) priors, as both incorporate estimation of multiple penalty parameters and provide intervals with, in most settings, fairly good coverage. The latter has an edge when calibration is key, as it allows for local penalties, or when stabilization of grouped penalties is relevant, e.g. for uncertainty quantification. Supp Fig 12 shows a flowchart with our recommendations on when to use which method. Our work is limited in scope, so we discuss several important extensions. The first one is variable selection. As this is a different goal than prediction, other priors and penalties should be discussed. In the classical setting many variants of the lasso (adaptive, group, hierarchical) become relevant, compared to more traditional stepwise selection techniques. In the Bayesian setting, one may wish to include spike-and-slab priors and/or Zellner's g-prior, as these target variable selection (George and Foster, 2000). Alternatively, posterior selection techniques that apply to the fairly dense ridge-type methods used here may be very competitive to those more sparse formulations (Bondell and Reich, 2012). A second extension is to study the effect of shrinkage in the context of causal inference. Many causal inference frameworks make use of prediction methods to account for confounding or non-random treatment allocation. Unlike in predictive modelling, the emphasis in causal inference is primarily placed on bias, and much less so on variance reduction. This makes the use of shrinkage less natural, in particular for the treatment effect.
Nevertheless, some (local) shrinkage of the other covariates may be beneficial when \(p\) is relatively large compared to \(n\), possibly in combination with double-robust estimation (Avagyan and Vansteelandt, 2021) to counter misspecification of the model. A third extension is the multi-regression setting, as often encountered in high-dimensional multiple testing, e.g. when relating gene expression to phenotype while correcting for confounders like age and gender. In such a setting, shrinking effects across similar features (e.g. genes) may improve effect size estimates and multiple testing properties (Van de Wiel et al., 2012). Moreover, a multi-regression setting allows tuning of the hyper-parameters of the penalty's prior across features using empirical Bayes (Leday et al., 2017). Shrinkage is only a partial solution for underpowered studies. In the end, an increase in sample size will often be needed to draw firm conclusions (Riley et al., 2020). However, we do believe shrinkage may play an important role in the research cycle, which for many practical reasons often starts with a fairly small study. Then, as demonstrated, well-thought-out global or grouped shrinkage methods can help to better assess the predictive potential of the study and quantify uncertainty of the predictions, whereas local shrinkage methods are able to reduce bias and improve calibration. Therefore, these shrinkage methods aid in deciding whether and how to extend the study.
2307.12680
Root Extraction in Finite Abelian Groups
We formulate the Root Extraction problem in finite Abelian $p$-groups and then extend it to generic finite Abelian groups. We provide algorithms to solve them. We also give the bounds on the number of group operations required for these algorithms. We observe that once a basis is computed and the discrete logarithm relative to the basis is solved, root extraction takes relatively fewer "bookkeeping" steps. Thus, we conclude that root extraction in finite Abelian groups is no harder than solving discrete logarithms and computing basis.
Udvas Acharjee, M S Srinath
2023-07-24T10:40:38Z
http://arxiv.org/abs/2307.12680v2
# Root Extraction in Finite Abelian Groups ###### Abstract We formulate the _Root Extraction problem_ in finite Abelian \(p\)-groups and then extend it to generic finite Abelian groups. We provide algorithms to solve them. We also give the bounds on the number of group operations required for these algorithms. We observe that once a basis is computed and the discrete logarithm relative to the basis is solved, root extraction takes relatively fewer "bookkeeping" steps. Thus, we conclude that root extraction in finite Abelian groups is _no harder_ than solving discrete logarithms and computing basis. ## 1 Introduction A simple form of the _root extraction_ is as follows: Let \(G=\langle P,Q\rangle\) where \[G\approx\frac{\mathbb{Z}}{\ell^{e}\mathbb{Z}}\times\frac{\mathbb{Z}}{\ell^{e} \mathbb{Z}} \tag{1}\] Consider an element \(K\in G\) and \(m,n\in\frac{\mathbb{Z}}{\ell^{e}\mathbb{Z}}\) such that, \[K=mP+nQ \tag{2}\] If element \(K\) and the multipliers \(m,n\) are known, the root extraction problem is to find a basis \(P,Q\) of \(G\) such that 2 holds. The following table summarizes this problem and related problems. \begin{tabular}{|l|c|c|c|} \hline \hline Problem & Element & Multipliers & Base Pts. \\ & \(K\) & \(m,n\) & \(P,Q\) \\ \hline \hline Exponentiation &? & ✓ & ✓ \\ \hline Extended DLP & ✓ &? & ✓ \\ \hline Root Extraction & ✓ & ✓ &? \\ \hline Basis computation & - & - &? \\ \hline \hline \end{tabular} The root extraction problem has been solved for the groups of the form (1) by Srinath in [5]. Similarly for the algorithms to solve the discrete logarithm problem see [8] and [7], for the basis computation problem see [7], and the square and multiply algorithm for exponentiation is given in [4, Chapter 9, SS9.2]. Lower bound for the root extraction problem in generic finite Abelian groups is given in [3]. The method to solve the root extraction problem discussed in [5] can be easily extended to any group \(G\) of the form: \[G\approx\underbrace{\frac{\mathbb{Z}}{p^{e}\mathbb{Z}}\times...\times\frac{ \mathbb{Z}}{p^{e}\mathbb{Z}}}_{N\text{-times}} \tag{3}\] Our objective is to extend the techniques given in [5] for finite Abelian groups which, by the fundamental theorem of finite Abelian groups, can be written as \[G\approx\prod_{i=1}^{N}\frac{\mathbb{Z}}{p_{i}^{e_{i}}\mathbb{Z}}\] For this, we will first solve the problem for finite Abelian \(p\)-groups and then extend the techniques to finite Abelian groups. ### Contributions of this work The following are the contributions of this work: 1. We have arrived at the necessary and sufficient conditions for the existence of a solution to the root extraction problem in finite Abelian \(p\)-groups. 2. We have provided an algorithm for root extraction in finite Abelian \(p\)-groups. The SageMath implementation of which can be found in [https://github.com/uacharjee14/Root-Extraction](https://github.com/uacharjee14/Root-Extraction). 3. We have extended the algorithm for extracting roots in finite Abelian \(p\)-groups to finite Abelian groups. 4. We conclude that root extraction in finite Abelian groups is no harder than solving discrete logarithms and computing basis. ## 2 Finite Abelian \(p\)-groups The group \(G\) that we consider in this section is a finite Abelian \(p\)-group having the structure \[G\approx\prod_{i=1}^{N}\frac{\mathbb{Z}}{p^{e_{i}}\mathbb{Z}} \tag{4}\] The \(e_{i}\)'s need not be distinct but we will assume without loss of generality that \(e_{j}\leq e_{j+1}\). 
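For concreteness, an element of a group of the form (4) can be stored as a coordinate vector with addition and scalar multiplication taken modulo \(p^{e_{i}}\) in each coordinate. The following is a small Python sketch of this representation, purely our own illustration (the paper's implementation uses SageMath):

```python
# the group Z/p^{e_1}Z x ... x Z/p^{e_N}Z, elements stored as coordinate tuples
p = 2
exps = (1, 2, 3)                      # e_1 <= e_2 <= e_3
mods = tuple(p ** e for e in exps)    # (2, 4, 8)

def add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, mods))

def smul(k, a):
    return tuple((k * x) % m for x, m in zip(a, mods))

# K = m_1*P_1 + m_2*P_2 + m_3*P_3 for the standard basis vectors P_i
multipliers = (1, 3, 5)
basis = [tuple(int(i == j) for j in range(3)) for i in range(3)]
K = (0, 0, 0)
for mi, Pi in zip(multipliers, basis):
    K = add(K, smul(mi, Pi))
print(K)  # (1, 3, 5)
```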
Also, we would denote the additive identity of the group \(G\) by \(0\). The concept of _basis_ of a finite Abelian group has been defined in [6, Definition 9.1]. We state it here in additive notation. **Definition 2.1** (Basis of a finite Abelian group).: A basis for a finite Abelian group \(G\) is an ordered set of group elements, \(\{Q_{1},\ldots,Q_{N}\}\) with the property that every \(B\in G\) can be uniquely expressed in the form \(B=b_{1}Q_{1}+\cdots+b_{N}Q_{N}\) with \(0\leq b_{i}<|Q_{i}|\) for \(1\leq i\leq N\). **Remark 2.2**.: Equivalently, it can be stated that a set \(\{Q_{1},\ldots,Q_{N}\}\) is a basis of \(G\) if 1. The set \(\{Q_{1},\ldots,Q_{N}\}\) is a _spanning_ set, i.e., every element \(B\in G\) can be expressed as \(B=b_{1}Q_{1}+\cdots+b_{N}Q_{N}\) with \(0\leq b_{i}<|Q_{i}|\) for \(1\leq i\leq N\). 2. The set \(\{Q_{1},\ldots,Q_{N}\}\) is _linearly independent_, i.e., if \(a_{1}Q_{1}+\cdots+a_{N}Q_{N}=0\) then \(a_{i}\equiv 0\mod|Q_{i}|\) for \(1\leq i\leq N\). We formulate the _Root Extraction Problem_ (REP) for groups of the form (4) as follows: **Problem 2.3** (Root Extraction Problem in finite Abelian \(p\)-group).: Let \(G\) be a finite Abelian \(p\)-group. Given \(m_{1},\ldots,m_{N}\) where \(m_{i}\in\frac{\mathbb{Z}}{p^{e_{i}}\mathbb{Z}}\) and \(K\in G\), find a basis \(P_{1},\ldots,P_{N}\) of \(G\) such that \(K=m_{1}P_{1}+\cdots+m_{N}P_{N}\) (\(|P_{i}|=p^{e_{i}}\)). We introduce some definitions and results that would be useful in solving the problem. Let \(\{Q_{1},\ldots,Q_{N}\}\) be a basis of \(G\) (that we might have computed using the algorithm in [7, §4]). Additionally, let \(\{Q_{1},\ldots,Q_{N}\}\) be sorted, i.e., \(|Q_{1}|\leq|Q_{2}|\leq\cdots\leq|Q_{N}|\). **Definition 2.4** (Primitive Element).: We call an element \(K=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\in G\) a _primitive_ element if \(p\nmid q_{i}\) for some \(1\leq i\leq N\). **Proposition 2.5**.: If \(K\in G\) is primitive w.r.t. some basis \(\{Q_{1},\ldots,Q_{N}\}\) then \(K\) is primitive w.r.t. any other basis of \(G\). _Proof:_ Assume that \(K\) is not primitive relative to some basis \(\{P_{1},\ldots,P_{N}\}\) of \(G\). Then \(K=p(m_{1}^{\prime}P_{1}+\cdots+m_{N}^{\prime}P_{N})=pK^{\prime}\); now we may write \(K^{\prime}=q_{1}^{\prime}Q_{1}+\cdots+q_{N}^{\prime}Q_{N}\). Hence by the uniqueness of representation \(q_{i}=pq_{i}^{\prime}\) for all \(i\). This contradicts our hypothesis that \(K\) is primitive w.r.t. \(\{Q_{1},\ldots,Q_{N}\}\). \(\square\) The above result says that the property of an element being _primitive_ is independent of the basis used in its representation. Next, we define a function \(\nu_{p}:G\to\mathbb{Z}_{\geq 0}\cup\{\infty\}\) as follows: **Definition 2.6**.: Let a non-zero element \(K\in G\) be written as \(K=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\). The function \(\nu_{p}:G\to\mathbb{Z}_{\geq 0}\cup\{\infty\}\) is defined as \(\nu_{p}(K)=\nu_{p}(q_{1}Q_{1}+\cdots+q_{N}Q_{N})=r\) where \(r\) is the highest non-negative integer such that \(p^{r}\mid q_{i},\forall i\). Also, by convention, \(\nu_{p}(0)=\infty\). **Remark 2.7** (The function \(\nu_{p}\) is well defined).: For a non-zero element \(K=q_{1}Q_{1}+\cdots+q_{N}Q_{N}=m_{1}P_{1}+\cdots+m_{N}P_{N}\in G\), where \(\{P_{1},\ldots,P_{N}\}\) is any other basis of \(G\), \(\nu_{p}(q_{1}Q_{1}+\cdots+q_{N}Q_{N})=\nu_{p}(m_{1}P_{1}+\cdots+m_{N}P_{N})\). _Proof:_ Now \(K=p^{r}K^{\prime}\), where \(K^{\prime}\) is primitive w.r.t. \(\{Q_{1},\ldots,Q_{N}\}\), so it is primitive w.r.t. \(\{P_{1},\ldots,P_{N}\}\) as well.
Let \(K^{\prime}=m_{1}^{\prime}P_{1}+\cdots+m_{N}^{\prime}P_{N}\) where \(p\nmid m_{i}^{\prime}\) for some \(i\). Then, \(\nu_{p}(m_{1}P_{1}+\cdots+m_{N}P_{N})=\nu_{p}(p^{r}m_{1}^{\prime}P_{1}+\cdots+p^{r}m_{N}^{\prime}P_{N})=r\). \(\Box\) **Proposition 2.8**.: Following are some properties of the function \(\nu_{p}\). 1. \(\nu_{p}(A)=\infty\) if and only if \(A=0\); 2. \(\nu_{p}(-A)=\nu_{p}(A)\); 3. \(\nu_{p}(A+B)\geq\min(\nu_{p}(A),\nu_{p}(B))\). _Proof:_ These properties follow directly from the definition of \(\nu_{p}\). Since properties 1 and 2 are straightforward, we only prove property 3 here. Let \[A=a_{1}Q_{1}+\cdots+a_{N}Q_{N}\mbox{ and }B=b_{1}Q_{1}+\cdots+b_{N}Q_{N}.\] Let \(\nu_{p}(A)=r_{A}\) and \(\nu_{p}(B)=r_{B}\) where \(r_{A}\leq r_{B}\). So, \(p^{r_{A}}|a_{i}\) and \(p^{r_{A}}|b_{i}\) for \(1\leq i\leq N\). This means \(p^{r_{A}}|(a_{i}+b_{i})\) for \(1\leq i\leq N\) and therefore \(\nu_{p}(A+B)\geq r_{A}=\min(\nu_{p}(A),\nu_{p}(B))\). \(\Box\) From this, we can see that the function defined above satisfies the properties of a group valuation function defined in [1, Chapter 2, §2.2]. Given an element \(K\in G\) we say that we can _extend_ it to a basis of \(G\) if we can find elements \(\{K_{2},\ldots,K_{N}\}\) such that \(\{K,K_{2},\ldots,K_{N}\}\) is a basis for \(G\). The process of extending an element \(K\) to a basis of \(G\) will be referred to as _basis extension_. **Theorem 2.9** (Necessary Condition for Basis Extension).: If an element \(Q=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\in G\) can be extended to a basis of \(G\) then \(Q\) is primitive. _Proof:_ We show that if \(Q\) is not _primitive_ then it cannot be extended to a basis of \(G\). Assume to the contrary that \(\{Q,\overline{Q}_{2},\ldots,\overline{Q}_{N}\}\) is a basis for \(G\). Then, since \(p\mid q_{k}\,\forall k\) as \(Q\) is not primitive, we have \(Q=pQ^{\prime}\). Let \(Q^{\prime}=q_{1}^{\prime}Q+q_{2}^{\prime}\overline{Q}_{2}+\ldots+q_{N}^{\prime}\overline{Q}_{N}\). So we have, \[Q=pq_{1}^{\prime}Q+pq_{2}^{\prime}\overline{Q}_{2}+\cdots+pq_{N}^{\prime}\overline{Q}_{N}\implies(pq_{1}^{\prime}-1)Q+pq_{2}^{\prime}\overline{Q}_{2}+\cdots+pq_{N}^{\prime}\overline{Q}_{N}=0.\] But this would mean \(pq_{1}^{\prime}-1=0\) (from linear independence), which in turn means \(p\) is a unit in \(\mathbb{Z}_{|Q|}\). However, \(|Q|\) is a power of \(p\), so \(p\) cannot be a unit in \(\mathbb{Z}_{|Q|}\) and therefore this is a contradiction. \(\Box\) **Theorem 2.10** (Sufficient Condition for Basis Extension).: An element \(Q=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\in G\) can be extended to a basis of \(G\) if \(Q\) is primitive and \(|Q|=p^{e_{k}}\) where \(k\) is the largest index such that \(p\nmid q_{k}\). _Proof:_ Assume that the said conditions hold, i.e., \(Q\) is _primitive_ and \(|Q|=p^{e_{k}}\), where \(k\) is the highest index such that \(p\nmid q_{k}\). Next, we show that the set \(\{Q_{1},\ldots,Q_{k-1},Q,Q_{k+1},\ldots,Q_{N}\}\) is a linearly independent and spanning set. Let, \[Q=q_{1}Q_{1}+\cdots+q_{k}Q_{k}+\cdots+q_{N}Q_{N} \tag{5}\] where \(p\nmid q_{k}\) and \(|Q|=p^{e_{k}}\). 1. _Linear independence_: Let, \[a_{1}Q_{1}+\cdots+a_{k}Q+\cdots+a_{N}Q_{N}=0.\] Substituting from 5 we have, \[(a_{1}+a_{k}q_{1})Q_{1}+\cdots+a_{k}q_{k}Q_{k}+\cdots+(a_{N}+a_{k}q_{N})Q_{N}=0.\] From the linear independence of the basis \(\{Q_{1},\ldots,Q_{N}\}\) we must have \(p^{e_{k}}\mid a_{k}q_{k}\) and \(p^{e_{i}}\mid(a_{i}+a_{k}q_{i})\) for \(i\neq k\). Since \(p\nmid q_{k}\), we get \(p^{e_{k}}\mid a_{k}\), i.e., \(a_{k}\equiv 0\mod|Q|\).
This also means that \(a_{k}\equiv 0\mod|Q_{i}|\) for all \(i<k\), and therefore \(a_{i}\equiv 0\mod|Q_{i}|\). Also, because \(|Q|=p^{e_{k}}\), for all \(i>k\) we have \(p^{e_{i}-e_{k}}\mid q_{i}\) and therefore \(q_{i}a_{k}\equiv 0\mod|Q_{i}|\). This means \(a_{i}\equiv 0\mod|Q_{i}|\) for \(i>k\). This establishes linear independence. 2. _Spanning_: Let \(B\in G\). Since \(\{Q_{1},\ldots,Q_{N}\}\) is a basis of \(G\), we must be able to write \(B\) as \[B=b_{1}Q_{1}+\cdots+b_{N}Q_{N}.\] Also, since \(p\nmid q_{k}\) in 5, \(q_{k}^{-1}\) exists in \(\frac{\mathbb{Z}}{p^{e_{k}}\mathbb{Z}}\). This means we can write \(Q_{k}\) as \[Q_{k}=q_{k}^{-1}(Q-q_{1}Q_{1}-\cdots-q_{k-1}Q_{k-1}-q_{k+1}Q_{k+1}-\cdots-q_{N}Q_{N}).\] This can be substituted in the expression of \(B\) to get a representation in terms of \(\{Q_{1},\ldots,Q_{k-1},Q,Q_{k+1},\ldots,Q_{N}\}\). \(\Box\) **Remark 2.11**.: The term _primitive element_ of a group has been used in [2] to mean an element that is part of some basis of a group. If \(K\in G\) is a primitive element, where \(G\) is an Abelian \(p\)-group of the form (3), then \(|K|=p^{e}\), so by Theorem 2.10 \(K\) must be a part of some basis. Therefore, our definition of primitive element for Abelian \(p\)-groups of the form (3) is equivalent to that in [2]. However, this is not the case for an arbitrary finite Abelian \(p\)-group, where an element of the group has to satisfy the sufficient conditions mentioned above to be a part of a basis. **Example 2.12** (A primitive element that is not a part of some basis).: Let \(G\approx\frac{\mathbb{Z}}{2\mathbb{Z}}\times\frac{\mathbb{Z}}{4\mathbb{Z}}\times\frac{\mathbb{Z}}{8\mathbb{Z}}\) and let the primitive element be \(Q=(1,0,2)\). Notice that \(|Q|=4\) and it does not satisfy the condition in Theorem 2.10. Assume to the contrary that \(\{Q_{1},Q,Q_{3}\}\) is a basis of \(G\). Now, \[2Q=(0,0,4)=4\cdot(0,0,1)\] Let \((0,0,1)=q_{1}Q_{1}+q_{2}Q+q_{3}Q_{3}\). So, \[2Q=4\cdot(0,0,1)=4\cdot(q_{1}Q_{1}+q_{2}Q+q_{3}Q_{3})=4\cdot q_{1}Q_{1}+4\cdot q_{3}Q_{3}\] Note that at least one of \(4\cdot q_{1}Q_{1}\) and \(4\cdot q_{3}Q_{3}\) is non-zero. This goes against the linear independence of \(\{Q_{1},Q,Q_{3}\}\) and therefore it cannot be a basis for \(G\). This is a contradiction. We will now define what we mean by _reducibility_ and _reduced form_ of a root extraction problem: **Definition 2.13** (Reducibility and Reduced Form).: Let \(K=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\in G\). If the REP has the parameters \(K\) and \(m_{1},\ldots,m_{N}\) where \(m_{i}\in\frac{\mathbb{Z}}{p^{e_{i}}\mathbb{Z}}\), then the problem is said to be _reducible_ if \(\nu_{p}(K)=\nu_{p}(\sum_{i=1}^{N}m_{i}Q_{i})\). Further, the new REP with parameters \(K^{\prime}\in G\) and \(m^{\prime}_{1},\ldots,m^{\prime}_{N}\), where \(K^{\prime}=\sum_{i=1}^{N}q^{\prime}_{i}Q_{i},q_{i}=p^{\nu_{p}(K)}q^{\prime}_{i}\) and \(m_{i}=p^{\nu_{p}(K)}m^{\prime}_{i}\), is said to be in _reduced form_. Additionally, in the reduced form we require that \(q^{\prime}_{i}=0\) when \(q_{i}=0\) and \(m^{\prime}_{i}=0\) when \(m_{i}=0\). In our algorithms, we will only solve the reduced form of the REP, so we state the following result: **Theorem 2.14** (Reduced form can be solved \(\iff\) REP can be solved).: Let \(G\) be a finite Abelian \(p\)-group with a sorted basis \(\{Q_{1},\ldots,Q_{N}\}\) and let the root extraction problem be as defined in Problem 2.3, where \(\nu_{p}(K)=r\) for some \(r\geq 0\). A solution for the REP exists if and only if a solution for the reduced form exists. Proof.: 1.
**Reduced form can be solved \(\implies\) REP can be solved:** Suppose we reduce \((K,m_{1},\ldots,m_{N})\) to \((K^{\prime},m^{\prime}_{1},\ldots,m^{\prime}_{N})\). Let \(\{P_{1},\ldots,P_{N}\}\) be a solution to the reduced form of the REP. So, \(K^{\prime}=m^{\prime}_{1}P_{1}+\cdots+m^{\prime}_{N}P_{N}\). Now, \(K=p^{r}K^{\prime}=p^{r}m^{\prime}_{1}P_{1}+\cdots+p^{r}m^{\prime}_{N}P_{N}=m_{1}P_{1}+\cdots+m_{N}P_{N}\). The solution is \(\{P_{1},\ldots,P_{N}\}\) in both cases. 2. **REP can be solved \(\implies\) Reduced form can be solved:** Suppose \((K,m_{1},\ldots,m_{N})\) is reduced to \((K^{\prime},m^{\prime}_{1},\ldots,m^{\prime}_{N})\). Let \(\{P_{1},\ldots,P_{N}\}\) be a solution to the REP. So, \(K=\sum_{i=1}^{N}q_{i}Q_{i}=\sum_{i=1}^{N}m_{i}P_{i}\). Also, \(K^{\prime}=q^{\prime}_{1}Q_{1}+\cdots+q^{\prime}_{N}Q_{N}\) from the reduced REP, and let \(K^{\prime\prime}=m^{\prime}_{1}P_{1}+\cdots+m^{\prime}_{N}P_{N}\); then clearly \(p^{r}K^{\prime}=p^{r}K^{\prime\prime}=K\) (as \(r=\nu_{p}(K)\)), which implies \(p^{r}(K^{\prime}-K^{\prime\prime})=0\). Let \(K^{\prime}-K^{\prime\prime}=R\), so \(|R|\leq p^{r}\). By definition, \(K^{\prime}=K^{\prime\prime}+R=m^{\prime}_{1}P_{1}+\cdots+m^{\prime}_{k}P_{k}+\cdots+m^{\prime}_{N}P_{N}+R\), where \(k\) is the maximum index such that \(p\nmid m^{\prime}_{k}\), so \(m^{\prime-1}_{k}\) exists in \(\frac{\mathbb{Z}}{|R|\mathbb{Z}}\). We also have \(|P_{k}|=p^{e_{k}}>p^{r}\) because \(m^{\prime}_{k}\neq 0\) and therefore \(p^{r}m^{\prime}_{k}P_{k}=m_{k}P_{k}\neq 0\). So from the sufficient condition of basis extension (Theorem 2.10), \(P_{k}+m^{\prime-1}_{k}R\) can be extended to a basis of \(G\), which is \(\{P_{1},\ldots,P_{k-1},P_{k}+m^{\prime-1}_{k}R,P_{k+1},\ldots,P_{N}\}\). Therefore we have \(K^{\prime}=m^{\prime}_{1}P_{1}+\cdots+m^{\prime}_{k}(P_{k}+m^{\prime-1}_{k}R)+\cdots+m^{\prime}_{N}P_{N}\). Here, the solution of the _reduced form_ need not be the same as that of the REP. Each time we reduce the problem we would need to find the highest index \(k\) such that \(p\nmid q_{k}\), and to solve the root extraction problem from there we would require the existence of \(m^{-1}_{k}\). So we would at least need that \(p\nmid m_{j}\) for some \(j\) such that \(|Q_{j}|=|Q_{k}|\). This would make sure that we can shuffle the basis \(\{Q_{1},\ldots,Q_{N}\}\) so that \(m_{k}\) (found using the above technique) has an inverse. For this, we have the following result: **Theorem 2.15**.: Let \(K_{1}=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\) and \(K_{2}=m_{1}P_{1}+\cdots+m_{N}P_{N}\) be two _primitive elements_ of \(G\) where \(|K_{1}|=p^{e}\). Also, let \(k\) be the highest index such that \(p\nmid q_{k}\) and \(l\) be the highest index such that \(p\nmid m_{l}\). If \(\nu_{p}(p^{j}K_{1})=\nu_{p}(p^{j}K_{2}),\forall j\) such that \(0\leq j\leq e\), then \(|Q_{k}|=|P_{l}|\). Proof.: If \(|Q_{k}|\neq|P_{l}|\), then without loss of generality we may assume that \(|P_{l}|>|Q_{k}|\). Let \(|P_{l}|=p^{e_{l}}\) and \(|Q_{k}|=p^{e_{k}}\), so \(e_{l}>e_{k}\). Now, \(p^{e_{l}-1}K_{1}=p^{e_{l}-1}(q_{1}Q_{1}+\cdots+q_{N}Q_{N})\), and \(p^{e_{l}-1}K_{2}=p^{e_{l}-1}(m_{1}P_{1}+\cdots+m_{N}P_{N})\). But then we have \(\nu_{p}(p^{e_{l}-1}K_{1})\neq\nu_{p}(p^{e_{l}-1}K_{2})\), because \(p\mid q_{i},\forall i>k\), whereas \(p\nmid m_{l}\). This contradicts the condition that we started with.
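The two quantities used throughout this section, the order of an element and \(\nu_{p}\), are easy to read off from the coordinates relative to a basis. A small Python sketch (our own, reusing the tuple representation from the earlier sketch, not the paper's SageMath code) is:

```python
from math import gcd

def nu_p(coords, p):
    """nu_p of an element given its coordinates w.r.t. a basis (Definition 2.6)."""
    if all(c == 0 for c in coords):
        return float("inf")
    r = 0
    while all(c % (p ** (r + 1)) == 0 for c in coords):
        r += 1
    return r

def order(coords, mods):
    """Order of an element of Z/mods[0]Z x ... x Z/mods[N-1]Z."""
    o = 1
    for c, m in zip(coords, mods):
        oi = m // gcd(c % m, m)   # order of the coordinate c in Z/mZ
        o = o * oi // gcd(o, oi)  # running lcm
    return o

def is_primitive(coords, p):
    return any(c % p != 0 for c in coords)

# e.g. the element (0, 2, 8) of Z/2Z x Z/8Z x Z/16Z (it reappears in Example 2.22 below)
print(nu_p((0, 2, 8), 2), order((0, 2, 8), (2, 8, 16)), is_primitive((0, 2, 8), 2))
# 1 4 False
```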
### Solving for a Simpler Structure We will first solve the problem for the structure \[G\approx\frac{\mathbb{Z}}{p^{e_{1}}\mathbb{Z}}\times\frac{\mathbb{Z}}{p^{e_{2}} \mathbb{Z}} \tag{6}\] where \(e_{1}\leq e_{2}\). The root extraction problem may be stated as: **Problem 2.16**.: Given \(G\) as mentioned above, \(K\in G\), \(m_{1}\in\frac{\mathbb{Z}}{p^{e_{1}}Z}\) and \(m_{2}\in\frac{\mathbb{Z}}{p^{e_{2}}Z}\) find a basis \(P_{1},P_{2}\) such that \(K=m_{1}P_{1}+m_{2}P_{2}\). We will state the necessary and sufficient conditions for the existence of a solution of the REP for this structure. We will call this the existence theorem. **Theorem 2.17** (Existence Theorem).: The solution to the REP problem exists if and only if 1. The REP is reducible (See definition 2.13), 2. \(|K|=|m_{1}Q_{1}+m_{2}Q_{2}|\) for some basis \(Q_{1},Q_{2}\) of \(G\). _Proof:_ Let \(\{P_{1},P_{2}\}\) be the solution of the REP. Then, \(K=m_{1}P_{1}+m_{2}P_{2}\) and \(\nu_{p}(m_{1}P_{1}+m_{2}P_{2})=\nu_{p}(m_{1}Q_{1}+m_{2}Q_{2})\) because the value of \(\nu_{p}\) only depends on the coefficients \(m_{1},m_{2}\). So, the REP is reducible. Also, \(|K|=|m_{1}P_{1}+m_{2}P_{2}|=|m_{1}Q_{1}+m_{2}Q_{2}|\) follows from the definition of order of an element and the fact that \(\{Q_{1},Q_{2}\}\) and \(\{P_{1},P_{2}\}\) are sorted bases for the group \(G\). Now, we give a way to construct a solution when the two conditions are met. Assume that the two conditions of the _Existence Theorem_ hold and we have reduced our inputs to \(K=q_{1}Q_{1}+q_{2}Q_{2},m_{1},m_{2}\). Note that we have \(|K|=|m_{1}Q_{1}+m_{2}Q_{2}|\) from the conditions 1 and 2, so if \(k\) is the highest index such that \(p\nmid q_{k}\) then \(k\) is the highest index such that \(p\nmid m_{k}\) (when \(e_{1}=e_{2}\) the basis \(\{Q_{1},Q_{2}\}\) may be shuffled to get this). This would mean that \(m_{k}^{-1}\) exists in \(\frac{\mathbb{Z}}{|Q_{k}|\mathbb{Z}}\). We will consider different cases and solve each of them. **Case 1:**: \(|K|=p^{e_{2}}\) (equivalently, \(p\nmid m_{2}\)). Then the solution is \(\{Q_{1},m_{2}^{-1}(K-m_{1}Q_{1})\}\). **Case 2:**: \(|K|=|m_{1}Q_{1}+m_{2}Q_{2}|=p^{e_{1}}\) (equivalently, \(p^{e_{2}-e_{1}}\mid m_{2}\) and \(p\nmid m_{1}\)). Then the solution is \(\{m_{1}^{-1}(K-m_{2}Q_{2}),Q_{2}\}\); **Case 3:**: \(|K|=p^{e},e_{1}<e<e_{2}\) (equivalently, \(p^{e_{2}-e}\mid m_{2},p^{e_{2}-e+1}\nmid m_{2},p\nmid m_{1}\)). Then the solution is \(\{m_{1}^{-1}q_{1}Q_{1},(m_{2}^{\prime})^{-1}q_{2}^{\prime}Q_{2}\}\) where \(m_{2}=p^{e_{2}-e}m_{2}^{\prime},q_{2}=p^{e_{2}-e}q_{2}^{\prime}\). \(\Box\) **Remark 2.18**.: When \(e_{1}=e_{2}\) this way of constructing the solution is the same as the one given in [5]. We will extend this technique to an arbitrary finite Abelian \(p\)-group. For this: 1. A slightly stronger version of the _existence_ conditions mentioned is required. 2. The number of cases considered in this way will be too large and this method will turn out to be clumsy. We need to bring down the number of cases considered by grouping the cases that can be handled in a similar way. ### Solving the REP for finite Abelian \(p\)-groups We will consider the structure given in 4. Let us first state and prove the _existence theorem_ for this structure. The proof we give is constructive, and our algorithm to extract roots will readily follow. We state a lemma before the existence theorem. **Lemma 2.19**.: Let \(G\) be a finite Abelian \(p\)-group with basis \(\{Q_{1},\ldots,Q_{N}\}\). 
Also, let \(K=\sum_{i=1}^{N}q_{i}Q_{i}\) and \(K^{\prime}=\sum_{i=1}^{N}q_{i}^{\prime}Q_{i}\) be two elements of \(G\) such that \(K=p^{e}K^{\prime}\). If \(q_{i}=0\implies q_{i}^{\prime}=0\) then \(\nu_{p}(K)=e+\nu_{p}(K^{\prime})\). _Proof:_ Now, \(p^{\nu_{p}(K^{\prime})}\mid q_{i}^{\prime}\) for all \(1\leq i\leq N\) and \(q_{i}=p^{e}q_{i}^{\prime}\) from the theorem statement. So, \(p^{e+\nu_{p}(K^{\prime})}\mid q_{i}\) for all \(1\leq i\leq N\). This means, \(\nu_{p}(K)\geq e+\nu_{p}(K^{\prime})\). Note that since \(K=p^{e}K^{\prime}\) so \(\nu_{p}(K)\geq e\). Now, \(p^{\nu_{p}(K)}\mid q_{i}\) i.e., \(p^{\nu_{p}(K)}\mid p^{e}q_{i}^{\prime}\) for all \(1\leq i\leq N\). Since, we have \(q_{i}=0\implies q_{i}^{\prime}=0\) i.e., \(q_{i}\neq 0\implies q_{i}^{\prime}\neq 0\) so \(p^{\nu_{p}(K)-e}\mid q_{i}^{\prime}\) for all \(1\leq i\leq N\). Therefore \(\nu_{p}(K^{\prime})\geq\nu_{p}(K)-e\) i.e., \(\nu_{p}(K)\leq e+\nu_{p}(K^{\prime})\). \(\Box\) **Remark 2.20**.: Let \(K=\sum_{i=1}^{N}q_{i}Q_{i}\in G\) and \(K=p^{e}K^{\prime}\) where \(K^{\prime}=\sum_{i=1}^{N}q_{i}^{\prime}Q_{i}\) then it is not in general true that \[\nu_{p}(K)=e+\nu_{p}(K^{\prime}) \tag{7}\] One example could be \(K=(0,4)\in\frac{\mathbb{Z}}{2Z}\times\frac{\mathbb{Z}}{8\mathbb{Z}}\) and \(K^{\prime}=(1,2)\) such that \(K=2K^{\prime}\). **Theorem 2.21** (Existence Theorem).: Let \(G\) be a finite Abelian \(p\)-group with the structure given in 4 with basis \(\{Q_{1},\ldots,Q_{N}\}\). Also let the root extraction problem (REP) be as defined in problem 2.3. Suppose \(K\) can be written as \(K=q_{1}Q_{1}+\cdots+q_{N}Q_{N}\) and \(|K|=p^{e}\). Then the solution to REP exists if and only if 1. \(|K|=|(m_{1}Q_{1}+\cdots+m_{N}Q_{N})|\), and 2. \(\nu_{p}(p^{j}K)=\nu_{p}(p^{j}(m_{1}Q_{1}+\cdots+m_{N}Q_{N})),\forall j\) such that \(0\leq j<e\). _Proof:_ If a solution to REP exists, then the properties hold.If \(K=\sum_{i=1}^{N}m_{i}P_{i}\), then \(|K|=|\sum_{i=1}^{N}m_{i}P_{i}|=|\sum_{i=1}^{N}m_{i}Q_{i}|\) and also \(\nu_{p}(p^{j}K)=\nu_{p}(p^{j}(m_{1}P_{1}+\cdots+m_{N}P_{N}))=\nu_{p}(p^{j}(m_{ 1}Q_{1}+\cdots+m_{N}Q_{N}))\) for \(0\leq j<e\). If the properties hold a solution to REP exists.We will use induction on the rank of \(G\) calling it \(n\). Verifying that a solution exists for \(n=1\) if the properties hold is easy. For \(n=2\) we already constructed a solution in the previous subsection. Also, it is to be noted that for all the cases that we have solved the problem whenever \(m_{i}=0\) we had \(P_{i}=Q_{i}\) i.e., the \(i\)-th _basis_ element was not altered. We now assume that for all \(n<N\) the problem can be solved and try to construct a solution for \(n=N\). As \(\nu_{p}(K)=\nu_{p}((m_{1}Q_{1}+\cdots+m_{N}Q_{N}))\) so the problem is _reducible_. Let \(K^{\prime},m^{\prime}_{1},\ldots,m^{\prime}_{N}\) be the _reduced problem_. If \(K^{\prime}=\sum_{i=1}^{N}q^{\prime}_{i}Q_{i}\), let \(k\) be the greatest index such that \(p\nmid q^{\prime}_{k}\) and so \(p\nmid m^{\prime}_{k}\) (assuming the necessary shuffling of basis is done using result 2.15). Construct a set of indices \(I_{Q}\) that contains all indices \(i\) for which \(p^{e_{k}}q^{\prime}_{i}Q_{i}=0\). Similarly, construct a set of all indices \(I_{M}\) that contains all indices for which \(p^{e_{k}}m^{\prime}_{i}Q_{i}=0\). Define a new subproblem with parameters \(K_{1}=\sum_{i\in I_{Q}}q_{i}Q_{i}\) and \(\overline{m}_{i}\) where \(\overline{m}_{i}=m_{i},i\in I_{M}\), otherwise \(\overline{m}_{i}=0\). 
Notice that this problem is reducible as well because \(\nu_{p}(K)=\nu_{p}(K_{1})\) and \(\nu_{p}((m_{1}Q_{1}+\cdots+m_{N}Q_{N}))=\nu_{p}(\sum_{i=1}^{N}\overline{m}_{i }Q_{i})\). We reduce this problem to obtain the parameters \(K^{\prime}_{1}=\sum_{i\in I_{Q}}q^{\prime}_{i}Q_{i}\) and \(\overline{m}^{\prime}_{i}\). This subproblem has a straightforward solution where \(P_{i}=Q_{i},i\neq k\), and \(P_{k}=\overline{m}^{\prime-1}_{k}(K^{\prime}_{1}-\sum_{i\neq k}\overline{m}^{ \prime}_{i}Q_{i})\). This solves the subproblem and we obtain our new _basis_\(\{P_{1},\ldots,P_{N}\}\) (this follows from result 2.10) such that \(K_{1}=\sum_{i=1}^{N}\overline{m}_{i}P_{i}=\sum_{i\in I_{M}}m^{\prime}_{i}P_{i}\). If \(e_{k}=e\) then we are done and there is no need to define another subproblem. When \(e_{k}\neq e\) the other subproblem can be defined with the parameters \(K_{2}=K-K_{1}=\sum_{i\notin I_{Q}}q_{i}Q_{i}\) and \(\overline{m}_{i}=m_{i},i\notin I_{M}\), otherwise \(\overline{m}_{i}=0\). Notice that for all \(i\leq k\) we have \(\overline{m}_{i}=0\) and similarly, \(K_{2}\) can be regarded as \(K_{2}=\sum_{i=1}^{N}\overline{q}_{i}Q_{i}\) where \(\overline{q}_{i}=q_{i},i\notin I_{Q}\), otherwise \(\overline{q}_{i}=0\). This problem is equivalent to solving a problem where \(n=N-k\) and our mentioned conditions hold. 1. As, \(|K|=|K_{2}|\) and \(|(m_{1}Q_{1}+\cdots+m_{N}Q_{N})|=|(\overline{m}_{k+1}Q_{k+1}+\cdots+\overline{m }_{N}Q_{N})|\). So, \(|K_{2}|=|(\overline{m}_{k+1}Q_{k+1}+\cdots+\overline{m}_{N}Q_{N})|\) 2. Because \(\nu_{p}(p^{j}K)=\nu_{p}(p^{j}(m_{1}Q_{1}+\cdots+m_{N}Q_{N}))\) for \(0\leq j<e\) so we have from lemma 2.19\(\nu_{p}(p^{j}K_{2})=\nu_{p}(p^{j}(\overline{m}_{k+1}Q_{k+1}+\cdots+\overline{m }_{N}Q_{N}))\) for \(1\leq p^{j}<|K_{2}|\). Therefore, from the induction hypothesis we have a solution for this problem as well say \(\{\overline{P}_{k+1},\ldots,\overline{P}_{N}\}\) with the additional condition that \(\overline{P}_{i}=P_{i}\) whenever \(\overline{m}_{i}=0\). So, taking \(\overline{P}_{i}=P_{i},\forall i\leq k\) we have the _basis_ set \(\{\overline{P}_{1},\ldots,\overline{P}_{N}\}\). Now, \(\sum_{i=1}^{N}m_{i}\overline{P}_{i}=\sum_{i\in I_{M}}m_{i}\overline{P}_{i}+\sum_{ i\notin I_{M}}m_{i}\overline{P}_{i}=K_{1}+K_{2}=K\). \(\Box\). ### The Algorithm for root extraction in finite Abelian \(p\)-groups We have divided the algorithm into three parts: **Algorithm 1:**: An algorithm that returns a solution if the existence conditions are satisfied. **Algorithm 2:**: An algorithm to convert the given inputs to _reduced form_. **Algorithm 3:**: The algorithm for root extraction. We have assumed that the following are available globally and can be accessed and modified by each algorithm that we have mentioned : 1. The group description \(G\), 2. \(Q_{1},\ldots,Q_{N}\) (the computed _basis_ of group \(G\)), 3. \(q_{1},q_{2},\ldots,q_{N}\) ( the discrete logarithm of \(K\) w.r.t \(\{Q_{1},\ldots,Q_{N}\}\)), 4. \(m_{1},\ldots,m_{N}\) (the coefficients supplied as inputs to the root extraction problem). 
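Before the pseudocode, the reduction step (Algorithm 2 below) can be summarised in Python: find the largest power of \(p\) dividing all retained coordinates and divide it out of both the coordinates and the multipliers. This is only a rough sketch of the idea, assuming that the divisions of the \(m_{i}\) are exact (which the existence conditions guarantee); it is not the SageMath implementation.

```python
def reduce_problem(q, m, I_Q, I_M, p):
    """Divide the coordinates q[i] (i in I_Q) and multipliers m[i] (i in I_M)
    by the largest power of p dividing all q[i], i in I_Q (cf. Definition 2.13)."""
    r = 0
    while any(q[i] for i in I_Q) and all(q[i] % p ** (r + 1) == 0 for i in I_Q):
        r += 1
    if r:
        for i in I_Q:
            q[i] //= p ** r
        for i in I_M:
            m[i] //= p ** r
    return r

# the input of Example 2.22 below: q = [0, 2, 8], m = [0, 6, 4], p = 2
q, m = [0, 2, 8], [0, 6, 4]
reduce_problem(q, m, I_Q=[1, 2], I_M=[1, 2], p=2)
print(q, m)  # [0, 1, 4] [0, 3, 2]
```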
Input: \(G,K,m_{1},\ldots,m_{N}\)
Precondition: \(K\in G\), \(m_{i}\in\mathbb{Z}/p^{e_{i}}\mathbb{Z}\), \(e_{1}\leq e_{2}\leq\ldots\leq e_{N}\)
Output: \(P_{1},\ldots,P_{N}\)
Postcondition: \(K=m_{1}P_{1}+\cdots+m_{N}P_{N}\) and \(\langle P_{1},\ldots,P_{N}\rangle=G\)

```
1: if \(m_{1}=\ldots=m_{N}=0\) or \(q_{1}=\ldots=q_{N}=0\) then
2:   if \((\exists i:q_{i}\neq 0)\) or \((\exists i:m_{i}\neq 0)\) then
3:     raise Exception("Solution doesn't exist")
4:   else
5:     return \(Q_{1},\ldots,Q_{N}\)
6:   end if
7: end if
8: if \(|K|=|m_{1}Q_{1}+\cdots+m_{N}Q_{N}|\) and \(\nu_{p}(p^{e}q_{1}Q_{1}+\cdots+p^{e}q_{N}Q_{N})=\nu_{p}(p^{e}m_{1}Q_{1}+\cdots+p^{e}m_{N}Q_{N})\) for all \(0\leq e<\log_{p}(|K|)\) then
9:   pass the parameter \(K\) to Algorithm 3
10:  return \(P_{1},\ldots,P_{N}\)
11: else
12:  raise Exception("Solution doesn't exist")
13: end if
```
**Algorithm 1** Root extraction: existence check and solution

Input: \(K,I_{Q},I_{M}\)
Precondition: \(K\in G\), \(m_{i}\in\mathbb{Z}/p^{e_{i}}\mathbb{Z}\), \(e_{1}\leq e_{2}\leq\ldots\leq e_{N}\) (the conditions from Algorithm 1 are satisfied).
Output: \(K\) (the variables \(q_{1},\ldots,q_{N}\) and \(m_{1},\ldots,m_{N}\) are also altered)
Post-condition: The problem is reduced, i.e. \(\nu_{p}(K)=\nu_{p}(m_{1}Q_{1}+\cdots+m_{N}Q_{N})=0\)

**Reduce(\(K,I_{Q},I_{M}\)):**
```
1: find the largest \(r\) such that \(p^{r}\mid q_{i},\forall i\in I_{Q}\)
2: if \(r\neq 0\) then
3:   set \(q_{i}=q_{i}/p^{r},\forall i\in I_{Q}\)
4:   set \(m_{i}=m_{i}/p^{r},\forall i\in I_{M}\)
5:   \(K=\sum_{i\in I_{Q}}(q_{i}/p^{r})Q_{i}\)
6: end if
7: return \(K\)
```
**Algorithm 2** Reduction algorithm

**Number of group operations required in Algorithm 2** Only in step 5 are group operations performed. We exponentiate at most \(N\) times and add at most \(N\) times. This is bounded by \(O(Ne_{N}\log_{2}p+N)\) group operations, which in turn is bounded by \(O(Ne_{N}\log_{2}p)\).

**Number of group operations required in Algorithm 3** We first find a bound on the number of group operations in one iteration. In step 4 we call Algorithm 2, which is bounded by \(O(Ne_{N}\log_{2}p)\) group operations. After having found the index \(k\) in step 5, checking whether the order of \(K\) is \(p^{e_{k}}\) in step 6 requires one exponentiation, which is bounded by \(O(e_{N}\log_{2}p)\) group operations. Inside the if block, step 7 requires at most \(N\) exponentiations and additions, so it is bounded by \(O(Ne_{N}\log_{2}p+N)\) operations. Inside the else block, steps 10, 11, 12 and 13 have at most \(N\) exponentiations (and additions in step 13), so they are bounded by \(O(Ne_{N}\log_{2}p)\). The maximum of all these is \(O(Ne_{N}\log_{2}p)\). The loop has at most \(N\) iterations, so the total is bounded by \(O(N^{2}e_{N}\log_{2}p)\) group operations.

**Number of group operations required in Algorithm 1** Computing the order of \(K\) takes \(O(e_{N}^{2}\log_{2}p)\) group operations. In step 8, \(N\) additions and \(N\) exponentiations take \(O(Ne_{N}\log_{2}p+N)\). We need to do the same additions and exponentiations again \(\log_{p}(|K|)\) times, which is bounded by \(e_{N}\), so the total number of group operations here is bounded by \(O(Ne_{N}^{2}\log_{2}p)\). Algorithm 3 takes \(O(N^{2}e_{N}\log_{2}p)\) group operations. So the total number of group operations required is \(O((N+e_{N})Ne_{N}\log_{2}p)\).

We have provided some examples below:

**Example 2.22**.: Consider the group \(G\approx\frac{\mathbb{Z}}{2\mathbb{Z}}\times\frac{\mathbb{Z}}{8\mathbb{Z}}\times\frac{\mathbb{Z}}{16\mathbb{Z}}\), with the standard basis, \(K=(0,2,8)\) and \(m_{1}=0,m_{2}=6,m_{3}=4\).
Note that \(|(0,2,8)|=|(0,6,4)|=4\), \(\nu_{2}((0,2,8))=\nu_{2}((0,6,4))=1\) and \(\nu_{2}((0,4,0))=\nu_{2}((0,4,8))=2\), so our conditions are satisfied. Next, to solve the problem we reduce it to: \(K^{\prime}=(0,1,4),m_{1}=0,m_{2}=3,m_{3}=2\). Now, the highest index \(k\) such that \(p\nmid m_{k}\) is \(2\), and note that \(|K^{\prime}|=2^{e_{2}}=2^{3}=8\), so the solution is: \(P_{1}=(1,0,0),P_{2}=3^{-1}((0,1,4)-2(0,0,1))=(0,3,6),P_{3}=(0,0,1)\). **Example 2.23**.: We will use the same group as in Example 2.22 with standard basis. Let \(K=(1,2,2)\) and \(m_{1}=1,m_{2}=6,m_{3}=10\). Notice that \(|(1,2,2)|=|(1,6,10)|=8\), \(\nu_{2}((1,2,2))=\nu_{2}((1,6,10))=0,\nu_{2}((0,4,4))=\nu_{2}((0,4,4))=2\) and \(\nu_{2}((0,0,8))=\nu_{2}((0,0,8))=3\). This problem is already _reduced_. Now, in \(K\), \(2\nmid 1\) and \(|K|=8\), so in this case we will need to partition the problem. Clearly, we may write \(K=K_{1}+K_{2}\) where \(K_{1}=(1,0,0)\) and \(K_{2}=(0,2,2)\). We must also partition our coefficients \(\{1,6,10\}=\{1\}\cup\{6,10\}\). 1. Solving for \(K_{1},m_{1}\), i.e. \((1,0,0)\) and \(m_{1}=1\). We obtain \(P_{1}=(1,0,0)\). Note that \(P_{2},P_{3}\) are still the standard basis. 2. Solving the other sub-problem now, i.e. \(K_{2},m_{2},m_{3}\). We first reduce it to get \((0,1,1)\) and \(m_{2}^{\prime}=3,m_{3}^{\prime}=5\). This can be easily solved as \(P_{2}=(0,1,0),P_{3}=5^{-1}((0,1,1)-3\cdot(0,1,0))=(0,6,13)\). **Example 2.24**.: Consider \(G\approx\frac{\mathbb{Z}}{4\mathbb{Z}}\times\frac{\mathbb{Z}}{16\mathbb{Z}}\times\frac{\mathbb{Z}}{32\mathbb{Z}}\times\frac{\mathbb{Z}}{64\mathbb{Z}}\) with standard basis, \(K=(3,2,8,4)\), and let \(M=[1,6,4,12]\) denote the multipliers supplied as inputs to the root extraction problem. First, we ensure that our conditions are satisfied. 1. Note that \(|K|=|\sum_{i=1}^{4}m_{i}Q_{i}|=16\) 2. We need to check the \(\nu_{2}\) values for \(p^{j}=1,2,4,8\), i.e., check that \(\nu_{2}(p^{j}K)=\nu_{2}(p^{j}\sum_{i=1}^{4}m_{i}Q_{i})\) 1. \(\nu_{2}(3,2,8,4)=\nu_{2}(1,6,4,12)=0\) 2. \(\nu_{2}(2,4,16,8)=\nu_{2}(2,12,8,24)=1\) 3. \(\nu_{2}(0,8,0,16)=\nu_{2}(0,8,16,48)=3\) 4. \(\nu_{2}(0,0,0,32)=\nu_{2}(0,0,0,32)=5\) 3. The initial problem can be split as \(K_{1}=(3,0,8,0),M_{1}=(1,0,0,0)\) and \(K_{2}=(0,2,0,4),M_{2}=(0,6,4,12)\). Of these, the first one can be solved readily. We get \(P_{1}=(3,0,8,0)\). 4. We may further reduce and partition the second sub-problem: it reduces to \(K_{2}^{\prime}=(0,1,0,2),M_{2}^{\prime}=(0,3,2,6)\), which is partitioned into \(K_{3}=(0,1,0,0),M_{3}=(0,3,2,0)\) and \(K_{4}=(0,0,0,2),M_{4}=(0,0,0,6)\). \(K_{3},M_{3}\) can be readily solved. We obtain \(P_{2}=3^{-1}((0,1,0,0)-2Q_{3})=(0,11,26,0),P_{3}=Q_{3}=(0,0,1,0)\). 5. \(K_{4},M_{4}\) can be reduced to \(K_{4}^{\prime}=(0,0,0,1),M_{4}^{\prime}=(0,0,0,3)\) and solved. So \(P_{4}=3^{-1}(0,0,0,1)=(0,0,0,43)\). We also provide examples of cases when a solution to the root extraction problem doesn't exist. **Example 2.25**.: Let \(G\approx\frac{\mathbb{Z}}{4\mathbb{Z}}\times\frac{\mathbb{Z}}{16\mathbb{Z}}\) with standard basis, \(K=(1,0)\) and \(M=(m_{1},m_{2})=(0,4)\). Now we check the existence conditions. Note that \(|K|=|M|=4\). However, \(\nu_{2}(K)=0\) whereas \(\nu_{2}(M)=2\), so a solution to this problem does not exist. **Example 2.26**.: Consider \(G\approx\frac{\mathbb{Z}}{2\mathbb{Z}}\times\frac{\mathbb{Z}}{4\mathbb{Z}}\times\frac{\mathbb{Z}}{32\mathbb{Z}}\times\frac{\mathbb{Z}}{64\mathbb{Z}}\) with standard basis. Let \(K=(0,1,2,0)\) and \(M=(m_{1},m_{2},m_{3},m_{4})=(1,1,4,4)\). We check the existence conditions now. Note that \(|K|=|M|=16\), \(\nu_{2}(K)=\nu_{2}(M)=0\) and \(\nu_{2}(2K)=\nu_{2}(2M)=1\). 
However, \(\nu_{2}(4K)=\nu_{2}(0,0,8,0)=3\) and \(\nu_{2}(4M)=\nu_{2}(0,0,16,16)=4\), so \(\nu_{2}(4K)\neq\nu_{2}(4M)\). Therefore a solution to this problem doesn't exist. The algorithm for root extraction in finite Abelian \(p\)-groups [Algorithms 1, 2 and 3] has been implemented in SageMath and the code can be found in [https://github.com/uacharjee14/Root-Extraction](https://github.com/uacharjee14/Root-Extraction). ## 3 Finite Abelian Groups We will extend our results from the previous section to solve the _root extraction_ problem in generic finite Abelian groups. From the fundamental theorem of finite Abelian groups, the structure of a finite Abelian group \(G\) is \[G\approx\prod_{i=1}^{N}\frac{\mathbb{Z}}{p_{i}^{e_{i}}\mathbb{Z}}.\] We will assume that this structure is already known. This can be found using Sutherland's structure computation algorithm given in [7, §5]. This structure can be restated as: \[G\approx\prod_{j=1}^{N_{1}}\frac{\mathbb{Z}}{p_{1}^{e_{1j}}\mathbb{Z}}\times\cdots\times\prod_{j=1}^{N_{r}}\frac{\mathbb{Z}}{p_{r}^{e_{rj}}\mathbb{Z}}\] Here \(e_{ij}\leq e_{ik}\) for \(j\leq k\) and \(p_{i}\neq p_{j}\) for \(i\neq j\). The _Root Extraction Problem_ can be stated as: **Problem 3.1** (Root Extraction Problem in finite Abelian groups).: Given \(m_{11},\ldots,m_{rN_{r}}\) such that \(m_{ij}\in\frac{\mathbb{Z}}{p_{i}^{e_{ij}}\mathbb{Z}}\) and \(K\in G\), find a basis \(\{P_{11},P_{12},\ldots,P_{rN_{r}}\}\) of \(G\) such that \(K=m_{11}P_{11}+\cdots+m_{rN_{r}}P_{rN_{r}}\). By using Sutherland's algorithm [7, §5] we may find a basis \(\{Q_{1},\ldots,Q_{N}\}\) for the group \(G\) and then compute their orders. Then, using the basis (with known orders), we can partition the given group into Sylow \(p\)-subgroups and solve our problem in each of them using Algorithm 1 discussed in the previous section. Let \(\{Q_{1},Q_{2},\ldots,Q_{N}\}\) be a basis for \(G\), which can be renumbered as \(\{Q_{11},Q_{12},\ldots,Q_{rN_{r}}\}\) so that \(|Q_{ij}|=p_{i}^{e_{ij}}\). Then we use the Generalized Discrete Logarithm Algorithm [7, §3, Algorithm 2] to express \(K\) as \[K=q_{11}Q_{11}+q_{12}Q_{12}+\cdots+q_{rN_{r}}Q_{rN_{r}}.\] Let \(G_{i}=\langle Q_{i1},Q_{i2},\ldots,Q_{i(N_{i})}\rangle\) and \(K_{i}=q_{i1}Q_{i1}+\cdots+q_{i(N_{i})}Q_{i(N_{i})}\) where \(1\leq i\leq r\). Now it can be seen that \[G=G_{1}\oplus\cdots\oplus G_{r}\text{ and }K=K_{1}+\cdots+K_{r}.\] Note that the subgroup \(G_{i}\) is the Sylow \(p_{i}\)-subgroup of \(G\) and \(K_{i}\in G_{i}\) for all \(i\) such that \(1\leq i\leq r\). We then pass the following inputs to Algorithm 1: 1. \(G_{i}=span\{Q_{i1},Q_{i2},\ldots,Q_{i(N_{i})}\}\) (the group description for the Sylow \(p_{i}\)-subgroup) 2. \(K_{i}=q_{i1}Q_{i1}+\cdots+q_{i(N_{i})}Q_{i(N_{i})}\) (the element of the group \(G_{i}\)) 3. \(m_{i1},\ldots,m_{iN_{i}}\) (the coefficients) This will define our \(i\)-th subproblem (for the prime \(p_{i}\)). Finally, we make an important statement: **Theorem 3.2**.: The REP for finite Abelian groups has a solution if and only if each of the sub-problems has a solution. _Proof:_ 1. _If each of the sub-problems has a solution then the REP has a solution_: Let \(\beta_{i}=\{P_{i1},\ldots,P_{iN_{i}}\}\) be a solution for the \(i\)-th subproblem. Then, clearly, \(K=m_{11}P_{11}+\cdots+m_{rN_{r}}P_{rN_{r}}\). Also, \(\beta=\cup_{i=1}^{r}\beta_{i}=\{P_{11},\ldots,P_{rN_{r}}\}\) is a _basis_ for the group \(G\), as it is the union of the bases of all the Sylow \(p_{i}\)-subgroups of \(G\). 2. 
_If a solution to the REP exists then each of the subproblems also has a solution:_ Let \(\{P_{11},\ldots,P_{rN_{r}}\}\) be a solution to the REP. Then \(\langle P_{i1},\ldots,P_{iN_{i}}\rangle\) is a Sylow \(p_{i}\)-subgroup of \(G\). Since the Sylow \(p_{i}\)-subgroup of a finite Abelian group is unique, we have \(\langle P_{i1},\ldots,P_{iN_{i}}\rangle=\langle Q_{i1},\ldots,Q_{iN_{i}}\rangle\). Therefore, \(\{P_{i1},\ldots,P_{iN_{i}}\}\) is a solution to the \(i\)-th subproblem. Input: \(G,K,m_{11},...,m_{rN_{r}}\), a _sorted_ basis \(\{Q_{11},Q_{12},...,Q_{rN_{r}}\}\) for \(G\), \(|Q_{ij}|=p_{i}^{e_{ij}}\), \(q_{11},q_{12},...,q_{rN_{r}}\) such that \(K=q_{11}Q_{11}+q_{12}Q_{12}+...+q_{rN_{r}}Q_{rN_{r}}\) Precondition: \(K\in G,m_{ij}\in\frac{\mathbb{Z}}{p_{i}^{e_{ij}}\mathbb{Z}}\), \(e_{ij}\leq e_{ik}\) for \(j\leq k\), \(p_{i}\neq p_{j}\) for \(i\neq j\) Output: \(P_{11},P_{12},...,P_{rN_{r}}\) Postcondition: \(K=m_{11}P_{11}+m_{12}P_{12}+...+m_{rN_{r}}P_{rN_{r}}\) and \(\{P_{11},P_{12},...,P_{rN_{r}}\}\) is a basis of \(G\) **begin** ``` 1:set \(i=1,A=[\;]\) 2:while \(i\leq r\) do 3: define \(G^{\prime}=span\{Q_{i1},Q_{i2},...,Q_{iN_{i}}\}\) 4: set \(K^{\prime}=q_{i1}Q_{i1}+...+q_{iN_{i}}Q_{iN_{i}}\) 5: pass \(G^{\prime},K^{\prime},m_{i1},...,m_{iN_{i}}\) to Algorithm 1. 6: append output of Algorithm 1 to \(A\) 7: set \(i=i+1\) 8:endwhile 9:return \(A\) ``` **The number of group operations required in Algorithm 4** In step 4 we perform \(N_{i}\) additions for each \(i\). Now \(N_{i}\) is the rank of the Sylow \(p_{i}\)-subgroup; suppose \(N\) is the maximum among all such ranks, i.e., \(N=\max_{i=1}^{r}N_{i}\). Then the number of group operations for additions is bounded by \(O(N)\), and exponentiations take \(O(e_{iN_{i}}\log_{2}p_{i})\) group operations. So in the loop, the total number of group operations required is \(O(re_{iN_{i}}\log_{2}p_{i}+rN)\). Let \(e=\max_{i=1}^{r}e_{iN_{i}}\) and \(p=\max_{i=1}^{r}p_{i}\). Algorithm 1 uses \(O((e+N)Ne\log_{2}p)\). This we do for each of the \(r\) primes in the structure of \(G\), so the total number of group operations is bounded by \(O((e+N)rNe\log_{2}p)\). ### Discussion In all the algorithms presented till now, we have assumed that a basis \(\{Q_{1},\ldots,Q_{N}\}\) of the group and the group structure are known. We have also assumed that the discrete logarithm of the element \(K\) from Problem 3.1 with respect to the basis \(\{Q_{1},\ldots,Q_{N}\}\) is known. The bound on the number of group operations required in the generalized discrete logarithm algorithm can be found in [7, §3, equation 15] and for the basis computation algorithm this can be found in [7, §5, Corollary 3]. From the bound on the number of group operations required for Algorithm 4 we can see that _root extraction in finite Abelian groups is no harder than solving discrete logarithms and computing a basis_. Further, Damgard and Koprowski [3] have proved that the root extraction problem in finite Abelian groups of unknown orders has an exponential lower bound. We have shown that the number of group operations for root extraction is dominated by that of basis computation and solving discrete logarithms, which have exponential complexities. Thus, our results agree with those of Damgard and Koprowski.
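As a complement to the discussion above, the following short Python sketch (an illustration under an assumed coordinate representation, not the code from the repository linked earlier) shows how the order and \(\nu_{p}\) tests behind the existence conditions can be evaluated; it reproduces the verdict of Example 2.25.

```python
# Sketch of the existence test of Algorithm 1 on the data of Example 2.25:
# G = Z/4Z x Z/16Z, K = (1, 0), element m_1 Q_1 + m_2 Q_2 = (0, 4), p = 2.
from math import gcd

p = 2
moduli = [4, 16]

def vp(a):
    """p-adic valuation of a positive integer a."""
    v = 0
    while a % p == 0:
        a //= p; v += 1
    return v

def order(x):
    """Order of the element x in the product group (lcm of coordinate orders)."""
    o = 1
    for c, n in zip(x, moduli):
        oc = n // gcd(c % n, n) if c % n else 1
        o = o * oc // gcd(o, oc)
    return o

def nu(x):
    """nu_p(x): smallest p-adic valuation among the non-zero coordinates."""
    vals = [vp(c % n) for c, n in zip(x, moduli) if c % n]
    return min(vals) if vals else float("inf")

def smul(a, x):
    return [(a * c) % n for c, n in zip(x, moduli)]

K, M = [1, 0], [0, 4]
assert order(K) == order(M) == 4            # the order condition holds
e = vp(order(K))                            # |K| = p^e with e = 2
for j in range(e):                          # need nu_p(p^j K) = nu_p(p^j M)
    print(j, nu(smul(p**j, K)), nu(smul(p**j, M)))
# j = 0 gives 0 versus 2, so -- as Example 2.25 states -- no solution exists.
```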
2307.03955
Solutions to weighted complex m-Hessian Equations on domains in Cn
In this paper, we first study the comparison principle for the operator $H_{\chi,m}$. This result is used to solve certain weighted complex $m-$ Hessian equations.
Nguyen Van Phu, Nguyen Quang Dieu
2023-07-08T11:20:32Z
http://arxiv.org/abs/2307.03955v1
# Solutions to weighted complex \(m\)-Hessian Equations on domains in \(\mathbb{C}^{n}\) ###### Abstract In this paper, we first study the comparison principle for the operator \(H_{\chi,m}\). This result is used to solve certain weighted complex \(m-\) Hessian equations. 0 Footnote 0: _Key words and phrases:_\(m\)-subharmonic functions, Complex \(m\)-Hessian operator, \(m\)-Hessian equations, \(m\)-polar sets, \(m\)-hyperconvex domain. 0 Footnote 0: _Key words and phrases:_\(m\)-subharmonic functions, Complex \(m\)-Hessian operator, \(m\)-Hessian equations, \(m\)-polar sets, \(m\)-hyperconvex domain. ## 1 Introduction The complex Monge-Ampere operator plays a central role in pluripotential theory and has been extensively studied through the years. This operator was used to obtain many important results of the pluripotential theory in \(\mathbb{C}^{n},n>1\). In [1] Bedford and Taylor have shown that this operator is well defined in the class of locally bounded plurisubharmonic functions with range in the class of non-negative measures. Later on, Demailly generalized the work of Bedford and Taylor for the class of locally plurisubharmonic functions with bounded values near the boundary. In [1] and [2], Cegrell introduced the classes \(\mathcal{F}(\Omega),\mathcal{E}(\Omega)\) which are not necessarily locally bounded and he proved that the complex Monge-Ampere operator is well defined in these classes. Recently, in [3] and [4] the authors introduced \(m\)-subharmonic functions which are extensions of the plurisubharmonic functions and the complex \(m\)-Hessian operator \(H_{m}(.)=(dd^{c}.)^{m}\wedge\beta^{n-m}\) which is more general than the Monge-Ampere operator \((dd^{c}.)^{n}\). In [12], Chinh introduced the Cegrell classes \(\mathcal{F}_{m}(\Omega)\) and \(\mathcal{E}_{m}(\Omega)\) which are not necessarily locally bounded and the complex \(m\)-Hessian operator is well defined in these classes. On the other hand, solving the Monge - Ampere equation in the class of plurisubharmonic functions is important problem in pluripotential theory. In the classes of \(m\)-subharmonic functions, similar to the Monge-Ampere equation, the complex \(m\)-Hessian equation \(H_{m}(u)=\mu\) also plays a similar role. This equation was first studied by Li [13]. He solved the non-degenerate Dirichlet problem for this equation with smooth data in strongly \(m\)-pseudoconvex domains. One of its degenerate counterparts was studied by Blocki [14], where he solved the homogeneous equation with continuous boundary data. In [12], Cuong provided a version of the subsolution theorem for the complex \(m\)-Hessian equation in smoothly bounded strongly \(m\)-pseudoconvex domains in \(\mathbb{C}^{n}\). Next, in [12] he solved complex \(m\)-Hessian equation in the case measures \(\mu\) is dominated by \(m-\) Hessian operator of a bounded \(m-\) subharmonic function. In [16], the authors studied complex \(m\)-Hessian equation in the case when the measures \(\mu\) is dominated by \(m-\) Hessian operator of a function in the class \(\mathcal{E}_{m}(\Omega)\). These results partially extend earlier results obtained in [1] and [1] for the plurisubharmonic case. In this paper, we are concerned with the existence and uniqueness of certain weighted complex \(m\)-Hessian equations on bounded \(m-\)hyperconvex domains \(\Omega\) in \(\mathbb{C}^{n}\). 
Our work is directly motivated by [18] where the author investigated the similar question but for somewhat simpler operator acting on the Cegrell classes for plurisubharmonic function. Here by weighted complex \(m\)-Hessian equations we solve an equation of the form \(\chi(u(z),z)H_{m}(u)=\mu\) where \(\chi\) is a certain positive measurable function defined on \((-\infty,0)\times\Omega\) and \(\mu\) is a positive Borel measure on \(\Omega\). The paper is organized as follows. Besides the introduction, the paper has other four sections. In Section 2 we recall the definitions and results concerning the \(m\)-subharmonic functions which were introduced and investigated intensively in recent years by many authors (see [14], [1]). We also recall the Cegrell classes of \(m\)-subharmonic functions \(\mathcal{F}_{m}(\Omega)\), \(\mathcal{N}_{m}(\Omega)\) and \(\mathcal{E}_{m}(\Omega)\) which were introduced and studied in [12] and [15]. In Section 3, we present a version of the comparison principle for the weighted \(m-\) Hessian operator \(H_{\chi,m}\). Finally, in Section 4, we used the obtained results to study solutions to the weighted \(m-\) Hessian operator \(H_{\chi,m}.\) For the existence of the solution, we manage to apply Schauder's fixed point theorem, a method suggested by Cegrell in [11]. The problem is to create a suitable convex compact set and then appropriate continuous self maps. To make this work possible, we mention among other things, Lemma 4.5 giving us a sufficient condition for convergence in \(L^{1}(\Omega,\mu)\) of a weakly convergent sequence in \(SH^{-}_{m}(\Omega)\), where \(\mu\) is a positive Borel measure that does not charge \(m-\)polar sets. We also discuss a sort of stability of solutions of the weighted Hessian equations. A main technical tool is Lemma 4.9 about convergent in capacity of Hessian measures where we do not assume the sequence is bounded from below by a fixed element in \(\mathcal{F}_{m}(\Omega)\). ## 2 Preliminaries Some elements of pluripotential theory that will be used throughout the paper can be found in [10], [11], [12], while elements of the theory of \(m\)-subharmonic functions and the complex \(m\)-Hessian operator can be found in [1], [2]. Now we recall the class of \(m\)-subharmonic functions introduced by Blocki in [13] and the classes \(\mathcal{E}^{0}_{m}(\Omega)\), \(\mathcal{F}_{m}(\Omega)\) which were introduced by Chinh recently in [14]. Let \(\Omega\) be an open subset in \(\mathbb{C}^{n}\). By \(\beta=dd^{c}\|z\|^{2}\) we denote the canonical Kahler form of \(\mathbb{C}^{n}\) with the volume element \(dV_{2n}=\frac{1}{n!}\beta^{n}\) where \(d=\partial+\overline{\partial}\) and \(d^{c}=\frac{\partial-\overline{\partial}}{4i}\). **2.1** First, we recall the class of \(m\)-subharmonic functions which were introduced and investigated in [13]. For \(1\leq m\leq n\), we define \[\widehat{\Gamma}_{m}=\{\eta\in\mathbb{C}_{(1,1)}:\eta\wedge\beta^{n-1}\geq 0, \ldots,\eta^{m}\wedge\beta^{n-m}\geq 0\},\] where \(\mathbb{C}_{(1,1)}\) denotes the space of \((1,1)\)-forms with constant coefficients. **Definition 2.1**.: Let \(u\) be a subharmonic function on an open subset \(\Omega\subset\mathbb{C}^{n}\). Then \(u\) is said to be an \(m\)_-subharmonic_ function on \(\Omega\) if for every \(\eta_{1},\ldots,\eta_{m-1}\) in \(\widehat{\Gamma}_{m}\) the inequality \[dd^{c}u\wedge\eta_{1}\wedge\cdots\wedge\eta_{m-1}\wedge\beta^{n-m}\geq 0,\] holds in the sense of currents. 
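As a quick illustration of Definition 2.1 (a remark added here for the reader's convenience; it is an immediate consequence of the definitions above), note that the form \(\beta\) itself belongs to \(\widehat{\Gamma}_{m}\), since \[\beta^{k}\wedge\beta^{n-k}=\beta^{n}\geq 0\quad\text{for every }1\leq k\leq m.\] Hence \(\eta_{1}=\cdots=\eta_{m-1}=\beta\) is always an admissible choice in Definition 2.1, and for this choice the defining inequality becomes \(dd^{c}u\wedge\beta^{n-1}\geq 0\), which is just the subharmonicity already assumed of \(u\); the additional strength of \(m\)-subharmonicity comes from the remaining choices of \(\eta_{1},\ldots,\eta_{m-1}\in\widehat{\Gamma}_{m}\).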
By \(SH_{m}(\Omega)\) we denote the set of \(m\)-subharmonic functions on \(\Omega\) while \(SH^{-}_{m}(\Omega)\) denotes the set of negative \(m\)-subharmonic functions on \(\Omega\). It is clear that if \(u\in SH_{m}\) then \(dd^{c}u\in\widehat{\Gamma}_{m}\). Now assume that \(\Omega\) is an open set in \(\mathbb{C}^{n}\) and \(u\in\mathcal{C}^{2}(\Omega)\). Then from the Proposition 3.1 in [13] (also see the Definition 1.2 in [2]) we note that \(u\) is subharmonic function on \(\Omega\) if and only if \((dd^{c}u)^{k}\wedge\beta^{n-k}\geq 0\), for \(k=1,\ldots,m\). More generally, if \(u_{1},\ldots,u_{k}\in\mathcal{C}^{2}(\Omega)\), then for all \(\eta_{1},\ldots,\eta_{m-k}\in\widehat{\Gamma}_{m}\), we have \[dd^{c}u_{1}\wedge\cdots\wedge dd^{c}u_{k}\wedge\eta_{1}\wedge\cdots\wedge\eta _{m-k}\wedge\beta^{n-m}\geq 0 \tag{1}\] holds in the sense of currents. We collect below basic properties of \(m\)-subharmonic functions that might be deduced directly from Definition 2.1. For more details, the reader may consult [15], [16], [17]. **Proposition 2.2**.: _Let \(\Omega\) be an open set in \(\mathbb{C}^{n}\). Then the following assertions holds true: (1) If \(u,v\in SH_{m}(\Omega)\) then \(au+bv\in SH_{m}(\Omega)\) for any \(a,b\geq 0.\) (2) \(PSH(\Omega)=SH_{n}(\Omega)\subset\cdots\subset SH_{1}(\Omega)=SH(\Omega).\) (3) If \(u\in SH_{m}(\Omega)\) then a standard approximation convolution \(u*\rho_{\varepsilon}\) is also an m-subharmonic function on \(\Omega_{\varepsilon}=\{z\in\Omega:d(z,\partial\Omega)>\varepsilon\}\) and \(u*\rho_{\varepsilon}\searrow u\) as \(\varepsilon\to 0.\) (4) The limit of a uniformly converging or decreasing sequence of \(m\)-subharmonic function is \(m\)-subharmonic. (5) Maximum of a finite number of \(m\)-subharmonic functions is a \(m\)-subharmonic function._ Now as in [14] and [17] we define the complex Hessian operator for locally bounded \(m\)-subharmonic functions as follows. **Definition 2.3**.: Assume that \(u_{1},\ldots,u_{p}\in SH_{m}(\Omega)\cap L^{\infty}_{\rm loc}(\Omega)\). Then the _complex Hessian operator_\(H_{m}(u_{1},\ldots,u_{p})\) is defined inductively by \[dd^{c}u_{p}\wedge\cdots\wedge dd^{c}u_{1}\wedge\beta^{n-m}=dd^{c}(u_{p}dd^{c}u _{p-1}\wedge\cdots\wedge dd^{c}u_{1}\wedge\beta^{n-m}).\] It was shown in [14] and later in [17] that \(H_{m}(u_{1},\ldots,u_{p})\) is a closed positive current of bidegree \((n-m+p,n-m+p).\) Moreover, this operator is continuous under decreasing sequences of locally bounded \(m\)-subharmonic functions. In particular, when \(u=u_{1}=\cdots=u_{m}\in SH_{m}(\Omega)\cap L^{\infty}_{\rm loc}(\Omega)\) the Borel measure \(H_{m}(u)=(dd^{c}u)^{m}\wedge\beta^{n-m}\) is well defined and is called the complex \(m\)-Hessian of \(u\). **Example 2.4**.: By using an example which is due to Sadullaev and Abullaev in [17] we show that there exists a function which is \(m\)-subharmonic but not \((m+1)\)-subharmonic. Let \(\Omega\subset\mathbb{C}^{n}\) be a domain and \(0\notin\Omega\). Consider the Riesz kernel given by \[K_{m}(z)=-\frac{1}{|z|^{2(n/m-1)}},1\leq m<n.\] We note that \(K_{m}\in C^{2}(\Omega)\). As in [1] we have \[(dd^{c}K_{m})^{k}\wedge\beta^{n-k}=n(n/m-1)^{k}(1-k/m)|z|^{-2kn/m}\beta^{n}.\] Then \((dd^{c}K_{m})^{k}\wedge\beta^{n-k}\geq 0\) for all \(k=1,\ldots,m\) and, hence, \(K_{m}\in SH_{m}(\Omega)\). However, \((dd^{c}K_{m})^{m+1}\wedge\beta^{n-m-1}<0\) then \(K_{m}\notin SH_{m+1}(\Omega)\). 
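To see the computation of Example 2.4 in a concrete case (a routine specialization of the displayed formula, added here for concreteness), take \(n=3\) and \(m=2\), so that \(K_{2}(z)=-|z|^{-1}\); the formula gives \[dd^{c}K_{2}\wedge\beta^{2}=\tfrac{3}{4}|z|^{-3}\beta^{3}\geq 0,\qquad(dd^{c}K_{2})^{2}\wedge\beta=0,\qquad(dd^{c}K_{2})^{3}=-\tfrac{3}{16}|z|^{-9}\beta^{3}<0,\] so \(K_{2}\in SH_{2}(\Omega)\) but \(K_{2}\notin SH_{3}(\Omega)=PSH(\Omega)\).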
**2.2** Next, we recall the classes \(\mathcal{E}_{m}^{0}(\Omega)\), \(\mathcal{F}_{m}(\Omega)\) and \(\mathcal{E}_{m}(\Omega)\) introduced and investigated in [1]. Let \(\Omega\) be a bounded \(m\)-hyperconvex domain in \(\mathbb{C}^{n}\), which mean there exists an \(m-\) subharmonic function \(\rho:\Omega\to(-\infty,0)\) such that the closure of the set \(\{z\in\Omega:\rho(z)<c\}\) is compact in \(\Omega\) for every \(c\in(-\infty,0).\) Such a function \(\rho\) is called the exhaustion function on \(\Omega.\) Throughout this paper \(\Omega\) will denote a bounded \(m-\) hyperconvex domain in \(\mathbb{C}^{n}.\) Put \[\mathcal{E}_{m}^{0}=\mathcal{E}_{m}^{0}(\Omega)=\{u\in SH_{m}^{-}(\Omega)\cap L ^{\infty}(\Omega):\lim_{z\to\partial\Omega}u(z)=0,\int\limits_{\Omega}H_{m}(u )<\infty\},\] \[\mathcal{F}_{m}=\mathcal{F}_{m}(\Omega)=\big{\{}u\in SH_{m}^{-}(\Omega): \exists\mathcal{E}_{m}^{0}\ni u_{j}\searrow u,\sup\limits_{j}\int\limits_{ \Omega}H_{m}(u_{j})<\infty\big{\}},\] and \[\mathcal{E}_{m}=\mathcal{E}_{m}(\Omega)=\big{\{}u\in SH_{m}^{-}(\Omega): \forall z_{0}\in\Omega,\exists\text{ a neighborhood }\omega\ni z_{0},\text{ and }\] \[\mathcal{E}_{m}^{0}\ni u_{j}\searrow u\text{ on }\omega,\sup\limits_{j}\int \limits_{\Omega}H_{m}(u_{j})<\infty\big{\}}.\] In the case \(m=n\) the classes \(\mathcal{E}_{m}^{0}(\Omega)\), \(\mathcal{F}_{m}(\Omega)\) and \(\mathcal{E}_{m}(\Omega)\) coincide, respectively, with the classes \(\mathcal{E}^{0}(\Omega)\), \(\mathcal{F}(\Omega)\) and \(\mathcal{E}(\Omega)\) introduced and investigated earlier by Cegrell in [1] and [1]. From Theorem 3.14 in [1] it follows that if \(u\in\mathcal{E}_{m}(\Omega)\), the complex \(m\)-Hessian \(H_{m}(u)=(dd^{c}u)^{m}\wedge\beta^{n-m}\) is well defined and it is a Radon measure on \(\Omega\). On the other hand, by Remark 3.6 in [1] the following description of \(\mathcal{E}_{m}(\Omega)\) may be given \[\mathcal{E}_{m}=\mathcal{E}_{m}(\Omega)=\big{\{}u\in SH_{m}^{-}(\Omega): \forall U\Subset\Omega,\exists v\in\mathcal{F}_{m}(\Omega),v=u\text{ on }U\big{\}}.\] **Example 2.5**.: For \(0<\alpha<1\) we define the function \[u_{m,\alpha}(z):=-(-\log\|z\|)^{\frac{\alpha m}{n}}+(\log 2)^{\frac{\alpha m}{n} },1\leq m\leq n,\] on the ball \(\Omega:=\{z\in\mathbb{C}^{n}:\|z\|<\frac{1}{2}\}\). Direct computations as in Example 2.3 of [1] shows that \(u_{m,\alpha}\in\mathcal{E}_{m}(\Omega)\), \(\forall 0<\alpha<\frac{1}{m}\). **2.3.** We say that an \(m-\) subharmonic function \(u\) is maximal if for every relatively compact open set \(K\) on \(\Omega\) and for each upper semicontinuous function \(v\) on \(\overline{K},\)\(v\in SH_{m}(K)\) and \(v\leq u\) on \(\partial K,\) we have \(v\leq u\) on \(K.\) The family of maximal \(m-\) subharmonic function defined on \(\Omega\) will be denoted by \(MSH_{m}(\Omega).\) As in the plurisubharmonic case, if \(u\in\mathcal{E}_{m}(\Omega)\) then maximality of \(u\) is characterized by \(H_{m}(u)=0\) (see [19]). **2.4.** Following [15], a set \(E\subset\mathbb{C}^{n}\) is called \(m\)-polar if \(E\subset\{v=-\infty\}\) for some \(v\in SH_{m}(\mathbb{C}^{n})\) and \(v\) is not equivalent \(-\infty.\) **2.5.** In the same fashion as the relative capacity introduced by Bedford and Taylor in [1], the \(Cap_{m}\) relative capacity is defined as follows. **Definition 2.6.** Let \(E\subset\Omega\) be a Borel subset. 
The \(m\)-capacity of \(E\) with respect to \(\Omega\) is defined in [15] by \[Cap_{m}(E,\Omega)=\sup\Bigl{\{}\int\limits_{E}H_{m}(u):u\in SH_{m}(\Omega),-1 \leq u\leq 0\Bigr{\}}.\] Proposition 2.8 in [15] gives some elementary properties of the \(m\)-capacity similar to those presented in [1]. Namely, we have: a) \(Cap_{m}(\bigcup\limits_{j=1}^{\infty}E_{j})\leq\sum\limits_{j=1}^{\infty}Cap_{ m}(E_{j}).\) b) If \(E_{j}\nearrow E\) then \(Cap_{m}(E_{j})\nearrow Cap_{m}(E).\) According to Theorem 3.4 in [15] (see also Theorem 2.24 in [15]), a Borel subset \(E\) of \(\Omega\) is \(m\)-polar if and only if \(Cap_{m}(E)=0.\) A more qualitative result in this direction will be supplied in Corollary 3.4. In discussing convergence of complex Hessian operator, the following notion stemming from the work of Xing in [11], turns out to be quite useful. **Definition 2.7.**_A sequence \(\{u_{j}\}\subset SH_{m}(\Omega)\) is said to converge in \(Cap_{m}\) to \(u\in SH_{m}(\Omega)\) if for every \(\delta>0\) and every compact set \(K\) of \(\Omega\) we have_ \[\lim_{j\to\infty}Cap_{m}(\{|u-u_{j}|>\delta\}\cap K)=0.\] Generalizing the methods of Cegrell in [15], it is proved in Theorem 3.6 of [1] that \(H_{m}(u_{j})\to H_{m}(u)\) weakly if \(u_{j}\to u\) in \(Cap_{m}\) and if all \(u_{j}\) are bounded from below by a fixed element of \(\mathcal{F}_{m}.\) **2.6.** Let \(u\in SH_{m}(\Omega),\) and let \(\Omega_{j}\) be a fundamental sequence of \(\Omega,\) which means \(\Omega_{j}\) is strictly pseudoconvex, \(\Omega_{j}\Subset\Omega_{j+1}\) and \(\cup_{j=1}^{\infty}\Omega_{j}=\Omega.\) Set \[u^{j}(z)=\bigl{(}\sup\{\varphi(z):\varphi\in SH_{m}(\Omega),\varphi\leq u\text { on }\Omega_{j}^{c}\}\bigr{)}^{*},\] where \(\Omega_{j}^{c}\) denotes the complement of \(\Omega_{j}\) on \(\Omega\). We can see that \(u^{j}\in SH_{m}(\Omega)\) and \(u^{j}=u\) on \((\overline{\Omega_{j}})^{c}.\) From definition of \(u^{j}\) we see that \(\{u^{j}\}\) is an increasing sequence and therefore \(\lim\limits_{j\to\infty}u^{j}\) exists everywhere except on an \(m-\) polar subset on \(\Omega.\) Hence, the function \(\tilde{u}\) defined by \(\tilde{u}=\big{(}\lim\limits_{j\to\infty}u^{j}\big{)}^{*}\) is \(m-\) subharmonic function on \(\Omega.\) Obviously, we have \(\tilde{u}\geq u.\) Moreover, if \(u\in\mathcal{E}_{m}(\Omega)\) then \(\tilde{u}\in\mathcal{E}_{m}(\Omega)\) and \(\tilde{u}\in MSH_{m}(\Omega).\) Set \[\mathcal{N}_{m}=\mathcal{N}_{m}(\Omega)=\{u\in\mathcal{E}_{m}(\Omega):\tilde{ u}=0.\}\] We have the following inclusion \[\mathcal{F}_{m}(\Omega)\subset\mathcal{N}_{m}(\Omega)\subset\mathcal{E}_{m}( \Omega).\] Theorem 4.9 in [19] shows that a function \(u\in\mathcal{F}_{m}(\Omega)\) if and only if it belongs to the class \(\mathcal{N}_{m}(\Omega)\) and has bounded total Hessian mass. Let \(\mathcal{K}\) be one of the classes \(\mathcal{E}_{m}^{0}(\Omega),\mathcal{F}_{m}(\Omega),\mathcal{N}_{m}(\Omega), \mathcal{E}_{m}(\Omega).\) Denote by \(\mathcal{K}^{a}\) the set of all function in \(\mathcal{K}\) whose Hessian measures vanish on all \(m-\)polar set of \(\Omega\). We say that a \(m-\) subharmonic function defined on \(\Omega\) belongs to the class \(\mathcal{K}(f,\Omega),\) where \(f\in\mathcal{E}_{m}\cap MSH_{m}(\Omega)\) if there exists a function \(\varphi\in\mathcal{K}\) such that \[f\geq u\geq f+\varphi.\] Note that \(\mathcal{K}(0,\Omega)=\mathcal{K}.\) We end this preliminary section by recalling the following Holder type inequality proved in Proposition 3.3 of [17]. 
In the case of plurisubharmonic functions, this sort of estimate was proved by Cegrell in his seminal work [10]. **Proposition 2.8**.: _Let \(u_{1},\cdots,u_{m}\in\mathcal{F}_{m}(\Omega).\) Then we have_ \[\int_{\Omega}H_{m}(u_{1},\cdots,u_{m})\leq\Big{[}\int_{\Omega}H_{m}(u_{1})\Big{]}^{\frac{1}{m}}\cdots\Big{[}\int_{\Omega}H_{m}(u_{m})\Big{]}^{\frac{1}{m}}.\] ## 3 Comparison Principles for the Operator \(H_{\chi,m}\) Let \(\chi:\mathbb{R}^{-}\times\Omega\to\mathbb{R}^{+}\) be a measurable function which is the pointwise limit of a sequence of _continuous_ functions defined on \(\mathbb{R}^{-}\times\Omega.\) The weighted \(m-\)Hessian operator \(H_{\chi,m}\) is defined as follows \[H_{\chi,m}(u):=\chi(u(z),z)(dd^{c}u)^{m}\wedge\beta^{n-m},\ \forall u\in\mathcal{E}_{m}.\] Notice that this operator is well defined since \(\chi(u(z),z)\) is measurable, being the pointwise limit of a sequence of measurable functions on \(\Omega\). The goal of this section is to present some versions of the comparison principle for the operators \(H_{m}\) and \(H_{\chi,m}\). A basic ingredient is the following result (see Theorem 3.6 in [10]). Note that in the case \(m=n\), this lemma was included in Theorem 4.9 of [11]. We should say that all these works are rooted in Proposition 4.2 in [12], where an analogous result for plurisubharmonic functions may be found. **Proposition 3.1**.: _Let \(u,u_{1},\cdots,u_{m-1}\in\mathcal{E}_{m}(\Omega),v\in SH_{m}(\Omega)\) and \(T:=dd^{c}u_{1}\wedge\cdots\wedge dd^{c}u_{m-1}\wedge\beta^{n-m}.\) Then the two non-negative measures \(dd^{c}\max(u,v)\wedge T\) and \(dd^{c}u\wedge T\) coincide on the set \(\{v<u\}.\)_ Now we start with the following versions of the comparison principle. **Lemma 3.2**.: _Let \(u,v\in\mathcal{E}_{m}\) be such that_ \[H_{m}(u)=0\text{ on the common singular set }\{u=v=-\infty\}. \tag{2}\] _Let \(h\in SH_{m}^{-}(\Omega)\) be such that \(h\geq-1\). Then the following estimate_ \[\frac{1}{m!}\int\limits_{\{u<v\}}(v-u)^{m}(dd^{c}h)^{m}\wedge\beta^{n-m}\leq\int\limits_{\{u<v\}}(-h)[H_{m}(u)-H_{m}(v)] \tag{3}\] _holds true if one of the following conditions is satisfied:_ _(a)_ \(\underset{z\to\partial\Omega}{\liminf}[u(z)-v(z)]\geq 0;\)__ _(b)_ \(u\in\mathcal{F}_{m}.\)__ **Remark 3.3**.: _Observe that when \(h=-1\) then (3) reduces to the more standard form of the comparison principle_ \[\int\limits_{\{u<v\}}H_{m}(v)\leq\int\limits_{\{u<v\}}H_{m}(u).\] Proof.: We follow closely the arguments in Section 4 of [11] where analogous results for plurisubharmonic functions are established. First we prove (3) under the assumption (a). By applying Lemma 5.5 in [12] to the case \(k:=m,w_{1}=\cdots=w_{k}=h\), we obtain \[\frac{1}{m!}\int_{\{u<v\}}(v-u)^{m}(dd^{c}h)^{m}\wedge\beta^{n-m}+\int_{\{u<v\}}(-h)(dd^{c}v)^{m}\wedge\beta^{n-m}\] \[\leq\int\limits_{\{u<v\}\cup\{u=v=-\infty\}}(-h)(dd^{c}u)^{m}\wedge\beta^{n-m}\] \[=\int\limits_{\{u<v\}}(-h)(dd^{c}u)^{m}\wedge\beta^{n-m}\] Here the last line follows from the assumption (2). After rearranging these estimates we obtain (3). Now suppose (b) is true. Then for \(\varepsilon>0\) we set \(v_{\varepsilon}:=\max\{u,v-\varepsilon\}.\) Then \(u\leq v_{\varepsilon}\in\mathcal{F}_{m}\). So we may apply Lemma 5.4 in [19] to get \[\frac{1}{m!}\int\limits_{\Omega}(v_{\varepsilon}-u)^{m}H_{m}(h)\leq\int\limits_{\Omega}(-h)[H_{m}(u)-H_{m}(v_{\varepsilon})],\] which is the same as \[\frac{1}{m!}\int\limits_{\{u<v-\varepsilon\}}(v_{\varepsilon}-u)^{m}H_{m}(h)\leq\int\limits_{\Omega}(-h)[H_{m}(u)-H_{m}(v_{\varepsilon})]. 
\tag{4}\] Now we apply Proposition 3.1 to get \(H_{m}(v_{\varepsilon})=H_{m}(u)\) on \(\{u>v-\varepsilon\}\) and \(H_{m}(v_{\varepsilon})=H_{m}(v)\) on \(\{u<v-\varepsilon\}\). This yields \[\int\limits_{\Omega}(-h)[H_{m}(u)-H_{m}(v_{\varepsilon})] =\int\limits_{\{u\leq v-\varepsilon\}}(-h)[H_{m}(u)-H_{m}(v_{ \varepsilon})]\] \[\leq\int\limits_{\{u\leq v-\varepsilon\}}(-h)H_{m}(u)+\int \limits_{\{u<v-\varepsilon\}}hH_{m}(v_{\varepsilon})\] \[=\int\limits_{\{u\leq v-\varepsilon\}}(-h)H_{m}(u)+\int\limits_{ \{u<v-\varepsilon\}}hH_{m}(v).\] Combining the above equality and (4) we obtain \[\frac{1}{m!}\int\limits_{\{u<v-\varepsilon\}}(v_{\varepsilon}-u)^{m}H_{m}(h)+ \int\limits_{\{u<v-\varepsilon\}}(-h)H_{m}(v)\leq\int\limits_{\{u\leq v- \varepsilon\}}(-h)H_{m}(u). \tag{5}\] By Fatou's lemma we have \[\liminf\limits_{\varepsilon\to 0}\int\limits_{\{u<v-\varepsilon\}}(v_{ \varepsilon}-u)^{m}H_{m}(h)\geq\int\limits_{\{u<v\}}(v-u)^{m}H_{m}(h).\] On the other hand, note that \(\{u\leq v-\varepsilon\}\subset\{u<v\}\cup\{u=v=-\infty\}.\) Therefore using the hypothesis (2) we obtain \[\lim\limits_{\varepsilon\to 0}\int\limits_{\{u\leq v-\varepsilon\}}(-h)H_{m}(u)= \int\limits_{\{u<v\}}(-h)H_{m}(u).\] So by letting \(\varepsilon\to 0\) in both sides of (5) we complete the proof. Using the above result we are able to get useful estimates on the size of the sublevel sets of \(u\in\mathcal{F}_{m}\). **Corollary 3.4**.: _For \(u\in\mathcal{F}_{m}\) and \(s>0\) we have the following estimates: (i)\(Cap_{m}(\{u<-s\})\leq\frac{1}{s^{m}}\int_{\Omega}H_{m}(u).\) (ii) \(\int\limits_{\{u\leq-s\}}H_{m}(u_{s})\leq 2^{m}m!\int\limits_{\{u<-s/2\}}H_{m}(u)\) where \(u_{s}:=\max\{u,-s\}.\)_ Proof.: (i) Fix \(h\in SH_{m}(\Omega),-1\leq h<0.\) By the comparison principle Lemma 3.2 we have \[\int\limits_{\{u<-s\}}H_{m}(h)\leq\int\limits_{\{\frac{u}{s}<h\}}H_{m}(h)\leq \frac{1}{s^{m}}\int\limits_{\{\frac{u}{s}<h\}}H_{m}(u)\leq\frac{1}{s^{m}}\int \limits_{\Omega}H_{m}(u).\] We are done. (ii) By Lemma 3.2 we have \[\int\limits_{\{u\leq-s\}}H_{m}(u_{s}) \leq\int\limits_{\{u\leq-s\}}(-1-\frac{2u}{s})^{m}H_{m}(u_{s})\] \[=\int\limits_{\{u\leq-s\}}(-s-2u)^{m}H_{m}\Big{(}\max\Big{\{} \frac{u}{s},-1\Big{\}}\Big{)}\] \[=2^{m}\int\limits_{\{u\leq-s\}}(-\frac{s}{2}-u)^{m}H_{m}\Big{(} \max\Big{\{}\frac{u}{s},-1\Big{\}}\Big{)}\] \[\leq 2^{m}m!\int\limits_{\{u<-s/2\}}H_{m}(u).\] The proof is thereby completed. A major consequence of Lemma 3.2 is the following version of the comparison principle which was essentially proved in Corollary 3.2 of [1] for the case when \(m=n.\) **Theorem 3.5**.: _Let \(u\in\mathcal{N}_{m}(f)\) and \(v\in\mathcal{E}_{m}(f).\) Assume that the following conditions hold true: (a) \(H_{m}(u)\) puts no mass on \(\{u=v=-\infty\};\) (b) \(H_{m}(u)\leq H_{m}(v)\) on \(\{u<v\}.\) Then we have \(u\geq v\) on \(\Omega.\) In particular, if \(H_{m}(u)=H_{m}(v)\) on \(\Omega\) then \(u=v\) on \(\Omega.\)_ Our proof below supplies more details to the original one in Corollary 3.2 of [1] for the case when \(m=n.\) Proof.: Fix \(\varepsilon>0\). Choose \(\varphi\in\mathcal{N}_{m}(\Omega)\) such that \(f\geq u\geq f+\varphi\) on \(\Omega\). Let \(\{\Omega_{j}\}\) be a fundamental sequence of \(\Omega\). Define \[\varphi_{j}=\Big{(}\sup\{w:w\in SH_{m}(\Omega),w\leq\varphi\text{ on }\Omega \setminus\overline{\Omega}_{j}\}\Big{)}^{*}.\] Then \(\varphi_{j}\in SH_{m}(\Omega),\varphi_{j}\leq 0\) and \(\varphi_{j}=\varphi\) on \(\Omega\setminus\overline{\Omega}_{j}\). 
This yields that \[\max\{u,v\}\geq v_{j}:=\max\{u,v+\varphi_{j}\}\in\mathcal{E}_{m}(\Omega).\] Since \(f\geq v\) on \(\Omega\) we also have for every \(j\geq 1\) \[\lim_{z\to\partial\Omega}(u(z)-v_{j}(z))=0.\] Now we note that (b) implies the estimate \[H_{m}(v+\varphi_{j})\geq H_{m}(v)\geq H_{m}(u)\text{ on }\ \{u<v\}.\] It follows, in view of Proposition 5.2 in [17], that \[H_{m}(v_{j})\geq H_{m}(u)\text{ on }\ \{u<v\}. \tag{6}\] Next, using the definition of \(Cap_{m,\Omega}\) we obtain \[\frac{\varepsilon^{m}}{m!}Cap_{m,\Omega}(\{u+2\varepsilon<v_{j}\}) =\frac{\varepsilon^{m}}{m!}\sup\Big{\{}\int\limits_{\{u+2 \varepsilon<v_{j}\}}H_{m}(h):h\in SH_{m}(\Omega),-1\leq h\leq 0\Big{\}}\] \[\leq\frac{1}{m!}\sup\Big{\{}\int\limits_{\{u+2\varepsilon<v_{j}\} }(v_{j}-u-\varepsilon)^{m}H_{m}(h):h\in SH_{m}(\Omega),-1\leq h\leq 0\Big{\}}\] \[\leq\frac{1}{m!}\sup\Big{\{}\int\limits_{\{u+\varepsilon<v_{j}\} }(v_{j}-u-\varepsilon)^{m}H_{m}(h):h\in SH_{m}(\Omega),-1\leq h\leq 0\Big{\}}\] \[\leq\sup\Big{\{}\int\limits_{\{u+\varepsilon<v_{j}\}}(-h)[H_{m}(u )-H_{m}(v_{j})]:h\in SH_{m}(\Omega),-1\leq h\leq 0\Big{\}}\] \[=0.\] Here we apply the assumption (a) to obtain the fourth inequality and the last equality follows from (6) and the inclusion \(\{u+\varepsilon<v_{j}\}\subset\{u<v\}\). Thus \(v_{j}\leq u+2\varepsilon\) outside a polar set of \(\Omega\). Letting \(j\to\infty\) while noting that \(\varphi_{j}\to 0\) outside a polar set of \(\Omega\), we see that \(v\leq u+2\varepsilon\) off a polar set of \(\Omega\). Now subharmonicity of \(u\) and \(v\) forces \(v\leq u+2\varepsilon\) entirely on \(\Omega\). The proof is complete by letting \(\varepsilon\to 0\) Using the basic properties of \(m-\)subharmonic functions in Proposition 2.2 and the comparison principle Lemma 3.2, as in the plurisubharmonic case (see [1]), we have the following quasicontinuity property of \(m-\)subharmonic functions (see Theorem 2.9 in [10] and Theorem 4.1 in [1]). **Proposition 3.6**.: _Let \(u\in SH_{m}(\Omega)\). Then for every \(\varepsilon>0\) we may find an open set \(U\) in \(\Omega\) with \(Cap_{m}(U)<\varepsilon\) and \(u|_{\Omega\setminus U}\) is continuous._ Using the above result and the Lemma 3.2, as in the plurisubharmonic case (see [1]), we have the following important fact about negligible sets for \(m-\)subharmonic functions (see Theorem 5.3 in [1]). **Proposition 3.7**.: _Let \(\{u_{j}\}\) be a sequence of negative \(m-\) subharmonic functions on \(\Omega.\) Set \(u:=\sup\limits_{j\geq 1}u_{j}\). Then the set \(\{z\in\Omega:u(z)<u^{*}(z)\}\) is \(m-\)polar._ Now we are able to formulate a version of the comparison principle for the operator \(H_{\chi,m}\) mentioned at the beginning of this section. **Theorem 3.8**.: _Suppose that the function \(t\mapsto\chi(t,z)\) is decreasing in \(t\) for every \(z\in\Omega\setminus E,\) where \(E\) is a \(m-\)polar subset of \(\Omega.\) Let \(u\in\mathcal{N}_{m}(f),v\in\mathcal{E}_{m}(f)\) be such that \(H_{\chi,m}(u)\leq H_{\chi,m}(v).\) Assume also that \(H_{m}(u)\) puts no mass on \(\{u=-\infty\}\cup E.\) Then we have \(u\geq v\) on \(\Omega.\)_ Proof.: We claim that \(H_{m}(u)\leq H_{m}(v)\) on \(\{u<v\}\). For this, fix a compact set \(K\subset\{u<v\}\). 
Let \(\theta_{j}\geq 0\) be a sequence of continuous functions on \(\Omega\) with compact support such that \(\theta_{j}\downarrow\mathbf{1}_{K}.\) Since \[\chi(v,z)H_{m}(v)\geq\chi(u,z)H_{m}(u)\text{ as measures on }\Omega\] we obtain \[\int_{\Omega}\theta_{j}H_{m}(v)=\int_{\Omega}\frac{\theta_{j}}{\chi(v,z)}\chi(v,z)H_{m}(v)\] \[\geq\int_{\Omega}\frac{\theta_{j}}{\chi(v,z)}\chi(u,z)H_{m}(u)\] \[=\int_{\Omega}\theta_{j}\frac{\chi(u,z)}{\chi(v,z)}H_{m}(u).\] Letting \(j\to\infty\) we get \[\int_{K}H_{m}(v)\geq\int_{K}\frac{\chi(u,z)}{\chi(v,z)}H_{m}(u)\geq\int_{K\setminus E}\frac{\chi(u,z)}{\chi(v,z)}H_{m}(u)=\int_{K}H_{m}(u)\] where the second inequality follows from the assumption that \(\chi(u(z),z)\geq\chi(v(z),z)\) on \(\{z:u(z)<v(z)\}\setminus E\) and the last estimate follows from the fact that \(H_{m}(u)\) puts no mass on \(E.\) Thus \(H_{m}(u)\leq H_{m}(v)\) on \(\{u<v\}\) as claimed. Now we may apply Theorem 3.5 to conclude \(u\geq v\). This section ends with the following simple fact about convergence of measures where the concept of convergence in capacity plays a role. **Proposition 3.9**.: _Let \(f,\{f_{j}\}_{j\geq 1}\) be quasicontinuous functions defined on \(\Omega\) and \(\mu,\{\mu_{j}\}_{j\geq 1}\) be positive Borel measures on \(\Omega\). Then \(f_{j}\mu_{j}\) converges weakly to \(f\mu\) if the following conditions are satisfied: (i) \(\mu_{j}\) converges to \(\mu\) weakly; (ii) \(f_{j}\) converges to \(f\) in \(Cap_{m};\) (iii) The functions \(\{f_{j}\},f\) are locally uniformly bounded on \(\Omega;\) (iv) \(\{\mu_{j}\}\) are uniformly absolutely continuous with respect to \(Cap_{m}\) in the sense that for every \(\varepsilon>0\) there exists \(\delta>0\) such that if \(X\) is a Borel subset of \(\Omega\) and satisfies \(Cap_{m}(X)<\delta\) then \(\mu_{j}(X)<\varepsilon\) for all \(j\geq 1.\)_ Proof.: First we note that \(\mu\) is also absolutely continuous with respect to \(Cap_{m}\). Indeed, it suffices to apply (iv) and the fact that for each _open_ subset \(X\) of \(\Omega\) we have \(\mu(X)\leq\liminf\limits_{j\to\infty}\mu_{j}(X).\) Now we let \(\varphi\) be a continuous function with compact support on \(\Omega.\) Then we write \[\int\varphi[f_{j}d\mu_{j}-fd\mu]=\int\varphi(f_{j}-f)d\mu_{j}+\Big{[}\int\varphi fd\mu_{j}-\int\varphi fd\mu\Big{]}.\] Then using (i), (iii), (iv) and quasicontinuity of \(f\) we see that the second term tends to \(0\) as \(j\to\infty\), while the first term also goes to \(0\) in view of (ii), (iv) and (iii). ## 4 Weighted complex \(m\)-Hessian equations Let \(\chi:\mathbb{R}^{-}\times\Omega\to\mathbb{R}^{+}\) be a continuous function. Let \(f\in\mathcal{E}_{m}(\Omega)\cap MSH_{m}(\Omega)\) be given. Then, under certain restrictions on \(\chi\) and the measure \(\mu\), we have the following existence result for weighted complex \(m-\)Hessian equations. **Theorem 4.1**.: _Let \(\mu\) be a non-negative measure on \(\Omega\) with \(\mu(\Omega)<\infty\). 
Assume that the following conditions are satisfied: (a) There exists \(\varphi\in\mathcal{F}_{m}(f)\cap L^{1}(\Omega,\mu)\) such that \(\mu\leq H_{m}(\varphi);\) (b) \(\mu\) puts no mass on \(m-\)polar subset of \(\Omega;\) (c) \(\chi(t,z)\geq 1\) for all \(t<0,z\in\Omega.\)_ _Then the equation_ \[\chi(u,z)H_{m}(u)=\mu\] _has a solution \(u\in\mathcal{F}_{m}^{a}(f)\cap L^{1}(\Omega,d\mu).\) Furthermore, if the function \(t\mapsto\chi(t,z)\) is decreasing for all \(z\) out side a \(m-\)polar set then such a solution \(u\) is unique._ **Remark 4.2**.: _The uniqueness of \(u\) fails without further restriction on \(\chi\). Indeed, consider the case \(m=n\), and \(\Omega:=\{z:|z|<1\}.\) Let_ \[u_{1}(z):=|z|^{2}-1,u_{2}(z):=\frac{1}{2}(|z|^{2}-1).\] _Set_ \[\Gamma_{1}:=\{(u_{1}(z),z):z\in\Omega)\},\Gamma_{2}:=\{(u_{2}(z),z):z\in\Omega )\}.\] _Then \(\Gamma_{1}\cap\Gamma_{2}=\emptyset\) and \(\Gamma_{1}\cup\Gamma_{2}\) is a closed subset of \((-\infty,0)\times\Omega.\) We will find a continuous function \(\chi:(-\infty,0)\times\Omega\rightarrow\mathbb{R}\) such that \(\chi(t,z)\geq 1\) and that_ \[\chi(u_{1},z)H_{n}(u_{1})=\chi(u_{2},z)H_{n}(u_{2})\Leftrightarrow 2^{n}\chi( u_{1}(z),z)=\chi_{2}(u_{2}(z),z),\ z\in\Omega. \tag{7}\] _For this purpose, we first let \(\chi=1\) on \(\Gamma_{1},\chi=2^{n}\) on \(\Gamma_{2}\). Next, by Tietze's extension theorem, we may extend \(\chi\) to a continuous function on \((-\infty,0)\times\Omega\) such that \(1\leq\chi\leq 2^{n}\). Thus \(\chi\) is a function satisfies (7) and of course the condition (c). Now we put_ \[\mu:=\chi(u_{1},z)H_{n}(u_{1})=C\chi(u_{1}(z),z)dV_{2n},\] _where \(C>0\) depends only on \(n.\) So \(u_{1},u_{2}\) are two distinct solution of the Hessian equation \(\chi(u,z)H_{n}(u)=\mu.\) Moreover, we note that_ \[H_{n}(u_{1})\leq\mu\leq 2^{n}CdV_{2n}\leq H_{n}(C^{\prime}u_{1})\] _where \(C^{\prime}>0\) is a sufficiently large constant. Thus, we have shown that \(\mu\) satisfies also the conditions (a) and (b) of Theorem 4.1._ For the proof of Theorem 4.1 we need the following result which is Theorem 3.7 in [1]. The lemma was proved by translating the original proof in [1] for plurisubharmonic functions to the case of \(m-\)subharmonic ones. **Lemma 4.3**.: _Let \(\mu\) be a non-negative, finite measure on \(\Omega\). Assume that \(\mu\) puts no mass on \(m-\)polar subsets of \(\Omega\). Then there exists \(u\in\mathcal{F}_{m}(f)\) such that \(H_{m}(u)=\mu.\)_ The result below states Lebesgue integrable of elements in \(\mathcal{F}_{m}(f).\) **Lemma 4.4**.: _Let \(\varphi\in\mathcal{F}_{m}(f).\) Then \(\varphi\in L^{1}(\Omega,dV_{2n}).\)_ Proof.: We may assume that \(f=0.\) Choose \(\theta\in\mathcal{E}_{m}^{0}\) such that \(H_{m}(\theta)=dV_{2n}.\) Then by integration by parts we have \[\int_{\Omega}\varphi dV_{2n}=\int_{\Omega}\varphi H_{m}(\theta)=\int_{\Omega} \theta dd^{c}\varphi\wedge(dd^{c}\theta)^{m-1}\wedge\beta^{n-m}>-\infty.\] Here the last estimate follows from Holder inequality Proposition 2.8 and the fact that \(\theta\) is bounded from below. Next, we will prove a lemma which might be of independent interest. **Lemma 4.5**.: _Let \(\mu\) be a positive measure on \(\Omega\) which vanishes on all \(m-\) polar sets and \(\mu(\Omega)<\infty.\) Let \(\{u_{j}\}\in SH_{m}^{-}(\Omega)\) be a sequence satisfying the following conditions: (i) \(\underset{j\geq 1}{\sup}\int\limits_{\Omega}-u_{j}d\mu<\infty;\) (ii) \(u_{j}\to u\in SH_{m}^{-}(\Omega)\) a.e. 
\(dV_{2n}.\)_ _Then we have_ \[\lim_{j\to\infty}\int_{\Omega}|u_{j}-u|d\mu=0.\] The above result is implicitly contained in the proof of Lemma 5.2 in [10]. We include the proof here only for the reader convenience. Notice that we also use some ideas in [DHB] at the end of the proof of the lemma. Proof.: We split the proof into two steps. _Step 1._ We will prove \[\lim_{j\to\infty}\int_{\Omega}u_{j}d\mu=\int_{\Omega}ud\mu. \tag{8}\] To see this, we note that, in view of (i), by passing to a subsequence we may achieve that \[\lim_{j\to\infty}\int_{\Omega}u_{j}d\mu=a. \tag{9}\] Notice that, by monotone convergence theorem, we have \[\lim_{N\to\infty}\int_{\Omega}\max\{u,-N\}d\mu=\int_{\Omega}ud\mu,\] and for each \(N\geq 1\) fixed \[\lim_{j\to\infty}\int_{\Omega}\max\{u_{j},-N\}d\mu=\int_{\Omega}\max\{u,-N\}d\mu.\] Therefore, using a diagonal process, it suffices to prove (8) under the restriction that \(u_{j}\) and \(u\) are all uniformly bounded from below. Since \(\mu(\Omega)<\infty\) we see that the set \(A:=\{u_{j}\}_{j\geq 1}\) is bounded in the Hilbert space \(L^{2}(\Omega,\mu)\). Thus, by Mazur's theorem, we can find a sequence \(\tilde{u}_{j}\) belonging to the convex hull of \(A\) that converges to some element \(\tilde{u}\in L^{2}(\Omega,\mu).\) After switching to a subsequence we may assume that \(\tilde{u}_{j}\to\tilde{u}\) a.e. in \(d\mu.\) But by (ii) \(\tilde{u}_{j}\to u\) in \(L^{2}(\Omega,dV_{2n})\) so \((\underset{k\geq j}{\sup}\tilde{u}_{k})^{*}\downarrow u\) entirely on \(\Omega.\) Thus, using monotone convergence theorem we obtain \[\int_{\Omega}ud\mu=\lim_{j\to\infty}\int_{\Omega}(\underset{k\geq j}{\sup} \tilde{u}_{k})^{*}d\mu=\lim_{j\to\infty}\int_{\Omega}(\underset{k\geq j}{\sup} \tilde{u}_{k})d\mu=\int_{\Omega}\tilde{u}d\mu=a.\] Here the second equality follows from the fact that \(\mu\) does not charge the \(m-\)polar negligible set \((\sup_{k\geq j}\tilde{u}_{k})^{*}\neq(\sup_{k\geq j}\tilde{u}_{k})\), and the last equality results from the choice of \(\tilde{u}_{j}\) and (9). The equation (8) follows. _Step 2._ Completion of the proof. Set \(v_{j}:=(\sup_{k\geq j}u_{k})^{*}\). Then \(v_{j}\geq u_{j},v_{j}\downarrow u\) on \(\Omega\) and \(v_{j}\to u\) in \(L^{1}(\Omega,dV_{2n}).\) So by the result obtained in Step 1 we have \[\lim_{j\to\infty}\int_{\Omega}v_{j}d\mu=\int_{\Omega}ud\mu=\lim_{j\to\infty} \int_{\Omega}u_{j}d\mu. \tag{10}\] Using the triangle in equality we obtain \[\int_{\Omega}|u_{j}-u|d\mu \leq\int_{\Omega}(v_{j}-u)d\mu+\int_{\Omega}(v_{j}-u_{j})d\mu\] \[=2\int_{\Omega}(v_{j}-u)d\mu+\int_{\Omega}(u-u_{j})d\mu.\] Hence by applying (10) we finish the proof of the lemma. Now, we turn to the proof of Theorem 4.1 where the fixed point method from [10] will be crucial. Proof.: (of Theorem 4.1) We set \[\mathcal{A}:=\{u\in\mathcal{F}_{m}(f):\varphi\leq u\leq f\}.\] First using Lemma 4.4 we see that \(\mathcal{A}\) is a compact convex subset of \(L^{1}(\Omega,dV_{2n})\). Moreover, from the assumption on \(\mu,\) and Lemma 4.5 we infer that \(\mathcal{A}\) is also compact in \(L^{1}(\Omega,\mu).\) Let \(\mathcal{S}:\mathcal{A}\to\mathcal{A}\) be the operator assigning each element \(u\in\mathcal{A}\) to the _unique_ solution \(v:=\mathcal{S}(u)\in\mathcal{F}_{m}(f)\) of the equation \[H_{m}(v)=\frac{1}{\chi(u(z),z)}d\mu.\] This is possible according to Lemma 4.3, because by (b), the measure on the right hand side does not charge \(m-\)polar subsets of \(\Omega\). 
Note also that for such a solution \(v\in\mathcal{F}_{m}(f)\), by (a) and (c), we have \(H_{m}(v)\leq\mu\leq H_{m}(\varphi)\). So the comparison principle (Theorem 3.5) yields that \(v\geq\varphi\) on \(\Omega.\) Hence the operator \(\mathcal{S}\) indeed maps \(\mathcal{A}\) into itself. The key step is to check continuity (in \(L^{1}(\Omega)\)) of \(\mathcal{S}\). Thus, given a sequence \(\{u_{j}\}_{j\geq}\subset\mathcal{A},u_{j}\to u\) in \(L^{1}(\Omega)\). We must show \(\mathcal{S}(u_{j})\to\mathcal{S}(u)\) in \(L^{1}(\Omega)\). By passing to subsequences of \(u_{j}\) coupling with Lemma 4.5, we may assume that \(u_{j}\to u\) a.e. \((d\mu)\). Now we define for \(z\in\Omega\) the following sequences of non-negative bounded measurable functions \[\psi^{1}_{j}(z):=\inf_{k\geq j}\frac{1}{\chi(u_{k}(z),z)},\psi^{2}_{j}(z):=\sup _{k\geq j}\frac{1}{\chi(u_{k}(z),z)}.\] Then we have: (i) \(0\leq\psi_{j}^{1}(z)\leq\frac{1}{\chi(u_{j}(z),z)}\leq\psi_{j}^{2}(z)\leq 1\) for \(j\geq 1\); (ii) \(\lim\limits_{j\rightarrow\infty}\psi_{j}^{1}(z)=\lim\limits_{j\rightarrow\infty }\psi_{2}^{1}(z)=\frac{1}{\chi(u(z),z)}\) a.e. \((d\mu)\). Now, using Lemma 4.3 we may find \(v_{j}^{1},v_{j}^{2}\in\mathcal{F}_{m}(f)\) are solutions of the equations \[H_{m}(v_{j}^{1})=\psi_{j}^{1}d\mu,H_{m}(v_{j}^{2})=\psi_{j}^{2}d\mu.\] Then, using the comparison principle we see that \(v_{j}^{1}\downarrow v^{1},v_{j}^{2}\uparrow v^{2}\), furthermore, in view of (i) we also have \[v_{j}^{1}\geq S(u_{j})\geq v_{j}^{2}. \tag{11}\] Next we use (ii) to get \[H_{m}(v_{j}^{1})\rightarrow\frac{1}{\chi(u,z)}d\mu,H_{m}(v_{j}^{2})\to \frac{1}{\chi(u,z)}d\mu.\] So by the monotone convergence theorem we infer \[H_{m}(v^{1})=H_{m}((v^{2})^{*})=\frac{1}{\chi(u(z),z)}d\mu=H_{m}(\mathcal{S}(u )).\] Applying again the comparison principle we obtain \(v^{1}=(v^{2})^{*}=\mathcal{S}(u)\) on \(\Omega\). By the squeezing property (11), \(S(u_{j})\to S(u)\) pointwise outside a \(m-\)polar set of \(\Omega\). Since \(\mu\) puts no mass on \(m-\)polar sets, we may apply Lebesgue dominated convergence theorem to achieve that \(\mathcal{S}(u_{j})\rightarrow\mathcal{S}(u)\) in \(L^{1}(\Omega,d\mu)\). Thus \(\mathcal{S}:\mathcal{A}\rightarrow\mathcal{A}\) is continuous. So we can invoke Schauder's fixed point theorem to attain \(u\in\mathcal{A}\) such that \(u=\mathcal{S}(u)\). Note also that \(H_{m}(u)\), being dominated by \(\mu\), does not charge \(m-\)polar sets, so \(u\in\mathcal{F}_{m}^{a}(f)\). Hence \(u\) is a solution of the weighted \(m-\)Hessian equation that we are looking for. Finally, under the restriction that \(\chi(t,z)\) is decreasing for all \(z\) out side a \(m-\)polar set, we may apply Theorem 3.8 to achieve the uniqueness of such a solution \(u\). In our next result, we deal with the situation when \(\mu\) is dominated by a suitable function of \(Cap_{m}\). This type of result is somewhat motivated from seminal work of Kolodjiez in [13]. **Theorem 4.6**.: _Let \(\mu\) be a non-negative Borel measure on \(\Omega\) with \(\mu(\Omega)<\infty\) and \(F:[0,\infty)\rightarrow[0,\infty)\) be non-decreasing function with \(F(0)=0\) and_ \[\int_{1}^{\infty}F(\frac{1}{s^{m}})ds<\infty. 
\tag{12}\] _Assume that the following conditions are satisfied: (a) \(\mu(X)\leq F(Cap_{m}(X))\) for all Borel subsets \(X\) of \(\Omega\)_; _(b) There exists a measurable function \(G:\Omega\rightarrow[0,\infty]\) such_ \[\chi(t,z)\geq G(z),\ \forall(t,z)\in(-\infty,0)\times\Omega\text{ and }c:=\int \limits_{\Omega}\frac{1}{G}d\mu<\infty.\] _Then the equation_ \[\chi(u,z)H_{m}(u)=\mu\] _has a solution \(u\in\mathcal{F}_{m}\cap L^{1}(\Omega,\mu).\)_ **Remark 4.7**.: _According to Proposition 2.1 in [4], for every \(p\in(0,\frac{n}{n-m})\) there exists a constant \(A\) depending only on \(p\) such that_ \[V_{2n}(X)\leq ACap_{m}(X)^{p}\] _for all Borel subsets \(X\) of \(\Omega.\) So the Lebesgue measure \(dV_{2n}\) satisfies the assumption (a) for \(F(x)=Ax^{p}\) and \(p\) is any number in the interval \((\frac{1}{m},\frac{n}{n-m}).\)_ Proof.: Let \[\mathcal{A}:=\Big{\{}u\in\mathcal{F}_{m}:\int_{\Omega}H_{m}(u)\leq c\Big{\}}.\] First, using Holder inequality Proposition 2.8, we will show \(A\) is convex. Indeed, let \(\alpha\in[0,1],\) it suffices to prove \(\int\limits_{\Omega}H_{m}(\alpha u+(1-\alpha)v)\leq c.\) For this, we use Proposition 2.8 to get \[\int_{\Omega}H_{m}(\alpha u+(1-\alpha)v) =\int_{\Omega}dd^{c}(\alpha u+(1-\alpha)v)^{m}\wedge\beta^{n-m}\] \[=\int_{\Omega}\sum\limits_{k=0}^{m}\binom{m}{k}\alpha^{k}(1- \alpha)^{m-k}(dd^{c}u)^{k}\wedge(dd^{c}v)^{m-k}\wedge\beta^{n-m}\] \[=\sum\limits_{k=0}^{m}\binom{m}{k}\alpha^{k}(1-\alpha)^{m-k}\int _{\Omega}(dd^{c}u)^{k}\wedge(dd^{c}v)^{m-k}\wedge\beta^{n-m}\] \[\leq\sum\limits_{k=0}^{m}\binom{m}{k}\alpha^{k}(1-\alpha)^{m-k} \Big{[}\int_{\Omega}H_{m}(u)\Big{]}^{\frac{k}{m}}\Big{[}\int_{\Omega}H_{m}(v) \Big{]}^{\frac{m-k}{m}}\] \[\leq\Big{[}\sum\limits_{k=0}^{m}\binom{m}{k}\alpha^{k}(1-\alpha) ^{m-k}\Big{]}c=c.\] Thus we have proved that \(\mathcal{A}\) is indeed convex. We want to show \(\mathcal{A}\) is compact in \(L^{1}(\Omega,\mu).\) Indeed, first by Lemma 4.4 we have \(\mathcal{A}\subset L^{1}(\Omega,dV_{2n}).\) Next we let \(\{u_{j}\}\) be a sequence in \(\mathcal{A}.\) By Lemma 3.4, for \(s>0\) we have \[Cap_{m}(\{u_{j}<-s\})\leq\frac{1}{s^{m}}\int_{\Omega}H_{m}(u_{j})\leq\frac{c}{ s^{m}}. \tag{13}\] So, in particular \(u_{j}\) cannot contain converge to \(-\infty\) uniformly on compact sets of \(\Omega\). Hence by passing to a subsequence we may achieve that \(u_{j}\) converges in \(L^{1}_{loc}(\Omega,dV_{2n})\) to \(u\in SH_{m}(\Omega),u<0.\) Notice that, using the comparison principle as in Lemma 2.1 in [10] we conclude that \(u\in\mathcal{F}_{m}.\) Now we claim that \(u_{j}\to u\) in \(L^{1}(\Omega,\mu).\) In view of Lemma 4.5, it suffices to check that \[\sup\limits_{j\geq 1}\int_{\Omega}(-u_{j})d\mu<\infty. \tag{14}\] For this purpose, we apply (18) and the assumption (a) to obtain \[\mu(\{u_{j}<-s\})\leq F(Cap_{m}(\{u_{j}<-s\}))\leq F(\frac{c}{s^{m}}).\] Hence \[\sup\limits_{j\geq 1}\int_{\Omega}(-u_{j})d\mu=\sup\limits_{j\geq 1}\int_{0}^{ \infty}\mu(\{u_{j}<-s\})ds<\infty\] where the last integral converges in view of (12). Thus the claim (14) follows. By Lemma 4.5 we have \(u_{j}\to u\) in \(L^{1}(\Omega,d\mu).\) From now on, our argument will be close to that of the proof of Theorem 4.1. 
More precisely, let \(\mathcal{S}:\mathcal{A}\rightarrow\mathcal{A}\) be the operator assigning each element \(u\in\mathcal{A}\) to the _unique_ solution \(v:=\mathcal{S}(u)\in\mathcal{F}_{m}\) of the equation \[H_{m}(v)=\frac{1}{\chi(u(z),z)}d\mu.\] This is possible according to Lemma 4.3, because by (a) and (b), the measure on the right hand side does not charge \(m-\)polar subsets of \(\Omega\) and has total finite mass \(\leq c\). By repeating the same reasoning as in the proof of Theorem 4.1 (the only notable change is to replace the upper bound of the sequence \(\{\psi_{j}^{2}\}\) by \(\frac{1}{G}\)) we can see that \(\mathcal{S}\) is continuous. Thus, applying again Schauder's fixed point theorem we conclude that \(\mathcal{S}\) admits a fixed point which is a solution of the equation \(\chi(u,z)H_{m}(u)=\mu.\) The proof is then complete. Our article ends with the following "weak" stability result. **Theorem 4.8**.: _Let \(\Omega,\mu,F,\chi\) and \(G\) be as in Theorem 4.6. Let \(\mu_{j}\) be a sequence of positive Borel measures on \(\Omega\) such that \(\mu_{j}\leq\mu\) and \(\mu_{j}\) converges weakly to \(\mu.\) Let \(u_{j}\in\mathcal{F}_{m}\) be a solution of the equation_ \[\chi(u(z),z)H_{m}(u)=\mu_{j}.\] _Assume that \(F\) and \(\chi\) satisfy the following additional properties: (i) \(\int\limits_{1}^{\infty}F(\frac{1}{s^{2m}})ds<\infty;\) (ii) \(\frac{1}{G}\in L^{2}(\Omega,d\mu);\) (iii) \(\mu^{\prime}:=\frac{1}{G}\mu\) is absolutely continuous with respect to \(Cap_{m};\) (iv) For every compact subset \(K\) of \(\Omega\) and \(t_{0}\in(-\infty,0)\) we have:_ _(a) \(\sup\{\chi(t,z):t<t_{0},z\in K\}<\infty;\)_ _(b) There exists a constant_ \(C>0\) _(depending on_ \(K,t_{0}\)_) such that for_ \(t<t^{\prime}<t_{0}\) _and_ \(z\in K\) _the estimate below holds true_ \[|\chi(t,z)-\chi(t^{\prime},z)|\leq C|t-t^{\prime}|.\] _(v)_ \(\chi\) _is continuous on_ \((-\infty,0)\times\Omega.\)__ _Then there exists a subsequence of_ \(u_{j}\) _converging in_ \(Cap_{m}\) _to_ \(u\in\mathcal{F}_{m}\) _such that_ \[\chi(u(z),z)H_{m}(u)=\mu.\] We require the following convergence result for the operator \(H_{m}\). This is inspired by Theorem 1 in [20]. **Lemma 4.9**.: _Let \(\{u_{j}\}\) be a sequence in \(\mathcal{F}_{m}\) that converges to \(u\in\mathcal{F}_{m}\) in \(Cap_{m}\). Assume that_ \[\lim_{a\to\infty}\Big{(}\limsup_{j\to\infty}\int\limits_{\{u_{j}<-a\}}H_{m}(u_{j})\Big{)}=0. \tag{15}\] _Then \(H_{m}(u_{j})\) converges weakly to \(H_{m}(u).\)_ Proof.: Fix a continuous function \(\varphi\) with compact support in \(\Omega.\) For \(a>0\) we set \[u_{j,a}:=\max\{u_{j},-a\},u_{a}:=\max\{u,-a\}.\] Then we have \[\int_{\Omega}\varphi[H_{m}(u_{j})-H_{m}(u)]=\int_{\Omega}\varphi[H_{m}(u_{j})-H_{m}(u_{j,a})]\] \[+\int_{\Omega}\varphi[H_{m}(u_{j,a})-H_{m}(u_{a})]\] \[+\int_{\Omega}\varphi[H_{m}(u_{a})-H_{m}(u)].\] Note that, by Theorem 3.6 in [18] we have \(\int\limits_{\Omega}\varphi[H_{m}(u_{a})-H_{m}(u)]\to 0\) as \(a\to\infty\) and \(\int\limits_{\Omega}\varphi[H_{m}(u_{j,a})-H_{m}(u_{a})]\to 0\) as \(j\to\infty\) for any _fixed_ \(a>0.\) Thus it suffices to check \[\lim_{a\to\infty}\Big{(}\limsup_{j\to\infty}\Big{|}\int_{\Omega}\varphi[H_{m}(u_{j})-H_{m}(u_{j,a})]\Big{|}\Big{)}=0. 
\tag{16}\] For this, we observe that \(H_{m}(u_{j,a})=H_{m}(u_{j})\) on the set \(\{u_{j}>-a\}\) by Proposition It now follows, using Corollary 3.4 (ii), that \[\Big{|}\int_{\Omega}\varphi[H_{m}(u_{j})-H_{m}(u_{j,a})]\Big{|} =\Big{|}\int\limits_{\{u_{j}\leq-a\}}\varphi[H_{m}(u_{j})-H_{m}(u_{ j,a})]\Big{|}\] \[\leq\|\varphi\|_{\Omega}\Big{[}\int\limits_{\{u_{j}\leq-a\}}H_{m} (u_{j})+\int\limits_{\{u_{j}\leq-a\}}H_{m}(u_{j,a})\Big{]}\] \[\leq(2^{m}m!+1)\|\varphi\|_{\Omega}\int\limits_{\{u_{j}<-a/2\}}H_ {m}(u_{j}).\] Thus (16) follows immediately from the assumption (15). We are done. Proof.: Since \[\int\limits_{\Omega}H_{m}(u_{j})\leq\int\limits_{\Omega}\frac{1}{G}d\mu_{j} \leq\int\limits_{\Omega}\frac{1}{G}d\mu<\infty,\ \forall j\] by Lemma 4.4, the sequence \(\{u_{j}\}\) is bounded in \(L^{1}(\Omega,dV_{2n}).\) Thus after switching to a subsequence we may assume \(u_{j}\) converges in \(L^{1}(\Omega,dV_{2n})\) to \(u\in SH_{m}(\Omega).\) Our main step is to check that \(u_{j}\to u\) in \(Cap_{m}.\) To this end, set \(\mu^{\prime}:=\frac{1}{G}\mu,\) we will first claim that \(u_{j}\to u\) in \(L^{1}(\Omega,\mu^{\prime}).\) Since \(\mu\) and hence \(\mu^{\prime}\) puts no mass on \(m-\)polar sets, in view of Lemma 4.5, it suffices to show \[\sup\limits_{j\geq 1}\int\limits_{\Omega}(-u_{j})d\mu^{\prime}<\infty. \tag{17}\] For this purpose, we apply Corollary 3.4 (i) to get \[Cap_{m}(\{|u_{j}|^{2}>s\})=Cap_{m}(\{u_{j}<-s^{1/2}\})\leq\frac{1}{s^{2m}}\int \limits_{\Omega}H_{m}(u_{j})\leq\frac{\mu(\Omega)}{cs^{2m}}. \tag{18}\] So by the assumption (a) and (18) we obtain \[\mu(\{|u_{j}|^{2}>s\})\leq F(Cap_{m}(\{u_{j}<-s^{1/2}\}))\leq F(\frac{\mu( \Omega)}{cs^{2m}}).\] This implies \[\sup\limits_{j\geq 1}\int\limits_{\Omega}|u_{j}|^{2}d\mu=\sup\limits_{j\geq 1 }\int_{0}^{\infty}\mu(\{|u_{j}|^{2}>s\})ds<\infty\] where the last integral converges in view of the assumption (i). Hence, using Cauchy-Schwarz's inequality and the assumption (ii) we obtain (17). Now we turn to the convergence in \(Cap_{m}\) of \(u_{j}\). Fix a compact set \(K\) of \(\Omega\) and \(\delta>0\) Then by Lemma 3.2, for \(h\in SH_{m}(\Omega),-1\leq h<0\), we have \[\int\limits_{\{u-u_{j}>\delta\}}H_{m}(h) \leq(\frac{2}{\delta})^{m}\int\limits_{\{u-u_{j}>\delta\}}(u-u_{j} -\frac{\delta}{2})^{m}H_{m}(h)\] \[\leq(\frac{2}{\delta})^{m}\int\limits_{\{u>u_{j}+\frac{\delta}{2 }\}}(u-u_{j}-\frac{\delta}{2})^{m}H_{m}(h)\] \[\leq(\frac{2}{\delta})^{m}\int\limits_{\{u-\frac{\delta}{2}>u_{j }\}}(-h)H_{m}(u_{j})\] \[\leq(\frac{2}{\delta})^{m}\int\limits_{\{u-\frac{\delta}{2}>u_{j }\}}\frac{1}{\chi(u_{j}(z),z)}d\mu_{j}\] \[\leq(\frac{2}{\delta})^{m}\int\limits_{\{u-\frac{\delta}{2}>u_{j }\}}\frac{1}{G}d\mu\] \[\leq(\frac{2}{\delta})^{m+1}\int\limits_{\Omega}|u_{j}-u|d\mu^{ \prime}.\] It follows that \[Cap_{m}(\{u-u_{j}>\delta\})\leq(\frac{2}{\delta})^{m+1}\int\limits_{\Omega}|u _{j}-u|d\mu^{\prime}\to 0\text{ as }j\rightarrow\infty.\] Here the last assertion follows from Lemma 4.5. 
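The final estimate in the displayed chain above (the passage from \(\int_{\{u-\frac{\delta}{2}>u_{j}\}}\frac{1}{G}d\mu\) to \(\frac{2}{\delta}\int_{\Omega}|u_{j}-u|d\mu^{\prime}\)) uses only the elementary observation that on the set \(\{u-\frac{\delta}{2}>u_{j}\}\) one has \(u-u_{j}>\frac{\delta}{2}\), so that the indicator of this set is bounded by \(\frac{2}{\delta}(u-u_{j})^{+}\) and hence
\[\int\limits_{\{u-\frac{\delta}{2}>u_{j}\}}\frac{1}{G}d\mu\;\leq\;\frac{2}{\delta}\int_{\Omega}(u-u_{j})^{+}\frac{1}{G}d\mu\;\leq\;\frac{2}{\delta}\int_{\Omega}|u_{j}-u|d\mu^{\prime},\]
which accounts for the extra factor of \(\frac{2}{\delta}\).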
Thus \[\lim\limits_{j\rightarrow\infty}Cap_{m}(\{u-u_{j}>\delta\})=0.\] Given \(\varepsilon>0\), by quasi-continuity of \(u\) we can find an open subset \(U\) of \(\Omega\) with \(Cap_{m}(U)<\varepsilon\) such that \(u\) is continuous on the compact set \(K\setminus U.\) Then by Dini's theorem for all \(j\) large enough the set \(\{u_{j}-u>\delta\}\cap K\) is contained in \(U.\) So we have \(\lim\limits_{j\rightarrow\infty}Cap_{m}(\{u_{j}-u>\delta\}\cap K)=0.\) Putting all these facts together we obtain \[\lim\limits_{j\rightarrow\infty}Cap_{m}(\{|u_{j}-u|>\delta\}\cap K)=0.\] So, \(u_{j}\) indeed converges to \(u\) in \(Cap_{m}\) as claimed. We now wish to apply Lemma 4.9. For this, fix \(a>0.\) Then we have \[\int\limits_{\{u_{j}<-a\}}H_{m}(u_{j}) =\int\limits_{\{u_{j}<-a\}}\frac{1}{\chi(u_{j}(z),z)}d\mu_{j}\] \[\leq\int\limits_{\{u_{j}<-a\}}\frac{1}{G}d\mu_{j}=\int\limits_{\{ u_{j}<-a\}}d\mu^{\prime}.\] In view of (iii) and (18) we infer that the last term goes to \(0\) uniformly in \(j\) as \(a\rightarrow\infty\). Thus we may apply Lemma 4.9 to reach that \(H_{m}(u_{j})\) converges weakly to \(H_{m}(u)\). To finish off, it remains to check \(\chi(u_{j}(z),z)\rightarrow\chi(u(z),z)\) in \(Cap_{m}.\) To see this, we use the extra assumption (iv)(b) and the fact we have proved above that \(u_{j}\to u\) in \(Cap_{m}.\) Now we are in a position to apply Proposition 3.9. In details, we note the following facts: (a) \(\chi(u_{j}(z),z)\) and \(\chi(u(z),z)\) are quasicontinuous on \(\Omega\), since \(u_{j}\) and \(u\) are such functions and since \(\chi\) is continuous on \((-\infty,0)\times\Omega\) by the assumption \((v);\) (b) \(\chi(u_{j}(z),z)\) and \(\chi(u(z),z)\) are locally uniformly bounded on \(\Omega.\) To see this, it suffices to note that on each compact subset \(K\) of \(\Omega\) the functions \(\{u_{j}\}\) and \(u\) are bounded from above by a fixed constant \(t_{0}<0,\) so by the assumption (iv)(a) we obtained the required local uniform boundedness; (c) The sequence \(\{H_{m}(u_{j})\},\) being dominated by \(\mu^{\prime},\) are uniformly absolutely continuous with respect to \(Cap_{m}\) in view of the assumption \((iii).\) It follows that \[\mu_{j}=\chi(u_{j}(z),z)H_{m}(u_{j})\rightarrow\chi(u(z),z)H_{m}(u)\] weakly in \(\Omega\). Therefore \(\chi(u(z),z)H_{m}(u)=\mu.\) The proof is then complete.
2303.00773
Revisiting the alignment of radio galaxies in the ELAIS-N1 field
Aims. Previous studies reported an alignment of the major axes of radio galaxies on various angular scales. Here, we study the alignment of radio galaxies in the ELAIS-N1 Low Frequency ARray (LOFAR) deep field, which covers an area of 25 $\rm deg^2$. Methods. The low noise level of about 20 $\rm \mu Jy/beam$ of the LOFAR deep field observations at 150 MHz enabled the identification of 447 extended ($>30''$) radio galaxies for which we have measured the major axis position angle. We found that 95\% of these sources have either photometric or spectroscopic redshifts, which we then used for a three-dimensional analysis. Results. We show the distribution of the position angles of radio galaxies in the ELAIS-N1 field and perform multiple statistical tests to check whether the radio galaxies are randomly oriented. We found that the distribution of position angles is consistent with being uniform. Two peaks around position angles of 50 and 140 $\rm deg$ are spurious and are not caused by an alignment, as shown by a 3D analysis. In conclusion, our results do not support a 2D or 3D alignment of radio galaxies on scales smaller than $\sim 4 \rm ~deg$.
Marco Simonte, Heinz Andernach, Marcus Brueggen, Philip Best, Erik Osinga
2023-03-01T19:01:06Z
http://arxiv.org/abs/2303.00773v1
# Revisiting the alignment of radio galaxies in the ELAIS-N1 field+ ###### Abstract Context: Aims:Previous studies reported an alignment of the major axes of radio galaxies on various angular scales. Here, we study the alignment of radio galaxies in the ELAIS-N1 Low Frequency ARray (LOFAR) deep field, which covers an area of 25 deg\({}^{2}\). Methods:The low noise level of about 20 uJy/beam of the LOFAR deep field observations at 150 MHz enabled the identification of 447 extended (\(>30^{\prime\prime}\)) radio galaxies for which we have measured the major axis position angle. We found that 95% of these sources have either photometric or spectroscopic redshifts, which we then used for a three-dimensional analysis. Results:We show the distribution of the position angles of radio galaxies in the ELAIS-N1 field and perform multiple statistical tests to check whether the radio galaxies are randomly oriented. We found that the distribution of position angles is consistent with being uniform. Two peaks around position angles of 50 and 140 deg are spurious and are not caused by an alignment, as shown by a 3D analysis. In conclusion, our results do not support a 2D or 3D alignment of radio galaxies on scales smaller than \(\sim 4\) deg. Conclusions: ## 1 Introduction The cosmological principle is an assumption in modern cosmology which states that the Universe is (statistically) isotropic and homogeneous on suitably large scales (\(\gtrsim 100\) Mpc). Multiple observations have investigated the degree of anisotropy in the cosmic microwave background (Bennett et al., 1996; Hansen et al., 2004; Planck Collaboration et al., 2016, 2020) confirming the principle of homogeneity and isotropy of the Universe. However, several authors have reported an intriguing alignment of the linear polarisation of quasars (Socelman et al., 1979; Hutsemekers, 1998; Hutsemekers & Lamy, 2001; Jain et al., 2004; Cabanac et al., 2005; Pelgrims & Cudell, 2014; Slagter & Miedema, 2021; Friday et al., 2022). Interestingly, they found an alignment mainly occurring in groups of 10-30 objects and on potentially Gpc scales. Some other studies focused on the alignment of radio galaxy jets (e.g., Sanders, 1984; Kapahi et al., 1985; West, 1991; Joshi et al., 2007; Tiwari & Jain, 2013), some of which support a possible departure from the cosmological principle. Taylor & Jagannathan (2016) studied the spatial distributions of the major axis position angle of radio galaxies in the ELAIS-N1 Giant Metrewave Radio Telescope (GMRT, Ananthakrishnan, 1995) deep field. They claimed the existence of a 2D alignment around \(PA\sim 140^{\circ}\) over an area of \(\sim 1.7\) deg\({}^{2}\). However, for lack of the redshifts of the host galaxies, they did not perform a 3D analysis. The first attempts to detect an alignment on larger scales were made by Contigiani et al. (2017) and Panwar et al. (2020) who used catalogue data from the Faint Images of the Radio Sky at Twenty-cm (FIRST, Becker et al., 1995; Helfand et al., 2015) and the TIFR GMRT Sky Survey (TGSS, Intema et al., 2017). They detected a signal over a scale smaller than 2\({}^{\circ}\), but did not find strong evidence for a 3D alignment. For the first time, Blinov et al. (2020) explore the alignment of parsec-scale jets finding that their radio sources do not show any global alignment. However, Mandarkas et al. (2021), with a similar but larger sample, detected a strong signal of an alignment of parsec-scale jets in multiple regions of the sky. 
Nevertheless, the redshift distribution of their sources spans a wide range, \(0<z\lesssim 1.5\) Most recently, Osinga et al. (2020) searched for alignment using 7555 extended sources from the first data release of the Low Frequency ARray Two metre Sky Survey (LoTSS, Shimwell et al., 2019). However, despite their use of host redshifts, they could only detect a 2D alignment of the position angles of the radio galaxies over a scale of 5\({}^{\circ}\) and could not exclude the possibility that the signal arises from systematic effects. Although multiple studies have now presented evidence for a 2D or 3D alignment, an explanation for such a phenomenon is lacking. West (1991), Hutsemekers et al. (2014) Pelgrims & Hutsemekers (2016) found an alignment between the radio and optical emissions from active galactic nuclei (AGN) and the surrounding large-scale structure. Moreover, Malarecki et al. (2013, 2015) showed that giant radio galaxies (Willis et al., 1974) have a tendency to grow in a direction perpendicular to the major axes of galaxy overdensities. However, the connection between the orientation of radio galaxy jets and the large-scale structure is unclear. In this paper, we revisit the alignment of radio galaxies jets in the ELAIS-N1 field. We make use of photometric redshifts of the host galaxies to perform a 3D analysis. The outline of this paper is as follows: In Sec. 2 we explain how we built our catalogue of extended radio galaxies (ERGs) and how we measured the orientation of the radio galaxies. In Sec. 3 we present our results of the 2D and 3D analysis. In Sec. 4, we discuss our results in the context of theoretical and observational work on the orientation of radio galaxies and give our summary. Throughout this work we adopt a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\), \(\Omega_{\Lambda}=0.7\). ## 2 Methods We inspected the ELAIS-N1 LOw-Frequency ARray (LOFAR, van Haarlem et al., 2013) deep field (Sabater et al., 2021). With an effective observing time of 163.7 h, it reaches a root mean square noise level at 150 MHz lower than 30 \(\mu\)Jy beam\({}^{-1}\) across the inner 10 deg\({}^{2}\) and below 20 \(\mu\)Jy beam\({}^{-1}\) in the very centre. The ELAIS-N1 LOFAR Deep Field (ELDF) is centred on 16h11m00s + 55\({}^{\circ}\)00\({}^{\prime}\)00\({}^{\prime\prime}\)(J2000) and it covers an area of about 25 deg\({}^{2}\). The 6\({}^{\circ}\) resolution of the radio image ensures a robust classification of the sources and, most importantly, the identification of the hosts and radio features such as jets and hotspots. ### The sample of extended radio galaxies We searched for all the ERGs with a largest angular size (LAS) larger than \(\sim 30\)" within an area of \(\sim 25\) deg\({}^{2}\). We measured the LAS as the distance between the opposite ends of the ERGs. However, this method can overestimate the size of the Fanaroff-Riley type II (FRII, Fanaroff & Riley, 1974) as commented in Kuzmicz & Jamrroy (2021). Thus, for such ERGs, we measured the LAS as the distance between the two hotspots, whenever identified on the VLA Sky Survey images (Lacy et al., 2020). The radio position angles (RPAs) were manually measured (by using Aladin+, Bonnarel et al., 2000) in the range [0,180) degrees as the angle between the source's major axis and the local meridian, from N through E. 
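In practice this reduces to computing the angle, east of north, of the great circle joining two chosen points along the source axis (the two extremities, or the two hotspots for an FRII), folded into the axial range [0, 180) degrees. The sketch below is our own illustration of that computation using astropy; the measurements in this paper were made interactively with Aladin, and the endpoint coordinates in the example are invented.

```python
# Our own illustrative sketch (not the Aladin workflow used in the paper):
# given the sky coordinates of two points defining the source axis, compute
# the position angle east of north and fold it into [0, 180) degrees.
import astropy.units as u
from astropy.coordinates import SkyCoord

def rpa_and_las(ra1, dec1, ra2, dec2):
    p1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    p2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    rpa = p1.position_angle(p2).to(u.deg).value % 180.0   # axial orientation
    las = p1.separation(p2).to(u.arcsec).value            # largest angular size
    return rpa, las

# hypothetical endpoints near the ELAIS-N1 field centre:
print(rpa_and_las(242.70, 54.95, 242.76, 55.01))
```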
For straight (or only slightly bent) FRI and FRII, the RPA is either that of the inner jets (FR I) or that of the direction connecting the two hotspots (FR II). In the case of bent sources (e.g., Wide-Angle-Tailed RGs), measuring the RPA is less trivial. For such cases, we measured the RPA in the vicinity of the core where usually the jets are not bent yet and flagged them as uncertain measurements. We carefully avoided measuring the RPA of overlapping sources unless the morphology of the ERGs was very clear. A large number of optical and infrared surveys, such as the Wide-Field Infrared Survey Explorer (WISE, Cutri et al., 2012; Cutri et al., 2013; Schlafly et al., 2019; Marcoco et al., 2021), the Sloan Digital Sky Survey (SDSS, York et al., 2000), the Legacy survey (Dey et al., 2019) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS, Flewelling et al., 2020) enabled the identification of the host galaxies (see Kondapally et al., 2021; Andernach et al., 2021; Simonte et al., 2022, for further details on the host identification and radio source classification). We looked for available redshifts (either spectroscopic or photometric) in multiple catalogues such as Rowan-Robinson et al. (2013), Bilicki et al. (2014), Bilicki et al. (2016), Beck et al. (2016),Duncan et al. (2021), Beck et al. (2021), Zhou et al. (2021), Wen & Han (2021) and Duncan (2022). If for a single source multiple photometric redshifts were available, we computed their mean and error by taking the standard deviation of the various redshifts. For spectroscopic redshifts, we do not report errors since they are generally more accurate (typical errors are usually around 0.00015) than the precision we can achieve on the linear size given our errors in measuring the angular size. For 15 optically very faint host or infrared-only detected host galaxies, no redshift estimate was available. The deepest full-sky catalogue is the Zhou et al. (2021) DESI DR9 photometric redshift catalogue (a deeper catalogue from Duncan et al. (2021) exists over the inner 7deg\({}^{2}\) of the ELAIS-N1 field). Zhou et al. (2021) the faintest galaxies have a maximum redshift of around 1.3. Thus, we assumed a redshift in the range of 1.1-1.5 for those host galaxies without redshift listed in the literature. This assumption will not affect our analysis as we will use only those sources with a redshift reported in the literature for the 3D analysis. We found 447 ERGs for which we provide redshift, LAS, largest linear size (LLS) and RPA. We show some of our ERGs in Table 1 and the full list will be made available at the CDS and through the VizieR service+ (Ochsenbein et al., 2000). To test the alignment in the region inspected by Taylor & Jaganathan (2016), we located all sources these authors had used (their Fig. 2) and measured their RPAs. Some of these RGs have an angular size smaller than 30\({}^{\prime\prime}\). The resolution of 6\({}^{\circ}\) of the LOFAR images does not enable reliable measurement of the RPA of the smallest sources and we flagged these measurements as uncertain. We had to discard 9 RGs used by Taylor & Jaganathan (2016) as 8 of them are separate sources and one is a spiral galaxy (see Appendix A). However, we were able to add 24 more ERGs within the sky area they studied that we were able to identify using the LOFAR data. In Tab. 1 we compare our sample with previous lists of RGs used for the RPA analysis. 
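Before turning to that comparison, the redshift bookkeeping described above can be summarised in a few lines; the sketch below is schematic (catalogue retrieval and cross-matching are omitted), and the fallback value for hosts without any published redshift simply encodes the assumed 1.1-1.5 range.

```python
# Schematic summary of the redshift assignment rules described above
# (catalogue access and cross-matching omitted; values are placeholders).
import numpy as np

def assign_redshift(z_spec=None, z_phot=()):
    """Return (z, z_err, flag) for a host galaxy."""
    if z_spec is not None:
        return z_spec, None, "spec"              # spectroscopic: no error quoted
    z_phot = [z for z in z_phot if z is not None]
    if z_phot:                                    # photometric: mean +/- std
        return float(np.mean(z_phot)), float(np.std(z_phot)), "phot"
    return 1.3, 0.2, "assumed"                    # no estimate: assumed 1.1-1.5 range

print(assign_redshift(z_phot=(0.61, 0.57, 0.66)))
```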
In this work, we analysed a field \(\sim 10\) times larger than that of Taylor & Jaganathan (2016), but much smaller than those used by Contigani et al. (2017), Panwar et al. (2020) and Osinga et al. (2020). Nevertheless, our sample has the largest RGs sky density in the central region (\(241.5^{\circ}<RA<243.75^{\circ},53.9^{\circ}<DEC<55.2^{\circ}\)), which is reported in the last row, while the second-last row shows the RGs sky density considering the full ELDF. Footnote †: [https://vizier.cds.unistra.fr](https://vizier.cds.unistra.fr) ### Statistical tests We performed multiple tests to assess the (non-)uniformity of the RPA distribution. Different methods have been used in past analyses to study the distribution of the orientation of RGs. We \begin{table} \begin{tabular}{l c c c c} \hline (1) & (2) & (3) & (4) & (5) \\ Survey & Freq. & RMS & N of & RGs density \\ & GHz & mJy/b & RGs & deg\({}^{-2}\) \\ \hline Taylor\({}^{1}\) & 0.61 & 0.01 & 65 & 38.2 \\ FIRST\({}^{2}\) & 1.4 & 0.15 & 30059 & 4.3 \\ FIRST\({}^{3}\) & 1.4 & 0.15 & 18775 & 1.9 \\ LoTSS\({}^{4}\) & 0.15 & 0.07 & 7555 & 17.8 \\ ELDF\({}^{5}\) & 0.15 & 0.03 & 447 & 17.9 \\ ELDF-C\({}^{6}\) & 0.15 & 0.02 & 78 & 45.9 \\ \hline \end{tabular} \end{table} Table 1: Comparison between our catalogue and previous samples. References: 1-Taylor & Jagannathan (2016), 2-Contigiani et al. (2017), 3-Panwar et al. (2020), 4-Osinga et al. (2020), 5,6-this work: ELDF-C refers to the central region of the ELDF (\(241.5^{\circ}<RA<243.75^{\circ},53.9^{\circ}<DEC<55.2^{\circ}\)). use five different tests for (non-)uniformity of angles: 1. The Kolmogorov-Smirnov (KS) test compares the underlying distribution of the sample of the RPA against a given distribution, which in our case is a uniform distribution. The null hypothesis is that the two distributions are identical and the closer the p-value is to zero the more confident we are in rejecting the null hypothesis. A common threshold used to reject the null hypothesis of the two distributions being drawn from the same population is a p-value p\(<\)0.05, which means that there is only a 5% chance that the two samples are in fact drawn from the same population. 2. Pearson's \(\chi^{2}\) test for uniformity tests the null hypothesis stating that the frequency distribution of certain events observed in a sample is consistent with a particular theoretical distribution (in our case a uniform one). As with the KS test, the smaller the p-value the more likely it is that the two distributions are different. This test is performed with binned data and, in our case, we used 18 bins which are \(10^{\circ}\) wide. 3. Our set of RPAs belongs to the category of circular data (Fisher 1993) which are fundamentally different from linear data due to their periodic nature. The Rayleigh test (Mardia & Jupp 2000) assesses the uniformity of circular data. To this end, this test compares the test statistic of the unit vector, resulting from the sum of all the vectors pointing towards the different angles of the sample, with the same statistics estimated from a uniformly distributed sample. The null hypothesis of such test is that the data are uniformly distributed over the circle. 
The test statistic is the mean resultant length of the unit vector and it is defined as \[\bar{R}=\frac{1}{n}\left[\left(\sum_{i=1}^{n}\cos\theta_{i}\right)^{2}+\left( \sum_{i=1}^{n}\sin\theta_{i}\right)^{2}\right]^{1/2}, \tag{1}\] where n is the size of the sample and the angles \(\theta_{i}\) are the RPAs multiplied by two since these are orientations (axial vectors) in the range [\(0^{\circ}\), \(180^{\circ}\)) while the Rayleigh test is performed on the range [\(0^{\circ}\), \(360^{\circ}\)). \(\bar{R}\) can range from 0 to 1. This statistic is zero for a uniform distribution, thus it is reasonable to reject uniformity when \(R\) is large. It is worth mentioning that this test is not sensitive to non-uniform distributions that have \(\bar{R}=0\). An example is a bimodal distribution with two peaks that are \(180^{\circ}\) apart as every vector pointing towards a certain direction is cancelled by a vector pointing along the opposite direction. This issue can mildly affect our analysis since the major peaks in our distributions of the RPAs are \(180^{\circ}\) apart once the RPAs are multiplied by two (see Sec. 3 below). 4. The semi-variance (Cressie 1993) is a statistical tool used in spatial analysis to measure the dispersion of a certain variable on different scales. It is defined as follows: \[\gamma(d)=\frac{1}{2m(d)}\sum_{i=1}^{m(d)}\left[s(x_{i})-s(x_{i}+d)\right]^{2}, \tag{2}\] where \(m(d)\) is the number of pairs separated by an (angular) distance in the range [\(d\), \(d+\delta d\)] (we used \(\delta d=0.2^{\circ}\)) and \(s\) is the variable measured at the vector location \(x_{i}\) and in our case is the RPA of the ERGs. The semi-variance is constant over all angular scales when the distribution of the variable \(s\) is uniform. A value for the semi-variance smaller than what is predicted by a uniform distribution at a certain scale indicates an alignment of the ERGs. On the other hand, a larger semi-variance suggests a larger dispersion than expected from a random distribution, indicating that no alignment is present on that scale. We performed a simple Monte-Carlo simulation to infer the value of the semi-variance of randomly distributed ERGs on different angular scales. We generated 447 (which is the size of our sample) random angles uniformly distributed in the range [\(0,180\)) which have the same spatial distribution of the ERGs in our sample and we computed the semi-variance on different angular scales. We repeated the operation 10000 times and then averaged the semi-variance values on the different scales. We folded the data in circularity to take into account the periodicity of the RPAs. On every scale, we obtained a constant semi-variance of 0.82, consistent with the result from Taylor & Jagannathan (2016). The error on the semi-variance, \(\sigma_{\rm SM}\) was estimated by calculating the standard deviation of the 10000 values on each angular scale. 5. Finally, we probed the alignment of the ERGs at different angular scales using the dispersion measure analysis (Jain et al. 2004). The dispersion measure is defined as the inner product between a certain position angle \(\theta\) and the RPAs, \(\theta_{\rm\epsilon}\), of the \(n\) closest sources to a certain \(i\)-th ERG (including the source itself) and it is an indication of the alignment of the ERGs. Following Jain et al. (2004), Contigiani et al. (2017) and Osinga et al. 
(2020), it can be shown that the maximum dispersion measure around the source \(i\) is \[D_{i,n}|_{\rm max}=\frac{1}{n}\left[\left(\sum_{k=1}^{n}\cos(\theta_{k}) \right)^{2}+\left(\sum_{k=1}^{n}\sin(\theta_{k})\right)^{2}\right]^{1/2}. \tag{3}\] The closer \(D_{i,n}|_{\rm max}\) is to 1, the more aligned the \(n\) galaxies are. The statistic, \(S_{n}\) used to test the (non-)uniformity of the distribution of the RPAs is the average of the \(D_{i,n}|_{\rm max}\) calculated for each source of the sample. This statistic computed from our dataset is compared to the same statistics coming from Monte-Carlo simulated samples, \(S_{n,\rm MC}\). To compute \(S_{n,\rm MC}\) we generated 447 randomly oriented ERGs with the same spatial distribution of our sources and followed the formalism described in Jain et al. (2004), Contigiani et al. (2017) and Osinga et al. (2020). We repeated the calculation of \(S_{n,\rm MC}\) 10000 times and estimate the average, \(\langle S_{n,\rm MC}\rangle\), and the error, \(\sigma_{n,\rm MC}\), as the standard deviation of 10000 generated statistics. The significance level for rejecting the null hypothesis that a sample of ERGs is randomly oriented is found through a one-tailed significance test, expressed as: \[SL=1-\Phi\left(\frac{S_{n}-\langle S_{n,\rm MC}\rangle}{\sigma_{n,\rm MC}} \right), \tag{4}\] where \(\Phi\) is the cumulative normal distribution function. The closer the significance level is to 0 the more confident we are in rejecting the hypothesis of uniformity. Since the number of nearest neighbours can be translated to an angular scale extending to the \(n\)-th nearest neighbour, we can probe multiple angular scales varying \(n\). To do so, we calculated the maximum angular distance between the relevant ERG and the n-th closest neighbour and took the median value among the 447 sources. The same analysis can be implemented considering the 3D position of the ERGs to test whether a 3D alignment, i.e. between sources that are physically close to each other, is present. We approximated the redshift of the source with the average redshift estimated for each ERGs without taking into account the error and we did not include those sources without a redshift value reported in the literature. The uncertainties of some redshift estimations might mildly affect the analysis: in fact, while ERGs with \(z<1\) have a redshift error of about 0.05, for more distant sources, which represent 30% of our sample, the error increases to 0.2. We then converted the redshift to comoving distance and measured the 3D comoving distance between all the ERGs in our sample. Moreover, Jain et al. (2004) verified that the variance of the statistic \(S_{n}\) is inversely proportional to the size of the sample which means that, compared to Contigiani et al. (2017) and Osinga et al. (2020) who used much larger samples, we are more affected by the shot noise. ## 3 Results In this section, we present the distribution of the RPAs in the ELAIS-N1 field. We initially focus on the inner region studied by Taylor & Jagannathan (2016) and then expand the analysis to the entire ELDF. ### Alignment in the central part of ELAIS-N1 Here, we look at the distribution of the RPAs in the inner \(\sim\)1.7 deg\({}^{2}\) of the ELAIS-N1 field (\(241.5<RA<243.75,53.9<DEC<55.2\)), where Taylor & Jagannathan (2016) found a statistically significant alignment of radio galaxies. We recall that 9 radio sources they used in their analysis are not actual radio galaxies and we could add 24 more ERGs. 
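For reference, the first three uniformity tests listed in Sect. 2.2 can be written down compactly; the sketch below only illustrates the statistics as defined there (SciPy is assumed, and the Rayleigh p-value uses the standard leading-order approximation \(e^{-n\bar{R}^{2}}\)); it is not the code behind the p-values quoted in this section.

```python
# Illustrative implementation of the KS, Pearson chi^2 (18 bins of 10 deg)
# and Rayleigh tests applied to a set of radio position angles in degrees.
import numpy as np
from scipy import stats

def uniformity_tests(pa_deg):
    pa_deg = np.asarray(pa_deg, dtype=float)

    # 1. Kolmogorov-Smirnov against the uniform distribution on [0, 180)
    ks = stats.kstest(pa_deg, "uniform", args=(0.0, 180.0))

    # 2. Pearson chi^2 with 18 bins of width 10 deg (uniform expected counts)
    counts, _ = np.histogram(pa_deg, bins=18, range=(0.0, 180.0))
    chi2 = stats.chisquare(counts)

    # 3. Rayleigh test: RPAs are axial, so double them before wrapping
    theta = np.deg2rad(2.0 * pa_deg)
    n = theta.size
    rbar = np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / n
    p_rayleigh = np.exp(-n * rbar**2)     # leading-order approximation

    return ks.pvalue, chi2.pvalue, rbar, p_rayleigh

# e.g. on synthetic, uniformly distributed position angles:
rng = np.random.default_rng(1)
print(uniformity_tests(rng.uniform(0.0, 180.0, size=447)))
```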
Thus, the sample for such analysis consists of 78 ERGs, of which 19 are flagged as uncertain RPA measurement. We show the distribution of the RPAs in the inner region of the ELAIS-N1 field in Fig. 1. The blue histogram shows the distribution of the total sample of RPA in this region, while in the red histogram the uncertain measurements are excluded. The figure clearly shows a peak at RPAs around 140\({}^{\circ}\) in agreement with Taylor & Jagannathan (2016). We then carried out the statistical tests explained in Sec. 2.2 and found a p-value of 0.66 and 0.31 for the KS test and the \(\chi^{2}\) test, respectively. The latter test is valid for large samples and it is customary to recommend, in applications of the test, that the smallest expected number in any bin should be 5 (Cochran, 1952). We performed the test using 13 bins with a width of 15\({}^{\circ}\) which lead to an expected value of about 6.5 elements per bin. The resulting p-value, in this case, is 0.23. Concerning the Rayleigh test, we found a mean resultant length \(R=0.009\) which results in a p-value=0.96. Thus, even though the distribution shows a clear peak, we cannot reject the hypothesis of uniformity of the RPAs in this region. Moreover, the analysis involving the semi-variance (Fig. 2) shows that there is no correlation between the RPAs of the ERGs, located at different positions of the sky, at any angular scale. Here, the blue line and points are the values estimated by using randomly generated data which have the same spatial distribution of the 78 ERGs in the inner region of the ELAIS-N1 field, while the orange points are the result of the analysis performed on our dataset. We did not perform an analysis based on the dispersion measure (that is the 5th method listed in Sec. 2.2) due to the smaller number of ERGs when restricting the study to the inner region of the field. As a matter of fact, with a sample of only 78 objects, we are certainly dominated by the shot noise (Jain et al., 2004) which would cancel out any signal unless the alignment is very strong, which does not seem to be the case here. We performed the statistical tests on the sample of 59 ERGs for which we could measure a reliable RPA as well. We obtained a p-value of 0.10, 0.01 and 0.46 for the KS, \(\chi^{2}\) and Rayleigh test, respectively. The result of the \(\chi^{2}\) test holds when considering bins with a width of 15\({}^{\circ}\). Nevertheless, this is the only test which suggests an alignment of the ERGs in the inner region as also the semi-variance test applied to this smaller sample cannot reject the hypothesis of a uniform distribution. The sensitivity of the LOFAR (20 \(\mu Jy/\)beam) and GMRT (10 \(\mu Jy/\)beam) ELAIS-N1 deep field observations are quite similar, but the four times lower frequency of LOFAR makes a RG, with a typical spectral index of -0.8, about three times brighter at 144 MHz compared to 610 MHz. Moreover, the availability of deeper infrared source catalogues like CatWISE (Marocco et al., 2021) and unWISE (Schlafly et al., 2019) enables the identification of more distant galaxies which may emit in the radio band as point-like sources. Such contamination, if superimposed on the emission of an ERG, may slightly change the morphology of the latter and lead to a wrong RPA measurement. In order to attempt to reproduce the Taylor & Jagannathan (2016) results, we extracted the positions, sizes and RPAs of the RGs from their Fig. 
2 as follows: the end points of all vectors were digitized with the g3data software, and saved as RA, DEC in degrees. Then, we reviewed the RPA measurements and could closely match the histogram shown in their Fig. 3. We ran our first four statistical tests on the recovered data, but found that none of them is able to reject the hypothesis of uniformity. In particular, for the Rayleigh test, we obtained a mean resultant length of 0.09 from our analysis of these data, which is highly discrepant from the value of 0.68 derived by Taylor & Jagannathan (2016) that led them to conclude non-uniformity of RPAs. The origin of this difference is uncertain, although we note that if we omit to multiply the RPAs by a factor of two (a step which is required, since the test assesses uniformity over a circle and the RPAs are distributed over \([0,180)\)) then we obtain an erroneous mean resultant length of 0.64, which is much closer to the value quoted by Taylor & Jagannathan (2016). ### Alignment in the entire ELAIS-N1 field We show the distribution of the RPAs of the radio galaxies in the ELDF in Fig. 3. The blue histogram represents the total sample, while the red histogram shows the distribution for the Figure 1: Distribution of the RPAs of the 78 ERGs (blue histogram) that we found in the inner region of the ELDF and of the 59 certain sources (red histogram). The black line shows the expected number of objects per bin for a uniform distribution of 78 ERGs. 377 certain sources, i.e. those ERGs that do not show a complex morphology and for which we could accurately measure the RPA. The black line denotes the expected number of objects per bin if the distribution were uniform. Now, we performed the same statistical tests considering the total sample. The results, with a p-value equal to 0.71, 0.33 and 0.88 for the KS test, \(\chi^{2}\) test and Rayleigh test respectively, suggest that the uniformity holds when including the entire field as well. These results are also confirmed by the analysis of the semi-variance. We measured the semi-variance in our sample, shown by the orange points in Fig. 4. The blue line and points are the semi-variance values estimated from randomly generated data and the shadowed region represents the \(2\sigma_{\rm SM}\) values. The larger uncertainties on the largest scale are due to poor statistics since not many pairs are separated by such large distances. Overall, there is no clear evidence for a convincing signal in favour of an alignment as the orange points are always consistent with 0.82 within the error. Finally, we show the results of the 2D (black line) and 3D (blue line) dispersion measure tests in Fig. 5. The significance level, \(SL\), is plotted as a function of the number of nearest neighbours, n, and angular scale in degrees. Following previous studies (e.g., Contigiani et al. 2017), a commonly used criterion for the presence of an alignment signal is \(SL<\)0.03 (\(\log({\rm SL})<-1.5\)). As mentioned in Sec. 2.2, we are more affected by the shot noise due to the comparatively smaller size of our sample. However, a minimum significance level of about 0.2 in Fig. 5 suggests there is no evident signal, neither in the 2D nor in the 3D analysis, at any scale. These results also hold when considering only the ERGs with reliable RPA measurement. 
Even though the tests suggest that radio galaxies are randomly oriented, two conspicuous peaks are visible on an RPA range between \(50^{\circ}-60^{\circ}\) and \(\sim 140^{\circ}-150^{\circ}\) (the latter was seen by Taylor & Jagannathan 2016 as well). The Poisson distribution gives the probability that a given number of observations fall within an interval of values knowing the average frequency of that particular event. Thus, using such a distribution we find that the two peaks are \(\sim 2.5\sigma\) (for RPAs between \(50^{\circ}-60^{\circ}\)) and \(\sim 1.5\sigma\) (for RPA between \(140^{\circ}-150^{\circ}\)) above the average. In Fig. 6 we show the spatial and redshift distributions of the ERGs with an orientation between \(50^{\circ}-60^{\circ}\) (upper panel) and \(140^{\circ}\)-\(150^{\circ}\) (lower panel). We selected the ERGs up to redshift 1.5 since the majority of ERGs at larger redshifts either do not have a redshift estimate in the literature or have very large errors. The black rectangles highlight the region inspected by Taylor & Jagannathan (2016). In both cases, there is no 3D alignment of ERGs as the redshifts span a range from \(0.1\lesssim z\lesssim 1.5\). ## 4 Discussion and summary The tidal torque theory predicts that the angular momentum of the dark matter proto-halos is acquired during their formation which occurs along the entire evolution of the large-scale structure of the universe (Peebles 1969; Doroshkevich 1970; White 1984; Porciani et al. 2002; Schafer 2009). As a result, an alignment between optical galaxies and the large-scale structure (e.g. filaments and sheets) is expected (Hu et al. 2006; Joachimi et al. 2015; Kirk et al. 2015). In a first attempt to study this alignment Hawley & Peebles (1975) found a small departure from isotropy in the distribution of the orientation angle, measured as the angle between the major axis of the galaxy and the local meridian. Lee (2004) argued that the observed large-scale coherence in the orientation of nearby spiral galaxies found by Navarro et al. (2004) can be fully explained by the tidal torque theory. Others have tried to look at a possible alignment Figure 3: Distribution of the RPAs of the 447 ERGs we found in the ELDF (red histograms) and of the 377 certain sources (red histogram). The black line shows the expected number of objects per bin for a uniform distribution considering the total sample. Figure 2: Estimate of the semi-variance on different angular scales in the inner region of the ELAIS-N1 field. The blue line and points are the semi-variance values obtained for randomly generated position angles with the same spatial distribution of the 78 ERGs. The shadowed region represents the \(2\sigma_{\rm SM}\) values. The orange points are estimated from our dataset. of galaxies and most of these found that the minor axes of early-type galaxies are preferentially oriented perpendicular to the host filament (Tempel et al., 2013; Tempel & Libeskind, 2013; Hirv et al., 2017), while late-type galaxies have spin axes parallel to the closest filament (Tempel et al., 2013; Tempel & Libeskind, 2013; Hirv et al., 2017; Blue Bird et al., 2020; Kraljic et al., 2021; Tudorache et al., 2022). However, some conflicting results have been found (Jones et al., 2010; Zhang et al., 2015; Pahwa et al., 2016; Krolewski et al., 2019). Recently, Rodriguez et al. 
(2022), by using the IllustrisTNG simulations (Nelson et al., 2019), found an alignment with the large-scale structure of red galaxies in the centres of galaxy clusters and groups. They then speculated that this anisotropy in the orientation of the central galaxies is the consequence of a concatenation of alignments. Starting from the alignment between the central galaxy and the host cluster (Yuan & Wen, 2022), eventually, the host halo aligns with the structures surrounding it. Some work found that there is a mild preference for radio jets to align with the minor axis of the galaxy host (Kotanyi & Ekers, 1979; Battye & Browne, 2009; Kamali et al., 2019; Vazquez Najar & Andernach, 2019). Assuming that the alignment between radio jets and optical galaxies is real, one could in principle look at the alignment between the radio galaxies and the large-scale structure (e.g., West, 1991). Nevertheless, some opposing results regarding the orientations of radio jets have been found (Schmitt et al., 2002; Verdoes Kleijn & de Zeeuw, 2005; Hopkins et al., 2012) casting doubts on this assumption. In this work, we revisited the alignment of radio jets in the ELAIS-N1 field. We inspected the LOFAR ELAIS-N1 deep field in which we identified the host galaxies of 447 ERGs whose radio emission extends over at least \(\sim 0.5^{\prime}\). We measured the RPA of the major radio axis (assuming it is a tracer of the underlying radio jets direction) and studied their distribution by using a number of statistical tests, none of which is able to reject the null hypothesis of uniform orientations. Similar results are obtained when restricting the analysis to the region inspected by Taylor & Jagannathan (2016). Only when restricting the sample to the 59 ERGs with reliable RPA measurement in the inner region, the \(\chi^{2}\) test returns a p-value=0.01 (i.e. it attributes a 1% chance of the result being a statistical fluctuation). However, none of the other statistical tests on this sample is able to reject the hypothesis of uniformity of the RPA distribution. We recovered the data used by Taylor & Jagannathan (2016) for their analysis and showed that, even with such sample, we could not obtain the same results. Furthermore, we found that the redshifts of ERGs with orientations near the two peaks (around 50deg and 140deg) span a wide range, \(0.1\lesssim z\lesssim 1.5\), strongly arguing against the idea of a 3D alignment of radio galaxies. Other reports of a 3D alignment (e.g., Contigiani et al., 2017; Panwar et al., 2020) have not been statistically significant. However, several studies reported a 2D alignment (Contigiani et al., 2017; Panwar et al., 2020; Mandarakas et al., 2021) over angular scales similar to those that we studied. The maximum angular scale we could explore is \(\sim 4\degr\) (see Fig. 4) which is the scale over which Osinga et al. (2020) found a 2D alignment. The combination of the two results might suggest that the 2D alignment of radio galaxies may exist on scales larger than those probed by our analysis. ## Acknowledgements This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306 as well as grant DFG BR2026/27. Figure 4: Estimate of the semi-variance on different angular scales. The blue line and points highlight the constant value of the semi-variance for randomly generated position angles with the same spatial distribution of the 447 ERGs in our sample. 
The shadowed region represents the \(2\sigma_{\rm SM}\) values. The orange points are the semi-variance values of our sample. Figure 5: Significance level of the dispersion measure test (SL) as a function of the nearest neighbours (n, lower abscissa) and angular scale in degrees (upper abscissa). The black line shows the results of the 2D analysis while the 3D analysis is shown with the blue line. Such a large SL (\(>0.03\)) shows that no alignment is present in the ELAIS-N1 field at any scale in our analysis. HA has benefited from grant CIIC 138/2022 of Universidad de Guanajuato, Mexico. PNB is grateful for support from the UK STFC via grant ST/V000594/1. EO acknowledges support from the VIDI research programme with project number 639.042.729 LOFAR (van Haarlem et al. 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universite d'Orleans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (e-infra 180169) and the LOFAR e-infra group. The Julich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the Julich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC). This research made use of the University of Hertfordshire high-performance computing facility and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin university (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy.
2305.14488
Looking forwards and backwards: dynamics and genealogies of locally regulated populations
We introduce a broad class of spatial models to describe how spatially heterogeneous populations live, die, and reproduce. Individuals are represented by points of a point measure, whose birth and death rates can depend both on spatial position and local population density, defined via the convolution of the point measure with a nonnegative kernel. We pass to three different scaling limits: an interacting superprocess, a nonlocal partial differential equation (PDE), and a classical PDE. The classical PDE is obtained both by first scaling time and population size to pass to the nonlocal PDE, and then scaling the kernel that determines local population density; and also (when the limit is a reaction-diffusion equation) by simultaneously scaling the kernel width, timescale and population size in our individual based model. A novelty of our model is that we explicitly model a juvenile phase: offspring are thrown off in a Gaussian distribution around the location of the parent, and reach (instant) maturity with a probability that can depend on the population density at the location at which they land. Although we only record mature individuals, a trace of this two-step description remains in our population models, resulting in novel limits governed by a nonlinear diffusion. Using a lookdown representation, we retain information about genealogies and, in the case of deterministic limiting models, use this to deduce the backwards in time motion of the ancestral lineage of a sampled individual. We observe that knowing the history of the population density is not enough to determine the motion of ancestral lineages in our model. We also investigate the behaviour of lineages for three different deterministic models of a population expanding its range as a travelling wave: the Fisher-KPP equation, the Allen-Cahn equation, and a porous medium equation with logistic growth.
Alison M. Etheridge, Thomas G. Kurtz, Ian Letter, Peter L. Ralph, Terence Tsui Ho Lung
2023-05-23T19:37:00Z
http://arxiv.org/abs/2305.14488v2
# Looking forwards and backwards: dynamics and genealogies of locally regulated populations ###### Abstract We introduce a broad class of mechanistic spatial models to describe how spatially heterogeneous populations live, die, and reproduce. Individuals are represented by points of a point measure, whose birth and death rates can depend both on spatial position and local population density, defined at a location to be the convolution of the point measure with a suitable non-negative integrable kernel centred on that location. We pass to three different scaling limits: an interacting superprocess, a nonlocal partial differential equation (PDE), and a classical PDE. The classical PDE is obtained both by a two-step convergence argument, in which we first scale time and population size and pass to the nonlocal PDE, and then scale the kernel that determines local population density; and in the important special case in which the limit is a reaction-diffusion equation, directly by simultaneously scaling the kernel width, timescale and population size in our individual based model. A novelty of our model is that we explicitly model a juvenile phase. The number of juveniles produced by an individual depends on local population density at the location of the parent; these juvenile offspring are thrown off in a (possibly heterogeneous, anisotropic) Gaussian distribution around the location of the parent; they then reach (instant) maturity with a probability that can depend on the local population density at the location at which they land. Although we only record mature individuals, a trace of this two-step description remains in our population models, resulting in novel limits in which the spatial dynamics are governed by a nonlinear diffusion. Using a lookdown representation, we are able to retain information about genealogies relating individuals in our population and, in the case of deterministic limiting models, we use this to deduce the backwards in time motion of the ancestral lineage of an individual sampled from the population. We observe that knowing the history of the population density is not enough to determine the motion of ancestral lineages in our model. We also investigate (and contrast) the behaviour of lineages for three different deterministic models of a population expanding its range as a travelling wave: the Fisher-KPP equation, the Allen-Cahn equation, and a porous medium equation with logistic growth. 
**Key words:** population model, interacting superprocess, lookdown construction, porous medium equation, reaction-diffusion equation, travelling waves, genealogies, Fisher-KPP equation **MSC 2010 Subject Classification:** Primary: Secondary: ###### Contents * 1 Introduction * 2 Model and main results * 2.1 Scaling limits of the population process * 2.2 Ancestral lineages in the scaling limit * 3 Examples and applications * 3.1 Beyond linear diffusion * 3.2 Ancestry in different types of travelling waves * 3.3 Clumping from nonlocal interactions * 3.4 Lineage motion is not uniquely determined by population density * 4 * 4 Heuristics * 4.1 The population density * 4.2 Motion of ancestral lineages * 5 The lookdown process * 5.1 Lookdown representation of the model of Definition 2.4 * 5.2 Explicit construction of lines of descent * 5.3 Limiting processes for lines of descent * 6 Proofs of convergence for nonlocal models * 6.1 Preliminaries * 6.2 Proof of Theorem 2.10: convergence for the nonlocal process * 6.3 Convergence of some nonlocal equations to classical PDEs * 6.3.1 Reaction-diffusion equation limits * 6.3.2 Porous Medium Equation * 7 Simultaneous scaling with interaction distance * 7.1 Moment bounds for \(\rho_{\epsilon}*\eta\) * 7.2 Continuity estimates for \(\rho_{\epsilon}*\eta\) * 7.3 Identification of the limit * 8 Proofs of results for the lookdown process and ancestral lineages * 8.1 Tightness of the Lookdown Process * 8.2 Motion of ancestral lineages * A Markov Mapping Theorem * A.1 Lookdown Generators * B Technical Lemmas * B.1 Constraints on kernel widths * B.2 Tightness of processes Introduction As one takes a journey, long or short, the landscape changes: forests thicken or thin or change their composition; even in flat plains springtime grasslands host intergrading mosaics of different types of flowers. The aim of this paper is to introduce and study a broad class of mechanistic spatial models that might describe how spatially heterogeneous populations live, die, and reproduce. Questions that we (start to) address include: How does population density change across space and time? How might we learn about the underlying dynamics from genealogical or genetic data? And, how does genetic ancestry spread across geography when looking back through time in these populations? Reproduction of individuals naturally leads to spatial branching process models, including branching random walk, branching Brownian motion, and the Dawson-Watanabe superprocesses. However, as a result of the branching assumption (once born, individuals behave independently of one another), a population evolving according to any of these models will either die out or grow without bound and, in so doing, can develop clumps of arbitrarily large density and extent. Our starting point here is an individual-based model of a single species in continuous space in which birth, death, and establishment may all depend on local population density as well as on spatial location, allowing for stable populations through density-dependent feedback. Although it is often mathematically convenient to assume that individuals follow Brownian motion during their lifetime, in our model, offspring are thrown off according to some spatial distribution centred on the location of the parent and do not subsequently move. This is particularly appropriate for modelling plant populations, in which this dispersal of offspring around the parent is the only source of spatial motion. 
Often models do not distinguish between juveniles and adults, so, for example, the number of adults produced by a single parent is determined only by the degree of crowding at the location of the parent. Although we shall similarly only follow the adult population, in formulating the dynamics of the models we shall distinguish between production of juveniles, which will depend upon the location of the adult, and their successful establishment, which will depend on the location in which a juvenile lands. The result is that not only the absolute number, but also the spatial distribution around their parent, of those offspring that survive to adulthood will depend upon the local population density. We shall consider three different classes of scaling limits for our model. The first yields a class of (generalised) superprocesses in which coefficients governing both the spatial motion and the branching components of the process can depend on local population density; the second is a corresponding class of deterministic non-local differential equations; and the third are classical PDEs. We measure local population density around a point by convolving with a smooth kernel \(\rho(\cdot)\), which may differ for the two stages of reproduction. When the limiting population process is deterministic, it is a (weak) solution of an equation of the form \[\partial_{t}\varphi_{t}(x)=r\left(x,\varphi_{t}\right)\mathcal{B}^{*}\left[ \varphi_{t}(\cdot)\gamma\big{(}\cdot,\varphi_{t}\big{)}\right]\left(x\right)+ \varphi_{t}(x)F\left(x,\varphi_{t}\right), \tag{1.1}\] where \(\varphi_{t}(x)\) can be thought of as the population density at \(x\) (although the limit may be a measure without a density), and \(\mathcal{B}^{*}\) is (the adjoint of) a strictly uniformly elliptic second order differential operator, typically the Laplacian. The dependence of each of the terms \(r\), \(\gamma\), and \(F\) on \(\varphi\) is only through the local density at \(x\), e.g., \(F(x,\varphi)=F(x,\rho*\varphi(x))\). We shall be more specific about the parameters below. By replacing \(\rho\) by \(\rho^{\epsilon}(\cdot)=\rho(\cdot/\epsilon)/\epsilon^{d}\), we can also scale the "width" of the region over which we measure local population density. When the population follows (1.1), we expect that if we take a second limit of \(\epsilon\to 0\), thus scaling the kernels appearing in \(r\), \(\gamma\), and \(F\) and making interactions pointwise, we should recover a nonlinear PDE. We verify that this is indeed the case in two important examples: a special case of the porous medium equation with a logistic growth term, in which the limiting equation takes the form \[\partial_{t}\varphi=\Delta(\varphi^{2})+\varphi(1-\varphi); \tag{1.2}\] and a wide class of semi-linear PDEs of the form \[\partial_{t}\varphi=\mathcal{B}^{*}\varphi+\varphi F(\varphi), \tag{1.3}\] which includes the Fisher-KPP equation and the Allen-Cahn equation. Equations of this form have been studied extensively in the context of spatial ecology (see for instance Lam and Lou (2023) and Cantrell and Cosner (2004)) and in many other fields; for instance, Ghosh and Good (2022) derive a stochastic version of (1.3) to describe abundances of mutant bacteria strains along the human gut, while Li et al. (2022) study the effects of nonlinear diffusion on long-term survival of a lattice-based interacting particle system. However, we do not study the effect of movement of adults, which can additionally affect the limiting equations: see for instance Holmes et al. 
(1994) or Potts and Borger (2023). It is of interest to understand under what conditions we can replace the two-step limiting process described above by one in which we simultaneously scale the kernels and the other parameters in our population model to arrive at the PDE limit. This is mathematically much more challenging, but we establish such one-step convergence in cases for which the limit is a classical reaction-diffusion equation of the form (1.3) with \(\mathcal{B}=\Delta\), and \(\rho\) is a Gaussian density. We allow a wide class of reaction terms, \(F\), so that the Fisher-KPP equation (that is equation (1.3) with \(\mathcal{B}=\Delta\) and \(F(\varphi)=1-\varphi\)) emerges as a special case. Such results on (one-step) convergence to reaction-diffusion equation limits have been achieved for a variety of interacting particle systems. Following the now classical contributions of De Masi et al. (1986); DeMasi and Presutti (2006); Oelschlager (1985), much of this work has focused on lattice based models with one particle per site, or on systems with a fixed number, \(N\), of interacting diffusions as \(N\to\infty\). For systems of proliferating particles, as considered for example by Oelschlager (1989); Flandoli et al. (2019); Flandoli and Huang (2021), an additional challenge (also apparent in our models), is the control of concentration of particles. We follow Oelschlager (1989); Flandoli et al. (2019) in considering'moderate interactions', meaning that the number of individuals in the neighbourhood over which we measure local population density tends to infinity, whereas Flandoli and Huang (2021) also consider the situation in which that number remains finite. We refer to Flandoli and Huang (2021) for a more thorough literature review, but note that both our model and scaling differ from those considered in the body of work discussed there: whereas in those settings, the only scalings are the number of particles in the system and the size of the neighbourhood over which individuals interact with one another, in keeping with the vast literature on continuous state branching models, we also scale time and so must ensure that births are adequately compensated by deaths to prevent the population from exploding. The history of a natural population is often only accessible indirectly, through patterns of genetic diversity that have been laid down; from genetic data, one can try to infer the genealogical trees that relate individuals in a sample from the population, and these have been shaped by its history (see e.g., Neigel and Avise, 1993; Kelleher et al., 2019). It is therefore of interest to establish information about the distribution of genealogical trees under our population model, which we do with a lookdown construction. Lookdown constructions were first introduced in Donnelly and Kurtz (1996) to provide a mechanism for retaining information about genealogical relationships between individuals sampled from a population evolving according to the Moran model when passing to the infinite population limit. Since then, they have been extended to a wide range of models. Of particular relevance to our work here are the papers of Kurtz and Rodrigues (2011) and Etheridge and Kurtz (2019), in which lookdown constructions are provided for a wide variety of population models, including spatially structured branching processes. In general, even armed with a lookdown construction, calculation of relevant statistics of the genealogy remains a difficult question. 
However, in special circumstances, some progress can be made. As an illustration, we shall consider a scenario that has received a great deal of attention in recent years, in which a population is expanding into new territory as a travelling wave. In Section 3.2 we shall describe the motion of a single ancestral lineage relative to three different (deterministic) wavefronts across \(\mathbb{R}^{1}\). Most work on the topic of "waves" of expanding populations has focused on models that caricature the classical Fisher-KPP equation with a stochastic term, i.e. \[dw=\big{(}\Delta w+sw(1-w)\big{)}dt+\sqrt{\frac{\alpha(w)}{N}}W(dt,dx),\] where \(W\) is space-time white noise, and \(N\) is a measure of the local population density. The coefficient \(\alpha(w)\) is generally taken to be either \(w\), corresponding to a superprocess limit, or \(w(1-w)\) giving a spatial analogue of a Wright-Fisher diffusion. Starting with the pioneering work of Brunet et al. (2006), a considerable body of evidence has been amassed to underpin the conjecture that for this, and a wide class of related models, genealogies converge on suitable timescales in the infinite density limit to a Bolthausen-Sznitman coalescent. This reflects the fact that, for this equation, ancestral lineages become trapped in the wavefront, where the growth rate of the population is highest. Once there, they will experience rapid periods of coalescence corresponding to significant proportions of individuals in the front being descended from particularly reproductively successful ancestors. If one replaces the logistic growth term of the classical Fisher-KPP equation with a nonlinearity that reflects cooperative behaviour in the population, such as \[wF(w)=w(1-w)(Cw-1), \tag{1.4}\] then, for sufficiently large \(C\) (strong cooperation), the nature of the deterministic wave changes from "pulled" to "pushed", [Birzu et al., 2018, 2021], and so the genealogies will be quite different from the Fisher-KPP case. For example, Etheridge and Penington [2022] show that for a discrete space model corresponding to this nonlinearity with \(C>2\), after suitable scaling, the genealogy of a sample converges not to a Bolthausen-Sznitman coalescent, but to a Kingman coalescent. The reason, roughly, is that ancestral lineages settle to a stationary distribution relative to the position of the wavefront which puts very little weight close to the 'tip' of the wave, so that when ancestral lineages meet it is typically at a location in which population density is high, where no single ancestor produces a disproportionately large number of descendants in a short space of time. The shape of the wave is not determined solely by the reaction term. For example, as a result of the nonlinear diffusion, for suitable initial conditions, the solution to the one-dimensional porous medium equation with logistic growth (1.2) converges to a travelling wave with a sharp cut-off; i.e., in contrast to the classical Fisher KPP equation, the solution at time \(t\) vanishes beyond \(x=x_{0}+ct\) for some constant wavespeed \(c>0\)[Kamin and Rosenau, 2004]. As a first step towards understanding what we should expect in models with nonlinear diffusion, one can ask about the position of an ancestral lineage relative to the wavefront in the deterministic models. 
In Section 3.2 we shall see that in our framework, even with logistic growth, the nonlinear diffusion corresponding to the porous medium equation results in a stationary distribution for the ancestral lineage that is concentrated behind the wavefront, leading us to conjecture that in the stochastic equation the cooperative behaviour captured by the nonlinear diffusion will also result in a qualitatively different pattern of coalescence to that seen under the stochastic Fisher-KPP equation. Indeed, we believe that it should be feasible to show that in an appropriate limit one recovers a Kingman coalescent.

**Structure of the paper:** In this paper we study scaling limits of spatial population models, obtaining convergence of both the population process (i.e., the population density as a function of time, although strictly speaking it is a measure that may not have a density) and of lineages traced back through such a population. We retain information about lineages as we pass to the scaling limit by means of a lookdown construction. In what follows we first study various scaling limits of the spatial population process, and then turn our attention to lineages traced back through these populations. First, in Section 2, we describe the model and the main results, Theorems 2.10, 2.20, and 2.23. Next, in Section 3, we discuss a few striking consequences of these results regarding the behaviour of genealogies in travelling waves, the appearance of periodic "clumps" in seemingly homogeneous population models, and identifiability of the underlying dynamics from a stationary population profile. In Section 4, we provide heuristic explanations of why the theorems ought to be true, and some key ideas behind them, and in Section 5 we define and discuss the lookdown construction. Proofs of the results begin in Section 6, which proves results for population models with nonlocal interactions, while Section 7 gives the more difficult proof for the case when interaction distances also go to zero in the limit. Finally, Section 8 gives proofs for convergence of the lookdown process and the associated results for the motion of lineages. The Appendix contains a few more technical and less central lemmas. The results are illustrated in a few places with individual-based simulations, made using SLiM (Haller and Messer, 2019), but these are provided for visualization and we do not embark on a numerical study.

**Acknowledgements:** Thanks go to Gilia Patterson for identifying the "clumping" phenomenon, and to Marcin Bownick and David Levin for useful discussions. AME thanks everyone in MAPS at Université Paris Cité for their hospitality during the period in which much of this research took place. AME and PLR also thank the Kavli Institute for Theoretical Physics for their hospitality and birdwatching opportunities. PLR was supported by the NIH NHGRI (grant #HG011395), IL by the ANID/Doctorado en el extranjero doctoral scholarship, grant #2018-72190055, and TTHL by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1), the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy, EXC-2047/1-390685813, the Rhodes Trust and St. John's College, Oxford.

## 2 Model and main results

Our model is one of individuals distributed across a continuous space which we shall take to be \(\mathbb{R}^{d}\). For applications, \(d=1\) or \(d=2\) (or even \(d=3\) for cells within the body), but our main results apply more generally.
At time zero, the population is distributed over a bounded region, with \(\mathcal{O}(N)\) individuals per unit area in that region, so the total number of individuals will also be \(\mathcal{O}(N)\). The population changes in continuous time, and we encode the state of the population at time \(t\) by a counting measure \(X(t)\), which assigns one unit of mass to the location of each individual. Population dynamics are controlled by three quantities, birth (\(\gamma\)), establishment (\(r\)), and death (\(\mu\)), each of which can depend on spatial location and local population density in a way specified below. Each individual gives birth at rate \(\gamma\) to a single (juvenile) offspring, which is dispersed according to a kernel \(q(x,\cdot)\) away from the location \(x\) of the parent. We assume that \(q\) is the density of a multivariate Gaussian, allowing a nonzero mean and anisotropic variance. Both the mean and covariance of \(q\) can change across space, but do not depend on population density. The offspring does not necessarily survive to be counted in the population: it "establishes" with probability \(r\), or else it dies immediately. Independently, each individual dies with rate \(\mu\). We aim to capture universal behaviour by passing to a scaling limit. Specifically, we shall take the "density", \(N\), to infinity, and also scale time by a factor of \(\theta=\theta(N)\), in such a way that defining \(\eta^{N}(t)=X(\theta t)/N\), the process \(\{\eta^{N}(t)\}_{t\geq 0}\) will converge to a suitable measure-valued process as \(N\) and \(\theta\) tend to infinity, with the nature of the limit depending on how they tend to infinity together. Evidently, we also need to scale the dispersal kernel if we are to obtain a nontrivial limit, for which we use \(q_{\theta}(x,\cdot)\), the density of the multivariate Gaussian obtained by multiplying the mean and variance components of \(q(x,\cdot)\) by \(1/\theta\). Birth, establishment, and death can depend on the location of the individual and the local population density. Since we would like the population density to scale with \(N\), these are functions of \(X/N\), i.e., the counting measure with mass \(1/N\) placed at the location of each individual. First consider birth rates, defined by a nonnegative function \(\gamma(x,m):\mathbb{R}^{d}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) of location \(x\) and local population density \(m\). Local population density is defined as the convolution of \(X/N\) with a smooth (non-negative, integrable) kernel \(\rho_{\gamma}(\cdot)\). We write this convolution as \(\rho_{\gamma}\!*\!X/N\). Then, when the state of the population is \(X\), an individual at location \(x\) gives birth to a single juvenile offspring at rate \(\gamma(x,\rho_{\gamma}\!*\!X(x)/N)\). Similarly, the establishment probability of an offspring at location \(y\) is \(r(y,\rho_{r}\!*\!X(y)/N)\), where \(r(y,m):\mathbb{R}^{d}\times\mathbb{R}_{\geq 0}\to[0,1]\) and again \(\rho_{r}\!*\!X\) is the convolution of \(X/N\) with the smooth kernel \(\rho_{r}\). We shall write \(\mu_{\theta}(x,X/N)\) for the per-capita death rate of mature individuals in the population. In order for the population density to change over timescales of order \(\theta\), we should like the net per capita reproductive rate to scale as \(1/\theta\). In classical models, in which \(r\), \(\gamma\), and \(\mu\) are constant, this quantity is simply \(r\gamma-\mu\).
Here, because production of juveniles and their establishment are mediated by population density measured relative to different points, the net reproductive rate will take a more complicated form. In particular, the total rate of production of mature offspring by an individual at \(x\) will be \[\gamma\big{(}x,\rho_{\gamma}\!*\!X(x)/N\big{)}\int r\big{(}y,\rho_{r}\!*\!X(y) /N\big{)}q_{\theta}(x,dy). \tag{2.1}\] Nonetheless, it will be convenient to define the death rate \(\mu_{\theta}\) in terms of its deviation from \(r\gamma\). To this end, we define the death rate of an individual at \(x\) to be \[\mu_{\theta}(x)=\max\left\{0,r(x,\rho_{r}\!*\!X(x)/N)\gamma(x,\rho_{\gamma}\!* \!X(x)/N)-\frac{1}{\theta}F(x,\rho_{F}\!*\!X(x)/N)\right\}, \tag{2.2}\] where this equation defines \(F(x,m):\mathbb{R}^{d}\times\mathbb{R}_{\geq 0}\to\mathbb{R}\), and \(\rho_{F}\) is again a smooth kernel. The function \(F\) is nearly the net per capita reproductive rate, scaled by \(\theta\), and would be equal to it in a nonspatial model; but, as can be seen from (2.1), differs because an offspring's establishment probability is measured at their new location rather than that of their parent. For the most part, we work with \(F\) instead of \(\mu_{\theta}\). So, each of the three demographic parameters \(r\), \(\gamma\), and \(F\), depends on local density, measured by convolution with a smooth kernel, each of which can be different. As a result, death rate depends (in principle) on population densities measured in three different ways, so that we could write \(\mu_{\theta}(x)=\mu_{\theta}(x,\rho_{\gamma}\!*\!X(x)/N,\rho_{r}\!*\!X(x)/N, \rho_{F}\!*\!X(x)/N)\). This may seem unnecessarily complex. However, not only is it natural from a biological perspective, it also turns out to be convenient for capturing nontrivial examples in the scaling limit. **Remark 2.1**: _Although this model allows fairly general birth and death mechanisms, there are a number of limitations. Perhaps most obviously, to simplify the notation individuals give birth to only one offspring at a time, although this restriction could be easily lifted [as in Section 3.4 of Etheridge and Kurtz, 2019]. Furthermore, individuals do not move during their lifetime, and the age of an individual does not affect its fecundity or death rate. Finally, there is no notion of mating (although limitations on reproduction due to availability of mates can be incorporated into the birth rate, \(\gamma\)), so the lineages we follow will be uniparental. For these reasons, the model is most obviously applicable to bacterial populations or selfing plants, although we do not anticipate that incorporation of these complications will change the general picture._ For each \(N\) and \(\theta\), we study primarily the process with mass scaled by \(N\) and time scaled by \(\theta\), \[\left(\eta_{t}^{N}\right)_{t\geq 0}:=\left(X(\theta t)/N\right)_{t\geq 0},\] which takes values in the space of cadlag paths in \(\mathcal{M}_{F}(\mathbb{R}^{d})\) (the space of finite measures on \(\mathbb{R}^{d}\) endowed with the weak topology). In fact \(\eta_{t}^{N}\) will be a purely atomic measure comprised of atoms of mass \(1/N\). **Notation 2.2**: _Expressions like \(\gamma(x,\rho_{\gamma}\!*\!\eta(x))\) will appear repeatedly in what follows. 
To make formulae more readable, we overload notation to define_ \[\gamma(x,\eta):=\gamma(x,\rho_{\gamma}\!*\!\eta(x)),\] _and similarly write \(r(x,\eta)\) for \(r(x,\rho_{r}\!*\!\eta(x))\), \(F(x,\eta)\) for \(F(x,\rho_{F}\!*\!\eta(x))\), and \(\mu_{\theta}(x,\eta)\) for the expression of equation (2.2). When convenient, we may also suppress the arguments completely, writing simply \(\gamma\), \(r\), \(F\), and \(\mu_{\theta}\) for these quantities._ **Terminology 2.3**: _In our prelimiting model, the population is represented by a point measure in which each individual is assigned a mass \(1/N\). We use the term "population density" for this process, as it is supposed to measure population size relative to a nominal occupancy of \(N\) individuals per unit area. There is no implication that the measure representing the population is absolutely continuous with respect to Lebesgue measure; indeed in the prelimit it is certainly not._ In summary, at each time \(t\), \(\eta_{t}^{N}\) is purely atomic, consisting of atoms of mass \(1/N\) (which are the individuals). At instantaneous rate \(\theta\gamma(x,\eta_{t}^{N})N\eta_{t}^{N}(dx)\) an offspring of mass \(1/N\) is produced at location \(x\); it disperses to a location \(y\) offset from \(x\) by an independent Gaussian random variable with mean \(\vec{b}(x)/\theta\) and covariance matrix \(\mathbf{C}(x)/\theta\), and once there establishes instantaneously with probability \(r(y,\eta_{t}^{N})\), or else dies. At instantaneous rate \(\theta\mu_{\theta}(x,\eta_{t}^{N})N\eta_{t}^{N}(dx)\) an individual at location \(x\) dies. Note that the process \(\left(\eta_{t}^{N}\right)_{t\geq 0}\), which records numbers and locations of adult individuals, is just a scaled spatial birth and death process. If, for example, we insist that \(\gamma(x,m)\) is bounded, then existence (and in particular non-explosion) is guaranteed by comparison with a pure birth process. We do not dwell on this, as we shall require more stringent conditions if we are to pass to the limit as \(\theta\) and \(N\) tend to infinity. It is convenient to characterise the process as a solution to a martingale problem. We write \(C_{b}^{\infty}(\mathbb{R}^{d})\) for the space of bounded smooth functions on \(\mathbb{R}^{d}\), and, where convenient, we write \(\langle f,\eta\rangle=\int_{\mathbb{R}^{d}}f(x)\eta(dx)\). 
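Before stating the martingale problem characterisation, it may help to see these rates in code. The following minimal sketch is an illustration under our own assumptions (one spatial dimension, Gaussian interaction kernels, and particular choices of \(\gamma\), \(r\) and \(F\)); it is not the SLiM code used for the figures later in the paper. It computes, for a single focal adult, its (time-scaled) birth rate, the fate of one dispersing juvenile, and its death rate as in (2.2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameter choices (ours, not the paper's): d = 1, Gaussian
# interaction kernels, constant fecundity, density-dependent establishment,
# and a logistic-type F.
N, theta = 100, 100.0
sigma_disp, sigma_ker = 0.2, 0.5
positions = rng.normal(0.0, 1.0, size=5 * N)     # adults; each is an atom of mass 1/N

def local_density(x, pos, sigma):
    """(rho * X)(x) / N for a Gaussian kernel rho of width sigma."""
    return np.exp(-(x - pos) ** 2 / (2 * sigma**2)).sum() / (np.sqrt(2 * np.pi) * sigma * N)

gamma_fn = lambda x, m: 1.0              # birth rate of juveniles
r_fn = lambda x, m: 1.0 / (1.0 + m)      # establishment probability, in [0, 1]
F_fn = lambda x, m: 1.0 - m              # net per-capita growth on the slow time scale

x = positions[0]                                        # one focal adult
m_x = local_density(x, positions, sigma_ker)
birth_rate = theta * gamma_fn(x, m_x)                   # rate of producing a juvenile
y = x + rng.normal(0.0, sigma_disp / np.sqrt(theta))    # dispersal ~ q_theta(x, .)
m_y = local_density(y, positions, sigma_ker)
establishes = rng.random() < r_fn(y, m_y)               # establish, or die immediately

# Death rate of the focal adult, following (2.2): mu_theta = max(0, r*gamma - F/theta).
death_rate = theta * max(0.0, r_fn(x, m_x) * gamma_fn(x, m_x) - F_fn(x, m_x) / theta)
print(birth_rate, establishes, death_rate)
```

Embedding these event rates in a Gillespie-type loop gives the spatial birth and death process whose scaled empirical measure is \(\eta_{t}^{N}\).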
**Definition 2.4** (Martingale Problem Characterisation): _For each value of \(N\) and \(\theta\), and each purely atomic \(\eta_{0}^{N}\in\mathcal{M}_{F}(\mathbb{R}^{d})\) with atoms of mass \(1/N\), \((\eta_{t}^{N})_{t\geq 0}\) is the (scaled) empirical measure of a birth-death process with cadlag paths in \(\mathcal{M}_{F}(\mathbb{R}^{d})\) for which, for all \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\), writing \(q_{\theta}(x,dy)\) for the Gaussian kernel with mean \(x+\vec{b}(x)/\theta\) and covariance \(\mathbf{C}(x)/\theta\),_ \[\begin{split} M_{t}^{N}(f):=\langle f,\eta_{t}^{N}\rangle- \langle f,\eta_{0}^{N}\rangle\\ -\int_{0}^{t}\bigg{\{}\bigg{\langle}\left(\int\theta\left(f(z)r( z,\eta_{s}^{N})-f(x)r(x,\eta_{s}^{N})\right)q_{\theta}(x,dz)\right)\gamma(x, \eta_{s}^{N}),\eta_{s}^{N}(dx)\bigg{\rangle}\\ +\left\langle f(x)F(x,\eta_{s}^{N}),\eta_{s}^{N}(dx)\right\rangle \bigg{\}}ds\end{split} \tag{2.3}\] _is a martingale (with respect to the natural filtration), with angle bracket process_ \[\begin{split}\left\langle M^{N}(f)\right\rangle_{t}=& \frac{\theta}{N}\int_{0}^{t}\bigg{\{}\bigg{\langle}\gamma(x,\eta_{s}^{N})\int f ^{2}(z)r(z,\eta_{s}^{N})q_{\theta}(x,dz),\eta_{s}^{N}(dx)\bigg{\rangle}\\ &+\left\langle\mu_{\theta}(x,\eta_{s}^{N})f^{2}(x),\eta_{s}^{N}( dx)\right\rangle\bigg{\}}ds.\end{split} \tag{2.4}\] The angle bracket process (or, "conditional quadratic variation") is the unique previsible process making \((M^{N}(f)_{t})^{2}-\langle M^{N}(f)\rangle_{t}\) a martingale with respect to the natural filtration. It differs from the usual quadratic variation (usually denoted \([M^{N}(f)]_{t}\)) because the process has jumps; for the (continuous) limit the two notions will coincide. The use of angle brackets for both integrals and this process is unfortunately standard but should not cause confusion, since the angle bracket process always carries a subscript for time. The form of (2.3) and (2.4) is explained in Section 4. Note that since (juvenile) individuals are produced at rate \(N\gamma\eta\), but each has mass \(1/N\), these factors of \(N\) cancel in (2.3). Under our scaling, \(N\) and \(\theta=\theta(N)\) will tend to infinity in such a way that \(\alpha:=\lim_{N\to\infty}\theta(N)/N\) exists and is finite. From the expression (2.4) it is easy to guess that whether the limiting processes will be deterministic or stochastic is determined by whether \(\alpha\) is zero or nonzero. It is convenient to record some notation for the generator of the diffusion limit of a random walk with jump distribution determined by \(q_{\theta}(x,dy)\). **Definition 2.5** (Dispersal generator): _As above, we define the dispersal kernel, \(q_{\theta}(x,dy)\), to be the density of a multivariate Gaussian with mean \(\vec{b}(x)/\theta\) and covariance matrix \(\mathbf{C}(x)/\theta\) (although often we omit the dependence of \(\vec{b}\) and \(\mathbf{C}\) on \(x\)). 
Furthermore, we define for \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\),_ \[\mathcal{B}f(x)=\sum_{ij}\mathbf{C}(x)_{ij}\partial_{x_{i}}\partial_{x_{j}}f(x )+\sum_{i}\vec{b}(x)_{i}\partial_{x_{i}}f(x) \tag{2.5}\] _and denote the adjoint of \(\mathcal{B}\) by_ \[\mathcal{B}^{*}f(x) =\sum_{ij}\partial_{x_{i}}\partial_{x_{j}}(\mathbf{C}(x)_{ij}f(x))- \sum_{i}\partial_{x_{i}}(f(x)\vec{b}(x)_{i})\] \[=\sum_{ij}C_{ij}(x)\partial_{x_{i}}\partial_{x_{j}}f(x)+\sum_{i} \left(\sum_{j}\partial_{x_{j}}C_{ij}(x)-\vec{b}_{i}(x)\right)\partial_{x_{i}}f (x)\] \[\qquad+\left(\sum_{ij}\partial_{x_{i}}\partial_{x_{j}}C_{ij}(x)- \sum_{i}\partial_{x_{i}}\vec{b}_{i}(x)\right)f(x).\] **Remark 2.6**: \(\mathcal{B}\) _is defined so that_ \[\theta\int\left(f(y)-f(x)\right)q_{\theta}(x,dy)\rightarrow\mathcal{B}f(x) \qquad\text{as }\theta\rightarrow\infty.\] **Remark 2.7**: _An equivalent way to describe the model would be to say that when the state of the population is \(\eta\), an individual at \(x\) gives birth at rate_ \[\theta\gamma(x,\eta)\int r(y,\eta)q(x,dy),\] _and that offspring disperse according to the kernel_ \[q_{\theta}^{\mathfrak{m}}(x,\eta,dy):=\frac{r(y,\eta)q_{\theta}(x,dy)}{\int r (z,\eta)q_{\theta}(x,dz)}.\] _Clearly, the random walk driven by this dispersal kernel is biased towards regions of higher establishment probability. For comparison with future results, it is interesting to write down the limiting generator:_ \[\lim_{\theta\rightarrow\infty}\theta\int(f(y)-f(x))q_{\theta}^{\mathfrak{m}} (x,\eta,dy)=\frac{\mathcal{B}\left[f(\cdot)r(\cdot,\eta)\right](x)-f(x) \mathcal{B}\left[r(\cdot,\eta)\right](x)}{r(x,\eta^{N})}. \tag{2.6}\] _In the simplest case of unbiased isotropic dispersal (i.e., \(\vec{b}=0\) and \(\mathbf{C}=1\)), \(\mathcal{B}=\Delta\), and so (2.6) is equal to_ \[\Delta f(x)+2\nabla f(x)\cdot\nabla\log r(\cdot,\rho_{r}*\eta(\cdot))(x).\] _One might guess that the spatial motion described by following the ancestral lineage of an individual back through time would be described (in the limit) by the adjoint of this generator. However, we will see in Section 2.2 that this is not in fact the case._ In order to pass to a scaling limit, we will need to impose some conditions on the parameters of our model. **Assumptions 2.8**: _We shall make the following assumptions on the parameters of our model._ **Dispersal generator:** _We assume that_ 1. \(\vec{b}(x)\) _and_ \({\bf C}(x)\) _are_ \(\alpha\)_-Holder continuous for some_ \(\alpha\in(0,1]\) _and uniformly bounded in each component, and_ 2. _the operator_ \({\cal B}\) _is uniformly strictly elliptic, i.e.,_ \(\inf_{x}\inf_{y:\|y\|=1}\sum_{ij}y_{i}C(x)_{ij}y_{j}>0\)_._ **Reproduction parameters:** _We assume that_ 1. _The function_ \(F(x,m)\) _satisfies_ 1. \(F(x,m)\) _is locally Lipschitz in_ \(m\)_;_ 2. \(F(x,m)\) _is uniformly bounded above (but not necessarily below);_ 3. _for each fixed_ \(m\)_,_ \(\sup_{x\in\mathbb{R}^{d}}\sup_{k\leq m}|F(x,k)|<\infty\)_;_ 2. _The functions_ \(r(x,m)\)_,_ \(\gamma(x,m)\) _have bounded first and second derivatives in both arguments;_ 3. \(\gamma(x,m)\) _is uniformly bounded;_ 4. _For each_ \(f\in C_{b}^{2}(\mathbb{R}^{d})\)_, there is a_ \(C_{f}\) _such that_ \[|\gamma(x,\eta)\theta\int(r(y,\eta)f(y)-r(x,\eta)f(x))q_{\theta}(x,dy)|\leq C _{f}(1+|f(x)|)\] _for all_ \(x\in\mathbb{R}^{d}\) _and_ \(\eta\in{\cal M}_{F}(\mathbb{R}^{d})\)_. 
Furthermore,_ \(C_{f}\) _only depends on the norm of the first two derivatives of_ \(f\)_, i.e.,_ \[C_{f}=C(\sup_{x}\sup_{\|z\|=1}\max(\sum_{i}z_{i}\partial_{x_{i}}f(x),\sum_{ij} z_{i}z_{j}\partial_{x_{i}x_{j}}f(x))).\] _To keep expressions manageable, we shall also assume that_ \[\mu_{\theta}(x)=r(x,\eta)\gamma(x,\eta)-\frac{1}{\theta}F(x,\eta),\] _that is, this expression is non-negative so that there is no need to take the maximum with zero in (2.2). (This is anyway true for sufficiently large \(\theta\).)_ Since we take bounded \(f\), for most situations the bound \(C_{f}(1+|f(x)|)\) in Condition 6 above can be safely replaced simply by \(C_{f}\); however, this will be useful in certain situations where we consider a sequence of \(f\) with increasing upper bounds. We now give two concrete situations in which Condition 6 is satisfied. The proof is in Section 6.1. **Lemma 2.9**: _Assume that Conditions 2.8 are satisfied, except for Condition 6. If either_ 1. \(|\partial_{x_{i}}r(x,\eta)|\) _and_ \(|\partial_{x_{i}x_{j}}r(x,\eta)|\) _are uniformly bounded for_ \(x\in\mathbb{R}^{d}\)_,_ \(\eta\in{\cal M}_{F}(\mathbb{R}^{d})\) 2. _or,_ \(m^{2}\gamma(x,m)\) _is uniformly bounded and there exists_ \(C<\infty\) _such that for_ \(\theta\) _sufficiently large, and all_ \(x\in\mathbb{R}^{d}\)_,_ \(\eta\in\mathcal{M}_{F}(\mathbb{R}^{d})\)_,_ \[\theta\int\big{(}\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x)\big{)}q_{\theta}(x, dy)\leq C\rho_{\gamma}\!*\!\eta(x),\] _and_ \[\theta\int\big{(}\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x)\big{)}^{2}q_{\theta }(x,dy)\leq C(\rho_{\gamma}\!*\!\eta(x))^{2},\] _then Condition 6 is also satisfied._ The purpose of the conditions that we have placed on the reproduction parameters is to ensure that the net per capita reproduction rate (before time scaling) is order \(1/\theta\). As remarked above, because of the non-local reproduction mechanism, it no longer suffices to assume that \(r(x,\eta)\gamma(x,\eta)-\mu_{\theta}(x)\) is of order \(1/\theta\). Perhaps the simplest example in which we can see this is the case where \(\gamma\equiv 1\) and \(F\equiv 0\), so that \(\mu_{\theta}=r\), and \(\eta=\delta_{x}\) (i.e., the population has all individuals at a single location), so that \(\rho_{r}\!*\!\eta(y)=\rho_{r}(y)\). In this case, the mean rate of change of the total population size is \(\int(r(y,\rho_{r}(y))-r(x,\rho_{r}(x)))q_{\theta}(x,dy)\); the first condition of Lemma 2.9 would ensure this is of order \(1/\theta\). If \(r(x,m)\) is independent of \(m\), then the conditions are easy to satisfy; they just require some regularity of \(r\) as a function of \(x\). Condition 1 of Lemma 2.9 is also satisfied if for example \(\|\nabla\rho_{r}\|\leq C\rho_{r}\) and \(m\partial_{m}r(x,m)\), \(m^{2}\partial_{mm}r(x,m)\) are bounded. This is the case, for instance, if \(\rho_{r}\) decays exponentially. On the other hand, it might seem more natural to take \(\rho_{r}\) to be a Gaussian density with parameter \(\sigma_{r}\), say. Then, as we check in Lemma B.1, Condition 2 of Lemma 2.9 is satisfied if \(\rho_{\gamma}\) is also Gaussian with parameter \(\sigma_{\gamma}\) and \(\sigma_{\gamma}>\sigma_{r}\). For large enough \(\theta\), this condition guarantees that \(\sigma_{r}+1/\theta<\sigma_{\gamma}\), so that the establishment probability of a juvenile is controlled by individuals that are already 'felt' by the fecundity-regulating kernel \(\rho_{\gamma}\) at the location of their parent. 
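For a quick numerical illustration of this Gaussian case, consider our own simplified setting: \(d=1\), a single parent at the origin (so \(\eta=\delta_{0}\) and \(\rho_{r}\!*\!\eta=\rho_{r}\)), unbiased dispersal with \(\vec{b}=0\) and \(\mathbf{C}=1\). Then the left-hand side of the first inequality in Condition 2 of Lemma 2.9 is available in closed form, because the convolution of a Gaussian with a Gaussian is again Gaussian, and one can check that it is dominated by a constant multiple of \(\rho_{\gamma}\!*\!\eta\) whenever \(\sigma_{\gamma}>\sigma_{r}\):

```python
import numpy as np

def gauss(x, var):
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Illustrative set-up (our choice): d = 1, eta = delta_0, b = 0, C = 1,
# so q_theta(x, .) is N(x, 1/theta); rho_r, rho_gamma Gaussian, sigma_gamma > sigma_r.
theta, sigma_r, sigma_g = 1.0e4, 1.0, 1.3
x = np.linspace(-12.0, 12.0, 4001)

# rho_r convolved with q_theta(x, .) is Gaussian with variance sigma_r^2 + 1/theta.
lhs = theta * (gauss(x, sigma_r**2 + 1.0 / theta) - gauss(x, sigma_r**2))
ratio = lhs / gauss(x, sigma_g**2)       # should stay bounded, per Condition 2
print("sup of lhs / (rho_gamma * eta):", ratio.max())
```

The printed ratio stays of order one; taking \(\sigma_{\gamma}<\sigma_{r}\) instead makes it blow up as \(|x|\) grows, in line with the discussion above.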
### Scaling limits of the population process Our main results depend on two dichotomies: Is the limiting process deterministic or a (generalized) superprocess? And, are interactions pointwise in the limit or nonlocal? See Figure 1 for snapshots of the population from direct simulation of the process using SLiM (Haller and Messer, 2019) illustrating this first dichotomy. Below we have results for deterministic limits with pointwise and nonlocal interactions, and for superprocess limits with nonlocal interactions. Scaling limits with nonlocal interactions:Recall that the process \((\eta_{t}^{N})_{t\geq 0}\) takes its values in the space \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\mathbb{R}^{d}))\) of cadlag paths on \(\mathcal{M}_{F}(\mathbb{R}^{d})\). We endow \(\mathcal{M}_{F}(\mathbb{R}^{d})\) with the topology of weak convergence and \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\mathbb{R}^{d}))\) with the Skorohod topology. A sequence of processes taking values in \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\mathbb{R}^{d}))\) is said to be tight if the corresponding sequence of distributions is tight, i.e., that any infinite subsequence has a weakly convergent subsubsequence. Our first main result establishes tightness of our rescaled population processes in the case in which interactions remain nonlocal under the scaling, and characterises limit points as solutions to a martingale problem. **Theorem 2.10**: _Let \((\eta_{t}^{N})_{t\geq 0}\) be as defined in Definition 2.4 and assume that as \(N\to\infty\), \(\theta(N)\to\infty\) in such a way that \(\theta(N)/N\to\alpha\). (However, the kernels \(\rho_{r}\), \(\rho_{\gamma}\), and \(\rho_{F}\) remain fixed.) Suppose that Assumptions 2.8 hold and, further, that \(\{\eta_{0}^{N}\}_{N\geq 1}\) is a sequence of purely atomic measures, with \(\eta_{0}^{N}\) comprised of atoms of mass \(1/N\), which is tight in \(\mathcal{M}_{F}(\mathbb{R}^{d})\). Also assume there exists a nonnegative \(f_{0}\in C(\mathbb{R}^{d})\) with uniformly bounded first and second derivatives (i.e., with \(\sup_{x}\sup_{\|z\|=1}\sum_{i}\partial_{x_{i}}f_{0}(x)z_{i}\) and \(\sup_{x}\sup_{\|z\|=1}\sum_{ij}\partial_{x_{i}x_{j}}f_{0}(x)z_{i}z_{j}\) both finite) and \(f_{0}(x)\to\infty\) as \(|x|\to\infty\) for which \(\left\langle f_{0}(x),\eta_{0}^{N}(dx)\right\rangle<C<\infty\) for some \(C\) independent of \(N\). Then the sequence of processes \((\eta_{t}^{N})_{t\geq 0}\) is tight, and for any limit point \((\eta_{t})_{t\geq 0}\), for every \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\),_ \[\begin{split} M_{t}(f)&:=\left\langle f(x),\eta_{t} (dx)\right\rangle-\left\langle f(x),\eta_{0}(dx)\right\rangle\\ &\qquad-\int_{0}^{t}\left\langle\gamma(x,\eta_{s})\mathcal{B} \left(f(\cdot)r(\cdot,\eta_{s})\right)(x)+f(x)F(x,\eta_{s}),\eta_{s}(dx) \right\rangle ds\end{split} \tag{2.7}\] _is a martingale (with respect to the natural filtration), with angle bracket process_ \[\left\langle M(f)\right\rangle_{t}=\alpha\int_{0}^{t}\left\langle 2\gamma \left(x,\eta_{s}\right)r\left(x,\eta_{s}\right)f^{2}(x),\eta_{s}(dx)\right\rangle ds. \tag{2.8}\] Figure 1: Snapshots of two simulations, with small \(\alpha=\theta/N\) (left) and large \(\alpha=\theta/N\) (right). Simulations are run with a Fisher-KPP-like parameterization: birth and establishment are constant, while death increases linearly with density, at slope \(1/\theta\). Left: \(\alpha=0.1\). Right: \(\alpha=10\). 
Other parameters were the same: dispersal and interaction distances were set to \(1\), and the equilibrium density is \(10\) individuals per unit area. _If \(\alpha=0\) the limit is deterministic._ Recall when interpreting (2.7) that, for instance, \(r(x,\eta_{s})=r(x,\rho_{r}\!*\!\eta_{s}(x))\), and so \(\mathcal{B}(fr)(x)=\mathcal{B}(f(\cdot)r(\cdot,\rho_{r}\!*\!\eta_{s}(\cdot)))(x)\). The proof of this theorem appears in Section 6.2. Theorem 2.10 provides tightness of the rescaled processes. If the limit points are unique, then this is enough to guarantee convergence. **Corollary 2.11**: _Under the assumptions of Theorem 2.10, if the martingale problem defined by equations (2.7) and (2.8) has a unique solution, then \((\eta_{t}^{N})_{t\geq 0}\) converges weakly to that solution as \(N\to\infty\)._ When \(\alpha>0\), the limit points can be thought of as interacting superprocesses. For example, when \(r\) and \(\gamma\) are constant, and \(F(x,\eta_{s})=1-\rho_{F}\!*\!\eta_{s}(x)\), we recover a superprocess with nonlinear death rates corresponding to logistic growth [Etheridge, 2004], which is a continuous limit of the Bolker-Pacala model [Bolker and Pacala, 1997, 1999]. We are not aware of a general result to determine when we will have uniqueness of solutions to the martingale problem of Theorem 2.10 when \(\alpha>0\). However, the Dawson-Girsanov transform tells us that we have uniqueness in this special case of the superprocess with nonlinear death rates, and Perkins stochastic calculus (and its adaptation to a lookdown setting) provides uniqueness for cases with interactions in the dispersal mechanism of the superprocess. We refer to Dawson [1993], Perkins [1992], and Donnelly and Kurtz [1999] for approaches to showing that these sorts of martingale problems are well-posed. For the deterministic case of \(\alpha=0\), the limiting process is a weak solution to a nonlocal PDE. We next describe some situations in which more is known about uniqueness and whether the solution is close to the corresponding local PDE. First, recall the following notion of solution to a PDE. **Definition 2.12** (Weak solutions): _We say that \((\eta_{t})_{t\geq 0}\), with \(\eta_{t}\in\mathcal{M}_{F}(\mathbb{R}^{d})\), is a weak solution to the PDE_ \[\partial_{t}\varphi=r\mathcal{B}^{*}(\gamma\varphi)+\varphi F \tag{2.9}\] _(where \(r\), \(\gamma\) and \(F\) can all be functions of \(\varphi\)) if, for all \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\),_ \[\frac{d}{dt}\langle f,\eta_{t}\rangle=\langle\gamma\mathcal{B}(rf)+fF,\eta_{t}\rangle.\] The notation \(\varphi\) is meant to be suggestive of a density, and recall that equation (2.9) has made dependencies on \(x\) and \(\varphi\) implicit; written out more explicitly, (2.9) is \[\partial_{t}\varphi_{t}(x)=r\left(x,\rho_{r}\!*\!\varphi_{t}(x)\right)\mathcal{B}^{*}\left[\varphi_{t}(\cdot)\gamma\big{(}\cdot,\rho_{\gamma}\!*\!\varphi_{t}(\cdot)\big{)}\right](x)+\varphi_{t}(x)F\left(x,\rho_{F}\!*\!\varphi_{t}(x)\right).\] Because Theorem 2.10 only tells us about weak convergence, in the case \(\alpha=0\) we can only deduce that any limit point \(\eta_{t}\) is a weak solution to this nonlocal PDE. Specialising the results of Kurtz and Xiong [1999] to the deterministic setting provides general conditions under which we have existence and uniqueness of solutions to (2.9) which have an \(L^{2}\)-density with respect to Lebesgue measure.
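For intuition about what solutions of the nonlocal equation (2.9) look like, the following finite-difference sketch (our own discretisation, for the special case \(\mathcal{B}^{*}=\partial_{xx}\), \(r\equiv\gamma\equiv 1\), \(F(x,m)=1-m\) and a Gaussian \(\rho_{F}\) of width \(\epsilon\)) integrates the equation from a step initial condition; the solution develops the expected invading front.

```python
import numpy as np

# Nonlocal Fisher-KPP-type special case of (2.9):  d/dt phi = phi_xx + phi (1 - rho_F * phi).
L, nx, eps = 40.0, 800, 1.0
dx = L / nx
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
dt = 0.2 * dx**2                                   # explicit Euler: dt < dx^2 / 2

rho = np.exp(-x**2 / (2 * eps**2))
rho /= rho.sum() * dx                              # discretised Gaussian kernel, mass 1
rho_hat = np.fft.rfft(np.fft.ifftshift(rho))       # kernel centred at index 0 for FFT convolution

phi = np.where(x < 0.0, 1.0, 0.0)                  # an invading front as initial condition
for _ in range(int(3.0 / dt)):                     # integrate up to t = 3
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2            # periodic Laplacian
    nonlocal_density = dx * np.fft.irfft(np.fft.rfft(phi) * rho_hat, n=nx)  # rho_F * phi
    phi = phi + dt * (lap + phi * (1.0 - nonlocal_density))

print("front is near x =", x[np.argmax(phi < 0.5)])   # roughly 2 * t for this pulled front
```

Shrinking \(\epsilon\) in this sketch reproduces, numerically, the convergence to the classical reaction-diffusion equation established in Proposition 2.15 below.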
Recall that the Wasserstein metric, defined by \[\rho(\nu_{1},\nu_{2})=\sup\Big{\{}\Big{|}\int fd\nu_{1}-\int fd\nu_{2}\Big{|}:\sup_ {x}|f(x)|\leq 1,|f(x)-f(y)|\leq\|x-y\|\Big{\}},\] determines the topology of weak convergence on \(\mathcal{M}_{F}(\mathbb{R}^{d})\). We write \(r(x,\eta)\gamma(x,\eta)\mathbf{C}(x)=J(x,\eta)J(x,\eta)^{T}\), and \(\beta(x,\eta)=r(x,\eta)\gamma(x,\eta)\big{(}\tilde{b}(x)+2\mathbf{C}(x)\nabla \log r(x,\eta)\big{)}\) (quantities that will appear in Proposition 5.6). If \(J\), \(\beta\), and \(F\) are bounded and Lipschitz in the sense that \[|J(x_{1},\nu_{1})-J(x_{2},\nu_{2})|,|\beta(x_{1},\nu_{1})-\beta(x_{2},\nu_{2} )|,|F(x_{1},\nu_{1})-F(x_{2},\nu_{2})|\leq C(\|x_{1}-x_{2}\|+\rho(\nu_{1},\nu_ {2})) \tag{2.10}\] for some \(C>0\), the methods of Kurtz and Xiong [1999] show that if the initial condition \(\eta_{0}\) for our population process has an \(L^{2}\) density, then so does \(\eta_{t}\) for \(t>0\). Although the necessary estimates (for which we refer to the original paper) are highly nontrivial, the idea of the proof is simple. Take a solution to the equation and use it to calculate the coefficients \(r\), \(\gamma\) and \(F\) that depend on local population density. Then \(\eta\) solves the _linear_ equation obtained by regarding those values of \(r\), \(\gamma\) and \(F\) as given. It remains to prove that the solution to the linear equation has a density which is achieved by obtaining \(L^{2}\) bounds on its convolution with the heat semigroup at time \(\delta\) and letting \(\delta\to 0\). We also have the following uniqueness result. **Theorem 2.13** (Special case of Kurtz and Xiong [1999], Theorem 3.5): _Suppose \(J\), \(\beta\), and \(F\) are bounded and Lipschitz in the sense of (2.10). If \(\eta_{0}\) has an \(L^{2}(\mathbb{R}^{d})\)-density, then there exists a unique \(L^{2}(\mathbb{R}^{d})\)-valued solution of (2.9) in the sense of Definition 2.12._ **Remark 2.14**: _Kurtz and Xiong [1999] considers an infinite system of stochastic differential equations for the locations and weights of a collection of particles that interact through their weighted empirical measure, which is shown to be the unique solution to a stochastic PDE. As we shall see through our lookdown representation in Section 5, the solution to our deterministic equation can be seen as the empirical measure of a countable number of particles (all with the same weight) which, in the notation above, evolve according to_ \[X(t)=X(0)+\int_{0}^{t}\beta\big{(}X(s),\eta_{s}\big{)}ds+\int_{0}^{t}J\big{(} X(s),\eta_{s}\big{)}dW(s)\] _(with an independent Brownian motion \(W\) for each particle)._ Two-step convergence to PDE:Although the coefficients at \(x\) in (2.9) are nonlocal, we can choose our kernels \(\rho_{\gamma}\), \(\rho_{r}\), and \(\rho_{F}\) in such a way that they depend only on the population in a region close to \(x\), and so we expect that under rather general conditions solutions of the nonlocal PDE will be close to the corresponding classical PDE. The following propositions provide two concrete situations in which this is true. In the first, the PDE is a reaction-diffusion equation, and in the proof in Section 6.3.1 we borrow an idea from Penington [2017] to express the solutions to both the nonlocal equation and the classical PDE through a Feynman-Kac formula. **Proposition 2.15**: _Let \(\rho_{F}^{\epsilon}(x)=\rho_{F}\big{(}x/\epsilon\big{)}/\epsilon^{d}\). 
Assume \(\varphi_{0}\in L^{2}(\mathbb{R})\) is a positive, uniformly Lipschitz, and uniformly bounded function. Suppose that \(\varphi^{\epsilon}\in L^{2}(\mathbb{R}^{d})\) is a weak solution to the equation_ \[\partial_{t}\varphi^{\epsilon}=\mathcal{B}^{*}\varphi^{\epsilon}+\varphi^{\epsilon}F(\rho_{F}^{\epsilon}*\varphi^{\epsilon}),\qquad x\in\mathbb{R}^{d},\,t>0, \tag{2.11}\] _with initial condition \(\varphi_{0}(\cdot)\), and that \(\varphi\) is a weak solution to the equation_ \[\partial_{t}\varphi=\mathcal{B}^{*}\varphi+\varphi F(\varphi),\qquad x\in\mathbb{R}^{d},\,t>0, \tag{2.12}\] _also with initial condition \(\varphi_{0}(\cdot)\). Suppose further that \(F\) is a Lipschitz function which is bounded above, and that \(\vec{b}(x)\) and \(\mathbf{C}(x)\), the drift and covariance matrix of \(\mathcal{B}\), satisfy the conditions of Assumptions 2.8. Then, for all \(T>0\) there exists a constant \(K=K(T,\|\varphi_{0}\|_{\infty})<\infty\) and a function \(\delta(\epsilon)\) (dependent on \(\rho_{F}\)) with \(\delta(\epsilon)\to 0\) as \(\epsilon\to 0\), such that, for all \(0\leq t\leq T\), and \(\epsilon\) small enough,_ \[\|\varphi_{t}(\cdot)-\varphi_{t}^{\epsilon}(\cdot)\|_{\infty}\leq K\delta(\epsilon).\] _In particular, as \(\epsilon\to 0\), we have that \(\varphi^{\epsilon}\) converges uniformly in compact intervals of time to \(\varphi\)._ **Remark 2.16**: _Note that Theorem 2.13 guarantees uniqueness of solutions to equation (2.11)._ Our second example in which we know solutions to the nonlocal PDE converge to solutions of the local PDE as interaction distances go to zero is a nonlocal version of a porous medium equation with logistic growth. That is, we consider non-negative solutions to the equation \[\partial_{t}\psi^{\epsilon}=\Delta\left(\psi^{\epsilon}\,\rho_{\gamma}^{\epsilon}*\psi^{\epsilon}\right)+\psi^{\epsilon}\left(1-\rho_{\gamma}^{\epsilon}*\psi^{\epsilon}\right). \tag{2.13}\] The case without the reaction term (and with \(\mathbb{R}^{d}\) replaced by a torus) is considered by Lions and Mas-Gallic (2001), who use it as a basis for a particle method for numerical solution of the porous medium equation. Of course this does not quite fit into our framework, since in the notation of our population models this would necessitate \(\gamma(x,m)=\rho_{\epsilon}*m\) which is not bounded. However, this can be overcome by an additional layer of approximation (cf. our numerical experiments of Section 3.1) and we do not allow this to detain us here. Existence and uniqueness of solutions to (2.13) can be obtained using the approach of Lions and Mas-Gallic (2001), so we should like to prove that as \(\epsilon\to 0\) we have convergence to the solution to the porous medium equation with logistic growth: \[\partial_{t}\psi=\Delta\left(\psi^{2}\right)+\psi\left(1-\psi\right).
\tag{2.14}\] **Notation 2.17**: _We use \(\rightharpoonup\) to denote weak convergence in the sense of analysts; that is, \(\psi^{\epsilon}\rightharpoonup\psi\) in \(L^{1}\) means \(\int\psi^{\epsilon}vdx\to\int\psi vdx\) for all \(v\in L^{\infty}\)._ _We write \(L^{2}_{t}(H^{1})\) for functions for which the \(H^{1}\) norm in space is in \(L^{2}\) with respect to time, i.e._ \[\int_{0}^{T}\int\left\{\psi_{t}(x)^{2}+\|\nabla\psi_{t}(x)\|^{2}\right\}dxdt<\infty,\] _and \(C_{t}(L^{1})\) will denote functions for which the \(L^{1}\) norm in space is continuous in time._ **Proposition 2.18**: _Suppose that we can write \(\rho_{\gamma}=\zeta*\zeta\), where \(\zeta(x)=\zeta(-x)\) and \(\zeta\in\mathcal{S}(\mathbb{R}^{d})\) (the Schwartz space of rapidly decreasing functions). Furthermore, suppose that \(\psi_{0}^{\epsilon}\geq 0\) is such that there exist \(\lambda\in(0,1)\) and \(C\in(0,\infty)\) (independent of \(\epsilon\)) such that_ \[\int\exp(\lambda\|x\|)\psi_{0}^{\epsilon}(x)dx<C,\quad\text{ and }\sup_{\epsilon}\int\psi_{0}^{\epsilon}|\log\psi_{0}^{\epsilon}|dx<\infty,\] _with \(\psi_{0}^{\epsilon}\rightharpoonup\psi_{0}\) as \(\epsilon\to 0\). Then writing \(\psi^{\epsilon}\) for the solution to (2.13) on \([0,T]\times\mathbb{R}^{d}\) with initial condition \(\psi_{0}^{\epsilon}\), \(\psi^{\epsilon}\rightharpoonup\psi\) as \(\epsilon\to 0\) where \(\psi\in L_{t}^{2}(H^{1})\cap C_{t}(L^{1})\), \(\int\psi|\log\psi|dx<\infty\), and \(\psi\) solves (2.14) on \([0,T]\times\mathbb{R}^{d}\)._ The example that we have in mind for the kernel \(\rho_{\gamma}\) is a Gaussian kernel. For the proof, see Section 6.3.2. **Remark 2.19**: _Although it seems hard to formulate an all-encompassing result, Propositions 2.15 and 2.18 are by no means exhaustive. When the scaling limit is deterministic, one can expect analogous results under rather general conditions. However, when the limit points are stochastic, they resemble "nonlinear superprocesses" and so one cannot expect a density with respect to Lebesgue measure in \(d\geq 2\). It is then not reasonable to expect to be able to make sense of the limit if we scale the kernels in this way. Moreover, in one dimension, where the classical superprocess does have a density with respect to Lebesgue measure, the form of (2.7) suggests that even if one can remove the local averaging from \(\gamma\), it will be necessary to retain averaging of \(r\) in order to obtain a well-defined limit._ One-step convergence to PDE: Theorem 2.10, combined with Proposition 2.15 or 2.18 implies that we can take the limit \(N\to\infty\) followed by the limit \(\epsilon\to 0\) to obtain solutions to the PDE (2.12). However, it is of substantial interest to know whether we can take those two limits simultaneously. The general case seems difficult, but we prove such "diagonal" convergence in the following situation. The proof is provided in Section 7. **Theorem 2.20** (Convergence to a PDE): _Let \((\eta_{t}^{N})_{t\geq 0}\) be as defined in Definition 2.4 with \(r(x,m)\equiv 1\equiv\gamma(x,m)\), \(F(x,m)\equiv F(m)\), \(\rho_{F}^{\epsilon}\) a symmetric Gaussian density with variance parameter \(\epsilon^{2}\), and \(\mathcal{B}=\Delta/2\). Further suppose that \(F(m)\) is a polynomial with \(F(m)\mathbf{1}_{m\geq 0}\) bounded above.
Assume that \(\langle 1,\eta_{0}^{N}\rangle\) is uniformly bounded, and that for all \(x\in\mathbb{R}^{d}\) and \(k\in\mathbb{N}\),_ \[\limsup_{\epsilon\to 0}\mathbb{E}\big{[}\rho_{F}^{\epsilon}*\eta_{0}(x)^{k} \big{]}<\infty,\] _and_ \[\limsup_{\epsilon\to 0}\int\mathbb{E}\big{[}\rho_{F}^{\epsilon}*\eta_{0}(x)^{k} \big{]}dx<\infty.\] _Finally assume that \(N\to\infty\), \(\theta\to\infty\) and \(\epsilon\to 0\) in such a way that_ \[\frac{1}{\theta\epsilon^{2}}+\frac{\theta}{N\epsilon^{d}}\to 0. \tag{2.15}\] _Then the sequence of \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\mathbb{R}^{d}))\)-valued stochastic processes \(\big{(}\rho^{\epsilon}_{F}\ast\eta^{N}_{t}(x)dx\big{)}_{t\geq 0}\) converges weakly to a measure-valued process with a density \(\varphi(t,x)\) that solves_ \[\partial_{t}\varphi(t,x)=\frac{1}{2}\Delta\varphi(t,x)+\varphi(t,x)F(\varphi(t,x)). \tag{2.16}\] **Remark 2.21**: _In fact, our proof goes through without significant change under the conditions that \(F(m)\mathbf{1}_{m\geq 0}\) is bounded above (but not necessarily below), and that for all \(m,n\in[0,\infty)\)_ \[|F(m)|\leq\sum_{j=1}^{k}a_{j}m^{j},\quad\text{ and }|F(n)-F(m)|\leq|n-m|\sum_{j=1}^ {k^{\prime}}b_{j}\Big{(}n^{j}+m^{j}\Big{)},\] _for some non-negative constants \(\{a_{j}\}_{j=0}^{k}\), \(\{b_{j}\}_{j=0}^{k^{\prime}}\). We take \(F\) to be polynomial to somewhat simplify notation in the proof._ ### Ancestral lineages in the scaling limit Now that we have established what we can say about how population density changes with time, we turn to results on ancestral lineages, i.e., how genealogical ancestry can be traced back across the landscape. Informally, a _lineage_\((L^{N}_{t})_{t\geq 0}\), begun at a spatial location \(L^{N}_{0}=x\) where there is a focal individual in the present day can be obtained by, for each time \(t\), setting \(L^{N}_{t}\) to be the spatial location of the individual alive at time \(t\) before the present from whom the focal individual is descended. Since in our model individuals have only one parent, this is unambiguous. Although we did not explicitly retain such information, it is clear that for finite \(N\), since individuals are born one at a time, one could construct the lineage \((L^{N}_{t})_{t=0}^{T}\) given the history of the population \((\eta^{N}_{t})_{t=0}^{T}\), for each starting location to which \(\eta^{N}_{T}\) assigns positive mass. It is less clear, however, how to formally retain such information when we pass to the scaling limit. The _lookdown construction_ in Section 5 will enable us to recover information about ancestry in the infinite population limit. Roughly speaking, each particle is assigned a unique "level" from \([0,\infty)\) that functions as a label and thus allows reconstruction of lineages. The key to the approach is that levels are assigned in such a way as to be exchangeable, so that sampling a finite number, \(k\) say, of individuals from a given region is equivalent to looking at the individuals in that region with the \(k\) lowest levels. Moreover, as we pass to the infinite population limit, the collection of (individual, level) pairs converges, as we show in Theorem 5.4. See Etheridge and Kurtz (2019) for an introduction to these ideas. In particular, even in the infinite population limit, we can sample an individual from a region (it will be the individual in that region with the lowest level) and trace its line of descent. 
This will allow us to calculate, for each \(x\) and \(y\in\mathbb{R}^{d}\), the proportion of the population at location \(x\) in the present day population that are descended from a parent who was at location \(y\) at time \(t\) in the past. To make sense of this in our framework, in Section 8.2, we justify a weak reformulation of this idea. We are interested in two questions. First, when is the motion of an ancestral lineage, given complete knowledge of the population process, a well-defined process? In other words, is knowledge of the process \((\eta_{t})_{t=0}^{T}\) that records numbers of individuals but not their ancestry sufficient to define the distribution of \((L_{t})_{t=0}^{T}\)? Second, does the process have a tractable description? We focus on the simplest situation, that in which the population process is deterministic. However, the results here apply when the population process solves either a nonlocal or a classical PDE. There will be no coalescence of ancestral lineages in the deterministic limit, but understanding motion of single lineages is useful in practice, and our results can be seen as a first step towards understanding genealogies for high population densities. Proofs of results in this section are found in Section 8. **Definition 2.22** (Ancestral lineage): _Let \((\varphi_{t}(x))_{0\leq t\leq T}\) denote the density of the scaling limit of our population model, solving (2.9), and let \(y\) be a point with \(\varphi_{T}(y)>0\). We define \((L_{s})_{s=0}^{T}\), the ancestral lineage of an individual sampled from the population at \(y\) at time \(T\), by setting \(L_{0}=y\) and \(L_{s}\) to be the position of the unique ancestor of that individual at time \(T-s\). We define \((Q_{s})_{s\geq 0}\) to be the time inhomogeneous semigroup satisfying_ \[Q_{s}f(y):=\mathbb{E}_{y}[f(L_{s})].\] Our next result identifies the ancestral lineage as a diffusion by characterizing its generator. **Theorem 2.23**: _For \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\), define_ \[\mathcal{L}_{\varphi}f =\frac{r}{\varphi}\left[\mathcal{B}^{*}(\gamma\varphi f)-f \mathcal{B}^{*}(\gamma\varphi)\right] \tag{2.17}\] \[=r\gamma\left[\sum_{ij}\mathbf{C}_{ij}\partial_{x_{i}x_{j}}f+ \sum_{j}\vec{m}_{j}\partial_{x_{j}}f\right], \tag{2.18}\] _where \(\vec{m}\) is the vector_ \[\vec{m}_{j}=2\sum_{i}C_{ij}\partial_{x_{i}}\log(\gamma\varphi)+2\sum_{i} \partial_{x_{i}}C_{ij}-\vec{b}_{j}.\] _Then the generator of the semigroup \(Q_{s}\) of Definition 2.22 is given by \(\partial_{s}Q_{s}f(y)=\mathcal{L}_{\varphi_{T-s}}Q_{s}f(y)\)._ **Remark 2.24**: _As usual, to make the generator readable, we've written it in concise notation, omitting the dependencies on location and population density, which itself changes with time. When interpreting this, remember that everything depends on location and density at that location and time - for instance, "\(r\)" is actually \(r(x,\varphi(x))\) (in the classical case), or \(r(x,\rho_{r}*\eta(x))\) (in the nonlocal case)._ _Moreover, we haven't proved any regularity of the population density process \(\varphi\), so, as written, the generator (2.17) may not make sense. Instead, it should be interpreted in a weak sense which is made precise in Section 8.2._ **Corollary 2.25**: _In addition to the assumptions of Theorem 2.23, if the covariance of the dispersal process is isotropic (i.e., \({\bf C}=\sigma^{2}I\)), then_ \[{\cal L}_{\varphi}f=r\gamma\left(\sigma^{2}\Delta f+\left(2\sigma^{2}\nabla \log(\gamma\varphi)-\vec{b}\right)\cdot\nabla f\right). 
\tag{2.19}\] _(However, \(\vec{b}\) can still depend on location.)_ In other words, the lineage behaves as a diffusion driven by Brownian motion run at speed \(\sigma^{2}\) multiplied by the local per-capita production of mature offspring (\(r\gamma\)) in a potential tilted by migration bias (\((\varphi_{s}\gamma)^{2}\exp(-\vec{b}\cdot x/\sigma^{2})\), whose gradient appears in the drift term of the generator). In particular, lineages are drawn to regions of high fecundity (production of juveniles), but their speed is determined by the rate of production of mature offspring. This can be compared to Remark 2.7. **Corollary 2.26**: _In addition to the assumptions of Corollary 2.25, if the population process is stationary (so \(\varphi_{t}\equiv\varphi\)), and \(\vec{b}(x)=\nabla h(x)\) for some function \(h\), then \(Y\) is reversible with respect to_ \[\pi(x)=\frac{\gamma}{r}\varphi(x)^{2}e^{-h(x)/\sigma^{2}}. \tag{2.20}\] Long-term fitness of an individual is proportional to the fraction of lineages from the distant future that pass through the individual, and hence the total long-term fitness at a location is proportional to the stationary distribution of \(Y\) there, if it exists. Therefore, if \(\pi\) is integrable then the per-capita long-term fitness of an individual at \(x\) is proportional to \(\pi(x)/\varphi(x)\). **Corollary 2.27**: _In addition to the assumptions of Corollary 2.25, suppose that the population process is described by a travelling wave with velocity \(\mathfrak{c}\), i.e., the population has density \(\varphi(t,x)=w(x-t\mathfrak{c})\) where \(w\) solves_ \[r{\cal B}^{*}(\gamma w)+wF+\mathfrak{c}\cdot\nabla w=r\sigma^{2}\Delta(\gamma w )-\vec{b}\cdot\nabla(\gamma w)+\mathfrak{c}\cdot\nabla w=0.\] _Then the semigroup \(Q_{s}\) of the motion of a lineage in the frame that is moving at speed \(\mathfrak{c}\) is time-homogeneous with generator_ \[{\cal L}f=\sigma^{2}r\gamma\left(\Delta f+2\nabla\log(\gamma w)\cdot\nabla f \right)+(\mathfrak{c}-r\gamma\vec{b})\cdot\nabla f. \tag{2.21}\] ## 3 Examples and applications We now discuss some consequences of these results. ### Beyond linear diffusion Equation (2.9) is a nonlocal version of a reaction-diffusion equation; the diffusion is nonlinear if \(\gamma\) depends on population density: in other words, if the diffusivity of the population depends on the population density. Passing to the classical limit, we recover equations like (2.14). Such equations are widely used in a number of contexts in biology in which motility within a population varies with population density. For example, density dependent dispersal is a common feature in spatial models in ecology, eukaryotic cell biology, and avascular tumour growth; see Sherratt (2010) and references therein for further discussion. In particular, it has been suggested as a model for the expansion of a certain type of bacteria on a thin layer of agar in a Petri dish (Cohen et al., 1999). We shall pay particular attention to the case in which the equation can be thought of as modelling the density of an expanding population. We focus on the monostable reaction of (2.14). Comparing with (2.9), we see that to set up a limit in which the population density \(\varphi\) follows the porous medium equation with logistic growth of (2.14), we need \(r=1\), \(\gamma=\varphi\), and \(F=1-\varphi\). Consulting equation (2.2), this implies that \(\mu_{\theta}=\max\left(0,(1+1/\theta)\varphi-1/\theta\right)\). 
In other words, establishment is certain and birth rates increase linearly with population density, but to compensate, death rates increase slightly faster (also linearly). Alert readers will notice that the condition from Assumptions 2.8 that \(\gamma(x,m)\) be uniformly bounded is violated. This can be corrected by use of a cut-off, and in fact the downwards drift provided by the logistic control of the population size prevents \(m\) from getting too big. In practice the simulations shown in Figure 2 take discrete time steps of length \(dt\) (with \(dt\) suitably small), and have each individual reproduce and die with probabilities, respectively, \[p_{\text{birth}}(m)=\left(1-e^{-mdt}\right)\qquad p_{\text{death}}(m)=\left(1- e^{-(m(1+1/\theta)-1/\theta)dt}\right),\] where \(m\) is the local density at their location. This makes \(\gamma(x,m)=p_{\text{birth}}(m)/dt\approx m\) and \[F(x,m)=\theta(\gamma(x,m)-\mu(x,m))/dt\approx 1-m.\] Birth and death rates are equal at density \(m=1\), corresponding to an unscaled density of \(N\) individuals per unit area. In one dimension, equation (2.14) has an explicit travelling wave solution \[w^{P}(t,x):=\left(1-e^{\frac{1}{2}(x-x_{0}-t)}\right)_{+}. \tag{3.1}\] Notice that the wave profile has a sharp boundary at \(x=x_{0}+t\). There are also travelling wave solutions with \(c>1\)(Gilding and Kersner, 2005), which lack this property. However, for initial conditions that decay sufficiently rapidly at infinity, such as one might use in modelling a population invading new territory, the solution converges to (3.1) (Kamin and Rosenau, 2004). In Figure 2 we show simulations of the individual based model described above, which display travelling wave solutions qualitatively similar to solutions of (2.14), with better agreement for smaller \(\theta/N\) (but in both cases, \(N\) is reasonably large). Figure 2: Simulated populations under a porous medium equation with logistic growth (2.14) in \(d=1\), \(\theta/N\) small on the left; large on the right. Values of \(\theta\) in top and bottom figures are 1 and 100, respectively, and both have \(N\) set so that the density is roughly 100 individuals per unit of habitat (as displayed on the vertical axis). See text for details of the simulations. ### Ancestry in different types of travelling waves Although it remains challenging to establish the distribution of genealogical trees relating individuals sampled from our population model, as described in the introduction, we can gain some insight by investigating the motion of a single ancestral lineage. Here we do that in the context of a one-dimensional population expanding into new territory as a travelling wave. We focus on three cases in which we have explicit information about the shape of the travelling wave profile: the Fisher-KPP equation, a special case of the Allen-Cahn equation with a bistable nonlinearity, and the porous media equation with logistic growth, equation (2.14). We work here in one dimension, and take \(\sigma^{2}=1\) and \(\vec{b}=0\). Fisher-KPP equation:Consider the classical Fisher-KPP equation, \[\partial_{t}\varphi=\partial_{xx}\varphi+\varphi(1-\varphi). \tag{3.2}\] Even though we do not have an explicit formula for the wave shape in this case, our methods provide information about ancestral lineages. 
The equation has non-negative travelling wave solutions of speed \(c\) for all \(c\geq 2\), but, started from any compact perturbation of a Heaviside function, the solution will converge to the profile \(w^{F}\) with the minimal wavespeed, \(c=2\) (Kolmogorov et al., 1937; Fife and McLeod, 1977; Bramson, 1983). Whatever the initial condition, for any \(t>0\) the support of the solution will be the whole real line. In this case, we must have \(r=\gamma=1\), and \(F(x,m)=1-m\) so \(\mu_{\theta}(x,m)=1+(m-1)/\theta\). By Corollary 2.27, the generator of the motion of an ancestral lineage is \[\mathcal{L}_{F}f=\partial_{xx}f+2\frac{\partial_{x}w^{F}}{w^{F}}\partial_{x}f+2\partial_{x}f. \tag{3.3}\] Near the tip of the wave (for \(x\) large), \(w^{F}(x)\sim e^{-x}\), so (3.3) implies that the motion of a lineage is close to unbiased Brownian motion. On the other hand, in the "bulk", a lineage behaves approximately as Brownian motion with drift at rate two to the right. This implies that ancestral lineages are pushed into the tip of the wave, and there is no stationary distribution, so that long-term dynamics of genetic inheritance depend on the part of the wave not well-approximated by a smooth profile, in agreement with the previous results referred to in the Introduction. Allen-Cahn equation: Now take the Allen-Cahn equation: \[\partial_{t}\varphi=\partial_{xx}\varphi+\varphi(1-\varphi)(2\varphi-1+s), \tag{3.4}\] for a given \(s\in(0,2)\). Once again we have taken \(r=\gamma=1\), but now the reaction term \(F(x,m)=(1-m)(2m-1+s)\) is bistable. This equation can be used to model the motion of so-called hybrid zones in population genetics; see, for example, Barton (1979), Gooding (2018), and Etheridge et al. (2022). This equation has an explicit travelling wave solution with speed \(s\) and shape \[w^{A}(x)=(1+e^{x})^{-1},\] i.e., \(\varphi_{t}(x)=w^{A}(x-st)\) solves (3.4). Substituting \(w^{A}\) in place of \(w^{F}\) in (3.3), we find that the generator of an ancestral lineage relative to the wavefront is now, \[\mathcal{L}_{A}f=\partial_{xx}f+2\frac{\partial_{x}w^{A}}{w^{A}}\partial_{x}f+s\partial_{x}f\] \[=\partial_{xx}f-2\frac{e^{x}}{1+e^{x}}\partial_{x}f+s\partial_{x}f,\] so lineages in the tip are pushed leftwards into the bulk of the wave at a rate \(s-2e^{x}/(1+e^{x})\). The density of the speed measure for this diffusion is \[m_{A}(x)\propto e^{sx}(1+e^{x})^{-2},\] which is integrable, and so determines the unique stationary distribution. Thus the position of the ancestral lineage relative to the wavefront will converge to a stationary distribution which is maximised away from the extreme tip of the wave. This is consistent with Etheridge and Penington (2022), who consider an analogous stochastic population model, although the stronger result there (that the genealogy of a sample from behind the wavefront is approximately a Kingman coalescent) requires the stronger condition \(s<1\). Porous Medium equation with logistic growth: Finally, consider equation (2.14).
Setting \(x_{0}=0\) (for definiteness) and substituting the form of \(w^{P}\) from equation (3.1) into Corollary 2.27, with \(c=1\), \(\gamma(x,m)=m\), \(r(x,w)=1\), and \(F(x,m)=(1-m)\), the generator of the diffusion governing the position of the ancestral lineage relative to the wavefront is, for \(x<0\), \[\mathcal{L}_{P}f =w^{P}\left(\partial_{xx}f+2\frac{\partial_{x}((w^{P})^{2})}{(w^ {P})^{2}}\partial_{x}f\right)+\partial_{x}f\] \[=\left(1-e^{\frac{1}{2}x}\right)\partial_{xx}f-2e^{\frac{1}{2}x} \partial_{x}f+\partial_{x}f.\] The speed measure corresponding to this diffusion has density \[m_{P}(\xi) \propto\frac{1}{2(1-e^{\xi/2})}\exp\left(\int_{\eta}^{\xi}\left\{ 1-\frac{e^{x/2}}{1-e^{x/2}}\right\}dx\right)\] \[\propto e^{\xi}\left(1-e^{\xi/2}\right),\quad\text{for }\xi<0\] and \(m_{P}(\xi)=0\) for \(\xi\geq 0\), which is integrable and so when suitably normalised gives the unique stationary distribution. Notice that even though we have the same reaction term as in the Fisher-KPP equation, with this form of nonlinear diffusion, at stationarity the lineage will typically be significantly behind the front, suggesting a different genealogy. It is interesting to compare the stationary distribution we have obtained here to the expression that we'd get by setting \(\vec{b}=-\mathfrak{c}\) and using Corollary 2.26, i.e., by giving each offspring a mean displacement that offsets the motion of the wave. In the Fisher-KPP and Allen-Cahn cases above, we get the same expressions, but this is only the case because \(r\equiv\gamma\equiv 1\) in both. For the PME with \(\vec{b}=1\) we have \(\mathcal{B}^{*}=\Delta+\nabla\) and so the equation solved by the population density is \[\partial_{t}\varphi=\Delta\varphi+2\varphi\nabla\varphi+\varphi(1-\varphi),\] which has a traveling wave solution of the same shape but moving at half the speed, \(\varphi(x,t)=w^{P}(x-t/2)\), and a stationary distribution of the lineage relative to the wavefront of \(\pi(\xi)\propto e^{3\xi/2}(1-e^{\xi/2})^{3}\). ### Clumping from nonlocal interactions Simulating these processes and exploring parameter space, one sooner or later comes upon a strange observation: with certain parameter combinations, the population spontaneously forms a regular grid of stable, more or less discrete patches, separated by areas with nearly no individuals, as shown in Figure 3. The phenomenon is discussed in Section 16.10 of Haller and Messer (2022), and has been described in similar models, e.g., by Britton (1990), Sasaki (1997), Hernandez-Garcia and Lopez (2004), Young et al. (2001), and Berestycki et al. (2009). For example, if the density-dependent effects of individuals extend farther (but not too much farther) than the typical dispersal distance, then depending on the interaction kernel new offspring landing between two clumps can effectively find themselves in competition with _both_ neighbouring clumps, while individuals within a clump compete with only one. More mathematically, consider the case in which \(\mathcal{B}=\sigma^{2}\Delta\) and all parameters are spatially homogeneous, so that \(r(x,\eta)=r(\rho_{r}\!*\!\eta(x))\), and similarly for \(\gamma\) and \(F\). If \(\varphi_{0}\) is such that \(F(\varphi_{0})=0\) and \(F^{\prime}(\varphi_{0})<0\), then the constant solution \(\varphi\equiv\varphi_{0}\) is a nontrivial equilibrium of (1.1). However, this constant solution may not be unique, it may be unstable, and a stable solution may have oscillations on a scale determined by the interaction distance. 
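Before turning to the linear stability analysis, here is a minimal one-dimensional sketch (our own, not the code used for the figures) of the kind of individual-based simulation that produces such patterns. The parameter values echo those reported in the caption of Figure 3 (\(\gamma(m)=3/(1+m)\), \(\mu\equiv 0.3\), \(r\equiv 1\), Gaussian dispersal with \(\sigma=0.2\), density measured with a Gaussian kernel of standard deviation 3), while the domain size, the value of \(N\), the time step and the density grid are illustrative choices of ours. The linearisation that follows makes precise when such patterns are expected.

```python
import numpy as np

# Discrete-time caricature of the individual-based model in d = 1.
# Parameters echo the caption of Figure 3; N, L, dt and the grid are illustrative.
rng = np.random.default_rng(0)
L, N, dt, steps = 100.0, 20, 0.05, 4000
sigma_disp, sigma_int, mu = 0.2, 3.0, 0.3
dx = 0.25
grid = np.arange(0.0, L, dx)

def local_density(x):
    """Approximate rho_gamma * eta at each individual, on a periodic domain."""
    counts, _ = np.histogram(x % L, bins=len(grid), range=(0.0, L))
    lags = np.arange(len(grid)) * dx
    lags = np.minimum(lags, L - lags)                       # periodic distances
    kernel = np.exp(-lags**2 / (2 * sigma_int**2)) / (np.sqrt(2 * np.pi) * sigma_int)
    dens = np.real(np.fft.ifft(np.fft.fft(counts) * np.fft.fft(kernel))) / N
    return dens[((x % L) / dx).astype(int) % len(grid)]

x = rng.uniform(0.0, L, size=9 * N * int(L))   # start near m = 9, where gamma(m) = mu
for _ in range(steps):
    m = local_density(x)
    born = rng.random(len(x)) < 1.0 - np.exp(-3.0 / (1.0 + m) * dt)  # birth prob per step
    kids = x[born] + sigma_disp * rng.standard_normal(born.sum())
    dead = rng.random(len(x)) < 1.0 - np.exp(-mu * dt)               # death prob per step
    x = np.concatenate([x[~dead], kids])

# A histogram of x now shows whether the population has broken into patches whose
# spacing is set by the interaction scale sigma_int rather than by sigma_disp.
```

The nonlocal density is evaluated by binning individuals and convolving with the interaction kernel via an FFT, which keeps each step linear in the population size; this is a numerical convenience, not part of the model.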
To understand the stability of the constant solution \(\varphi\equiv\varphi_{0}\), we linearise (1.1) around \(\varphi_{0}\): let \(\varphi_{t}(x)=\varphi_{0}+\psi_{t}(x)\), and (informally) \(r(x)\approx r(\varphi_{0})+r^{\prime}(\varphi_{0})\rho_{r}\!*\!\psi(x)\). Writing \(r_{0}=r(\varphi_{0})\) and \(r_{0}^{\prime}=r^{\prime}(\varphi_{0})\), with analogous expressions for \(\gamma\) and \(F\), \[\partial_{t}\psi\approx\sigma^{2}\varphi_{0}r_{0}\gamma_{0}^{\prime}\Delta \rho_{\gamma}\!*\!\psi+\sigma^{2}r_{0}\gamma_{0}\Delta\psi+\varphi_{0}F_{0}^{ \prime}\rho_{F}\!*\!\psi.\] Letting \(\widehat{f}(u)=\int e^{2\pi iux}f(x)dx\) denote the Fourier transform, \[\partial_{t}\widehat{\psi}(u)\approx\left\{-u^{2}\sigma^{2}\varphi_{0}r_{0} \gamma_{0}^{\prime}\widehat{\rho}_{\gamma}(u)-u^{2}\sigma^{2}r_{0}\gamma_{0 }+\varphi_{0}F_{0}^{\prime}\widehat{\rho}_{F}(u)\right\}\widehat{\psi}(u). \tag{3.5}\] In the simplest case, in which \(\gamma\) is constant, so \(\gamma_{0}^{\prime}=0\), this reduces to \[\partial_{t}\widehat{\psi}(u)\approx\left(-u^{2}\sigma^{2}r_{0}\gamma_{0}+ \varphi_{0}F_{0}^{\prime}\widehat{\rho}_{F}(u)\right)\widehat{\psi}(u). \tag{3.6}\] If we take \(\rho_{F}=p_{\epsilon^{2}}\), then \(\widehat{\rho}_{F}(u)=\exp(-2\pi\epsilon^{2}u^{2})\) and (recalling that \(F_{0}^{\prime}<0\)) the term in brackets is always negative, and we recover the well-known fact that in this case the constant solution is stable. If, on the other hand, \(\widehat{\rho}_{F}\) changes sign, there may be values of \(u\) for which the corresponding quantity is positive. For example, if \(d=1\) and \(\rho_{F}(x)={\bf 1}_{[-\epsilon,\epsilon]}(x)/2\epsilon\), then \(\widehat{\rho}_{F}(u)=\sin(2\pi\epsilon u)/(2\pi\epsilon u)\), which is negative for \(u\in(1/(2\epsilon),1/\epsilon)\) (and periodically repeating intervals). Setting \(v=\epsilon u\), the bracketed term on the right hand side of (3.6) becomes \[\varphi_{0}F^{\prime}_{0}\frac{1}{2\pi v}\sin(2\pi v)-\frac{\sigma^{2}}{ \epsilon^{2}}v^{2}r_{0}\gamma_{0},\] and we see that if \(\sigma^{2}/\epsilon^{2}\) is sufficiently small, there are values of \(v\) for which this is positive. In other words, in keeping with our heuristic above, if dispersal is sufficiently short range relative to the range over which individuals interact, there are unstable frequencies that scale with the interaction distance \(\epsilon\). In two dimensions, replacing the indicator of an interval by that of a ball of radius \(\epsilon\), a similar analysis applies, except that the sine function is replaced by a Bessel function. Now suppose that \(\gamma\) is not constant. Then, from (3.5), if we take \(\rho_{\gamma}=\rho_{F}=p_{\epsilon}^{2}\), \[\partial_{t}\widehat{\psi}(u)=e^{-2\pi^{2}\epsilon^{2}u^{2}}\left\{-\sigma^{2 }\varphi_{0}r_{0}\gamma^{\prime}_{0}u^{2}-\sigma^{2}r_{0}\gamma_{0}u^{2}e^{2 \pi^{2}\epsilon^{2}u^{2}}+\varphi_{0}F^{\prime}_{0}\right\}\widehat{\psi}(u).\] If we make the (reasonable) assumption that \(\gamma^{\prime}_{0}<0\), then we see that even when the Fourier transform of \(\rho\) does not change sign, there may be parameter values for which the constant solution is unstable. As before, we set \(v=\epsilon u\). The term in brackets becomes \[\frac{\sigma^{2}}{\epsilon^{2}}v^{2}r_{0}\left(-\varphi_{0}\gamma^{\prime}_{0 }-\gamma_{0}e^{2\pi^{2}v^{2}}\right)+\varphi_{0}F^{\prime}_{0},\] and, provided \(-\varphi_{0}\gamma^{\prime}_{0}/\gamma_{0}>1\), for sufficiently small \(v\) the term in round brackets is positive. 
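A quick numerical check of these two dispersion relations illustrates the dichotomy. The parameter values below are illustrative only (they are not taken from any simulation in the paper); they are chosen so that \(F_{0}^{\prime}<0\) holds in both cases and \(-\varphi_{0}\gamma_{0}^{\prime}/\gamma_{0}>1\) holds in the second.

```python
import numpy as np

# Sign of the bracketed growth-rate terms as a function of v = eps * u,
# for the top-hat kernel case (equation (3.6)) and the Gaussian case with
# gamma'_0 < 0.  All parameter values are illustrative.
v = np.linspace(1e-3, 2.0, 4000)
phi0, F0p, r0, g0 = 1.0, -1.0, 1.0, 1.0

# Top-hat rho_F: unstable modes appear when sigma^2/eps^2 is small.
for ratio in (0.01, 10.0):                    # ratio = sigma^2 / eps^2
    growth = phi0 * F0p * np.sin(2 * np.pi * v) / (2 * np.pi * v) - ratio * v**2 * r0 * g0
    print("top-hat ", ratio, "unstable:", bool((growth > 0).any()), "max:", growth.max())

# Gaussian kernels with gamma'_0 = -2: unstable modes appear when sigma^2/eps^2 is large.
g0p = -2.0
for ratio in (0.01, 100.0):
    growth = ratio * v**2 * r0 * (-phi0 * g0p - g0 * np.exp(2 * np.pi**2 * v**2)) + phi0 * F0p
    print("Gaussian", ratio, "unstable:", bool((growth > 0).any()), "max:", growth.max())
```

With these choices the top-hat case has positive growth rates for \(v\) roughly between \(1/2\) and \(1\) when \(\sigma^{2}/\epsilon^{2}=0.01\) but not when it is \(10\), while the Gaussian case is the other way around, in line with the discussion above.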
We now see that if \(\sigma^{2}/\epsilon^{2}\) is sufficiently _large_, the equilibrium state \(\varphi\equiv\varphi_{0}\) is unstable. As before, the unstable frequencies will scale with \(\epsilon\) and for given \(F\), \(r\) and \(\gamma\), whether or not such unstable frequencies exist will be determined by \(\sigma^{2}/\epsilon^{2}\), but in this case of Gaussian kernels, it is interaction distance being sufficiently small relative to dispersal that will lead to instability. ### Lineage motion is not uniquely determined by population density It is natural for applications to wonder about identifiability: when can the observed quantities like population density or certain summaries of lineage movement uniquely determine the underlying demographic parameters? Consider a deterministic, continuous population generated by parameters \(\gamma\), \(r\), and \(F\), with \(\vec{b}=0\) and \(\mathbf{C}=I\). Suppose it has a stationary profile \(w(x)\), that must satisfy \[r\Delta(\gamma w)+Fw=0.\] It is easy to see that \(w\) does not uniquely specify \(\gamma\), \(F\), and \(r\): let \(\lambda(x)\) be a smooth, nonnegative function on \(\mathbb{R}^{d}\), and let \(\widetilde{r}(x,m)=\lambda(x)r(x,m)\) and \(\widetilde{F}(x,m)=\lambda(x)F(x,m)\) (and, let \(\widetilde{\gamma}=\gamma\)). Since \(\mu=r\gamma-F/\theta\), this corresponds to multiplying both establishment probabilities and death rates by \(\lambda\). Then the population with parameters \(\widetilde{\gamma}\), \(\widetilde{r}\), and \(\widetilde{F}\) has the same stationary profile(s) as the original population. Figure 3: **Left:** A snapshot of individual locations in a two-dimensional simulation in which the constant density is unstable and a stable, periodic pattern forms. **Right:** Population density in an expanding wave in a one-dimensional simulation forming a periodic pattern; each panel shows the wavefront in three periods of time; within each period of time the wavefront at earlier times is shown in blue and later times in pink. In both cases, \(\gamma(m)=3/(1+m)\), \(\mu\equiv 0.3\), and \(r\equiv 1\); dispersal is Gaussian with \(\sigma=0.2\) and density is measured with \(\rho_{\gamma}(x)=p_{9}(x)\), i.e., using a Gaussian kernel with standard deviation 3. Can these two situations be distinguished from summaries of lineage movement? The first has lineage generator \[f\mapsto\mathcal{L}f=r\gamma\left(\Delta f+2\nabla\log(\gamma w)\cdot\nabla f \right),\] while the second has lineage generator \(f\mapsto\lambda(x)\mathcal{L}f(x)\). In other words, although the stationary profile of the population is unchanged when we scale local establishment and death by \(\lambda\), the motion of lineages is sped up locally by \(\lambda\). This corresponds to making areas with \(\lambda>1\) more "sink-like" and \(\lambda<1\) more "source-like": if \(\lambda(x)>1\), then at \(x\) both the death rate and probability of establishment of new individuals are higher. As a result, lineages in the second model spend more time in areas with \(\lambda<1\), i.e., those areas have higher reproductive value, something that is, in principle, discernible from genetic data (because, for instance, making reproductive value less evenly distributed reduces long-term genetic diversity). ## 4 Heuristics In this section we perform some preliminary calculations and use them to provide heuristic arguments for our main results, to build intuition before the formal proofs. 
### The population density _We reiterate that in our prelimiting model, the population is represented by a point measure \(\eta^{N}\) in which each individual is assigned a mass \(1/N\). We use the term "population density" for this process, as it is supposed to measure population size relative to a nominal occupancy of \(N\) individuals per unit area, but it is not absolutely continuous with respect to Lebesgue measure._ We write \(\mathcal{P}^{N}\) for the generator of the scaled population process \(\eta^{N}\) of Definition 2.4 acting on test functions of the form \(G(\left\langle f,\eta\right\rangle)\), where \(f\geq 0\) is smooth and bounded on \(\mathbb{R}^{d}\) and \(G\in C^{\infty}([0,\infty))\). Recall that \(\theta=\theta(N)\to\infty\) as \(N\to\infty\) in such a way that \(\theta(N)/N\to\alpha\). A Taylor expansion allows us to write \[\mathcal{P}^{N}G(\left\langle f,\eta\right\rangle)=G^{\prime}( \left\langle f,\eta\right\rangle)\lim_{\delta t\downarrow 0}\frac{1}{ \delta t}\mathbb{E}\left[\left\langle f,\eta_{\delta t}\right\rangle-\left\langle f,\eta\right\rangle\right|\eta_{0}=\eta\right]\\ +\frac{1}{2}G^{\prime\prime}(\left\langle f,\eta\right\rangle) \lim_{\delta t\downarrow 0}\frac{1}{\delta t}\mathbb{E}\left[\left(\left\langle f, \eta_{\delta t}\right\rangle-\left\langle f,\eta\right\rangle\right)^{2} \right|\eta_{0}=\eta\right]+\epsilon_{N}(f,G,\eta), \tag{4.1}\] where the terms that make up \(\epsilon_{N}(f,G,\eta)\) will be negligible in our scaling limit (at least if \(G^{\prime\prime\prime}<\infty\)). #### Mean measure Recall that in our parameterization only death rates \(\mu_{\theta}\) and the dispersal kernel \(q_{\theta}\) depend on \(\theta\). For a suitable test function \(f\), we find \[\begin{split}\mathcal{P}^{N}\langle f,\eta\rangle&= \lim_{\delta t\downarrow 0}\frac{1}{\delta t}\mathbb{E}\left[\,\langle f, \eta_{\delta t}\rangle-\langle f,\eta\rangle|\,\eta_{0}=\eta\right]\\ &=\theta\int\int f(z)r(z,\eta)q_{\theta}(x,dz)\gamma(x,\eta) \eta(dx)-\theta\int f(x)\mu_{\theta}(x,\eta)\eta(dx).\end{split} \tag{4.2}\] The first term is the increment in \(\langle f,\eta\rangle\) resulting from a birth event (recalling that we don't kill the parent) integrated against the rate of such events, and the second reflects death events. The factor of \(\theta\) appears from the time rescaling. In both terms, the rate of events has a factor of \(N\) (because events happen at a rate proportional to the number of individuals, whereas \(\eta\) has mass \(1/N\) for each individual) which is offset by the fact that the birth or loss of a single individual at the point \(y\), say, changes \(\langle f,\eta\rangle\) by \(f(y)/N\). We use the fact that \(\int q_{\theta}(x,dz)=1\) to rewrite (4.2) as \[\begin{split}\int\left(\int\theta\left(f(z)r(z,\eta)-f(x)r(x, \eta)\right)q_{\theta}(x,dz)\right)\gamma(x,\eta)\eta(dx)\\ +\int f(x)\theta\Big{(}r(x,\eta)\gamma(x,\eta)-\mu_{\theta}(x, \eta)\Big{)}\eta(dx).\end{split} \tag{4.3}\] We have defined \(\mu_{\theta}\) so that the second term is simple: \[\theta\Big{(}r(x,\eta)\gamma(x,\eta)-\mu_{\theta}(x,\eta)\Big{)}=F(x,\eta).\] Furthermore, recall from Remark 2.6 that \[\int\theta\Big{(}r(z,\eta)f(z)-r(x,\eta)f(x)\Big{)}q_{\theta}(x,dz)\qquad \overset{\theta\to\infty}{\longrightarrow}\qquad\mathcal{B}\big{(}r(\cdot, \eta)f(\cdot)\big{)}(x). 
\tag{4.4}\] In particular, if dispersal is determined by a standard multivariate Gaussian with mean zero and covariance \(\sigma^{2}I/\theta\), then \(\mathcal{B}=\sigma^{2}\Delta\), where \(\Delta\) denotes the Laplacian. In summary, equation (4.3) converges to \[\int\gamma(x,\eta)\mathcal{B}\big{(}f(\cdot)r(\cdot,\eta)\big{)}(x)\eta(dx)+ \int f(x)F(x,\eta)\eta(dx), \tag{4.5}\] which explains the form of the martingale of Theorem 2.10. #### Quadratic variation We now look at the second order term in (4.1), which will converge to the quadratic variation of the limiting process. An individual at location \(x\) gives birth to a surviving offspring at \(y\) at rate \[\gamma(x,\eta)r(y,\eta)q_{\theta}(x,dy),\] and since this increments \(\langle f,\eta\rangle\) by \(f(y)/N\), the contribution to the quadratic variation from birth events, which occur at rate \(\theta\) per individual (so, rate \(N\theta|\eta|\) overall), is \[\int N\theta\gamma(x,\eta)\int\frac{1}{N^{2}}f^{2}(y)r(y,\eta)q_{ \theta}(x,dy)\eta(dx).\] Similarly, the increment in \(\langle f,\eta\rangle\) resulting from the death of an individual at \(x\) is \(-f(x)/N\), and so combining with the above, the second order term in the generator takes the form \[G^{\prime\prime}(\langle f,\eta\rangle)\frac{1}{2}N\theta\left\{ \int\gamma(x,\eta)\int\frac{1}{N^{2}}f^{2}(y)r(y,\eta)q_{\theta}(x,dy)\eta(dx) +\int\mu_{\theta}(x,\eta)\frac{1}{N^{2}}f^{2}(x)\eta(dx)\right\}\] \[\qquad=\frac{1}{2}G^{\prime\prime}(\langle f,\eta\rangle)\frac{ \theta}{N}\int\left\{\gamma(x,\eta)\int f^{2}(y)r(y,\eta)q_{\theta}(x,dy)+f^{ 2}(x)\mu_{\theta}(x,\eta)\right\}\eta(dx).\] Since \(\int f^{2}(y)r(y,\eta)q_{\theta}(x,dy)\to f^{2}(x)r(x,\eta)\) and \(r\gamma+\mu_{\theta}=2r\gamma-F/\theta\to 2r\gamma\) as \(\theta\to\infty\), this converges to \[\frac{\alpha}{2}G^{\prime\prime}(\langle f,\eta\rangle)\big{\langle}2r(x,\eta )\gamma(x,\eta),\eta(dx)\big{\rangle}.\] An entirely analogous argument shows that if \(G^{\prime\prime\prime}\) is bounded, then the term \(\epsilon_{\theta,N}(f,G,\eta)\) in (4.1) will be \(\mathcal{O}(\theta/N^{2})\). If we hold \(\rho_{\gamma}\), \(\rho_{r}\), \(\rho_{F}\) fixed, then by taking \(\theta/N\to 0\), the second order term in the generator will vanish and we expect a deterministic limit, for which \(\partial_{t}\langle f,\eta_{t}\rangle\) is equal to (4.5). In other words, the limit is a weak solution to the deterministic equation \[\partial_{t}\varphi_{t}(x)=r(x,\varphi_{t})\mathcal{B}\big{(} \gamma(\cdot,\varphi_{t})\varphi_{t}(\cdot)\big{)}(x)+F(x,\varphi_{t})\varphi _{t}(x) \tag{4.6}\] in the sense of Definition 2.12, where \(\varphi_{t}\) is the density of \(\eta_{t}\), if it has a density. On the other hand, if \(N=\alpha\theta\) for some \(\alpha>0\), the second order term remains, and we expect a "generalised superprocess" limit. The limiting quadratic variation is exactly as seen in Theorem 2.10. One-step convergence:In order to pass directly to a classical PDE limit in Theorem 2.20 we impose the stronger condition that \(\theta/N\epsilon^{d}\to 0\) and also require that \(\theta\epsilon^{2}\to\infty\). Recall that in this case, we take \(\rho_{F}^{\epsilon}\) to be a symmetric Gaussian density with variance \(\epsilon^{2}\). The condition \(\theta\epsilon^{2}\to\infty\) ensures that \(\epsilon^{2}\) is large enough relative to \(1/\theta\) that the regularity gained by smoothing our population density by convolution with \(\rho_{\epsilon}\) is preserved under the dynamics dictated by \(q_{\theta}\). 
To understand the first condition, note that we are aiming to obtain a deterministic expression for the limiting population density. It is helpful to think about a classical Wright-Fisher model (with no spatial structure and just two types, say). We know then that if the timescale \(\theta\) is on the same order as population size \(N\), we see stochastic fluctuations in the frequencies of the two types in the limit as \(N\to\infty\); to obtain a deterministic limit, we look over timescales that are short relative to population size. In our setting, the total population size is replaced by the local population size, as measured by convolution with \(\rho_{\epsilon}\), which we expect to be of order \(N\epsilon^{d}\), and so in order to ensure a deterministic limit we take \(\theta/(N\epsilon^{d})\to 0\).

### Motion of ancestral lineages

Although our proof of Theorem 2.23 uses an explicit representation in terms of the lookdown process, the result can be understood through informal calculations. Suppose that we have traced a lineage back to an individual at location \(y\) at time \(t\). Looking further back through time, at the time of the birth of that individual, the lineage will jump to the location of the parent of the individual. Now, the rate at which new individuals are born to parents at \(x\) and establish at \(y\) is \[\theta N\eta_{t}^{N}(dx)\gamma(x,\eta_{t}^{N})q_{\theta}(x,dy)r(y,\eta_{t}^{N}).\] Suppose that \(\eta^{N}\) did have a density (in the prelimit it does not), say \(\eta_{t}^{N}(dx)=\varphi_{t}^{N}(x)dx\). Informally, since the number of individuals near \(y\) is \(N\varphi_{t}^{N}(y)dy\), the probability that a randomly chosen individual near \(y\) is a new offspring from a parent at \(x\) in \([t,t+dt)\) is \[\frac{\theta\varphi_{t}^{N}(x)\gamma(x,\eta_{t}^{N})r(y,\eta_{t}^{N})}{\varphi_{t}^{N}(y)}\frac{q_{\theta}(x,dy)}{dy}dxdt. \tag{4.7}\] Leaving aside questions of whether a lineage can be treated as a randomly chosen individual, we define a continuous-time jump process whose transition rates, conditional on \((\varphi_{t}^{N})_{t=0}^{T}\), are given by (4.7). Because we are tracing the lineage backwards in time we make the substitution \(s=T-t\) and write \((L_{s}^{N})_{s=0}^{T}\) for the location of a lineage that moves according to these jump rates. Then, abusing notation to write \(q_{\theta}(x,y)\) for the density of \(q_{\theta}(x,dy)\), \[\begin{split}&\mathbb{E}[f(L_{s+ds}^{N})-f(y)\mid L_{s}^{N}=y]\\ &\qquad=ds\,\theta\int\left(f(x)-f(y)\right)\frac{\varphi_{T-s}^{N}(x)\gamma(x,\eta_{T-s}^{N})r(y,\eta_{T-s}^{N})}{\varphi_{T-s}^{N}(y)}q_{\theta}(x,y)dx.\end{split} \tag{4.8}\] (Note that this integral is with respect to \(x\).) Referring back to Remark 2.6, a quick calculation shows that as \(N\to\infty\), \[\begin{split}\theta\int&\big{(}f(x)-f(y)\big{)}g(x)q_{\theta}(x,y)dx\\ &\qquad=\theta\int\big{\{}(f(x)g(x)-f(y)g(y))-f(y)(g(x)-g(y))\big{\}}q_{\theta}(x,y)dx\\ &\qquad\to\mathcal{B}^{*}(fg)(y)-f(y)\mathcal{B}^{*}g(y).\end{split}\] Applying this to (4.8) with \(g=\varphi_{T-s}\gamma\), this suggests that the generator of the limiting process is \[\mathcal{L}_{s}f=\frac{r}{\varphi_{T-s}}\left\{\mathcal{B}^{*}(\gamma\varphi_{T-s}f)-f\mathcal{B}^{*}(\gamma\varphi_{T-s})\right\}. \tag{4.9}\] This agrees with Theorem 2.23.

## 5 The lookdown process

Our characterisation of the motion of lines of descent (from which we establish that of ancestral lineages) when we pass to the scaling limit in our model will be justified via a lookdown construction.
In this section we present such a construction for the general population model of Definition 2.4. It will be in the spirit of Kurtz and Rodrigues (2011). The general set-up is as follows. Each individual will be labelled with a "level", a number in \([0,N]\). We will still encode the process embellished by these levels as a point measure: if the \(i^{\text{th}}\) individual's spatial location is \(x_{i}\) and level is \(u_{i}\), then we will write \[\xi^{N}=\sum_{i}\delta_{x_{i},u_{i}},\] which is a measure on \(\mathbb{R}^{d}\times[0,N]\). Note that each individual contributes mass \(1\) to the measure, not \(1/N\) as above. If we assign mass \(1/N\) to each individual and ignore the levels we will recover our population model. Moreover, at any time, the levels of individuals in a given spatial region will be exchangeable and conditionally uniform on \([0,N]\): in particular, choosing the \(k\) individuals with the lowest levels in that region is equivalent to taking a uniform random sample of size \(k\) from the population in the region. However, this exchangeability is only as regards the _past_: an individual's level encodes information about their future reproductive output, since individuals with lower levels tend to live longer, and have more offspring. For more explanation of the set-up and how this is possible, see Kurtz and Rodrigues (2011) and Etheridge and Kurtz (2019) (and note that our \(N\) corresponds to the \(\lambda\) of those papers). The power of this approach is that we can pass to a limit under the same scalings as described in Theorem 2.10, and the limiting "spatial-level" process will still be a point measure, and so we explicitly retain the notion of individuals and lineages in the infinite-population limit. ### Lookdown representation of the model of Definition 2.4 _For the remainder of this section, when there is no risk of ambiguity we shall suppress the superscript \(N\) on the processes \(\eta\) and \(\xi\)._ In this subsection, we'll define the process \((\xi_{t})_{t\geq 0}\) in terms of the dynamics of labelled particles, and write down its generator. The dynamics depend on the spatial locations of particles, and in this section \(\eta_{t}\) is the corresponding spatial measure, i.e., \[\eta_{t}(\cdot)=\frac{1}{N}\xi_{t}(\cdot\times[0,N]).\] A nontrivial consequence of the way we define \(\xi_{t}\) will be that this process has the same distribution as the process \((\eta_{t})_{t\geq 0}\) of Definition 2.4. (This provides our justification for using the same notation for both.) Following Etheridge and Kurtz (2019), we build the generator step by step from its component parts. Suppose that the initial population is composed of \(O(N)\) particles with levels uniformly distributed on \([0,N]\), and that the current state of the population is \(\xi\), with spatial projection \(\eta\). An individual at spatial location \(x\) with level \(u\) produces one juvenile offspring at rate \[2\theta\left(1-\frac{u}{N}\right)\gamma(x,\eta),\] which disperses to a location relative to \(x\) drawn from the kernel \(q_{\theta}(x,\cdot)\). Averaging over the uniform distribution of the level \(u\), we recover the birth rate \(\theta\gamma(x,\eta)\). This juvenile - suppose its location is \(y\) - either survives, with probability \(r(y,\eta)\), or immediately dies. (As before, "maturity" is instantaneous.) 
If it survives, a new level \(u_{1}\) is sampled independently and uniformly from \([u,N]\), and the parent and the offspring are assigned in random order to the levels \(\{u,u_{1}\}\). This random assignment of levels to parent and offspring will ensure that assignment of individuals to levels remains exchangeable. Evidently this mechanism increases the proportion of individuals with higher levels. To restore the property that the distribution of levels is conditionally uniform given \(\eta\), we impose that the level \(v\) of an individual at location \(x\) evolves according to the differential equation \[\dot{v}=-\theta\frac{v}{N}\left(N-v\right)\gamma(x,\eta)\int_{\mathbb{R}^{d}}r(y,\eta)q_{\theta}(x,dy).\] Since \(v\in[0,N]\), this moves levels down; see Etheridge and Kurtz (2019), Section 3.4 for a detailed explanation. Levels never cross below \(0\), while particles whose levels move above \(N\) are regarded as dead (and are removed from the population). Therefore, in order to incorporate death, the level of the individual at location \(x\) with level \(u\) moves upwards at an additional rate \(\theta\mu_{\theta}(x,\eta)u\). Since levels are uniform, it is easy to check that if \(\mu_{\theta}\) were constant, this would imply an exponential lifetime for each individual; see Etheridge and Kurtz (2019), Section 3.1 for more general justification. Putting these together, the level \(u\) of an individual at \(x\) evolves according to: \[\dot{u}=-\theta\frac{u}{N}\left(N-u\right)\gamma(x,\eta)\int_{\mathbb{R}^{d}}r(y,\eta)q_{\theta}(x,dy)+\theta\mu_{\theta}(x,\eta)u. \tag{5.1}\] We shall write \[b_{\theta}(x,\eta):=\theta\left(\gamma(x,\eta)\int_{\mathbb{R}^{d}}r(y,\eta)q_{\theta}(x,dy)-\mu_{\theta}(x,\eta)\right),\] which captures the local net difference between reproduction and death, and \[c_{\theta}(x,\eta):=\frac{\theta}{N}\gamma(x,\eta)\int_{\mathbb{R}^{d}}r(y,\eta)q_{\theta}(x,dy), \tag{5.2}\] which captures the local rate of production of successful offspring. Recall from equation (2.2) that \(F(x,\eta)=\theta(r(x,\eta)\gamma(x,\eta)-\mu_{\theta}(x,\eta))\), and so \[b_{\theta}(x,\eta)=\theta\gamma(x,\eta)\int_{\mathbb{R}^{d}}\left(r(y,\eta)-r(x,\eta)\right)q_{\theta}(x,dy)+F(x,\eta). \tag{5.3}\] Under Assumptions 2.8, as \(\theta\to\infty\), \(c_{\theta}(x,\eta)\) tends to \(\alpha\gamma(x,\eta)r(x,\eta)\) and \[b_{\theta}(x,\eta)\to\gamma(x,\eta)\mathcal{B}r(x,\eta)+F(x,\eta). \tag{5.4}\] We can then rewrite the differential equation governing the dynamics of the level of each individual as \[\dot{u} =\theta\gamma(x,\eta)\int_{\mathbb{R}^{d}}r(y,\eta)q_{\theta}(x,dy)\left\{-\frac{u}{N}\left(N-u\right)+u\right\}-b_{\theta}(x,\eta)u\] \[=c_{\theta}(x,\eta)u^{2}-b_{\theta}(x,\eta)u. \tag{5.5}\] Now, we can write down the generator for \((\xi_{t})_{t\geq 0}\), the lookdown process. In what follows, we will write sums (and products) over "\((x,u)\in\xi\)" to mean a sum over the (location, level) pairs of each individual in the population. Test functions for \(\xi\) will take the form \[f(\xi)=\prod_{(x,u)\in\xi}g(x,u)=\exp\left(\int\log g(x,u)\xi(dx,du)\right), \tag{5.6}\] where \(g(x,u)\) is differentiable in \(u\) and smooth in \(x\). We will also assume that \(0\leq g(x,u)\leq 1\) for all \(u\in[0,N]\), and \(g(x,u)\equiv 1\) for \(u\geq N\). In the expressions that follow, we shall often see one or more factor of \(1/g(x,u)\); it should be understood that if \(g(x,u)=0\), then it simply cancels the corresponding factor in \(f(\xi)\).
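Before assembling the generator, it may help to see these level dynamics in the simplest possible setting. The following sketch simulates the finite-\(N\) levels for a non-spatial population with constant \(\gamma\), \(r\) and \(\mu_{\theta}\), so that \(b_{\theta}=\theta(\gamma r-\mu_{\theta})\) and \(c_{\theta}=\theta\gamma r/N\); all numerical values are illustrative choices of ours.

```python
import numpy as np

# Non-spatial caricature of the finite-N level dynamics: levels drift according
# to (5.5), new levels arrive uniformly on [u, N] at rate 2*theta*(1 - u/N)*gamma*r,
# and a line of descent is removed once its level exceeds N.
rng = np.random.default_rng(2)
N, theta, gamma, r, mu = 200.0, 50.0, 1.0, 1.0, 0.98
dt, T = 1e-4, 2.0

b = theta * (gamma * r - mu)          # net per-capita growth rate (here = 1)
c = theta * gamma * r / N             # tends to alpha*gamma*r in the limit

u = rng.uniform(0.0, N, size=150)     # initial levels, uniform on [0, N]
t = 0.0
while t < T:
    u = u + (c * u**2 - b * u) * dt                                  # level drift (5.5)
    born = rng.random(len(u)) < 2 * theta * (1 - u / N) * gamma * r * dt
    new = u[born] + rng.uniform(size=born.sum()) * (N - u[born])     # uniform on [u, N]
    u = np.concatenate([u, new])
    u = u[u < N]                                                     # levels above N are dead
    t += dt

# The surviving levels should still look roughly uniform on [0, N] (mean/N near 0.5),
# while the population size behaves like a birth-death process with per-capita
# birth rate theta*gamma*r and death rate theta*mu.
print(len(u), np.mean(u) / N)
```

In this constant-parameter example \(b_{\theta}/c_{\theta}=N(1-\mu/(\gamma r))\), so levels above that threshold only drift upwards, which is the monotonicity property of Kurtz and Rodrigues (2011) discussed in Remark 5.7 below.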
First consider the terms in the generator that come from birth events. When a birth successfully establishes, a new level is generated above the parent's level, and this new level is assigned to either the offspring or the parent. Since the probability of each is \(1/2\), the contribution of birth to the generator maps \(f(\xi)\) to \[f(\xi)\sum_{(x,u)\in\xi}2\frac{\theta}{N}\gamma(x,\eta)\int_{u}^{N}\int_{\mathbb{R}^{d}}\left(\frac{1}{2}\bigg{\{}g(y,u_{1})+\frac{g(y,u)g(x,u_{1})}{g(x,u)}\bigg{\}}-1\right)r(y,\eta)q_{\theta}(x,dy)du_{1} \tag{5.7}\] \[=f(\xi)\sum_{(x,u)\in\xi}2\gamma(x,\eta)\bigg{\{}\frac{1}{2N}\int_{u}^{N}g(x,u_{1})du_{1}\frac{\theta\int_{\mathbb{R}^{d}}(g(y,u)-g(x,u))r(y,\eta)q_{\theta}(x,dy)}{g(x,u)}\] \[\qquad\qquad+\frac{\theta}{N}\int_{u}^{N}\int_{\mathbb{R}^{d}}\left(\frac{g(y,u_{1})+g(x,u_{1})}{2}-1\right)r(y,\eta)q_{\theta}(x,dy)du_{1}\bigg{\}} \tag{5.8}\] In (5.7), \(u_{1}\) is the new level and \(y\) is the offspring's location, and so the two terms in the integral correspond to the two situations: in the first, we have added an individual at \((y,u_{1})\), while in the second, we replace an individual at \((x,u)\) by one at \((x,u_{1})\) and another at \((y,u)\). We've rewritten it in the form (5.8) because each of the two pieces naturally converges to a separate term in the limit. The remaining term in the generator is due to the motion of particles' levels. Reading off from (5.5), it takes the form \[f(\xi)\sum_{(x,u)\in\xi}\left(c_{\theta}(x,\eta)u^{2}-b_{\theta}(x,\eta)u\right)\frac{\partial_{u}g(x,u)}{g(x,u)}. \tag{5.9}\] We can now define the spatial-level process explicitly as a solution to a martingale problem, whose generator is just the sum of (5.8) and (5.9). We need some notation. Write \(\mathcal{C}=\mathcal{C}(\mathbb{R}^{d}\times[0,\infty))\) for the counting measures on \(\mathbb{R}^{d}\times[0,\infty)\) and \(\mathcal{C}_{N}\) for the subset consisting of counting measures on \(\mathbb{R}^{d}\times[0,N]\).

**Definition 5.1** (Martingale Problem Characterisation): _For given positive values of \(N\) and \(\theta\), define the generator \(A^{N}\) by_ \[A^{N}f(\xi)\] \[\quad=f(\xi)\sum_{(x,u)\in\xi}2\gamma(x,\eta)\Bigg{\{}\frac{1}{2N}\int_{u}^{N}g(x,u_{1})du_{1}\frac{\theta\int_{\mathbb{R}^{d}}(g(y,u)-g(x,u))r(y,\eta)q_{\theta}(x,dy)}{g(x,u)}\] \[\qquad\qquad\qquad\qquad+\frac{\theta}{N}\int_{u}^{N}\int_{\mathbb{R}^{d}}\left(\frac{g(y,u_{1})+g(x,u_{1})}{2}-1\right)r(y,\eta)q_{\theta}(x,dy)du_{1}\Bigg{\}}\] \[\qquad\qquad+f(\xi)\sum_{(x,u)\in\xi}\left(c_{\theta}(x,\eta)u^{2}-b_{\theta}(x,\eta)u\right)\frac{\partial_{u}g(x,u)}{g(x,u)}, \tag{5.10}\] _where \(f(\xi)=\prod_{(x,u)\in\xi}g(x,u)\) is as defined in (5.6), and \(\eta(\cdot)=\xi(\cdot\times[0,N])/N\) as before. Given \(\xi_{0}\in\mathcal{C}_{N}\), we say that a \(\mathcal{D}_{[0,\infty)}(\mathcal{C}_{N})\)-valued process \((\xi_{t})_{t\geq 0}\) is a solution to the \((A^{N},\xi_{0})\) martingale problem if \(f(\xi_{t})-f(\xi_{0})-\int_{0}^{t}A^{N}f(\xi_{s})ds\) is a martingale (with respect to the natural filtration) for all test functions \(f\) as defined above._

The martingale problem for finite \(N\) has a unique solution. Next we state the limiting martingale problem, for which we do not necessarily have uniqueness. As before, the parameter \(\alpha\) will correspond to \(\lim_{N\to\infty}\theta(N)/N\).
Whereas for finite \(N\), conditional on the population process \(\eta_{t}^{N}\), the levels of particles are independent and uniformly distributed on \([0,N]\), in the infinite population limit, conditional on \(\eta_{t}\), the process \(\xi_{t}\) is Poisson distributed on \(\mathbb{R}^{d}\times[0,\infty)\) with mean measure \(\eta_{t}\times\lambda\), where \(\lambda\) is Lebesgue measure. **Definition 5.2** (Martingale Problem Characterisation, scaling limit): _Fix \(\alpha\in[0,\infty)\), and define test functions \(f\) by \(f(\xi)=\prod_{(x,u)\in\xi}g(x,u)\) with \(g\) differentiable in \(u\), smooth in \(x\), satisfying \(0\leq g(x,u)\leq 1\) and such that there exists a \(u_{0}\) with \(g(x,u)=1\) for all \(u>u_{0}\)._ _Then, define the operator \(A\) on such test functions by_ \[Af(\xi) =f(\xi)\sum_{(x,u)\in\xi}\gamma(x,\eta)\frac{\mathcal{B}(g(\cdot,u)r( \cdot,\eta))(x)-g(x,u)\mathcal{B}r(x,\eta)}{g(x,u)}\] \[\qquad+f(\xi)\sum_{(x,u)\in\xi}2\alpha\gamma(x,\eta)r(x,\eta)\int_ {u}^{\infty}(g(x,u_{1})-1)du_{1}\] \[\qquad+f(\xi)\sum_{(x,u)\in\xi}\left(\alpha\gamma(x,\eta)r(x,\eta )u^{2}-\left\{\gamma(x,\eta)\mathcal{B}r(x,\eta)+F(x,\eta)\right\}u\right) \frac{\partial_{u}g(x,u)}{g(x,u)}. \tag{5.11}\] _We say that a \(\mathcal{D}_{[0,\infty)}(\mathcal{C})\)-valued process \((\xi_{t})_{t\geq 0}\) is a solution to the \((A,\xi_{0})\) martingale problem if it has initial distribution \(\xi_{0}\) and \(f(\xi_{t})-f(\xi_{0})-\int_{0}^{t}Af(\xi_{s})ds\) is a martingale (with respect to the natural filtration) for all test functions \(f\) as defined above._ The lookdown processes have been carefully constructed so that observations about the past spatial positions of individuals in the population do not give us any information about the assignment of individuals to levels. In other words, the dynamics of the lookdown process preserve the conditionally uniform (or in the limit, conditionally Poisson) structure - if started with uniform levels, levels are uniform at all future times. Moreover, if we average over levels in the expression for the generator (equation (5.10) or (5.11)) we recover the generator for the population process. Once this is verified (along with some boundedness conditions) the Markov Mapping Theorem (Theorem A.1; also see Etheridge and Kurtz [2019]) tells us that by "removing labels" from the lookdown process \(\xi\) we recover the population process \(\eta\). To make this precise, define the spatial projection maps \(\kappa^{N}:\mathcal{M}(\mathbb{R}^{d}\times[0,N])\to\mathcal{M}(\mathbb{R}^{d})\) by \(\kappa^{N}(\xi^{N})(\cdot)=\xi^{N}(\cdot\times[0,N])/N\), and \(\kappa:\mathcal{M}(\mathbb{R}^{d}\times[0,\infty))\to\mathcal{M}(\mathbb{R}^ {d})\) by \(\kappa(\xi)(\cdot)=\lim_{u_{0}\to\infty}\xi(\cdot\times[0,u_{0}])/u_{0}\). We will also need an inverse notion: for a measure \(\xi^{N}\) on \(\mathbb{R}^{d}\times[0,N]\) and a \(\sigma\)-field \(\mathcal{F}\), we say that \(\xi^{N}\)_is conditionally uniform given \(\mathcal{F}\)_ if \(\kappa^{N}(\xi)\) is \(\mathcal{F}\)-measurable and for all compactly supported \(f\), \[\mathbb{E}[e^{-\langle f,\xi\rangle}\mid\mathcal{F}]=e^{-\langle H_{f}^{N}, \kappa^{N}(\xi)\rangle}, \tag{5.12}\] where \[H_{f}^{N}(x)=-N\log\frac{1}{N}\int_{0}^{N}e^{-f(x,u)}du.\] In other words, the \([0,N]\) components of \(\xi\) are independent, uniformly distributed on \([0,N]\), and independent of \(\kappa^{N}(\xi)\). 
Similarly, for a measure \(\xi\) on \(\mathbb{R}^{d}\times[0,\infty)\) we say that \(\xi\)_is a conditionally Poisson random measure given \(\mathcal{F}\)_ if \(\kappa(\xi)\) is \(\mathcal{F}\)-measurable and for all compactly supported \(f\), \[\mathbb{E}[e^{-\langle f,\xi\rangle}\mid\mathcal{F}]=e^{-\langle\int_{0}^{\infty}(1-e^{-f(x,u)})du,\kappa(\xi)(dx)\rangle}. \tag{5.13}\] In other words, \(\xi\) is conditionally Poisson with Cox measure \(\kappa(\xi)\times\lambda\), where \(\lambda\) is Lebesgue measure.

**Proposition 5.3**: _If \(\widetilde{\eta}^{N}\) is a solution of the martingale problem of Definition 2.4 with initial distribution \(\eta_{0}^{N}\) then there exists a solution \(\xi^{N}\) of the \((A^{N},\xi_{0}^{N})\)-martingale problem of Definition 5.1 such that \(\eta^{N}=\kappa^{N}\circ\xi^{N}\) has the same distribution on \(D_{\mathcal{M}_{F}(\mathbb{R}^{d})}[0,\infty)\) as \(\widetilde{\eta}^{N}\). Furthermore, for each \(t\), \(\xi_{t}^{N}\) is conditionally uniform given \(\mathcal{F}_{t}^{\eta^{N}}\) in the sense of (5.12)._ _Similarly, if \(\widetilde{\eta}\) is a solution of the limiting martingale problem of Theorem 2.10 with initial distribution \(\eta_{0}\) then there exists a solution \(\xi\) of the martingale problem of Definition 5.2 such that \(\eta=\kappa\circ\xi\) has the same distribution on \(D_{\mathcal{M}_{F}(\mathbb{R}^{d})}[0,\infty)\) as \(\widetilde{\eta}\). Furthermore, \(\xi_{t}\) is conditionally Poisson given \(\mathcal{F}_{t}^{\eta}\) in the sense of (5.13)._

Now we can present the main convergence theorem that is analogous to Theorem 2.10 for the population process.

**Theorem 5.4**: _Let \((\xi_{t}^{N})\) satisfy Definition 5.1 and assume that as \(N\to\infty\), \(\theta\to\infty\) in such a way that \(\theta/N\to\alpha\). Let \(\eta_{0}^{N}=\kappa^{N}(\xi_{0}^{N})\) and suppose also that \(\eta_{0}^{N}\to\eta_{0}\) in \(\mathcal{M}_{F}(\mathbb{R}^{d})\), and that for each \(N\), \(\xi_{0}^{N}\) is conditionally uniform given \(\eta_{0}^{N}\) in the sense of (5.12). Then, \((\xi_{t}^{N})_{t\geq 0}\) has a subsequence which converges in distribution as \(N\to\infty\) to a measure-valued process \((\xi_{t})_{t\geq 0}\) with \(\xi_{t}\) conditionally Poisson given \(\eta_{t}=\kappa(\xi_{t})\) for each \(t\) in the sense of (5.13), that is a solution to the martingale problem of Definition 5.2._

Both results are proved in Section 8.

### Explicit construction of lines of descent

The main interest in using a lookdown construction for our population processes is that it allows us to retain information about the relatedness of individuals as we pass to the infinite population limit. In order to exploit this, in this section we write down stochastic equations for the locations and levels of individuals in the prelimiting lookdown model. We will then be able to pass to the scaling limit. This provides an explicit description of the solution to the limiting martingale problem of Definition 5.2 which will enable us to identify all individuals in the current population that are descendants of a given ancestor at time zero. In theory at least, this allows us to recover all the information about genealogies relating individuals sampled from the present day population. This idea draws on the notion of "tracers", popular in statistical physics and used in population genetics by a number of authors including Hallatschek and Nelson (2008), Durrett and Fan (2016), and Biswas et al. (2021). We will construct the process using a Ulam-Harris indexing scheme.
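Concretely, the labels can be thought of as finite tuples of integers, with \(a\oplus j\) obtained by appending \(j\) to \(a\); a small sketch (the function names are ours, purely for illustration):

```python
# Ulam-Harris labels as tuples of integers; the j-th point of Pi_a creates a + (j,).

def append_child(a, j):
    """The label a (+) j = (a_1, ..., a_k, j)."""
    return a + (j,)

def earlier_labels(a):
    """Proper prefixes of a: the labels of the lines of descent from which a branched off."""
    return [a[:k] for k in range(1, len(a))]

lab = append_child(append_child((3,), 2), 5)
print(lab)                  # (3, 2, 5)
print(earlier_labels(lab))  # [(3,), (3, 2)]
```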
First, we assign each individual alive at time \(0\) a unique label from \(\mathbb{N}\). Suppose an individual with label \(a\) and level \(u\) reproduces, and as a result there are two individuals, one with level \(u\) and one with a new level \(u_{1}>u\). The parent individual, previously labeled \(a\), might be assigned either level. We will track chains of descendant individuals forwards through time by following levels, rather than individuals, and will call this a _line of descent_. So, after reproduction, we give a new label to _only_ the individual that is given the new level \(u_{1}\), retaining the label \(a\) for the individual with the old level \(u\). In this way, at each birth event, a unique label is assigned to the resulting individual with the higher level, and the label of an individual may change throughout its lifetime. Concretely, then: for each label \(a\) in \(\mathcal{I}=\bigcup_{k\geq 1}\mathbb{N}^{k}\), let \(\Pi_{a}\) be an independent Poisson process on \([0,\infty)^{2}\times\mathbb{R}^{d}\times\{0,1\}\). The mean measure of each \(\Pi_{a}\) is a product of Lebesgue measure on \([0,\infty)^{2}\), the density of the standard Gaussian on \(\mathbb{R}^{d}\), and \((\delta_{0}+\delta_{1})/2\) on \(\{0,1\}\). It will also be convenient to suppose that for each label \(a\) we have an enumeration of the points in \(\Pi_{a}\), so we may refer to "the \(j^{\text{th}}\) point in \(\Pi_{a}\)", although the precise order of this enumeration is irrelevant. If \((\tau,v,z,\kappa)\) is the \(j^{\text{th}}\) point in \(\Pi_{a}\), then \(\tau\) will determine a possible birth time, \(v\) will determine the level of the offspring, \(z\) will determine the spatial displacement of the offspring relative to the parent, \(\kappa\) will be used to determine whether parent or offspring is assigned the new level, and the new label produced will be \(a\oplus j\), i.e., the label \(a\) with \(j\) appended (so, if \(a=(a_{1},\ldots,a_{k})\) then \(a\oplus j=(a_{1},\ldots,a_{k},j)\)). Each label \(a\) has a birth time \(\tau_{a}\), when it is first assigned, and a (possibly infinite) death time \(\sigma_{a}\), when its level first hits \(N\). For any \(\tau_{a}\leq t\leq\sigma_{a}\) we denote by \(X_{a}(t)\) and \(U_{a}(t)\) the spatial location and level of the individual carrying label \(a\) at time \(t\), respectively. Furthermore, define \[\eta_{t}^{N}=\frac{1}{N}\sum_{a:\tau_{a}\leq t<\sigma_{a}}\delta_{X_{a}(t)} \qquad\text{and}\qquad\xi_{t}^{N}=\sum_{a:\tau_{a}\leq t<\sigma_{a}}\delta_{( X_{a}(t),U_{a}(t))}.\] Now, since we have defined labels so that the level does not jump, \(U_{a}\) satisfies (5.5) for \(\tau_{a}\leq t\leq\sigma_{a}\), i.e., \[\begin{split} U_{a}(t)&=U_{a}(\tau_{a})\\ &+\int_{\tau_{a}}^{t}\big{(}c_{\theta}(X_{a}(s),\eta_{s})U_{a}(s) ^{2}-b_{\theta}(X_{a}(s),\eta_{s})U_{a}(s)\big{)}\,ds,\end{split} \tag{5.14}\] and, of course, \(\sigma_{a}=\inf\{t\geq\tau_{a}:U_{a}(t)>N\}\). Potential reproduction events occur at times \(\tau\) for each point \((\tau,v,z,\kappa)\in\Pi_{a}\) with \(\tau_{a}\leq\tau<\sigma_{a}\). (We say "potential" since if the level of the resulting offspring is greater than \(N\), the event does not happen.) If this is the \(j^{\text{th}}\) point in \(\Pi_{a}\), the potential new label is \(a\oplus j\), the birth time is \(\tau_{a\oplus j}=\tau\), and the spatial displacement of the potential offspring is \(y(X(\tau-),z)\), where \[y(x,z):=\frac{1}{\theta}\vec{b}(x)+\frac{1}{\sqrt{\theta}}K(x)z,\] and \(K(x)K^{T}(x)=\mathbf{C}(x)\). 
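As a small illustration of how a Poisson mark \(z\) is turned into a dispersal displacement, the following sketch computes \(y(x,z)=\vec{b}(x)/\theta+K(x)z/\sqrt{\theta}\) for an illustrative choice of \(\vec{b}\), \(\mathbf{C}\) and \(\theta\); here \(K\) is taken to be the Cholesky factor of \(\mathbf{C}\), one of many matrices satisfying \(KK^{T}=\mathbf{C}\).

```python
import numpy as np

# Offspring displacement y(x, z) = b(x)/theta + K(x) z / sqrt(theta), K K^T = C(x).
# The particular b, C and theta below are illustrative only.
theta = 100.0

def b_vec(x):
    return np.array([0.5, 0.0])

def C_mat(x):
    return np.array([[1.0, 0.3], [0.3, 2.0]])

def displacement(x, z):
    K = np.linalg.cholesky(C_mat(x))             # any K with K K^T = C would do
    return b_vec(x) / theta + (K @ z) / np.sqrt(theta)

z = np.random.default_rng(3).standard_normal(2)  # the Gaussian mark z from Pi_a
print(displacement(np.zeros(2), z))
```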
Next we must choose the new level created at the birth event. We would like an individual with level \(u\) and at spatial position \(x\) to produce offspring at \(y\) at instantaneous rate \[2\Big{(}1-\frac{u}{N}\Big{)}\theta\gamma(x,\eta)r(x+y,\eta). \tag{5.15}\] To do this we will associate the point \((\tau,v,z,\kappa)\in\Pi_{a}\) with level \(u+v\ell\), where \(\ell\) is chosen so that the rate of appearance of points in \(\Pi_{a}\) with level below \(N\), that is points with \(v\ell<N-u\), is given by (5.15). Since the mean measure of \(\Pi_{a}\) is Lebesgue measure in the \(t\) and \(v\) directions, we must take \[\ell(x,y,\eta)=\frac{N-u}{2(1-u/N)\theta\gamma(x,\eta)r(x+y,\eta)}=\frac{1}{2N^{ -1}\theta\gamma(x,\eta)r(x+y,\eta)}, \tag{5.16}\] and, using this, the (potential) new level is \[U_{a\oplus j}(\tau)=U_{a}(\tau)+v\ell\big{(}X_{a}(\tau-),y(X_{a}(\tau-),z), \eta_{\tau-}\big{)}.\] If \(U_{a\oplus j}(\tau)<N\), the new individual labeled \(a\oplus j\) is produced, and \(\kappa\) determines which label, \(a\) or \(a\oplus j\), is associated with the new location, so \[X_{a\oplus j}(\tau)=X_{a}(\tau-)+(1-\kappa)y\big{(}X_{a}(\tau-),z\big{)}.\] On the other hand if \(U_{a\oplus j}(\tau)\geq N\), then \(X_{a}\) is unchanged and \(X_{a\oplus j}\) is undefined, so \[X_{a}(\tau)=X_{a}(\tau-)+\kappa y(X_{a}(\tau-),z)\mathbf{1}_{U_{a\oplus j}( \tau)<N}. \tag{5.17}\] Recall that the parental _individual_ always retains their spatial location, so that \(\kappa=0\) corresponds to the parent being assigned a new level, and our line of descent switching to the offspring. Combining these observations, \(X_{a}\), for \(\tau_{a}\leq t<\sigma_{a}\), solves the equation \[X_{a}(t)=X_{a}(\tau_{a})+\int_{[\tau_{a},t)\times[0,\infty)\times\mathbb{R} \times[0,1]}y(X_{a}(\tau-),z)\kappa\mathbf{1}_{U_{a}(\tau)+v\ell(X_{a}(\tau-), y(X_{a}(\tau-),z),\eta_{\tau-})<N}d\Pi_{a}(\tau,v,z,\kappa).\] Although we have described the evolution of a line of descent only for a given label (i.e., for \(\tau_{a}\leq t<\sigma_{a}\)), we can extend the definition to times \(0\leq t<\sigma_{a}\) by setting \(X_{a}(t)\) equal to \(X_{[a]_{t}}(t)\), where \([a]_{t}\) is the label of the ancestor of label \(a\) alive at time \(t\), and similarly for \(U_{a}(t)\). It is then straightforward, albeit tedious, to write down the time evolution of \((X_{a}(t),U_{a}(t))\) for all time back to \(t=0\) in terms of the driving Poisson processes. **Remark 5.5**: _Although we have a single construction that couples the processes across all \(N\), unlike in Kurtz and Rodrigues [2011] the actual trajectories, \(X_{a}(\cdot)\), do not necessarily coincide for different values of \(N\), since they are affected by the whole population process. However, this does suggest approximating the genealogies in the infinite density limit by simulating up until a sufficiently high level that we have a good approximation to the population process._ ### Limiting processes for lines of descent The previous section constructed the lookdown process using the same underlying Poisson processes \(\{\Pi_{a}\}_{a\in\mathcal{I}}\) for different values of \(N\). As a result, if the spatial projections \(\eta\) converge, then individual lines of descent converge pointwise as \(N\to\infty\). To see this, first note that if the Poisson processes are fixed then the set of events with which a given label \(a\in\mathcal{I}\) is associated is also fixed - this is the sequence \((\tau_{k},v_{k},z_{k},\kappa_{k})\) associated with the label \(a\). 
To conclude that the lines of descent converge, first, we clearly need that the spatial projections \(\eta\) converge. Supposing that they do, consider how a line of descent \((X_{a}(t),U_{a}(t))\) evolves. It throws off a new line of descent at a higher level when there is a point \((\tau,v,z,\kappa)\) in \(\Pi_{a}\) with \(\tau>\tau_{a}\) and \[v<2\frac{\big{(}N-U_{a}(\tau)\big{)}}{N}\theta\gamma(X_{a}(\tau-),\eta_{\tau-})r \Big{(}X_{a}(\tau-)+y\big{(}X_{a}(\tau-),z\big{)},\eta_{\tau-}\Big{)}. \tag{5.18}\] Since the mean measure of the \(v\) coordinate is Lebesgue measure, \(\theta/N\to\alpha\), and \(q_{\theta}(x,dy)\to\delta_{x}(dy)\), this corresponds in the limit to new lines of descent being thrown off according to a Poisson process with intensity \[2\alpha\gamma(X_{a}(t),\eta_{t})r(X_{a}(t),\eta_{t})dt\times du.\] Now consider the location of the line of descent: at each birth event, with probability one half the line of descent jumps to \(X_{a}(t)+y\). Taking \(g\) to be a suitable test function on \(\mathbb{R}^{d}\), and rewriting (5.18), when the level is \(u\) and the state of the population is \(\eta\), the generator of the spatial motion of the line of descent applied to \(g(x)\) is \[\Big{(}1-\frac{u}{N}\Big{)}\,\gamma(x,\eta)\theta\int_{\mathbb{R}^ {d}}r(x+y,\eta)(g(x+y)-g(x))q_{\theta}(x,dy)\] \[\qquad=\Big{(}1-\frac{u}{N}\Big{)}\,\gamma(x,\eta)\bigg{\{}\theta \int_{\mathbb{R}^{d}}(r(x+y,\eta)g(x+y)-r(x,\eta)g(x))q_{\theta}(x,dy)\] \[\qquad\qquad-\theta\int_{\mathbb{R}^{d}}(r(x+y,\eta)-r(x,\eta))g (x)q_{\theta}(x,dy)\bigg{\}}\] \[\qquad\to\gamma(x,\eta)\left(\mathcal{B}(rg)(x)-g(x)\mathcal{B}( r)(x)\right),\qquad\text{as }N,\theta\to\infty.\] Notice that the factors of 2 have cancelled, and that the result is independent of \(u\). Also recall that \(r(x,\eta)\) depends on \(\eta\) only through \(\rho_{r}*\eta(x)\), which is guaranteed to be smooth, so that \(\mathcal{B}(r)\) and \(\mathcal{B}(gr)\) are well-defined. We write out the differential operator above in more detail. Recall that \(\mathcal{B}g(x)=\sum_{i}\vec{b}_{i}\partial_{i}g(x)+\sum_{ij}\mathbf{C}_{ij} \partial_{ij}g(x)\), and for the moment write \(r(x)\) for \(r(x,\eta)\), \(\vec{b}(x)=\vec{b}\), and \(\mathbf{C}(x)=\mathbf{C}\) so that \[\mathcal{B}(rg)(x)-g(x)\mathcal{B}(r)(x) =r(x)\sum_{i}\vec{b}_{i}\partial_{i}g(x)+2\sum_{ij}\partial_{i}r( x)\mathbf{C}_{ij}\partial_{j}g(x)+r(x)\sum_{ij}\mathbf{C}_{ij}\partial_{ij}g(x)\] \[=r(x)\left\{\Big{(}\vec{b}+2\mathbf{C}\nabla\log r(x)\Big{)}\cdot \nabla g(x)+\sum_{ij}\mathbf{C}_{ij}\partial_{ij}g(x)\right\}. \tag{5.19}\] The only thing that remains is to describe how the levels change, but this is immediate from applying limit (5.4) to equation (5.5). We summarize the results in a proposition. **Proposition 5.6** (Line of descent construction): _Define \(J(x,\eta)\) and \(\beta(x,\eta)\) by_ \[r(x,\eta)\gamma(x,\eta){\bf C}(x)=J(x,\eta)J(x,\eta)^{T}\] \[\beta(x,\eta)=r(x,\eta)\gamma(x,\eta)\big{(}\vec{b}(x)+2{\bf C}(x) \nabla\log r(x,\eta)\big{)}.\] _Associate with each label \(a\in{\cal I}=\cup_{k\geq 1}{\mathbb{N}}^{k}\) an independent \(d\)-dimensional Brownian motion \(W_{a}\) and an independent Poisson process \(R_{a}\) on \([0,\infty)^{2}\) with Lebesgue mean measure, and with points ordered in some way. Given \(\eta_{0}\in{\cal M}_{F}({\mathbb{R}}^{d})\), let \((x_{i},u_{i})\) be the points of a Poisson process on \({\mathbb{R}}^{d}\times[0,\infty)\) with mean measure \(\eta_{0}\times\lambda\) (the product of \(\eta_{0}\) and Lebesgue measure). 
For each \(i\), begin a line of descent with label \(i\), location \(X_{i}(0)=x_{i}\), level \(U_{i}(0)=u_{i}\), and birth time \(\tau_{i}=0\)._ _Write \(\tau_{a}\) for the birth time of the label \(a\) and \(\sigma_{a}=\lim_{u_{0}\to\infty}\inf\{t\geq 0:U_{a}(t)>u_{0}\}\) the time the level hits \(\infty\). Suppose that the spatial locations and level of each line of descent \(a\) solve, for \(\tau_{a}\leq t<\sigma_{a}\),_ \[X_{a}(t)=X_{a}(\tau_{a})+\int_{\tau_{a}}^{t}\beta(X_{a}(s),\eta_ {s})ds+\int_{\tau_{a}}^{t}J(X_{a}(s),\eta_{s})dW_{a}(s)\] \[U_{a}(t)=U_{a}(\tau_{a})+\int_{\tau_{a}}^{t}\bigg{(}\alpha\gamma( X_{a}(s),\eta_{s})r(X_{a}(s),\eta_{s})U_{a}(s)^{2} \tag{5.20}\] \[\qquad\qquad\qquad\qquad-\big{\{}\gamma(X_{a}(s),\eta_{s}){\cal B }r(X_{a}(s),\eta_{s})+F(X_{a}(s),\eta_{s})\big{\}}U_{a}(s)\bigg{)}ds.\] _Each point in each \(R_{a}\) denotes a potential birth time for \(a\): if the \(j^{\text{th}}\) point in \(R_{a}\) is \((\tau,v)\), with \(\tau_{a}\leq\tau<\sigma_{a}\), then a new line of descent with label \(a\oplus j\) is produced, with birth time \(\tau_{a\oplus j}=\tau\), location \(X_{a\oplus j}(\tau)=X_{a}(\tau)\), and level_ \[U_{a\oplus j}(\tau)=U_{a}(\tau)+\frac{v}{2\alpha\gamma(X_{a}(\tau),\eta_{\tau })r(X_{a}(\tau),\eta_{\tau})},\] _if this is finite. For any solution to the equations above, the processes defined by_ \[\eta_{t}=\lim_{u_{0}\to\infty}\frac{1}{u_{0}}\sum_{a:\tau_{a}\leq t<\sigma_{a} \ ;\ U_{a}(t)<u_{0}}\delta_{X_{a}(t)}\qquad\text{and}\qquad\xi_{t}=\sum_{a:\tau_{ a}\leq t<\sigma_{a}}\delta_{(X_{a}(t),U_{a}(t))}\] _are solutions to the martingale problems of Theorems 2.10 and 5.4, respectively._ In particular, note that if \(\alpha=0\), no new lines of descent are produced. More precisely, comparing with (5.16), they are produced, but "at infinity", and their trace is seen in the spatial motion of the line of descent which results from the production of these lineages. **Proof** [of Proposition 5.6] The fact that a solution to the system of equations (5.20) is a solution to the martingale problem of Theorem 5.4 is an application of Ito's theorem. Furthermore, in Proposition 5.3 we showed that the conditional Poisson property of \(\xi_{0}\) is preserved (i.e., holds for \(\xi_{t}\) for all \(t\)), and so \((\eta_{t})_{t\geq 0}\) is well-defined, and furthermore that \(\eta_{t}\) is a solution to the martingale problem of Theorem 2.10. \(\square\) Proofs of the remainder of these results are in Section 8. **Remark 5.7**: _The process we consider is similar to the state-dependent branching processes of Kurtz and Rodrigues (2011), so one might expect that the proofs there would carry over with little change. However, there is an important difference: Recall that the level \(U_{a}(t)\) of a line of descent evolves as_ \[\dot{u}=c_{\theta}(x,\eta)u^{2}-b_{\theta}(x,\eta)u, \tag{5.21}\] _where \(b_{\theta}(x,\eta)\) and \(c_{\theta}(x,\eta)\) are defined in (5.3) and (5.2) respectively. Note that \(c_{\theta}(x,\eta)\geq 0\), while \(b_{\theta}(x,\eta)\) may take either sign. Assumptions 2.8 imply that \(c_{\theta}(x,\eta)\) is bounded, while \(b_{\theta}(x,\eta)\), because of \(F(x,\eta)\), is bounded above but not necessarily below. In Kurtz and Rodrigues (2011), \(b_{\theta}\) was bounded above and \(c_{\theta}\) was bounded away from zero, so they noted that if \(U_{a}(t)\geq b_{\theta}/c_{\theta}\) for some label \(a\), that line of descent would only move upwards from that time onwards. 
Furthermore, coefficients did not depend on the state of the process (i.e., on \(\eta\)), thus allowing the processes to be jointly and simultaneously constructed for all values of \(N\), with a pointwise embedding of \((\xi_{t}^{N})_{t\geq 0}\) within \((\xi^{M})_{t\geq 0}\) for \(b_{\theta}/c_{\theta}<N<M\). In other words, individuals with levels above \(N>b_{\theta}/c_{\theta}\) at time \(t_{0}\) do not affect \((\xi_{t}^{N})_{t\geq t_{0}}\), thus allowing a comparison of the number of lines of descent below level \(u_{0}\) to a branching process. Although we have provided a joint construction of \(\xi^{N}\) for all \(N\) in Section 5.2, it does not have this monotonicity: for one thing, \(b_{\theta}\) and \(c_{\theta}\) depend on the population process \(\eta\) and so all individuals can affect all other ones (even those with lower levels). Furthermore, in the deterministic case \(\theta/N\), and hence \(c\), converges to zero, and so lines of descent with arbitrarily high level may drift back downwards. Indeed, this must be the case if the population persists, since in the deterministic case there is no branching._ ## 6 Proofs of convergence for nonlocal models In this section we present formal proofs of the first two of our three scaling limits. In Subsection 6.2 we prove Theorem 2.10, to obtain (both stochastic and deterministic) limits in which interactions between individuals in the population are nonlocal. In Subsection 6.3 we show how, in two important examples in which the nonlocal limit is respectively a deterministic solution to a non-local equation of reaction-diffusion type and a deterministic solution to a nonlocal porous medium equation with an additional logistic growth term, one can pass to a further limit to obtain a classical PDE. ### Preliminaries Below we will have frequent use for the quantity \[B_{f}^{\theta}(x,\eta)=\theta\int_{\mathbb{R}^{d}}(f(y)r(y,\eta)-f(x)r(x,\eta) )q_{\theta}(x,dy). \tag{6.1}\] First, we prove Lemma 2.9. **Proof** [of Lemma 2.9] Here, we need to prove that \(|\gamma(x,\eta)B_{f}^{\theta}(x,\eta)|\) is bounded, uniformly over \(x\) and \(\eta\). First suppose that Condition 1 of Lemma 2.9 is satisfied. We write \[r(y,\eta)f(y)-r(x,\eta)f(x) =r(y,\eta)(f(y)-f(x))+(r(y,\eta)-r(x,\eta))f(x)\] \[=r(y,\eta)\left(\sum_{i}(y-x)_{i}\partial_{x_{i}}f(x)+\sum_{ij}(y- x)_{i}(y-x)_{j}\partial_{x_{i}x_{j}}f(z_{1})\right)\] \[\qquad+f(x)\left(\sum_{i}(y-x)_{i}\partial_{x_{i}}r(x,\eta)+\sum _{ij}(y-x)_{i}(y-x)_{j}\partial_{x_{i}x_{j}}r(z_{2},\eta)\right)\] \[=\left(r(x,\eta)+\sum_{j}(y-x)_{j}\partial_{x_{j}}r(z_{3},\eta) \right)\left(\sum_{i}(y-x)_{i}\partial_{x_{i}}f(x)\right)\] \[\qquad+r(y,\eta)\sum_{ij}(y-x)_{i}(y-x)_{j}\partial_{x_{i}x_{j}} f(z_{1})\] \[\qquad+f(x)\left(\sum_{i}(y-x)_{i}\partial_{x_{i}}r(x,\eta)+\sum _{ij}(y-x)_{i}(y-x)_{j}\partial_{x_{i}x_{j}}r(z_{2},\eta)\right),\] for some \(z_{i}=\kappa_{i}x+(1-\kappa_{i})y\). 
Integrating this against \(q(x,dy)\), we get that \[\left|\theta\int\left(r(y,\eta)f(y)-r(x,\eta)f(x)\right)q_{\theta }(x,dy)\right|\] \[\qquad\leq\bigg{|}\sum_{i}\left(r(x,\eta)\partial_{x_{i}}f(x)+f(x )\partial_{x_{i}}r(x,\eta)\right)\theta\int(y-x)_{i}q_{\theta}(x,dy)\bigg{|}\] \[\qquad+\left|f(x)\theta\int\sum_{ij}\partial_{x_{i}x_{j}}r(z_{2},\eta)(y-x)_{i}(y-x)_{j}q_{\theta}(x,dy)\right|\] \[\qquad+\left|\theta\int\sum_{ij}(y-x)_{i}(y-x)_{j}\left(\partial_ {x_{i}}f(x)\partial_{x_{j}}r(z_{3},\eta)+r(y,\eta)\partial_{x_{i}x_{j}}f(z_{1} )\right)q_{\theta}(x,dy)\right|\] Since \(q_{\theta}(x,dy)\) is the density of a Gaussian with mean \(\vec{b}(x)/\theta\) and covariance \(\mathbf{C}(x)/\theta\), and both \(\vec{b}(x)\) and \(\mathbf{C}(x)\) are uniformly bounded, so that \(\theta\int(y-x)_{i}q_{\theta}(x,dy)\) is bounded as well. Furthermore, a change of variables that diagonalizes \(\mathbf{C}(x)\) shows for any \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d+d}\), that if \(C_{g}=\sup_{y}\sup_{\|z\|=1}\sum_{ij}g(y)_{ij}z_{i}z_{j}\) and \(\lambda_{*}=\sup_{y}\sup_{\|z\|=1}\sum_{ij}\mathbf{C}(y)_{ij}z_{i}z_{j}\) then \[\theta\int\sum_{ij}g(y)_{ij}(y-x)_{i}(y-x)_{j}q_{\theta}(x,dy)\leq C_{g} \lambda_{*}.\] Condition 1 gives uniform bounds on the derivatives of \(r(x,\eta)=r(x,\rho_{r}*\eta(x))\) in this expression and so, provided \(f\) also has uniformly bounded first and second derivatives, we have a bound of the form \[|B_{f}^{\theta}|\leq K_{1}+K_{2}|f(x)|,\] for suitable constants \(K_{1}\), \(K_{2}\) that depend only on the derivatives of \(f\). Now suppose instead that Condition 2 of Lemma 2.9 is satisfied. First note that \[\begin{split}|B_{f}^{\theta}|&=\left|\theta\int_{ \mathbb{R}^{n}}\big{\{}f(y)r\big{(}y,\rho_{r}\!*\!\eta(y)\big{)}-f(x)r\big{(}x, \rho_{r}\!*\!\eta(x)\big{)}\big{\}}q_{\theta}(x,dy)\right|\\ &\leq\left|\theta\int_{\mathbb{R}^{n}}\big{\{}f(y)r\big{(}y,\rho_ {r}\!*\!\eta(y)\big{)}-f(x)r\big{(}x,\rho_{r}\!*\!\eta(y)\big{)}\big{\}}q_{ \theta}(x,dy)\right|\\ &\qquad+\left|\theta\int_{\mathbb{R}^{n}}\big{\{}f(x)r\big{(}x, \rho_{r}\!*\!\eta(y)\big{)}-f(x)r\big{(}x,\rho_{r}\!*\!\eta(x)\big{)}\big{\}}q _{\theta}(x,dy)\right|.\end{split} \tag{6.2}\] (Note the extra term introduced here, \(r(x,\rho_{r}\!*\!\eta(y)\)), has the two arguments to \(r\) "at different locations", contrary to the usual pattern.) Writing \(K_{3}=\sup_{x,m}\max_{i}|\partial_{x_{i}}f(x)r(x,m)|\) and \(K_{4}=\sup_{x,m}\max_{i,j}|\partial_{x_{i}x_{j}}f(x)r(x,m)|\), the first term is bounded exactly as above. For the second, \[r(x,\rho_{r}\!*\!\eta(y))-r(x,\rho_{r}\!*\!\eta(x))\\ =(\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x))r^{\prime}(x,\rho_{r}\! *\!\eta(x))+\frac{1}{2}(\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x))^{2}r^{\prime \prime}(x,\overline{m}),\] where \(\overline{m}=\kappa^{\prime}\rho_{r}\!*\!\eta(x)+(1-\kappa^{\prime})\rho_{r}\! *\!\eta(y)\) for some \(0\leq\kappa^{\prime}\leq 1\), and we have used \(r^{\prime}\) and \(r^{\prime\prime}\) to denote the first and second derivatives of \(r(x,m)\) with respect to the second argument. So, writing \(K_{5}=\|r^{\prime}\|_{\infty}\) and \(K_{6}=\|r^{\prime\prime}\|_{\infty}\), the second term in (6.2) is bounded by \(f(x)\) multiplied by \[K_{5}\left|\theta\int_{\mathbb{R}^{d}}(\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta( x))q_{\theta}(x,dy)\right|+K_{6}\left|\theta\int_{\mathbb{R}^{d}}(\rho_{r}\!*\! 
\eta(y)-\rho_{r}\!*\!\eta(x))^{2}q_{\theta}(x,dy)\right|.\] Under Condition 2 of Assumptions 2.8, this is bounded by a constant times \(\rho_{\gamma}\!*\!\eta(x)+(\rho_{\gamma}\!*\!\eta(x))^{2}\) and \(\sup_{x}m^{2}\gamma(x,m)\) is bounded, so therefore \(|\gamma(x,\eta)B_{f}^{\theta}(x,\eta)|\leq K_{7}+K_{8}f(x)\), where \(K_{7}\) comes from \(K_{3}\), \(K_{4}\), and the supremum of \(\gamma\), while \(K_{8}\) comes from \(K_{5}\), \(K_{6}\), and the supremum of \(m^{2}\gamma(x,m)\). \(\Box\) ### Proof of Theorem 2.10: convergence for the nonlocal process In this section we prove Theorem 2.10. This would be implied by convergence of the look-down process (see Kurtz and Rodrigues [2011] and Etheridge and Kurtz [2019]); however in our setting, because the parameters in the lookdown process depend on the empirical distribution, we actually use tightness of the sequence of population processes in the proofs of tightness for the corresponding lookdown processes. **Proof** [Proof of Theorem 2.10.] The proof follows a familiar pattern. First we extend \(\mathbb{R}^{d}\) to its one-point compactification \(\overline{\mathbb{R}}^{d}\) and establish, in Lemma 6.2, compact containment of the sequence of scaled population processes in \(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d})\) (for which, since we have compactified \(\mathbb{R}^{d}\), it suffices to consider the sequence of total masses); armed with this, tightness of the population processes in \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d}))\) follows from tightness of the real-valued processes \((H(\eta_{t}))_{t\geq 0}\) for a sufficiently large class of test functions \(H\), which we establish through an application of the Aldous-Rebolledo criterion in Lemma 6.3. These ingredients are gathered together in Proposition 6.4 to deduce tightness of the scaled population processes in the larger space \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d}))\). We then characterise limit points as solutions to a martingale problem in Lemma 6.6; finally in Lemma 6.7 we check that in the process of passing to the limit, no mass 'escaped to infinity', so that in fact the limit points take values in \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\mathbb{R}^{d}))\). \(\square\) As advertised, we work with the one-point compactification of \(\mathbb{R}^{d}\) and consider \((\eta_{t}^{N})_{t\geq 0}\) as a sequence of \(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d})\)-valued processes. Since, for each \(K>0\), \(\{\eta:\langle 1,\eta\rangle\leq K\}\) is a compact set in \(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d})\), we shall focus on controlling \((\langle 1,\eta_{t}^{N}\rangle)_{t\geq 0}\). The key is that Assumptions 2.8 are precisely chosen to guarantee boundedness of the net per-capita reproduction rate. **Lemma 6.1**: _Under Assumptions 2.8, for all \(f\in C_{b}^{2}(\mathbb{R}^{d})\) with uniformly bounded first and second derivatives, and all \(T>0\), there exists a \(C=C(f,T)<\infty\), independent of \(N\), such that_ \[\mathbb{E}[\langle f,\eta_{t}^{N}\rangle]\leq C\mathbb{E}[\langle 1,\eta_{0}^{N }\rangle] \tag{6.3}\] _for all \(N\geq 1\)._ **Proof** Consider the semimartingale decomposition from equation (2.3): \[\langle f,\eta_{t}^{N}\rangle=\langle f,\eta_{0}^{N}\rangle+\int_{0}^{t}\int_ {\mathbb{R}^{d}}\big{\{}\gamma(x,\eta_{s}^{N})B_{f}^{\theta}(x,\eta_{s}^{N})+ f(x)F(x,\eta_{s}^{N})\big{\}}\eta_{s}^{N}(dx)ds+M_{t}^{N}(f), \tag{6.4}\] where \(M_{t}^{N}(f)\) is a martingale and \(B_{f}^{\theta}\) is defined in (6.1). 
First note that Condition 6 of Assumptions 2.8 stipulates that \(|\gamma B_{f}^{\theta}|\) is uniformly bounded by a constant times \(1+f\), and so, recalling that \(F\) is bounded above, we conclude that under Assumptions 2.8, \(\gamma(x,\eta)B_{f}^{\theta}(x,\eta)+f(x)F(x,\eta)\leq C_{f}(1+f(x))\) for some \(C_{f}\). Now, taking expectations in (6.4), \[\mathbb{E}\left[\langle f,\eta_{t}^{N}\rangle\right]\leq\mathbb{E}\left[ \langle f,\eta_{0}^{N}\rangle\right]+C_{f}\int_{0}^{t}\mathbb{E}\left[\langle 1 +f,\eta_{s}^{N}\rangle\right]ds. \tag{6.5}\] The bound (6.3) then follows by first applying Gronwall's inequality in the case \(f=1\), which yields \[\mathbb{E}\big{[}\langle 1,\eta_{t}^{N}\rangle\big{]}\leq e^{Ct}\mathbb{E} \big{[}\langle 1,\eta_{0}^{N}\rangle\big{]},\] with \(C\) independent of \(N\), and then substituting the resulting bound on \(\mathbb{E}\big{[}\langle 1,\eta_{s}^{N}\rangle\big{]}\) into the expression above. \(\square\) With a bound on per-capita net growth rate in hand, bounds on the expectation of the supremum of the total population size over a finite time interval also follow easily. **Lemma 6.2** (Compact containment for the population process): _Under the assumptions of Theorem 2.10, for each \(T>0\), there exists some constant \(C_{T}\), independent of \(N\), such that_ \[\mathbb{E}\left[\sup_{0\leq t\leq T}\langle 1,\eta_{t}^{N}\rangle\right]\leq C_{T} \mathbb{E}[\langle 1,\eta_{0}\rangle]. \tag{6.6}\] _In particular, for any \(\delta>0\), there exists \(K_{\delta}>0\) such that_ \[\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{s\in[0,T]}\langle 1,\eta_{s}^{N} \rangle>K_{\delta}\right\}\leq\frac{C_{T}}{K_{\delta}}<\delta. \tag{6.7}\] **Proof** First note that by Lemma 6.1, \(\mathbb{E}[\langle 1,\eta_{t}^{N}\rangle]\leq\mathbb{E}[\langle 1,\eta_{0}^{N} \rangle]e^{Ct}\) for some \(C\) (independent of \(N\)). Now, let \(M_{t}^{N*}(f)=\sup_{0\leq s\leq t}M_{s}^{N}(f)\), and as before let \(\langle M^{N}(f)\rangle_{t}\) be the angle bracket process of \(M_{t}^{N}(f)\). The Burkholder-Davis-Gundy inequality says that there is a \(K\) for which \(\mathbb{E}\left[M_{t}^{N*}(1)\right]\leq K\mathbb{E}[\sqrt{[M^{N}(1)]_{t}}]\), where \([M^{N}(1)]_{t}\) is the quadratic variation of \(M^{N}(1)\). Furthermore, as discussed by Hernandez-Hernandez and Jacka [2022], the expectation of the quadratic variation of a local martingale is bounded by a (universal) constant multiple of the expectation of its angle bracket process [Barlow et al., 1986, Item (4.b'), Table 4.1, p. 162]. 
Now, since \(\sqrt{x}\leq 1+x\), in the notation of Lemma 6.1, there is a \(C^{\prime}\) such that \[\mathbb{E}\left[M_{t}^{N*}(1)\right] \leq C^{\prime}\left(1+\mathbb{E}\left[\left\langle M^{N}(1) \right\rangle_{t}\right]\right)\] \[=C^{\prime}\left(1+\frac{\theta}{N}\mathbb{E}\left[\int_{0}^{t} \Big{\langle}\left\{\gamma(x,\eta_{s}^{N})\int_{\mathbb{R}^{d}}r(y,\eta_{s}^{ N})q_{\theta}(x,dy)+\mu_{\theta}(x,\eta_{s}^{N})\right\},\eta_{s}^{N}(dx)\Big{\rangle} ds\right]\right)\] \[=C^{\prime}\Big{(}1+\mathbb{E}\Big{[}\int_{0}^{t}\Big{\langle} \Big{\{}\frac{2\theta}{N}\gamma(x,\eta_{s}^{N})r(x,\eta_{s}^{N})\] \[+\frac{\gamma(x,\eta_{s}^{N})}{N}B_{1}^{\theta}(x,\eta_{s}^{N})- \frac{1}{N}F(x,\eta_{s}^{N})\Big{\}},\eta_{s}^{N}(dx)\Big{\rangle}ds\Big{]} \Big{)}.\] We have not assumed that \(F\) is bounded below, but to see that the term involving \(-F\) does not cause us problems, we rearrange equation (6.4) with \(f=1\) to see that \[\mathbb{E}\left[\int_{0}^{t}\Big{\langle}-F(x,\eta_{s}^{N}),\eta _{s}^{N}(dx)\Big{\rangle}ds\right] =\mathbb{E}[\langle 1,\eta_{0}^{N}\rangle]-\mathbb{E}[\langle 1,\eta_{t}^{N}\rangle] \tag{6.8}\] \[\qquad+\mathbb{E}\left[\int_{0}^{t}\Big{\langle}\gamma(x,\eta_{s }^{N})B_{1}^{\theta}(x,\eta_{s}^{N}),\eta_{s}^{N}(dx)\Big{\rangle}ds\right],\] which is bounded since \(\gamma(x,\eta)\) and \(B_{1}^{\theta}(x,\eta)\) are both bounded and \(\langle 1,\eta_{t}\rangle\geq 0\). Since \(\theta/N\to\alpha<\infty\), combining constants, we obtain that for some \(C^{\prime\prime}\), \[\mathbb{E}\left[M_{t}^{N*}(1)\right]\leq C^{\prime}+C^{\prime\prime}\mathbb{E }[\langle 1,\eta_{0}^{N}\rangle]e^{tC}.\] Taking suprema and expectations on both sides of equation (6.4), then again using the fact that \(\gamma(x,\eta)B_{1}^{\theta}(x,\eta)+F(x,\eta)\leq C\), \[\mathbb{E}\left[\sup_{0\leq s\leq T}\langle 1,\eta_{s}^{N}\rangle \right] \leq\mathbb{E}[\langle 1,\eta_{0}^{N}\rangle]+\mathbb{E}\left[\sup_{0\leq t \leq T}\int_{0}^{t}\Big{\langle}\left\{\gamma(x,\eta_{s}^{N})B_{1}^{\theta}(x,\eta_{s}^{N})+F(x,\eta_{s}^{N})\right\},\eta_{s}^{N}(dx)\Big{\rangle}ds\right]\] \[\qquad\qquad+\mathbb{E}[M_{t}^{N*}(1)]\] \[\leq\mathbb{E}[\langle 1,\eta_{0}^{N}\rangle]+C\mathbb{E}\left[ \int_{0}^{T}\sup_{0\leq s\leq t}\langle 1,\eta_{s}^{N}\rangle dt\right]+C^{\prime}+C^{ \prime\prime}\mathbb{E}[\langle 1,\eta_{0}^{N}\rangle]e^{tC}.\] Once again applying Gronwall's inequality, \[\mathbb{E}\left[\sup_{0\leq s\leq T}\langle 1,\eta_{s}^{N}\rangle\right]\leq C ^{\prime\prime\prime}\left(1+\mathbb{E}[\langle 1,\eta_{0}^{N}\rangle]\right)e^{2TC}.\] For any \(T\), the quantity on the right is bounded above by a constant \(C(T)\) independent of \(N\). As a result, for any \(K>0\), \[\limsup_{N\to\infty}\mathbb{P}\left[\sup_{0\leq s\leq T}\langle 1,\eta_{s}^{N} \rangle\geq K\right]\leq\frac{C(T)}{K}.\] \(\Box\) Our next task is to show tightness of \((\langle f,\eta_{t}^{N}\rangle)_{t\geq 0}\) for \(f\in C_{b}^{\infty}(\overline{\mathbb{R}}^{d})\). 
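Before turning to that, the following minimal simulation sketch (which is not part of the proofs) may help to illustrate the content of Lemmas 6.1 and 6.2: although the demographic rates are of order \(\theta\), the net per-capita growth rate is \(F\), so the total mass \(\langle 1,\eta_{t}^{N}\rangle\) remains of order one uniformly in \(N\). The sketch assumes the special case \(r\equiv\gamma\equiv 1\) of Section 7, in which each individual carries mass \(1/N\), gives birth at rate \(\theta\) and dies at rate \(\theta-F\) (compare (7.11)); it further simplifies to a nonspatial caricature in which \(F\) is evaluated at the total mass, with the purely illustrative choices \(F(m)=1-m\) and \(\theta=\sqrt{N}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def sup_total_mass(N, theta, T=2.0, m0=0.5, F=lambda m: 1.0 - m):
    """Nonspatial caricature of the rescaled population process.

    n individuals, each of mass 1/N; per-capita birth rate theta and
    per-capita death rate theta - F(n/N) (nonnegative since theta >= 1),
    so the net per-capita growth rate is F(n/N).  Returns the running
    supremum of the total mass <1, eta_t^N> = n_t / N on [0, T].
    """
    n = int(m0 * N)
    t, running_sup = 0.0, n / N
    while n > 0:
        birth, death = theta * n, (theta - F(n / N)) * n
        t += rng.exponential(1.0 / (birth + death))
        if t >= T:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
        running_sup = max(running_sup, n / N)
    return running_sup

for N in (100, 400, 1600):
    theta = np.sqrt(N)        # theta -> infinity while theta / N -> 0
    sups = [sup_total_mass(N, theta) for _ in range(10)]
    print(f"N = {N:5d}: mean over 10 runs of sup_t <1, eta_t^N> = {np.mean(sups):.3f}")
```

The fluctuations of the total mass come from a martingale whose angle bracket carries a factor \(\theta/N\), which is why the bound of Lemma 6.2 does not degrade as \(\theta,N\to\infty\) with \(\theta/N\to\alpha<\infty\).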
**Lemma 6.3** (Tightness of \((\langle f,\eta_{t}^{N}\rangle)_{t>0}\)): _For each \(f\in C_{b}^{\infty}(\overline{\mathbb{R}}^{d})\), the collection of processes \((\langle f,\eta_{t}^{N}\rangle)_{t\geq 0}\) for \(N=1,2,\ldots\) is tight as a sequence of cadlag, real-valued processes._ **Proof** The Aldous-Rebolledo criterion (Theorem B.2), applied to the semimartingale representation of \(\langle f,\eta_{t}^{N}\rangle\) of equation (6.4), tells us that it suffices to show that for each \(T>0\), (a) for each fixed \(0\leq t\leq T\), the sequence \(\{\langle f,\eta_{t}^{N}\rangle\}_{N\geq 1}\) is tight, and (b) for any sequence of stopping times \(\tau_{N}\) bounded by \(T\), and for each \(\nu>0\), there exist \(\delta>0\) and \(N_{0}>0\) such that \[\sup_{N>N_{0}}\sup_{t\in[0,\delta]}\mathbb{P}\left\{\left|\int_{ \tau}^{\tau+t}\int_{\mathbb{R}^{d}}\left\{\gamma(x,\eta_{s}^{N})B_{f}^{\theta }(x,\eta_{s}^{N})+f(x)F(x,\eta_{s}^{N})\right\}\eta_{s}^{N}(dx)ds\right|>\nu \right\}<\nu, \tag{6.9}\] \[\qquad\qquad\qquad\text{and}\qquad\sup_{N>N_{0}}\sup_{t\in[0, \delta]}\mathbb{P}\left\{\left|[M^{N}(f)]_{\tau+t}-[M^{N}(f)]_{\tau}\right|> \nu\right\}<\nu. \tag{6.10}\] Tightness of \(\langle f,\eta_{t}^{N}\rangle\) for fixed \(t\) follows from Lemma 6.1 and Markov's inequality, so we focus on the remaining conditions. The proof of Lemma 6.1 provides a uniform bound on \(\gamma B_{f}^{\theta}\), but we only know that \(F\) is bounded above. However, by assumption, for each fixed value of \(m\), \(\sup_{k\leq m}|F(x,k)|\) is uniformly bounded as a function of \(x\). Noting that \(\rho_{F}*\eta\leq\langle 1,\eta\rangle\|\rho_{F}\|_{\infty}\), we can use Lemma 6.2 to choose \(N_{0}\) and \(K\) such that if \(N>N_{0}\), then \[\mathbb{P}\left\{\sup_{0\leq s\leq T}\langle 1,\eta_{s}^{N}\rangle\geq K\right\}< \nu/2.\] We now choose \(\delta_{1}\) so that \[\delta_{1}\|f\|_{\infty}\sup\big{\{}\sup_{x}|F(x,k)|:k\leq K\|\rho_{F}\|_{\infty} \big{\}}<\nu/4,\qquad\sup_{x,\eta}\gamma(x,\eta)B_{f}^{\theta}(x,\eta)\,\delta_{1} <\nu/4,\] so that (6.9) is satisfied with \(\delta=\delta_{1}\). Similarly, \[\big{|}[M^{N}(f)]_{\tau+t}-[M^{N}(f)]_{\tau}\big{|}\] \[\quad=\Big{|}\int_{\tau}^{\tau+t}\frac{\theta}{N}\int_{\mathbb{R}^ {d}}\bigg{\{}\gamma(x,\eta_{s}^{N})\int_{\mathbb{R}^{d}}f^{2}(y)r(y,\eta_{s}^{ N})q_{\theta}(x,dy)+\mu_{\theta}(x,\eta_{s}^{N})f^{2}(x)\bigg{\}}\,\eta_{s}^{N}( dx)ds\Big{|}\] \[\quad=\Big{|}\int_{\tau}^{\tau+t}\frac{\theta}{N}\int_{\mathbb{R} ^{d}}\bigg{\{}\gamma(x,\eta_{s}^{N})\left(2f^{2}(x)r(x,\eta_{s}^{N})+B_{f^{2}} ^{\theta}(x,\eta_{s}^{N})\right)-f^{2}(x)\frac{F(x,\eta_{s}^{N})}{\theta} \bigg{\}}\,\eta_{s}^{N}(dx)ds\Big{|},\] and so using the fact that \(\theta/N\to\alpha<\infty\), an argument entirely analogous to that for (6.9) yields a \(\delta_{2}\) for which (6.10) is satisfied. Taking \(\delta=\min\{\delta_{1},\delta_{2}\}\), the result follows. \(\Box\) We collect the implications of the last two lemmas into a proposition. 
**Proposition 6.4** (Tightness of \((\eta_{t}^{N})_{t\geq 0}\)): _The collection of measure-valued processes \(\{(\eta_{t}^{N})_{t\geq 0}:N\geq 1\}\) is tight in \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d}))\)._ **Proof** Theorem 3.9.1 in Ethier and Kurtz (1986) says that if the collection of \(E\)-valued processes satisfies a compact containment condition (for any \(\epsilon>0\) and \(T>0\), there is a compact set such that the processes stay within that set up to time \(T\) with probability at least \(1-\epsilon\)), then the collection is relatively compact (which is equivalent to tightness since we are working on a Polish space) if and only if \(\{(f(\eta_{t}^{N}))_{t\geq 0}:N\geq 1\}\) is relatively compact for all \(f\) in a dense subset of \(C_{b}(E)\) under the topology of uniform convergence in compact sets. Since \(\{\nu:\langle 1,\nu\rangle\leq K\}\) is compact in \(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d})\), Lemma 6.2 gives compact containment. Lemma 6.3 shows that the real-valued processes \(\langle f,\eta_{t}^{N}\rangle\) are relatively compact for all \(f\in\mathcal{C}_{b}^{\infty}(\overline{\mathbb{R}}^{d})\). Since by the Stone-Weierstrass theorem, the algebra of finite sums and products of terms of this form is dense in the space of bounded continuous functions on \(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d})\), and tightness of \(\langle f,\eta_{t}^{N}\rangle\) extends to sums and products of this form by Lemma B.3, we have relative compactness in \(\mathcal{D}_{[0,\infty)}(\mathcal{M}_{F}(\overline{\mathbb{R}}^{d}))\). \(\Box\) We wish to characterise the limit points of \(\{(\eta_{t}^{N})_{t>0}\}_{N\geq 1}\) as solutions to a martingale problem with generator \(\mathcal{P}^{\infty}\) which we now identify. Most of the work was done in Section 4. First, we record an equivalent formulation of the martingale problems, which were essentially laid out in Subsection 4.1. **Lemma 6.5**: _For \(G\in{\cal C}^{\infty}(\mathbb{R})\) with \(\|G^{\prime\prime\prime}\|_{\infty}<\infty\), and \(f\in{\cal C}^{\infty}_{b}(\overline{\mathbb{R}}^{d})\), define the function \(G_{f}\) by \(G_{f}(\eta):=G(\langle f,\eta\rangle)\). Let \({\cal P}^{N}\) be the generator given by_ \[\begin{split}{\cal P}^{N}G_{f}(\eta):=\theta N\bigg{\langle}& \gamma(x,\eta)\int\left(G(\langle f,\eta\rangle+f(z)/N)-G(\langle f,\eta \rangle)\right)r(z,\eta)q_{\theta}(x,dz)\\ &+\left(G(\langle f,\eta\rangle-f(x)/N)-G(\langle f,\eta \rangle)\right)\mu_{\theta}(x,\eta),\eta(dx)\bigg{\rangle}.\end{split} \tag{6.11}\] _The process \((\eta^{N}_{t})_{t\geq 0}\) of Definition 2.4 is the unique solution to the \(({\cal P}^{N},\eta_{0})\)-martingale problem, i.e., if_ \[M_{t}:=G_{f}(\eta^{N}_{t})-G_{f}(\eta^{N}_{0})-\int_{0}^{t}{\cal P}^{N}G_{f}( \eta^{N}_{s})ds\] _is a martingale (with respect to the natural \(\sigma\)-field)._ _Furthermore, let \({\cal P}^{\infty}\) be the generator given by_ \[\begin{split}{\cal P}^{\infty}G_{f}(\eta):=G^{\prime}(\langle f,\eta\rangle)\big{\langle}\gamma(x,\eta){\cal B}\left(f(\cdot)r(\cdot,\eta) \right)(x)+f(x)F(x,\eta),\eta(dx)\big{\rangle}\\ +\alpha G^{\prime\prime}(\langle f,\eta\rangle)\big{\langle} \gamma\left(x,\eta\right)r\left(x,\eta\right)f^{2}(x),\eta(dx)\big{\rangle}. 
\end{split} \tag{6.12}\] _A process \((\eta^{\infty}_{t})_{t\geq 0}\) satisfies the martingale characterization of equations (2.7) and (2.8) if and only if it is a solution to the \(({\cal P}^{\infty},\eta^{\infty}_{0})\)-martingale problem, i.e., if for all such test functions_ \[M_{t}:=G_{f}(\eta^{\infty}_{t})-G_{f}(\eta^{\infty}_{0})-\int_{0}^{t}{\cal P} ^{\infty}G_{f}(\eta^{\infty}_{s})ds\] _is a martingale (with respect to the natural \(\sigma\)-field)._ **Lemma 6.6** (Characterisation of limit points): _Suppose that \((\eta^{N}_{0})_{N\geq 1}\) converges weakly to \(\eta_{0}\) as \(N\to\infty\). Then any limit point of \(\{(\eta^{N}_{t})_{t\geq 0}\}_{N\geq 1}\) in \({\cal D}_{[0,\infty)}({\cal M}_{F}(\overline{\mathbb{R}}^{d}))\) is a solution to the martingale problem for \(({\cal P}^{\infty},\eta_{0})\)._ **Proof** We use Theorem 4.8.2 in Ethier and Kurtz [1986]. First observe that the set of functions \(\{G_{f}(\eta):=G(\langle f,\eta\rangle),\ G\in{\cal C}^{\infty}(\mathbb{R}), \|G^{\prime\prime\prime}\|_{\infty}<\infty,\ f\in{\cal C}^{\infty}_{b}( \overline{\mathbb{R}}^{d})\}\) is separating on \({\cal M}_{F}(\overline{\mathbb{R}}^{d})\). Therefore, it suffices to show that, for any \(t>0\) and \(\tau>0\), \[\lim_{N\to\infty}\mathbb{E}\left[\left(G_{f}(\eta^{N}_{t+\tau})-G_{f}(\eta^{N }_{t})-\int_{t}^{t+\tau}{\cal P}^{\infty}G_{f}(\eta^{N}_{s})ds\right)\prod_{i= 1}^{k}h_{i}(\eta^{N}_{t_{i}})\right]=0 \tag{6.13}\] for all \(k\geq 0\), \(0\leq t_{1}<t_{2}<\ldots<t_{k}\leq t<t+\tau\), and bounded continuous functions \(h_{1},\ldots,h_{k}\) on \({\cal M}_{F}(\overline{\mathbb{R}}^{d})\). Since \((\eta^{N}_{t})_{t\geq 0}\) is Markov, the tower property gives that, for each \(N\), \[\mathbb{E}\left[\left(G_{f}(\eta^{N}_{t+\tau})-G_{f}(\eta^{N}_{t})-\int_{t}^{t+ \tau}{\cal P}^{N}G_{f}(\eta^{N}_{s})ds\right)\prod_{i=1}^{k}h_{i}(\eta^{N}_{t_ {i}})\right]=0. \tag{6.14}\]
Combining with equation (4.5), and the fact that \(\mu_{\theta}(x,\eta)\to r(x,\eta)\gamma(x,\eta)\) as \(\theta\to\infty\), we have pointwise convergence: \[\lim_{N\to\infty}|\mathcal{P}^{N}G(\langle f,\eta\rangle)-\mathcal{P}^{\infty }G(\langle f,\eta\rangle)|=0. \tag{6.18}\] To conclude convergence of the expectation, we would like to apply the Dominated Convergence Theorem in (6.15). Recall that \(f\) and \(G\) and their derivatives are bounded, and so \(\gamma(x,\eta)\) is bounded independent of \(\theta\). Since \(\theta/N^{2}\to 0\), rearranging as in (4.3) and using the convergence of (4.4), we deduce that we can dominate \(\left|\mathcal{P}^{N}G_{f}(\eta_{s}^{N})-\mathcal{P}^{\infty}G_{f}(\eta_{s}^{N })\right|\) by a constant multiple of \(\langle 1+|F(x)|,\eta_{s}(dx)\rangle\). Since \(F\) is bounded above, there is a constant \(K\) such that \(|F|\leq K-F\) so that, exactly as in equation (6.8), we can check that \[\mathbb{E}\Big{[}\left.\int_{t}^{t+\tau}\big{\langle}|F(x,\eta_{s}^{N})|,\eta _{s}^{N}(dx)\big{\rangle}ds\right|\mathcal{F}_{t}\Big{]}<\infty,\] which concludes our proof. \(\square\) The last step in the proof of Theorem 2.10 is to check that any limit point \((\eta_{t})_{t\geq 0}\) of \(\{(\eta_{t}^{N})_{t\geq 0}\}_{N\geq 1}\) actually takes its values in \(\mathcal{M}_{F}(\mathbb{R}^{d})\), that is, "no mass has escaped to infinity". **Lemma 6.7**: _Under the assumptions of Theorem 2.10, let \((\eta_{t})_{t\geq 0}\) be a limit point of \(\{(\eta_{t}^{N})_{t\geq 0}\}_{N\geq 1}\). For any \(\delta>0\),_ \[\mathbb{P}\big{[}\eta_{t}\big{(}\{\|x\|>R\}\big{)}>\delta\big{]}\to 0\qquad \text{ as }R\to\infty.\] **Proof** [Sketch] Take \(f_{0}(x)\) as in the statement of Theorem 2.10, i.e., \(f_{0}\) is nonnegative, grows to infinity as \(x\to\infty\), has uniformly bounded first and second derivatives, and has \(\langle f_{0},\eta_{0}^{N}\rangle\) uniformly bounded in \(N\). We take a sequence of test functions \(f_{n}\) that increase to the function \(f_{0}\) and having uniformly bounded first and second derivatives, so that there is a (single) \(C\) from Condition 6 of Assumptions 2.8 such that \(\gamma(x,\eta)B_{f_{n}}^{\theta}(x,\eta)\leq C(1+f_{n}(x))\) for all \(x\), \(\eta\), and \(f_{n}\). Then, just as we arrived at equation (6.5), \[\mathbb{E}[\langle f_{n}(x),\eta_{t}^{N}(dx)\rangle]\leq\mathbb{E}[\langle f_{ n}(x),\eta_{0}^{N}(dx)\rangle]+C\int_{0}^{t}\mathbb{E}\Big{[}\Big{\langle}f_{n}( x),\eta_{s}^{N}(dx)\Big{\rangle}\Big{]}ds,\] with the same constant for all \(n\) and all \(N\). Gronwall's inequality then implies that \(\mathbb{E}[\langle f_{n},\eta_{t}^{N}\rangle]\leq C^{\prime}\) for some \(C^{\prime}\) independent of \(n\), \(N\), and \(t\in[0,T]\). 
By first taking \(N\to\infty\) and then \(n\to\infty\), we find that \(\mathbb{E}[\langle f_{0},\eta_{t}(dx)\rangle]\leq C^{\prime}\) for \(t\in[0,T]\), and since \(f_{0}\to\infty\) as \(|x|\to\infty\), an application of Markov's inequality tells us that for any \(\delta>0\), \[\mathbb{P}\big{\{}\eta_{t}(\{x:\|x\|>R\})>\delta\big{\}}\leq\frac{\mathbb{E}[ \langle f_{0},\eta_{t}\rangle]}{\inf_{\{x:\|x\|\geq R\}}f_{0}(x)}\to 0\qquad \text{as }R\to\infty.\] \(\square\) ### Convergence of some nonlocal equations to classical PDEs It is natural to conjecture that when the limit of the rescaled population process that we obtained in the previous section solves a nonlocal PDE, if we further scale the kernels \(\rho_{r}\), \(\rho_{\gamma}\), and \(\rho_{F}\) by setting \(\rho^{\epsilon}(\cdot)=\rho(\cdot/\epsilon)/\epsilon\), as \(\epsilon\to 0\), the corresponding solutions should converge to a limiting population density that solves the corresponding "classical" PDE. We verify this in two examples; in the first the nonlocal equation is a reaction-diffusion equation with the "nonlocality" only appearing in the reaction term; in the second the nonlocal PDE is a special case of a nonlinear porous medium equation. These, in particular, capture the examples that we explored in Section 3.2. #### 6.3.1 Reaction-diffusion equation limits In this subsection we prove Proposition 2.15. _The conditions of the proposition are in force throughout this subsection_. The proof rests on a Feynman-Kac representation. We write \((Z_{t})_{t\geq 0}\) for a diffusion with generator \(\mathcal{B}^{*}\) and denote its transition density by \(f_{t}(x,y)\). The first step is a regularity result for this density. **Lemma 6.8**: _Fix \(T>0\). There exists a constant \(K=K(T)>0\) such that, for any \(x,y\in\mathbb{R}^{d}\) and \(t\in[0,T]\),_ \[\int|f_{t}(x,z)-f_{t}(y,z)|dz\leq\frac{\|x-y\|}{\sqrt{t}}K. \tag{6.19}\] **Proof** We first use the Intermediate Value Theorem to obtain the bound \[\int|f_{t}(x,z)-f_{t}(y,z)|dz\leq\int\|x-y\|\|\nabla f_{t}(w,z)\|dz\] where \(\nabla\) acts on the first coordinate only and \(w\) is in the line segment \([x,y]\) joining \(x\) to \(y\). Under our assumptions on \(b\) and \(C\), equation (1.3) of Sheu [1991], gives existence of constants \(\lambda=\lambda(T)>0\) and \(K\) such that, \[\|\nabla f_{t}(w,z)\|\leq\frac{K}{\sqrt{t}}p_{\lambda t}(w,z),\] where \(p_{s}(x,y)\) is the Brownian transition density. Hence, \[\int|f_{t}(x,z)-f_{t}(y,z)|dz\leq K\frac{\|x-y\|}{\sqrt{t}}\int p_{\lambda t}( w,z)dz=K\frac{\|x-y\|}{\sqrt{t}}.\] \(\Box\) **Lemma 6.9**: _Fix \(T>0\). Let \(x,y\in\mathbb{R}^{d}\), \(t\in[0,T]\), and denote by \((Z^{y}_{t})_{t\geq 0}\) and \((Z^{x}_{t})_{t\geq 0}\) independent copies of the diffusion \((Z_{t})_{t\geq 0}\) starting from \(y\) and \(x\) respectively. 
There exists a constant \(K=K(T)>0\) such that,_ \[\mathbb{E}[\|Z^{y}_{t}-Z^{x}_{t}\|]\leq K(\sqrt{t}+\|y-x\|).\] **Proof** First we write, \[\mathbb{E}[\|Z^{y}_{t}-Z^{x}_{t}\|]=\int\int\|u-v\|f_{t}(y,u)f_{t}(x,v)dudv.\] Under our regularity assumptions on \(C\), \(b\), using equation (1.2) of Sheu [1991], there exist constants \(K\), \(\lambda=\lambda(T)>0\) for which, \[f_{t}(y,u)\leq Kp_{\lambda t}(y,u).\] It then follows that, \[\mathbb{E}[\|Z^{y}_{t}-Z^{x}_{t}\|]\leq\int\int\|u-v\|K^{2}p_{\lambda t}(y,u) p_{\lambda t}(x,v)dvdu=K^{2}\mathbb{E}[\|B^{y}_{\lambda t}-B^{x}_{\lambda t}\|], \tag{6.20}\] where \((B^{y}_{t})_{t\geq 0}\) and \((B^{x}_{t})_{t\geq 0}\) are independent Brownian motions starting at \(y\) and \(x\) respectively. Using the triangle inequality, and writing \((B^{0}_{t})_{t\geq 0}\) for a Brownian motion started from the origin, \[\mathbb{E}[\|B^{y}_{\lambda t}-B^{x}_{\lambda t}\|]\leq\|y-x\|+\mathbb{E}[\|B^ {0}_{2\lambda t}\|]\leq\|y-x\|+C\sqrt{t}. \tag{6.21}\] Substituting (6.21) in (6.20) gives the result. We use the representations of the solutions to equations (2.12) and (2.11) respectively: \[\varphi_{t}(x) =\mathbb{E}_{x}\Big{[}\varphi_{0}(Z_{t})+\int_{0}^{t}\varphi_{s}(Z_ {s})F(\varphi_{s}(Z_{s}))ds\Big{]}, \tag{6.22}\] \[\varphi_{t}^{\epsilon}(x) =\mathbb{E}_{x}\Big{[}\varphi_{0}(Z_{t})+\int_{0}^{t}\varphi_{s}^ {\epsilon}(Z_{s})F(\rho_{F}^{\epsilon}*\varphi_{s}^{\epsilon}(Z_{s}))ds\Big{]}, \tag{6.23}\] from which \[\varphi_{t}(x)-\varphi_{t}^{\epsilon}(x)=\mathbb{E}_{x}\left[\int_{0}^{t} \Big{(}\varphi_{s}(Z_{s})F\big{(}\varphi_{s}(Z_{s})\big{)}-\varphi_{s}^{ \epsilon}(Z_{s})F\big{(}\rho_{\epsilon}*\varphi_{s}^{\epsilon}(Z_{s})\big{)} \Big{)}ds\right], \tag{6.24}\] where \(\mathbb{E}_{x}\) denotes expectation for \(Z\) with \(Z_{0}=x\). The key to our proof of Proposition 2.15 will be to replace \(F(\varphi_{s}(Z_{s}))\) by \(F(\rho_{F}^{\epsilon}*\varphi_{s}(Z_{s}))\) in this expression. We achieve this through three lemmas. First we need a uniform bound on \(\varphi\) and \(\varphi^{\epsilon}\). **Lemma 6.10**: _For any \(T>0\) there exists \(M=M(T,\|\varphi_{0}\|)>0\) such that, for all \(0\leq t\leq T\):_ \[\max\{\|\varphi_{t}(\cdot)\|_{\infty},\|\varphi_{t}^{\epsilon}(\cdot)\|_{ \infty}\}<M.\] **Proof** Using that \(\varphi_{0}\) and \(F\) are bounded above, from the representation (6.22), we have \[\varphi_{t}(x)\leq\|\varphi_{0}\|_{\infty}+K\mathbb{E}\Big{[}\int_{0}^{t} \varphi_{s}(Z_{s})ds\Big{]}.\] In particular, \[\|\varphi_{t}(\cdot)\|_{\infty}\leq\|\varphi_{0}\|_{\infty}+K\int_{0}^{t}\| \varphi_{s}(\cdot)\|_{\infty}ds,\] so, by Gronwall's inequality, \[\|\varphi_{t}(\cdot)\|_{\infty}\leq\|\varphi_{0}\|_{\infty}\exp\left(KT\right).\] Similarly, \(\|\varphi_{t}^{\epsilon}(\cdot)\|_{\infty}\leq\|\varphi_{0}\|_{\infty}\exp \left(KT\right)\). \(\square\) We also need a continuity estimate for \(\varphi\). **Lemma 6.11**: _Let \(T>0\). There exists a constant \(K=K(T,\|\varphi_{0}\|_{\infty})>0\) and \(\delta_{0}=\delta_{0}(T,\|\varphi_{0}\|_{\infty})>0\) such that for all \(0<\delta<\delta_{0}\) and \(0\leq t\leq T\),_ \[\|x-y\|<\delta^{3}\Rightarrow|\varphi_{t}(x)-\varphi_{t}(y)|<K\delta.\] **Proof** First we need some notation. Fix \(T>0\) and write \(M\) for the corresponding constant from Lemma 6.10. Let \(\|F\|_{M}=\sup_{m\in[0,M]}|F(m)|\). 
We reserve \(\widehat{K}\) for the constant on the right hand side of equation (6.19) and \(\widetilde{K}\) for the constant in Lemma 6.9, and write \(K_{\varphi_{0}}\) for the Lipschitz constant of \(\varphi_{0}\). Set \[\delta_{0}=\min\Big{(}\frac{1}{\|F\|_{M}^{2}},\frac{1}{Me\big{(}2\|F\|_{M}+ \widehat{K}\big{)}},\frac{1}{\widetilde{K}K_{\varphi_{0}}+2\|F\|_{M}M},1\Big{)}.\] In what follows we take \(0<\delta<\delta_{0}\). We first prove that the result holds if \(t<\delta^{2}\). As before let \(Z_{t}^{x}\) and \(Z_{t}^{y}\) be independent copies of the diffusion \(Z_{t}\) starting at \(x\) and \(y\) respectively. From our representation (6.22) and Lemma 6.10, we can write: \[\begin{split}|\varphi_{t}(x)-\varphi_{t}(y)|&\leq \big{|}\mathbb{E}_{x}[\varphi_{0}(Z_{t})]-\mathbb{E}_{y}[\varphi_{0}(Z_{t})] \big{|}+2\|F\|_{M}Mt\\ &\leq\mathbb{E}[|\varphi_{0}(Z_{t}^{x})-\varphi_{0}(Z_{t}^{y})|] +2\|F\|_{M}Mt\\ &\leq K_{\varphi_{0}}\mathbb{E}[\|Z_{t}^{x}-Z_{t}^{y}\|]+2\|F\|_{M }Mt\\ &\leq\widetilde{K}K_{\varphi_{0}}(\sqrt{t}+\|y-x\|)+2\|F\|_{M}Mt \\ &\leq\widetilde{K}K_{\varphi_{0}}(\delta+\delta^{3})+2\|F\|_{M} M\delta^{2}\leq(\widetilde{K}K_{\varphi_{0}}+1)\delta,\end{split}\] where we have used Lemma 6.9 in the fourth inequality and the definition of \(\delta_{0}\) in the last inequality. Suppose now that \(\delta^{2}<t\). We will follow the pattern in Lemma 2.2 of Penington [2017]. First, note that by the Feynman-Kac formula we have an alternative representation for \(\varphi_{t}(x)\): for any \(t^{\prime}<t\), \[\varphi_{t}(x)=\mathbb{E}_{x}\Big{[}\varphi_{t-t^{\prime}}(Z_{t^{\prime}}) \exp\big{(}\int_{0}^{t^{\prime}}F(\varphi_{t-s}(Z_{s}))ds\big{)}\Big{]}.\] Therefore, setting \(t^{\prime}=\delta^{2}\) and using Lemma 6.10, for all \(z\), \[e^{-\delta^{2}\|F\|_{M}}\mathbb{E}_{z}\big{[}\varphi_{t-\delta^{2}}(Z_{\delta ^{2}})\big{]}\leq\varphi_{t}(z)\leq e^{\delta^{2}\|F\|_{M}}\mathbb{E}_{z} \big{[}\varphi_{t-\delta^{2}}(Z_{\delta^{2}})\big{]}.\] We can then deduce that \[\begin{split}\varphi_{t}(x)-\varphi_{t}(y)&\leq e^{ \delta^{2}\|F\|_{M}}\mathbb{E}_{x}\big{[}\varphi_{t-\delta^{2}}(Z_{\delta^{2} })\big{]}-e^{-\delta^{2}\|F\|_{M}}\mathbb{E}_{y}\big{[}\varphi_{t-\delta^{2}} (Z_{\delta^{2}})\big{]}\\ &=e^{\delta^{2}\|F\|_{M}}\Big{(}\mathbb{E}_{x}\big{[}\varphi_{t- \delta^{2}}(Z_{\delta^{2}})\big{]}-\mathbb{E}_{y}\big{[}\varphi_{t-\delta^{2} }(Z_{\delta^{2}})\big{]}\Big{)}\\ &\qquad\qquad+\big{(}e^{\delta^{2}\|F\|_{M}}-e^{-\delta^{2}\|F\|_ {M}}\big{)}\mathbb{E}_{y}\big{[}\varphi_{t-\delta^{2}}(Z_{\delta^{2}})\big{]} \\ &\leq e^{\delta^{2}\|F\|_{M}}\Big{(}\mathbb{E}_{x}\big{[}\varphi_{ t-\delta^{2}}(Z_{\delta^{2}})\big{]}-\mathbb{E}_{y}\big{[}\varphi_{t-\delta^{2} }(Z_{\delta^{2}})\big{]}\big{)}+M\big{(}e^{\delta^{2}\|F\|_{M}}-e^{-\delta^{2} \|F\|_{M}}\big{)}.\end{split} \tag{6.25}\] To bound the difference of expected values in the last equation note that, by using again Lemma 6.10, \[\begin{split}\mathbb{E}_{x}&\big{[}\varphi_{t- \delta^{2}}(Z_{\delta^{2}})\big{]}-\mathbb{E}_{y}[\varphi_{t-\delta^{2}}(Z_{ \delta^{2}})\big{]}\\ &=\int\varphi_{t-\delta^{2}}(z)(f_{\delta^{2}}(x,z)-f_{\delta^{2} }(y,z))dz\\ &\leq M\int\big{|}f_{\delta^{2}}(x,z)-f_{\delta^{2}}(y,z)\big{|}dz \\ &\leq M\widehat{K}\frac{\|x-y\|}{\delta}\leq M\widehat{K}\delta ^{2},\end{split}\] where we have used Lemma 6.8 and that \(\|x-y\|<\delta^{3}\). 
Substituting in (6.25), \[\varphi_{t}(x)-\varphi_{t}(y) \leq e^{\delta^{2}\|F\|_{M}}\left(M\widehat{K}\delta^{2}+M-Me^{-2 \delta^{2}\|F\|_{M}}\right)\] \[\leq e^{\delta^{2}\|F\|_{M}}\left(M\widehat{K}\delta^{2}+2M \delta^{2}\|F\|_{M}\right)\] \[\leq e\left(M\widehat{K}+2M\|F\|_{M}\right)\delta^{2}\leq\delta,\] where the last two inequalities follow from the definition of \(\delta_{0}\). Interchanging \(x\) and \(y\) yields the same bound for \(\varphi_{t}(y)-\varphi_{t}(x)\), and the result follows. \(\square\) We proceed to control the difference between \(F(\varphi)\) and \(F(\rho_{F}^{\epsilon}*\varphi)\). Note first that since \(\rho_{F}\in L^{1}\), \[I(\epsilon):=\int_{\{\|y\|>\epsilon^{3/4}\}}\rho_{F}^{\epsilon}(y)dy=\int_{\{ \|y\|>\epsilon^{-1/4}\}}\rho_{F}(y)dy\to 0\qquad\text{ as }\epsilon\to 0.\] **Lemma 6.12**: _Let \(T>0\). There exists a constant \(C=C(T,\|\varphi_{0}\|_{\infty})>0\) such that, for all \(0\leq t\leq T\), for all \(\epsilon\) small enough,_ \[\|\varphi_{t}(\cdot)-\rho_{F}^{\epsilon}*\varphi_{t}(\cdot)\|_{\infty}\leq C( I(\epsilon)+\epsilon^{1/4}). \tag{6.26}\] _Furthermore, there is a constant \(\widetilde{C}(T,\|\varphi_{0}\|_{\infty})=\widetilde{C}\) such that, for all \(0\leq t\leq T\),_ \[\|F(\varphi_{t}(\cdot))-F(\rho_{F}^{\epsilon}*\varphi_{t}(\cdot))\|_{\infty} \leq\widetilde{C}\big{(}I(\epsilon)+\epsilon^{1/4}\big{)}. \tag{6.27}\] **Proof** Let \(\epsilon<\delta_{0}^{4}\), with \(\delta_{0}\) from Lemma 6.11. Then, \[|\varphi_{t}(x)-\rho_{F}^{\epsilon}*\varphi_{t}(x)| \leq\int_{\|x-y\|>\epsilon^{3/4}}\rho_{F}^{\epsilon}(x-y)|\varphi _{t}(y)-\varphi_{t}(x)|dy\] \[+\int_{\|x-y\|\leq\epsilon^{3/4}}\rho_{F}^{\epsilon}(x-y)|\varphi _{t}(y)-\varphi_{t}(x)|dy\] \[\leq 2M\int_{\|x-y\|>\epsilon^{3/4}}\rho_{F}^{\epsilon}(x-y)dy+ \int_{\|x-y\|\leq\epsilon^{3/4}}\rho_{F}^{\epsilon}(x-y)K\epsilon^{1/4}dy\] \[\leq 2MI(\epsilon)+K\epsilon^{1/4},\] where we used the estimates of Lemma 6.10 and Lemma 6.11. This proves (6.26). For (6.27), let \(L_{M}\) be the (uniform) Lipschitz constant of \(F\) on \([0,M]\), with \(M\) still taken from Lemma 6.10. Then, \[\|F(\varphi_{t}(\cdot))-F(\rho_{F}^{\epsilon}*\varphi_{t}(\cdot)) \|_{\infty} \leq L_{M}\|\varphi_{t}(\cdot)-\rho_{F}^{\epsilon}*\varphi_{t}( \cdot)\|_{\infty}\] \[\leq L_{M}(2MI(\epsilon)+K\epsilon^{1/4}),\] which proves (6.27). \(\square\) **Proof** [of Proposition 2.15] Let \(\epsilon\) be small enough that Lemma 6.12 holds. We use the notation \(\widehat{\delta}(\epsilon)\) for the quantity on the right hand side of (6.27). 
Then from the representation (6.24) and Lemma 6.12 we can write, \[|\varphi_{t}(x)-\varphi_{t}^{\epsilon}(x)|\] \[\leq\mathbb{E}_{x}\left[\int_{0}^{t}\Big{|}\varphi_{s}(Z_{s})F( \rho_{F}^{\epsilon}\ast\varphi_{s}(Z_{s}))-\varphi_{s}^{\epsilon}(Z_{s})F \big{(}\rho_{F}^{\epsilon}\ast\varphi_{s}^{\epsilon}(Z_{s})\big{)}\Big{|}ds \right]+Mt\widehat{\delta}(\epsilon)\] \[\leq\mathbb{E}_{x}\left[\int_{0}^{t}\big{|}F(\rho_{F}^{\epsilon} \ast\varphi_{s}^{\epsilon}(Z_{s}))\big{|}\cdot\big{|}\varphi_{s}^{\epsilon}(Z _{s})-\varphi_{s}(Z_{s})\big{|}ds\right]\] \[\qquad+\mathbb{E}_{x}\left[\int_{0}^{t}|\varphi_{s}(Z_{s})| \big{|}F(\rho_{F}^{\epsilon}\ast\varphi_{s}^{\epsilon}(Z_{s}))-F(\rho_{F}^{ \epsilon}\ast\varphi_{s}(Z_{s}))\big{|}ds\right]+Mt\widehat{\delta}(\epsilon)\] \[\leq\|F\|_{M}\int_{0}^{t}\|\varphi_{s}^{\epsilon}(\cdot)-\varphi_ {s}(\cdot)\|_{\infty}ds+ML_{M}\int_{0}^{t}\|\rho_{F}^{\epsilon}\ast\varphi_{s} ^{\epsilon}(\cdot)-\rho_{F}^{\epsilon}\ast\varphi_{s}(\cdot)\|_{\infty}ds+Mt \widehat{\delta}(\epsilon)\] \[\leq(\|F\|_{M}+ML_{M})\int_{0}^{t}\|\varphi_{s}^{\epsilon}(\cdot )-\varphi_{s}(\cdot)\|_{\infty}ds+Mt\widehat{\delta}(\epsilon),\] where the second inequality is the triangle inequality, and the third is Lemma 6.10. An application of Gronwall's inequality then yields, \[\|\varphi_{t}^{\epsilon}(\cdot)-\varphi_{t}(\cdot)\|_{\infty} \leq Mt\widehat{\delta}(\epsilon)\exp(t(\|F\|_{M}+ML_{M}))\] \[\leq MT\widehat{\delta}(\epsilon)\exp(T(\|F\|_{M}+ML_{M})),\] giving the result, since \(\widehat{\delta}\to 0\) as \(\epsilon\to 0\). \(\Box\) #### 6.3.2 Porous Medium Equation In this subsection we prove Proposition 2.18. To ease notation, we present the proof in \(d=1\) (although we retain the notation \(\nabla\)). However, to recall the dependence on \(\epsilon\) we write \(\rho^{\epsilon}\) for \(\rho_{\gamma}\). It should be clear that it extends almost without change to higher dimensions. Recall that we are concerned with non-negative solutions to the equation (2.13): \[\partial_{t}\psi_{t}^{\epsilon}(x)=\Delta\left(\psi_{t}^{\epsilon}\,\rho^{ \epsilon}\ast\psi_{t}^{\epsilon}\right)(x)+\psi_{t}^{\epsilon}(x)\left(1-\rho ^{\epsilon}\ast\psi_{t}^{\epsilon}(x)\right).\] and we assume that \(\rho=\zeta\ast\check{\zeta}\) with \(\zeta\) a rapidly decreasing function and \(\check{\zeta}(x)=\zeta(-x)\). The example we have in mind is \(\zeta\) (and therefore \(\rho\)) being the density of a mean zero Gaussian random variable. We shall prove that under the assumptions of Proposition 2.18, as \(\epsilon\to 0\), we have convergence to the solution to the porous medium equation with logistic growth, equation (1.2): \[\partial_{t}\psi_{t}(x)=\Delta\left(\psi_{t}^{2}\right)(x)+\psi_{t}(x)\left(1 -\psi_{t}(x)\right).\] We work on the time interval \([0,T]\). We will require a lower bound on \(\int\psi_{t}^{\epsilon}(x)\log\psi_{t}^{\epsilon}(x)dx\) which we record as a lemma. **Lemma 6.13**: _Suppose that there exists \(\lambda\in(0,1)\) and \(C<\infty\), both independent of \(\epsilon\), such that \(\int\exp(\lambda|x|)\psi_{0}^{\epsilon}(x)dx<C\). Then there exists a constant \(K<\infty\), independent of \(\epsilon\), such that \(\int\psi_{t}^{\epsilon}(x)\log\psi_{t}^{\epsilon}(x)dx>-K\) for all \(t\in[0,T]\)._ **Proof** First observe that, since \(x\log x\) is bounded below, \(\int_{-1}^{1}\psi_{t}^{\epsilon}(x)\log\psi_{t}^{\epsilon}(x)dx\) is bounded below, and recall that \(\psi_{t}^{\epsilon}(x)\geq 0\). 
Now consider \[\frac{d}{dt}\int\exp(\lambda x)\psi_{t}^{\epsilon}(x)dx=\int\exp( \lambda x)\Delta\big{(}\psi_{t}^{\epsilon}\,\rho^{\epsilon}*\psi_{t}^{\epsilon }\big{)}(x)dx\\ +\int\exp(\lambda x)\psi_{t}^{\epsilon}(x)\big{(}1-\rho^{ \epsilon}*\psi_{t}^{\epsilon}(x)\big{)}dx\\ =\int(\lambda^{2}-1)\exp(\lambda x)\psi_{t}^{\epsilon}(x)\rho_{ \epsilon}*\psi_{t}^{\epsilon}(x)dx+\int\exp(\lambda x)\psi_{t}^{\epsilon}(x)dx \\ \leq\int\exp(\lambda x)\psi_{t}^{\epsilon}(x)dx, \tag{6.28}\] and so, by Gronwall's inequality, \(\int\exp(\lambda x)\psi_{t}^{\epsilon}(x)dx\) is uniformly bounded on \([0,T]\). In particular, combining with the Mean Value Theorem, we find \[\int_{x}^{x+1}\psi_{t}^{\epsilon}(y)dy\leq C\exp(-\lambda x),\] where the constant \(C\) is independent of \(x\geq 1\). A fortiori, \[\int_{x}^{x+1}\psi_{t}^{\epsilon}(y)\mathbf{1}_{\psi_{t}^{\epsilon}(y)\leq 1 }dy\leq C\exp(-\lambda x). \tag{6.29}\] Now the function \(\psi\mapsto\mathbf{1}_{0\leq\psi\leq 1}\psi|\log\psi|\) is concave, and so using Jensen's inequality and (6.29), \[\int_{x}^{x+1}\psi_{t}^{\epsilon}(y)|\log\psi_{t}^{\epsilon}(y)|\mathbf{1}_{ \psi_{t}^{\epsilon}(y)\leq 1}dy\leq C^{\prime}x\exp(-\lambda x).\] Evidently a symmetric argument applies for \(x\leq-1\). Summing over \(x\), and using that \(\psi\log\psi\geq-\psi|\log\psi|\mathbf{1}_{\psi\leq 1}\), we find \[\int\psi_{t}^{\epsilon}(x)\log\psi_{t}^{\epsilon}(x)dx\geq-C^{\prime\prime} \sum_{x=1}^{\infty}x\exp(-\lambda x)>-K>-\infty,\] as required. \(\square\) **Proof** [Proof of Proposition 2.18] First observe that \[\int\psi_{t}^{\epsilon}(x)\,\rho^{\epsilon}*\psi_{t}^{\epsilon}( x)dx=\int\int\int\psi_{t}^{\epsilon}(x)\psi_{t}^{\epsilon}(x-y)\zeta^{ \epsilon}(y-z)\tilde{\zeta}^{\epsilon}(z)dzdydx\\ =\int\int\int\psi_{t}^{\epsilon}(\widetilde{x}-\widetilde{z}) \psi_{t}^{\epsilon}(\widetilde{x}-\widetilde{y})\zeta^{\epsilon}(\widetilde{y} )\zeta^{\epsilon}(\widetilde{z})d\widetilde{z}d\widetilde{y}d\widetilde{x}= \int\left(\zeta^{\epsilon}*\psi_{t}^{\epsilon}(x)\right)^{2}dx,\] where we have set \(\widetilde{x}=x-z\), \(\widetilde{y}=y-z\), \(\widetilde{z}=-z\). Now note that \[\frac{d}{dt}\int\psi_{t}^{\epsilon}(x)dx = \int\Delta\big{(}\psi_{t}^{\epsilon}\,\rho^{\epsilon}*\psi_{t}^{ \epsilon}\big{)}(x)dx+\int\psi_{t}^{\epsilon}(x)\big{(}1-\rho^{\epsilon}*\psi_ {t}^{\epsilon}(x)\big{)}dx\] \[= \int\psi_{t}^{\epsilon}(x)dx-\int\big{(}\zeta^{\epsilon}*\psi_{t }^{\epsilon}(x)\big{)}^{2}dx.\] Thus, Gronwall's inequality implies that \(\int\psi_{t}^{\epsilon}(x)dx\) is uniformly bounded above in \(\epsilon\) and \(t\in[0,T]\). Note that this also then gives a uniform bound on the rate of change of \(\int\psi_{t}^{\epsilon}(x)dx\), and since we are working on \([0,T]\) this will be enough to give continuity in time of the \(L^{1}\) norm of the limit when we pass to a convergent subsequence. 
Now consider \[\frac{d}{dt}\int\psi_{t}^{\epsilon}\log\psi_{t}^{\epsilon}dx = \int(1+\log\psi_{t}^{\epsilon})\left[\Delta\big{(}\psi_{t}^{ \epsilon}\,\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{)}+\psi_{t}^{\epsilon} \big{(}1-\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{)}\right]dx \tag{6.30}\] \[= \int(1+\log\psi_{t}^{\epsilon})\left[\nabla\Big{(}\psi_{t}^{ \epsilon}\,\nabla(\rho^{\epsilon}*\psi_{t}^{\epsilon})+\nabla\psi_{t}^{ \epsilon}\,\rho^{\epsilon}*\psi_{t}^{\epsilon}\Big{)}+\psi_{t}^{\epsilon} \big{(}1-\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{)}\right]dx\] \[= \int\left[-\frac{\nabla\psi_{t}^{\epsilon}}{\psi_{t}^{\epsilon}} \Big{(}\psi_{t}^{\epsilon}\,\nabla(\rho^{\epsilon}*\psi_{t}^{\epsilon})+ \nabla\psi_{t}^{\epsilon}\,\rho^{\epsilon}*\psi_{t}^{\epsilon}\Big{)}+(1+\log \psi_{t}^{\epsilon})\psi_{t}^{\epsilon}(1-\rho^{\epsilon}*\psi_{t}^{\epsilon} )\right]dx\] \[= -\int\left(\nabla(\zeta^{\epsilon}*\psi_{t}^{\epsilon})\right)^{2 }dx-\int(\nabla\psi_{t}^{\epsilon})^{2}\frac{\rho^{\epsilon}*\psi_{t}^{ \epsilon}}{\psi_{t}^{\epsilon}}dx\] \[+\int\big{[}\psi_{t}^{\epsilon}+\psi_{t}^{\epsilon}\log\psi_{t}^ {\epsilon}\big{(}1-\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{)}-\psi_{t}^{ \epsilon}\,\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{]}\,dx\] \[= -\int\left(\nabla(\zeta^{\epsilon}*\psi_{t}^{\epsilon})\right)^{ 2}dx-\int(\nabla\psi_{t}^{\epsilon})^{2}\frac{\rho^{\epsilon}*\psi_{t}^{ \epsilon}}{\psi_{t}^{\epsilon}}dx-\int(\zeta^{\epsilon}*\psi_{t}^{\epsilon})^ {2}dx\] \[+\int\big{[}\psi_{t}^{\epsilon}+\psi_{t}^{\epsilon}\log\psi_{t}^ {\epsilon}\big{(}1-\rho^{\epsilon}*\psi_{t}^{\epsilon}\big{)}\big{]}\,dx.\] The first three terms are negative; and we already saw that the \(L^{1}\) norm of \(\psi_{t}^{\epsilon}\) is uniformly bounded. Moreover, since \(\psi_{t}^{\epsilon}\log\psi_{t}^{\epsilon}\) is uniformly bounded below and \(\int\rho^{\epsilon}(x)dx=1\), \[-\int\psi_{t}^{\epsilon}\log\psi_{t}^{\epsilon}\,\rho^{\epsilon}*\psi_{t}^{ \epsilon}dx\leq C\int\rho^{\epsilon}*\psi_{t}^{\epsilon}dx=C\int\psi_{t}^{ \epsilon}dx.\] From this and (6.30), we see immediately that \(\int\psi_{t}^{\epsilon}\log\psi_{t}^{\epsilon}dx\) is uniformly bounded above in \(\epsilon\) and \(t\in[0,T]\). Combining with Lemma 6.13, we deduce that we have a uniform bound on \(\int\psi_{s}^{\epsilon}(x)|\log\psi_{s}^{\epsilon}(x)|dx\). From (6.30), this in turn means that both \(\int_{0}^{t}\int(\zeta^{\epsilon}*\psi_{s}^{\epsilon}(x))^{2}dxds\) and \(\int_{0}^{t}\int\big{(}\nabla(\zeta^{\epsilon}*\psi_{s}^{\epsilon}(x))\big{)} ^{2}dxds\) are uniformly bounded in \(\epsilon\) and \(t\in[0,T]\). We shall next show that \(\zeta^{\epsilon}*\psi_{t}^{\epsilon}\) solves (1.2) up to a remainder of order \(\epsilon\). First observe that \[\int\Delta\left((\rho^{\epsilon}*\psi_{t}^{\epsilon})\psi_{t}^{\epsilon}\right) \phi dx=-\int\nabla(\rho^{\epsilon}*\psi_{t}^{\epsilon})\,\psi_{t}^{\epsilon} \,\nabla\phi dx-\int\rho^{\epsilon}*\psi_{t}^{\epsilon}\,\nabla\psi_{t}^{ \epsilon}\,\nabla\phi dx. \tag{6.31}\] We would like to show that this is close to \(\int(\zeta^{\epsilon}*\psi_{t}^{\epsilon})^{2}\Delta\phi dx\). 
For the first term \[\int\nabla(\rho^{\epsilon}*\psi_{t}^{\epsilon})\,\psi_{t}^{ \epsilon}\,\nabla\phi dx = \int\int\int\nabla\psi_{t}^{\epsilon}(x-y)\zeta^{\epsilon}(y-z) \tilde{\zeta}^{\epsilon}(z)\psi_{t}^{\epsilon}(x)\nabla\phi(x)dzdydx \tag{6.32}\] \[= \int\int\int\nabla\psi_{t}^{\epsilon}(\widetilde{x}-\widetilde{y} )\zeta^{\epsilon}(\widetilde{y})\zeta^{\epsilon}(\widetilde{z})\psi_{t}^{ \epsilon}(\widetilde{x}-\widetilde{z})\nabla\phi(\widetilde{x}-\widetilde{z}) d\widetilde{z}d\widetilde{y}d\widetilde{x}\] \[= \int(\nabla\zeta^{\epsilon}*\psi_{t}^{\epsilon})\,\left(\zeta^{ \epsilon}*(\psi_{t}^{\epsilon}\nabla\phi)\right)dx\] \[= \frac{1}{2}\int\nabla((\zeta^{\epsilon}*\psi_{t}^{\epsilon})^{2 })\,\nabla\phi dx\] \[+\int\nabla(\zeta^{\epsilon}*\psi_{t}^{\epsilon})\left[\zeta^{ \epsilon}*(\psi_{t}^{\epsilon}\,\nabla\phi)-\nabla\phi\left(\zeta^{\epsilon}* \psi_{t}^{\epsilon}\right)\right]dx,\] where, as before, we have substituted \(\widetilde{x}=x-z\), \(\widetilde{y}=y-z\), \(\widetilde{z}=-z\). To control the term (6.32) we use the Intermediate Value Theorem to see that \[\left|\int\left[\psi_{t}^{\epsilon}(x-y)\nabla\phi(x-y)\zeta^{ \epsilon}(y)-\nabla\phi(x)\psi_{t}^{\epsilon}(x-y)\zeta^{\epsilon}(y)\right]dy\right| \\ \leq C\|\Delta\phi\|_{\infty}\int\psi_{t}^{\epsilon}(x-y)y\zeta^{ \epsilon}(y)dy.\] Since \(\zeta\in\mathcal{S}(\mathbb{R})\), the integral in this expression is \(\mathcal{O}(\epsilon)\). Similarly, for the second term in (6.31), \[\int(\rho^{\epsilon}*\psi_{t}^{\epsilon})\,\nabla\psi_{t}^{ \epsilon}\,\nabla\phi dx = \int\int\int\psi_{t}^{\epsilon}(x-y)\zeta^{\epsilon}(y-z)\tilde{ \zeta}^{\epsilon}(z)\nabla\psi_{t}^{\epsilon}(x)\nabla\phi(x)dzdydx \tag{6.33}\] \[= \int\int\int\psi_{t}^{\epsilon}(\widetilde{x}-\widetilde{y}) \zeta^{\epsilon}(\widetilde{y})\zeta^{\epsilon}(\widetilde{z})\nabla\psi_{t}^ {\epsilon}(\widetilde{x}-\widetilde{z})\nabla\phi(\widetilde{x}-\widetilde{z}) d\widetilde{z}d\widetilde{y}d\widetilde{x}\] \[= \int(\zeta^{\epsilon}*\psi_{t}^{\epsilon})\,\left(\zeta^{\epsilon }*(\nabla\psi_{t}^{\epsilon}\,\nabla\phi)\right)dx\] \[= \frac{1}{2}\int\nabla((\zeta^{\epsilon}*\psi_{t}^{\epsilon})^{2}) \,\nabla\phi dx\] \[+\int(\zeta^{\epsilon}*\psi_{t}^{\epsilon})\left[\zeta^{\epsilon }*(\nabla\psi_{t}^{\epsilon}\,\nabla\phi)-\nabla\phi\left(\nabla\zeta^{ \epsilon}*\psi_{t}^{\epsilon}\right)\right]dx,\] and (6.33) is controlled in the same way as (6.32): using the Intermediate Value Theorem, \[\left|\int\left[\nabla\psi_{t}^{\epsilon}(x-y)\nabla\phi(x-y) \zeta^{\epsilon}(y)-\nabla\phi(x)\nabla\psi_{t}^{\epsilon}(x-y)\zeta^{ \epsilon}(y)\right]dy\right|\\ \leq C\|\Delta\phi\|_{\infty}\left|\int\nabla\psi_{t}^{\epsilon} (x-y)y\zeta^{\epsilon}(y)dy\right|,\] which again is \(\mathcal{O}(\epsilon)\). We now have the ingredients that we need. The calculations above yield both a uniform (in \(\epsilon\)) bound on \(\zeta^{\epsilon}*\psi_{t}^{\epsilon}\) in \(L^{1}\cap L^{2}\big{(}[0,T]\times\mathbb{R}\big{)}\), and that \[\int\psi_{t}^{\epsilon}(x)\phi(x)dx-\int\psi_{0}^{\epsilon}(x) \phi(x)dx=\int_{0}^{t}\int(\zeta^{\epsilon}*\psi_{s}^{\epsilon}(x))^{2}\Delta \phi(x)dx\\ +\int_{0}^{t}\int\zeta^{\epsilon}*\psi_{s}^{\epsilon}(x)\left(1- \zeta^{\epsilon}*\psi_{s}^{\epsilon}(x)\right)\phi(x)dx+\mathcal{O}(\epsilon) \tag{6.34}\] (for sufficiently regular \(\phi\)). 
Since \(\int\psi_{t}^{\epsilon}(x)\phi(x)dx-\int\zeta^{\epsilon}*\psi_{t}^{\epsilon}( x)\phi(x)dx\) is order \(\epsilon\), if we replace \(\psi^{\epsilon}\) by \(\zeta^{\epsilon}*\psi^{\epsilon}\) on the left hand side, then (6.34) says that \(\zeta^{\epsilon}*\psi^{\epsilon}\) solves (2.14) weakly up to order \(\epsilon\). Therefore, \(\zeta^{\epsilon}*\psi^{\epsilon}\) converges weakly to \(\psi\) in \(L^{1}\), where \(\psi\) is the (unique) solution to equation (2.14), and hence so does \(\psi^{\epsilon}\). In fact, strong convergence, that is \(\int|\psi^{\epsilon}-\psi|\phi dx\to 0\), follows from the uniform integrability of \(\psi^{\epsilon}\) that we can deduce from the uniform control of \(\int\psi^{\epsilon}|\log\psi^{\epsilon}|dx\) that we proved above. \(\Box\)

## 7 Simultaneous scaling with interaction distance

In this section we prove Theorem 2.20, which establishes convergence when the width of the interaction kernel \(\rho_{F}\) scales simultaneously with the parameters \(\theta\) and \(N\), in the special case in which \(r\equiv 1\equiv\gamma\), \(q_{\theta}(x,dy)\) is isotropic with zero mean, the kernel \(\rho_{F}\) is Gaussian, and the scaling limit is a reaction-diffusion equation. _To simplify notation, in this section we shall write_ \[\rho_{\epsilon}*\eta(x)=\rho_{F}^{\epsilon}*\eta(x)=\langle p_{\epsilon^{2}} (x,y),\eta(dy)\rangle,\] _where \(p_{t}(x,y)\) denotes the heat semigroup. The assumptions of Theorem 2.20 will be in force throughout, in particular,_ \[\epsilon^{2}\theta\to\infty,\qquad\text{and }\frac{\theta}{N\epsilon^{d}} \to 0. \tag{7.1}\] _That \(N,\theta\to\infty\) and \(\epsilon\to 0\) simultaneously will be implicit, so for example if we write \(\lim_{\epsilon\to 0}\), it should be understood that \(\theta,N\to\infty\) in such a way that (7.1) is satisfied. Moreover, where there is no risk of confusion, except where it is helpful for emphasis, we suppress dependence of \(\eta\) on \(N\)._ The first part of the proof mirrors that of Theorem 2.10: in Subsection 7.1 we establish bounds on the moments of \(\rho_{\epsilon}*\eta_{t}(x)\) that are sufficient to imply tightness and then apply standard results on convergence of Markov processes from Ethier and Kurtz [1986]. The challenge comes in identifying the limit points. This is much more intricate than the case in which we do not scale the interaction kernel, as weak convergence will no longer be sufficient to guarantee the form of the nonlinear terms in the limiting equation. Identification of the limit will rest on regularity inherited from continuity estimates for a random walk with Gaussian jumps, which we prove in Subsection 7.2, before we identify the limit points in Subsection 7.3.

### Moment bounds for \(\rho_{\epsilon}*\eta\)

Let us write \({\cal L}^{\theta}f(x):=\theta\int(f(y)-f(x))q_{\theta}(x,y)dy\) where \(q_{\theta}\) is a Gaussian kernel of mean \(0\) and variance \(1/\theta\). We note that \({\cal L}^{\theta}\) is the generator of a continuous (time and space) random walk, which makes jumps of mean \(0\) and variance \(1/\theta\) at rate \(\theta\). In what follows we write \(\psi_{t}^{\epsilon,x}(y)\) for the solution of \[\partial_{t}\psi_{t}^{\epsilon,x}={\cal L}^{\theta}\psi_{t}^{\epsilon,x}, \tag{7.2}\] with initial condition \(\psi_{0}^{\epsilon,x}(y)=\rho_{\epsilon}(y-x)=p_{\epsilon^{2}}(x,y)\). 
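The following numerical sketch (in \(d=1\), with illustrative parameter values that are not taken from the text) may help to visualise \(\psi^{\epsilon,x}_{t}\). Starting the walk generated by \(\mathcal{L}^{\theta}\) from \(y\), the semigroup formula gives \(\psi^{\epsilon,x}_{t}(y)=\mathbb{E}_{y}\big{[}p_{\epsilon^{2}}(x,X_{t})\big{]}\); since the number of jumps by time \(t\) is Poisson with mean \(\theta t\) and each jump is a centred Gaussian of variance \(1/\theta\), this coincides with the subordinated heat kernel of Lemma 7.1 below, and for \(\theta t\) large it is close to \(p_{\epsilon^{2}+t}(x,y)\).

```python
import numpy as np

rng = np.random.default_rng(0)

def p(s, x, y):
    """One-dimensional heat kernel p_s(x, y)."""
    return np.exp(-((x - y) ** 2) / (2.0 * s)) / np.sqrt(2.0 * np.pi * s)

# Illustrative values only, chosen so that theta * eps**2 exceeds 1, mimicking (7.1).
theta, eps, t = 200.0, 0.1, 0.5
x, y = 0.0, 0.3
n_samples = 200_000

jumps = rng.poisson(theta * t, size=n_samples)   # number of jumps of the walk by time t
T_t = jumps / theta                              # T(t) = Pi(theta * t) / theta

# (i) semigroup formula: run the walk from y and average p_{eps^2}(x, X_t);
#     given the number of jumps, X_t - y is a centred Gaussian of variance T(t).
X_t = y + rng.normal(size=n_samples) * np.sqrt(T_t)
psi_walk = p(eps ** 2, x, X_t).mean()

# (ii) subordinated heat kernel E[p_{eps^2 + T(t)}(x, y)], as in Lemma 7.1 below.
psi_subordinated = p(eps ** 2 + T_t, x, y).mean()

# (iii) the Brownian approximation p_{eps^2 + t}(x, y).
print(psi_walk, psi_subordinated, p(eps ** 2 + t, x, y))   # all three nearly equal
```

The same picture, in which the empirical measure is smoothed by the law of the walk, underlies the moment bounds below and the continuity estimates of Subsection 7.2.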
To see why \(\psi_{t}^{\epsilon,x}\) is useful, first note that for any time-dependent function \(\phi_{t}(x)\) with time derivative \(\dot{\phi}_{t}(x)=\partial_{t}\phi_{t}(x)\), \[\langle\phi_{t}(x),\eta_{t}(dx)\rangle=\langle\phi_{0}(x),\eta_{ 0}(dx)\rangle+M_{t}(\phi)+\int_{0}^{t}\big{\langle}{\cal L}^{\theta}\phi_{s}( x)+\dot{\phi}_{s}(x),\eta_{s}(dx)\big{\rangle}ds\\ +\int_{0}^{t}\big{\langle}\phi_{s}(x)F(x,\eta_{s}),\eta_{s}(dx) \big{\rangle}ds, \tag{7.3}\] where \(M_{t}(\phi)\) is a martingale (with respect to the natural filtration) with angle bracket process given by (2.4) with \(f\) replaced by \(\phi_{s}(\cdot)\). So, taking \(\phi_{s}(\cdot)=\psi_{t-s}^{\epsilon,x}(\cdot)\) for \(0\leq s\leq t\), \[\rho_{\epsilon}*\eta_{t}(x) = \langle\psi_{0}^{\epsilon,x}(y),\eta_{t}(dy)\rangle \tag{7.4}\] \[= \langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle+\int_{0}^{t} \big{\langle}\psi_{t-s}^{\epsilon,x}(y)F\big{(}\rho_{\epsilon}*\eta_{s}(y) \big{)},\eta_{s}(dy)\big{\rangle}ds+M_{t}(x),\] where \(M_{t}(x)\) has mean zero and a second moment we can easily write down. **Lemma 7.1**: _Fix \(t>0\), let \((\Pi(s))_{s\geq 0}\) be a rate one Poisson process, and let \(T(t)=\Pi(\theta t)/\theta\). Then_ \[\psi_{t}^{\epsilon,x}(y)=\mathbb{E}\left[p_{\epsilon^{2}+T(t)}(x,y)\right],\] _and, moreover, since under our assumptions \(\theta\epsilon^{2}\) is bounded below, there is a \(C\) independent of \(\epsilon\) or \(t\) such that_ \[\|\psi_{t}^{\epsilon,x}\|_{\infty}\leq\frac{C}{(\epsilon^{2}+t)^{d/2}}.\] **Proof** The first claim is immediate from the definition of the random walk with generator \({\cal L}^{\theta}\). For the second claim, first define \(\tau(t)=T(t)-t\). Since if \(\tau(t)\geq-(\epsilon^{2}+t)/2\), then \(1/(\epsilon^{2}+T(t))\leq 2/(\epsilon^{2}+t)\), while \(\epsilon^{2}+T(t)\geq\epsilon^{2}\) always, partitioning over \(\{\tau(t)\geq-(\epsilon^{2}+t)/2\}\) and its complement, \[\|\psi^{\epsilon,x}\|_{\infty} =\mathbb{E}\left[\frac{1}{\big{(}2\pi(\epsilon^{2}+T(t))\big{)}^ {d/2}}\right]\] \[\leq\frac{C}{(\epsilon^{2}+t)^{d/2}}+\frac{C}{\epsilon^{d}} \mathbb{P}\left\{\tau(t)<-(\epsilon^{2}+t)/2\right\}. \tag{7.5}\] Now, observe that since \(\mathbb{E}[e^{-\Pi(\theta t)}]=\exp(-\theta t(1-e^{-1}))\), by Markov's inequality, \[\mathbb{P}\left\{\tau(t)<-\frac{\epsilon^{2}+t}{2}\right\} =\mathbb{P}\left\{e^{-\Pi(\theta t)}>e^{-\theta(t-\epsilon^{2})/2}\right\}\] \[\leq\frac{\mathbb{E}[\exp\big{(}-\Pi(\theta t)\big{)}]}{\exp\big{(} -\theta(t-\epsilon^{2})/2\big{)}}\] \[=\frac{\exp(-\theta t(1-e^{-1}))}{\exp(-\theta(t-\epsilon^{2})/ 2)}\] \[=\exp\left\{-\chi\theta t-\frac{\theta\epsilon^{2}}{2}\right\}, \tag{7.6}\] where \(\chi=1/2-e^{-1}>0\). The second term in (7.5) is therefore bounded by \[C\left(1+\frac{t}{\epsilon^{2}}\right)^{d/2}e^{-\chi\theta t}\frac{1}{( \epsilon^{2}+t)^{d/2}}e^{-\epsilon^{2}\theta/2}.\] Now observe that the derivative (with respect to \(t\)) of \(e^{-\chi\theta t}(1+t/\epsilon^{2})^{d/2}\) is \[\left(\frac{d}{2\epsilon^{2}}-\left(1+\frac{t}{\epsilon^{2}}\right)\chi \theta\right)\left(1+\frac{t}{\epsilon^{2}}\right)^{d/2-1}e^{-\chi\theta t},\] which is negative if \(\theta(\epsilon^{2}+t)>d/2\chi\). At the maximum, \((1+t/\epsilon^{2})=d/(2\chi\theta\epsilon^{2})\), and so this quantity is bounded uniformly over not only \(t\) but also \(\epsilon\) (since we've assumed that \(\theta\epsilon^{2}\) is bounded below). 
Therefore, we have the bound \[\frac{1}{\epsilon^{d}}\mathbb{P}\left\{\tau(t)<-(\epsilon^{2}+t)/2\right\} \leq\frac{C}{(\epsilon^{2}+t)^{d/2}}e^{-\epsilon^{2}\theta/2}. \tag{7.7}\] Substituting this into (7.5) yields the result. \(\Box\) **Lemma 7.2**: _Let \(\{\mathcal{F}_{t}\}_{t\geq 0}\) denote the natural filtration. Under the assumptions of Theorem 2.20, for each \(T\in[0,\infty)\), and \(k\in\mathbb{N}\), there exist constants \(C=C(k,T)\) and \(\widetilde{C}=\widetilde{C}(k,T)\), independent of \(\epsilon\), such that for all \(x\in\mathbb{R}^{d}\) and all \(u,t\in[0,T]\) with \(u<t\),_ \[\mathbb{E}\Big{[}\left.\big{(}\rho_{\epsilon}*\eta_{t}(x)\big{)}^{k}\Big{|} \,\mathcal{F}_{u}\right]\leq C\langle\psi_{t-u}^{\epsilon,x}(z),\eta_{u}(dz) \rangle^{k}+C\frac{\theta}{N\epsilon^{d}}\langle\psi_{t-u}^{\epsilon,x}(z), \eta_{u}(dz)\rangle; \tag{7.8}\] _and_ \[\mathbb{E}\Big{[}\left.\int_{u}^{t}\langle\psi_{t-s}^{\epsilon,x} (z),\eta_{s}(dz)\rangle^{k-1}\langle\psi_{t-s}^{\epsilon,x}(z)|F(\rho_{ \epsilon}*\eta_{s}(z))|,\eta_{s}(dz)\rangle ds\right|\mathcal{F}_{u}\Big{]}\\ \leq\widetilde{C}\langle\psi_{t-u}^{\epsilon,x}(z),\eta_{u}(dz) \rangle^{k}+\widetilde{C}\frac{\theta}{N\epsilon^{d}}\langle\psi_{t-u}^{ \epsilon,x}(z),\eta_{u}(dz)\rangle; \tag{7.9}\] _where the function \(\psi_{t}^{\epsilon,x}(\cdot)\) was defined in (7.2). In particular, under the assumptions of Theorem 2.20, the expected values of the quantities on the right hand side of (7.8) and (7.9) are both integrable with respect to Lebesgue measure._ ProofTo simplify our expressions, we shall consider the case \(u=0\), but the proof goes through unchanged for other values of \(u\). We proceed by induction. Taking expectations in (7.4), using that \(F\) is bounded above, and applying Gronwall's inequality to \(\langle\psi_{t-s}^{\epsilon,x},\eta_{s}\rangle\) we obtain \(\mathbb{E}[\langle\psi_{0}^{\epsilon,x},\eta_{t}\rangle]\leq C\mathbb{E}[ \langle\psi_{t}^{\epsilon,x},\eta_{0}\rangle]\), which implies (7.8) in the case \(k=1\). Moreover, rearranging (7.4) we find \[-\int_{0}^{t}\big{\langle}\psi_{t-s}^{\epsilon,x}(y)F(\rho_{\epsilon}*\eta_{s} (y),\eta_{s}(dy)\big{\rangle}ds=\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy) \rangle-\langle\psi_{0}^{\epsilon,x}(y),\eta_{t}(dy)\rangle+M_{t}(x), \tag{7.10}\] and taking expectations again, since \(\langle\psi_{0}^{\epsilon,x},\eta_{t}\rangle>0\), and \(M_{0}(x)=0\), this yields \[\mathbb{E}\Big{[}-\int_{0}^{t}\langle\psi_{t-s}^{\epsilon,x}(y)F(\rho_{ \epsilon}*\eta_{s}(y)),\eta_{s}(dy)\rangle ds\Big{|}\mathcal{F}_{0}\Big{]} \leq\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle.\] Since \(F\) is bounded above, there exists a constant \(K\) such that \(|F|\leq K-F\) and so combined with the bound on \(\mathbb{E}[\langle\psi_{0}^{\epsilon,x}(y),\eta_{t}(dy)\rangle]\) just obtained, this in turn yields \[\mathbb{E}\Big{[}\int_{0}^{t}\big{\langle}\psi_{t-s}^{\epsilon,x}(y)|F(\rho_{ \epsilon}*\eta_{s}(y))|,\eta_{s}(dy)\big{\rangle}ds\Big{|}\mathcal{F}_{0} \Big{]}\leq\widetilde{C}\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle,\] which is (7.9) in the case \(k=1\). Now suppose that we have established (7.8) and (7.9) for all exponents \(j<k\). First we apply the generator \(\mathcal{P}^{N}\) of our scaled population process to functions of the form \(\langle f,\eta\rangle^{k}\). 
Recalling that each jump of the process involves the birth or death of a single individual, and so increments \(\langle f,\eta\rangle\) by \(\pm f/N\) at the location of that individual and that \(r\equiv\gamma\equiv 1\), we find \[\mathcal{P}^{N}\Big{(}\langle f,\eta\rangle^{k}\Big{)}=\Big{\langle} \int\theta N\sum_{j=1}^{k}\binom{k}{j}\frac{f(y)^{j}}{N^{j}}\langle f,\eta \rangle^{k-j}q_{\theta}(x,dy),\eta(dx)\Big{\rangle}\\ +\Big{\langle}\theta N\Big{(}1-\frac{F(\rho_{\epsilon}*\eta(x))} {\theta}\Big{)}\sum_{j=1}^{k}\binom{k}{j}(-1)^{j}\frac{f(x)^{j}}{N^{j}}\langle f,\eta\rangle^{k-j},\eta(dx)\Big{\rangle}. \tag{7.11}\] Mimicking what we did above, we set \(f(\cdot)=\psi_{t}^{\epsilon,x}(\cdot)\) and write \[\mathbb{E}\Big{[}\langle\psi_{0}^{\epsilon,x},\eta_{t}\rangle^{ k}\Big{|}\mathcal{F}_{0}\Big{]}=\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy) \rangle^{k}+\mathbb{E}\Big{[}\int_{0}^{t}\mathcal{P}^{N}\big{(}\langle\psi_{t- s}^{\epsilon,x}(y),\eta_{s}(dy)\rangle^{k}\big{)}ds\\ -\int_{0}^{t}\langle k\dot{\psi}_{t-s}^{\epsilon,x}(y),\eta_{s}( dy)\rangle\big{\langle}\psi_{t-s}^{\epsilon,x}(y),\eta_{s}(dy)\rangle^{k-1} ds\Big{|}\mathcal{F}_{0}\Big{]}. \tag{7.12}\] Since \(\dot{\psi}_{s}^{\epsilon,x}=\mathcal{L}_{\theta}\psi_{s}^{\epsilon,x}\), the \(j=1\) terms from \(\mathcal{P}^{N}(\langle\psi_{t-s}^{\epsilon,x}(y),\eta_{s}(dy)\rangle^{k})\) combines with the last term in (7.12) to yield \[\int_{0}^{t}k\langle\psi_{t-s}^{\epsilon,x},\eta\rangle^{k-1}\langle F(\rho_{ \epsilon}*\eta_{s}(y))\psi_{t-s}^{\epsilon,x}(y),\eta_{s}(dy)\rangle ds.\] As for the remaining terms, using (from Lemma 7.1) that \(\sup_{s}\|\psi_{s}^{\epsilon,x}(\cdot)\|_{\infty}=C/\epsilon^{d}\), \(N\epsilon^{d}>1\), and our inductive hypothesis, we find \[\mathbb{E}\Big{[}\Big{\langle}\int_{0}^{t}\theta N\sum_{j=2}^{k} \binom{k}{j}\int\frac{\psi_{t-s}^{\epsilon,x}(z)^{j}}{N^{j}}\langle\psi_{t-s}^{ \epsilon,x},\eta_{s}\rangle^{k-j}q_{\theta}(y,dz),\eta_{s}(dy)\Big{\rangle}ds\\ +\Big{\langle}\int_{0}^{t}\theta N\sum_{j=2}^{k}\binom{k}{j}\frac {\psi_{t-s}^{\epsilon,x}(y)^{j}}{N^{j}}\langle\psi_{t-s}^{\epsilon,x},\eta_{s} \rangle^{k-j}(-1)^{j}\Big{(}1-\frac{F(\rho_{\epsilon}*\eta_{s}(y))}{\theta} \Big{)},\eta_{s}(dy)\Big{\rangle}ds\Big{|}\mathcal{F}_{0}\Big{]}\\ \leq C\mathbb{E}\Big{[}\Big{\langle}\int_{0}^{t}\sum_{j=2}^{k} \frac{\theta}{N\epsilon^{d}}\Big{(}\frac{1}{(N\epsilon^{d})^{j-2}}\Big{)} \Big{\langle}\psi_{t-s}^{\epsilon,x}(y)\Big{(}2+\frac{|F(\rho_{\epsilon}*\eta_{ s}(y))|}{\theta}\Big{)},\eta_{s}(dy)\Big{\rangle}\langle\psi_{t-s}^{\epsilon,x}, \eta_{s}\rangle^{k-j}ds\big{|}\mathcal{F}_{0}\Big{]}\\ \leq C^{\prime}\frac{\theta}{N\epsilon^{d}}\sum_{j=1}^{k-1} \langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle^{j}\leq C^{\prime\prime} \frac{\theta}{N\epsilon^{d}}\Big{(}\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy )\rangle^{k}+\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle\Big{)}.\] Combining this with (7.11) and (7.12), using once again the fact that \(F\) is bounded above, we find \[\mathbb{E}\Big{[}\langle\psi_{0}^{\epsilon,x},\eta_{t}\rangle^{k }\Big{|}\mathcal{F}_{0}\Big{]}\leq\langle\psi_{t}^{\epsilon,x}(y),\eta_{0}(dy )\rangle^{k}+\widetilde{C}\mathbb{E}\Big{[}\int_{0}^{t}\langle\psi_{t-s}^{ \epsilon,x}(y),\eta_{s}(dy)\rangle^{k}ds\Big{|}\mathcal{F}_{0}\Big{]}\\ +C^{\prime\prime}\frac{\theta}{N\epsilon^{d}}\Big{(}\langle\psi_ {t}^{\epsilon,x}(y),\eta_{0}(dy)\rangle^{k}+\langle\psi_{t}^{\epsilon,x}(y), \eta_{0}(dy)\rangle\Big{)},\] and (7.8) follows from Gronwall's inequality. 
Rearranging exactly as in the case \(k=1\), we recover (7.9) and the inductive step is complete. \(\Box\) We shall also need the following consequence of the bounds that we obtained in Lemma 7.2: **Corollary 7.3**: _Under the assumptions of Theorem 2.20, for each \(k\geq 1\), \(T>0\), there is a \(C(k,T)\) such that_ \[\mathbb{E}\Big{[}\langle(\rho_{\epsilon}*\eta_{t})^{k},\eta_{t}\rangle\Big{]}< C(k,T)<\infty,\hskip 28.452756pt\text{for all }t\in[0,T]. \tag{7.13}\] **Proof** [Sketch] First observe that if \(A\in(0,1)\), then \[p_{A\epsilon^{2}}(x,y)=\frac{1}{A^{d/2}}p_{\epsilon^{2}}(x,y)\exp\Big{(}- \frac{\|x-y\|^{2}}{2\epsilon^{2}}\big{(}\frac{1}{A}-1\big{)}\Big{)}\leq\frac{ 1}{A^{d/2}}p_{\epsilon^{2}}(x,y). \tag{7.14}\] Now consider \[\mathbb{E}\big{[}\langle\rho_{\epsilon}*\eta_{t}(x),\eta_{t}(dx)\rangle\big{]} =\mathbb{E}\Big{[}\int\int p_{\epsilon^{2}}(x,z)\eta_{t}(dz)\eta_{ t}(dx)\Big{]}\] \[=\mathbb{E}\Big{[}\int\int\int p_{\epsilon^{2}/2}(x,y)p_{ \epsilon^{2}/2}(y,z)dy\eta_{t}(dz)\eta_{t}(dx)\Big{]}\] \[=\mathbb{E}\Big{[}\int\big{(}p_{\epsilon^{2}/2}*\eta_{t}(y)\big{)} ^{2}\,dy\Big{]}\] \[\leq C\int\mathbb{E}\big{[}\big{(}\rho_{\epsilon}*\eta_{t}(x) \big{)}^{2}\big{]}dx,\] where we used (7.14) in the last line. Using Lemma 7.2 and our assumptions on \(\eta_{0}\), this quantity is finite. To illustrate the inductive step, now consider \[\mathbb{E}\big{[}\big{\langle}\rho_{\epsilon}*\eta_{t}(x)^{2},\eta_ {t}(dx)\big{\rangle}\big{]}=\mathbb{E}\Big{[}\int\int\int p_{\epsilon^{2}}(x,z_ {1})p_{\epsilon^{2}}(x,z_{2})\eta_{t}(dz_{1})\eta_{t}(dz_{2})\eta_{t}(dx)\Big{]}\] \[\quad=\mathbb{E}\Big{[}\int\cdots\int p_{\epsilon^{2}/2}(x,y_{1}) p_{\epsilon^{2}/2}(x,y_{2})p_{\epsilon^{2}/2}(y_{1},z_{1})p_{\epsilon^{2}/2}(y_{2},z_{2})\eta_{t}(dz_{1})\eta_{t}(dz_{2})dy_{1}dy_{2}\eta_{t}(dx)\Big{]}. \tag{7.15}\] We use the identity \[p_{\epsilon^{2}/2}(x,y_{1})p_{\epsilon^{2}/2}(x,y_{2})=p_{\epsilon^{2}}(y_{1}, y_{2})p_{\epsilon^{2}/4}\Big{(}x,\frac{y_{1}+y_{2}}{2}\Big{)}\] to rewrite (7.15) as \[\mathbb{E}\Big{[}\int\int p_{\epsilon^{2}/2}*\eta_{t}(y_{1})\,p_ {\epsilon^{2}/2}*\eta_{t}(y_{2})\,p_{\epsilon^{2}/4}*\eta_{t}\Big{(}\frac{y_{1 }+y_{2}}{2}\Big{)}\,p_{\epsilon^{2}}(y_{1},y_{2})dy_{1}dy_{2}\Big{]}\] \[\leq\mathbb{E}\Big{[}\int\int\Big{\{}\big{(}p_{\epsilon^{2}/2}* \eta_{t}(y_{1})\big{)}^{3}+\big{(}p_{\epsilon^{2}/2}*\eta_{t}(y_{2})\big{)}^{ 3}+\big{(}p_{\epsilon^{2}/4}*\eta_{t}\big{(}\frac{y_{1}+y_{2}}{2}\big{)}\big{)} ^{3}\Big{\}}p_{\epsilon^{2}}(y_{1},y_{2})dy_{1}dy_{2}\Big{]},\] where we have used that for any non-negative real numbers \(\beta_{1}\), \(\beta_{2}\), \(\beta_{3}\), \(\beta_{1}\beta_{2}\beta_{3}\leq\beta_{1}^{3}+\beta_{2}^{3}+\beta_{3}^{3}\). For the first two terms in the sum we integrate with respect to \(y_{2}\) and \(y_{1}\) respectively to reduce to an expression of the form considered in Lemma 7.2. For the final term, the change of variables \(z_{1}=y_{1}+y_{2}\), \(z_{2}=y_{1}-y_{2}\) in the integral similarly allows us to integrate out the heat kernel, and we conclude that the result holds for \(k=2\). 
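As a sanity check on the kernel manipulations above (purely illustrative; it uses numpy and the convention \(p_{t}(x,y)=(2\pi t)^{-d/2}e^{-\|x-y\|^{2}/2t}\), which is consistent with (7.14)), the product identity used in the \(k=2\) step can be confirmed numerically:

```python
# Numerical check of the Gaussian kernel identity
#   p_{e^2/2}(x, y1) * p_{e^2/2}(x, y2) = p_{e^2}(y1, y2) * p_{e^2/4}(x, (y1 + y2)/2),
# with p_t(x, y) = (2*pi*t)^(-d/2) * exp(-|x - y|^2 / (2*t)).
import numpy as np

def p(t, x, y):
    d = x.shape[-1]
    return (2 * np.pi * t) ** (-d / 2) * np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * t))

rng = np.random.default_rng(0)
d, eps = 3, 0.2
for _ in range(1000):
    x, y1, y2 = rng.normal(size=(3, d))
    lhs = p(eps**2 / 2, x, y1) * p(eps**2 / 2, x, y2)
    rhs = p(eps**2, y1, y2) * p(eps**2 / 4, x, (y1 + y2) / 2)
    assert np.isclose(lhs, rhs, rtol=1e-10)
print("heat kernel product identity verified at 1000 random points")
```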
We can proceed in the same way for larger values of \(k\), using repeatedly that \[p_{t_{1}}(x,y_{1})p_{t_{2}}(x,y_{2})=p_{\frac{t_{1}t_{2}}{t_{1}+t_{2}}}\Big{(} x,\frac{t_{2}y_{1}+t_{1}y_{2}}{t_{1}+t_{2}}\Big{)}p_{t_{1}+t_{2}}(y_{1},y_{2})\] to write \[\prod_{j=1}^{k}p_{\tau}(y,y_{j})=\prod_{j=2}^{k}p_{\frac{j\tau}{j-1}}\big{(}y _{j},Y_{j-1}\big{)}p_{\frac{\tau}{k}}(y,Y_{k})\] where \[Y_{1}=y_{1},\qquad Y_{j}=\frac{j-1}{j}Y_{j-1}+\frac{1}{j}y_{j},\text{for }j\geq 2.\] Writing \(p_{\epsilon^{2}}(x,z_{j})=\int p_{\epsilon^{2}/2}(x,y_{j})p_{\epsilon^{2}/2}( y_{j},z_{j})dy_{j}\) and using the above with \(\tau=\epsilon^{2}/2\), this yields \[\big{\langle}\big{(}\rho_{\epsilon}*\eta_{t}(x)\big{)}^{k},\eta_ {t}(dx)\big{\rangle}=\int\cdots\int\prod_{j=2}^{k}p_{\epsilon^{2}j/2(j-1)}(y_ {j},Y_{j-1})\prod_{i=1}^{k}p_{\epsilon^{2}/2}*\eta_{t}(y_{i})p_{\epsilon^{2}/2 k}*\eta_{t}(Y_{k})dy_{1}\ldots dy_{k}\\ \leq\int\cdots\int\prod_{j=2}^{k}p_{\epsilon^{2}j/2(j-1)}(y_{j}, Y_{j-1})\Big{\{}\sum_{i=1}^{k}\big{(}p_{\epsilon^{2}/2}*\eta_{t}(y_{i})\big{)}^{k+1}+ \big{(}p_{\epsilon^{2}/2k}*\eta_{t}(Y_{k})\big{)}^{k+1}\Big{\}}dy_{1}\ldots dy _{k},\] and once again we can change variables in the integrals and use (7.14) to bound this by a constant multiple of \(\int\mathbb{E}\big{[}\big{(}\rho_{\epsilon}*\eta_{t}(x)\big{)}^{k+1}\big{]}dx\), and the inductive step is complete. \(\square\) **Corollary 7.4** (Tightness of \(\{(\rho_{\epsilon}*\eta_{t}^{N}(x)dx)_{t\geq 0}\}\)): _Under the assumptions of Theorem 2.20, the sequence of measure valued processes \(\{\rho_{\epsilon}*\eta_{t}^{N}(x)dx\}_{t\geq 0}\) (taking values in \(\mathcal{D}_{[0,T]}(\mathcal{M}_{F}(\mathbb{R}^{d}))\)) is tight._ **Proof** First observe that the proof, from Lemma 6.2, that \(\mathbb{E}[\sup_{0\leq t\leq T}\langle 1,\eta_{t}^{N}\rangle]\) is bounded goes through unchanged, and since \(\langle 1,\rho_{\epsilon}*\eta_{t}^{N}(x)dx\rangle=\langle 1,\eta_{t}^{N}\rangle\), compact containment follows. As in the nonlocal case, it suffices to prove that for \(T>0\), and any \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\) with bounded second derivatives and \(\int|f(x)|dx<\infty\), the sequence of real-valued processes \(\big{\{}\big{(}\int f(x)\rho_{\epsilon}*\eta_{t}^{N}(x)dx\big{)}_{t\geq 0} \big{\}}_{N\geq 1}\) is tight. Let us temporarily write \(X_{f}^{N}(t)\) for \(\int f(x)\rho_{\epsilon}*\eta_{t}^{N}(x)dx\) and set \[w^{\prime}\big{(}X_{f}^{N},\delta,T\big{)}=\inf_{\{t_{i}\}}\max_{i}\sup_{s,t \in[t_{i-1},t_{i})}\big{|}X_{f}^{N}(t)-X_{f}^{N}(s)\big{|},\] where \(\{t_{i}\}\) ranges over all partitions of the form \(0=t_{0}<t_{1}<\cdots<t_{n-1}<T\leq t_{n}\) with \(\min_{1\leq i\leq n}(t_{i}-t_{i-1})>\delta\) and \(n\geq 1\). Using Corollary 3.7.4 of Ethier and Kurtz [1986], to prove tightness of the sequence of real-valued processes \(X_{f}^{N}\) it suffices to check compact containment of the sequence \(\{\int f(x)\rho_{\epsilon}*\eta_{t}^{N}(x)dx\}_{N\geq 1}\) at any rational time \(t\) and that for every \(\nu>0\) and \(T>0\), there exists \(\delta>0\) such that \[\limsup_{N\to\infty}\mathbb{P}\big{[}w^{\prime}\big{(}X_{f}^{N},\delta,T\big{)} >\nu\big{]}<\nu.\] Evidently this will follow if we can show that this condition is satisfied when we replace the minimum over all partitions with mesh at least \(\delta\) in the definition of \(w^{\prime}\), by the partition into intervals of length exactly \(\delta\). 
We have \[\big{|}\langle\rho_{\epsilon}*f,\eta_{t}^{N}\rangle-\langle\rho_{ \epsilon}*f,\eta_{s}^{N}\rangle\big{|}\leq\bigg{|}\int_{s}^{t}\Big{\langle} \theta\int\big{(}\rho_{\epsilon}*f(y)-\rho_{\epsilon}*f(x)\big{)}q_{\theta}(x,dy),\eta_{u}^{N}(dx)\Big{\rangle}du\bigg{|}\\ +\int_{s}^{t}\Big{\langle}|F\big{(}\rho_{\epsilon}*\eta_{u}^{N}( x)\big{)}|\rho_{\epsilon}*|f|(x),\eta_{u}^{N}(dx)\Big{\rangle}du+2\sup_{0\leq u \leq T}|\widehat{M}^{N}(f)_{u}|, \tag{7.16}\] where \(\widehat{M}^{N}(f)\) is the martingale of (6.4) with the test function \(f\) replaced by \(\rho_{\epsilon}*f\). We control each of the three terms on the right hand side separately. By the Intermediate Value Theorem, using \(T_{t}\) to denote the heat semigroup, there exists \(s\in(0,1/\theta)\) such that \[\bigg{|}\theta\int\big{(}\rho_{\epsilon}*f(y)-\rho_{\epsilon}*f (x)\big{)}q_{\theta}(x,dy)\bigg{|}=\bigg{|}\theta\Big{(}T_{\epsilon^{2}+1/ \theta}f(x)-T_{\epsilon^{2}}f(x)\Big{)}\bigg{|}\\ =|\partial_{s}T_{\epsilon^{2}+s}f(x)|=|T_{\epsilon^{2}+s}\Delta f (x)|\leq\|\Delta f\|_{\infty}.\] The first term in (7.16) is therefore bounded by \[\|\Delta f\|_{\infty}|t-s|\sup_{0\leq u\leq T}\langle 1,\eta_{u}^{N}\rangle.\] We follow the approach of Lemma 6.2. Consulting (2.4), the angle bracket process of \(\widehat{M}^{N}(f)\) satisfies \(\mathbb{E}[\langle\widehat{M}^{N}_{f}\rangle_{T}]\leq C(\theta/N)\int_{0}^{T} \mathbb{E}[\langle 1,\eta_{s}\rangle]ds\leq C^{\prime}\theta/N\) for some constants \(C\) and \(C^{\prime}\). Now, using the Burkholder-Davis-Gundy inequality and Barlow et al. [1986], \(\mathbb{E}[\sup_{0\leq u\leq T}|\widehat{M}^{N}(f)_{u}|^{2}]\leq C^{\prime }\mathbb{E}[\langle\widehat{M}^{N}(f)\rangle_{u}]\), and so using Markov's inequality, \[\limsup_{N\to\infty}\mathbb{P}\Big{[}2\sup_{0\leq u\leq T}|\widehat{M}^{N}(f)_ {u}|>\frac{\nu}{3}\Big{]}\leq\limsup_{N\to\infty}\frac{36}{\nu^{2}}C^{\prime \prime}\mathbb{E}\big{[}\langle\widehat{M}^{N}(f)\rangle_{T}\big{]}\leq \limsup_{N\to\infty}\frac{36}{\nu^{2}}\frac{C^{\prime}C^{\prime\prime}\theta} {N\epsilon^{d}}=0. \tag{7.17}\] Now consider \[\mathbb{E}\Big{[}\Big{(}\int_{s}^{t}\big{\langle}\rho_{\epsilon}* |f|(x)\big{|}F\big{(}\rho_{\epsilon}*\eta_{u}^{N}(x)\big{)}\big{|},\eta_{u}^{N }(dx)\big{\rangle}du\Big{)}^{2}\Big{]}\] \[=2\mathbb{E}\Big{[}\int_{s}^{t}\big{\langle}\rho_{\epsilon}*|f|(x )\big{|}F\big{(}\rho_{\epsilon}*\eta_{u}^{N}(x)\big{)}\big{|},\eta_{u}^{N}(dx) \big{\rangle}\int_{u}^{t}\big{\langle}\rho_{\epsilon}*|f|(x)\big{|}F\big{(} \rho_{\epsilon}*\eta_{r}^{N}(x)\big{)}\big{|},\eta_{r}^{N}(dx)\big{\rangle}drdu \Big{]}. 
\tag{7.18}\] Since \(F\) is polynomial, we use the approach of Corollary 7.3, the tower property, and Lemma 7.2, to bound this in terms of sums of terms of the form \[\mathbb{E}\Big{[}\int_{s}^{t}(t-u)\int\rho_{\epsilon}*|f|(x)\rho_{\epsilon}* \eta_{u}^{N}(x)^{j}dx\int\rho_{\epsilon}*|f|(y)\rho_{\epsilon}*\eta_{u}^{N}(y) ^{k}dydu\Big{]}.\] Now observe that, again using Lemma 7.2, since for nonnegative \(a\) and \(b\), \(a^{j}b^{k}\leq a^{j+k}+b^{j+k}\), \[\mathbb{E}\left[\int\int\rho_{\epsilon}*|f|(x)\rho_{\epsilon}* \eta_{u}^{N}(x)^{j}\rho_{\epsilon}*|f|(y)\rho_{\epsilon}*\eta_{u}^{N}(y)^{k} dxdy\right]\\ \leq\mathbb{E}\left[\int\int\|f\|_{\infty}\rho_{\epsilon}*\eta_{u }^{N}(x)^{j+k}\rho_{\epsilon}*|f|(y)dxdy+\int\int\rho_{\epsilon}*|f|(x)\|f\|_{ \infty}\rho_{\epsilon}*\eta_{u}^{N}(y)^{j+k}dxdy\right]\\ \leq C\int|f|(x)dx.\] Thus the quantity (7.18) is bounded by \(C(t-s)^{2}\) for a new constant \(C\) which we can take to be independent of \(s\), \(t\) and \(\epsilon\). Markov's inequality then gives \[\mathbb{P}\Big{[}\|f\|_{\infty}\int_{s}^{t}\big{\langle}\big{|}F\big{(}\rho_{ \epsilon}*\eta_{u}^{N}(x)\big{)}\big{|},\eta_{u}^{N}(dx)\big{\rangle}du\geq \frac{\nu}{3}\Big{]}\leq C\frac{(t-s)^{2}}{\nu^{2}}.\] A union bound gives that \[\mathbb{P}\Big{[}\max_{i}\|f\|_{\infty}\int_{t_{i-1}}^{t_{i}}\big{\langle}\big{|} F\big{(}\rho_{\epsilon}*\eta_{u}^{N}(x)\big{)}\big{|},\eta_{u}^{N}(dx)\big{\rangle} du\geq\frac{\nu}{3}\Big{]}\leq C\frac{T\delta}{\nu^{2}}. \tag{7.19}\] Now using Markov's inequality, we can choose \(K\) so that \[\mathbb{P}\Big{[}\|\Delta f\|_{\infty}\,\sup_{0\leq t\leq T}\langle 1,\eta_{t}^{N} \rangle>K\Big{]}<\frac{\nu}{3},\] and so choosing \(\delta\) so that \(K\delta<\nu/3\) in this expression and \(C\delta<\nu^{3}/3T\) in (7.19), combining with (7.17), the result follows. \(\Box\) ### Continuity estimates for \(\rho_{\epsilon}*\eta\) To identify the limit point of any convergent subsequence of \(\{\rho_{\epsilon}*\eta^{N}(x)\}\), we will require some control on the spatial continuity of the functions \(\rho_{\epsilon}*\eta^{N}(x)\). This will be inherited from the regularity of the transition density of the Gaussian random walk with generator \(\mathcal{L}^{\theta}\), which in turn follows from its representation as that of a Brownian motion evaluated at the random time \(T(t)\) defined in Lemma 7.1. Our approach will be to approximate \(\psi_{t}^{\epsilon,x}(\cdot)\) by \(p_{\epsilon^{2}+t}(x,\cdot)\), and to control the error that this introduces we need to control \(T(t)-t\). **Lemma 7.5**: _In the notation of Lemma 7.1, for any \(A>1\),_ \[\mathbb{P}\left\{T(t)-t>A(\epsilon^{2}+t)\right\}\leq\exp\left(-\frac{\theta A }{4}\big{(}\epsilon^{2}+t\big{)}\right).\] **Proof** This is just a Chernoff bound. With \(\Pi\) a rate one Poisson process as in Lemma 7.1, for any \(A>1\), \[\mathbb{P}\left\{T(t)-t>A(\epsilon^{2}+t)\right\} =\mathbb{P}\Big{\{}\Pi(\theta t)>\theta\Big{(}t+A(\epsilon^{2}+t) \Big{)}\Big{\}}\] \[\leq\frac{\mathbb{E}\left[\exp\left(\alpha\Pi(\theta t)\right) \right]}{\exp\left(\alpha\theta\big{(}t+A(\epsilon^{2}+t)\big{)}\right)}\] \[=\exp\left(\theta t\big{(}e^{\alpha}-1\big{)}-\alpha\theta\big{(} t+A(\epsilon^{2}+t)\big{)}\right)\] \[\leq\exp\left(\theta t\big{(}e^{\alpha}-\alpha-1-\frac{A\alpha}{ 2}\big{)}-\frac{A\alpha}{2}\theta(\epsilon^{2}+t)\right).\] Now set \(\alpha=1/2\). Since \(A>1\), \(e^{\alpha}-\alpha-1-A\alpha/2<0\) and the result follows. 
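The Chernoff bound of Lemma 7.5 can also be compared directly against the exact Poisson tail. The following sketch is illustrative only; the particular values of \(\theta\), \(t\) and \(A\), and the scaling \(\epsilon=\theta^{-1/4}\) (which keeps \(\epsilon^{2}\theta\to\infty\)), are our own choices.

```python
# Check of the bound in Lemma 7.5: with Pi a rate-one Poisson process,
#   P{ Pi(theta*t) > theta*(t + A*(eps^2 + t)) } <= exp(-theta*A*(eps^2 + t)/4)   for A > 1.
from scipy.stats import poisson
import numpy as np

for theta in (10.0, 100.0, 1000.0):
    for t in (0.1, 1.0):
        for A in (1.5, 2.0):
            eps = theta ** (-0.25)                    # a scaling with eps^2 * theta -> infinity
            level = theta * (t + A * (eps**2 + t))
            tail = poisson.sf(level, mu=theta * t)    # P{ Pi(theta*t) > level }
            bound = np.exp(-theta * A * (eps**2 + t) / 4)
            assert tail <= bound
print("Poisson tail lies below the stated Chernoff bound for all tested parameters")
```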
\(\Box\) As advertised, we wish to control the difference between \(\psi_{t}^{\epsilon,x}(y)\) and \(p_{\epsilon^{2}+t}(x,y)\). **Lemma 7.6**: _In the notation of Lemma 7.1, there exists a \(C<\infty\) such that_ \[|\psi_{t}^{\epsilon,x}(y)-p_{\epsilon^{2}+t}(x,y)|\leq\frac{C}{(\epsilon^{2} \theta)^{1/2}}p_{6(\epsilon^{2}+t)}(x,y)+\frac{C}{(\epsilon^{2}+t)^{d/2}}\exp( -\epsilon^{2}\theta/2). \tag{7.20}\] **Proof** Still using the notation of Lemma 7.1, we partition into three events according to the value of \(\tau(t)\). Let \(A_{1}=\{\tau(t)<-(\epsilon^{2}+t)/2\}\), \(A_{2}=\{\tau(t)>2(\epsilon^{2}+t)\}\), and \(A_{3}\) the remaining event, \(\{-(\epsilon^{2}+t)/2\leq\tau(t)\leq 2(\epsilon^{2}+t)\}\). Then, \[|\psi_{t}^{\epsilon,x}(y)-p_{\epsilon^{2}+t}(x,y)| =\big{|}\mathbb{E}\left[p_{\epsilon^{2}+t+\tau(t)}(x,y)-p_{ \epsilon^{2}+t}(x,y)\right]\big{|}\] \[\leq\mathbb{E}\left[(1_{A_{1}}+1_{A_{2}}+1_{A_{3}})\left|p_{ \epsilon^{2}+t+\tau(t)}(x,y)-p_{\epsilon^{2}+t}(x,y)\right|\right].\] For the first term, note that if \(a<b\) then \[|p_{a}(x,y)-p_{b}(x,y)| =\frac{1}{(2\pi)^{d/2}}\left|\frac{1}{a^{d/2}}e^{-\|x-y\|^{2}/2a}- \frac{1}{b^{d/2}}e^{-\|x-y\|^{2}/2b}\right|\] \[=\frac{1}{(2\pi a^{2})^{d/2}}e^{-\|x-y\|^{2}/2b}\left|e^{-\|x-y\|^ {2}\left(\frac{1}{2a}-\frac{1}{2b}\right)}-\left(\frac{a}{b}\right)^{d/2}\right|\] \[\leq C\left(\frac{b}{a}\right)^{d/2}p_{b}(x,y),\] where the inequality follows because both terms under the absolute value are less than \(1\). Since, on the event \(A_{1}\), \(\tau(t)<0\), we can apply this with \(a=\epsilon^{2}+t+\tau(t)\) and \(b=\epsilon^{2}+t\), and, using the bound (7.6), \[\mathbb{E}\left[1_{A_{1}}|p_{a}(x,y)-p_{b}(x,y)|\right] \leq C\left(\frac{\epsilon^{2}+t}{\epsilon^{2}}\right)^{d/2}p_{ \epsilon^{2}+t}(x,y)\mathbb{P}\left\{\tau(t)<-\frac{\epsilon^{2}+t}{2}\right\}\] \[\leq C\frac{1}{\epsilon^{d}}\mathbb{P}\left\{\tau(t)<-\frac{ \epsilon^{2}+t}{2}\right\}\] \[\leq\frac{C}{(\epsilon^{2}+t)^{d/2}}\exp\left(-\frac{\theta \epsilon^{2}}{2}\right).\] For the third term, we will first collect some facts. Observe that on the event \(A_{3}\), \(\epsilon^{2}+t+\tau(t)\) is between \((\epsilon^{2}+t)/2\) and \(3(\epsilon^{2}+t)\), and for any \(s\) in this interval, \[p_{2s}(y) \leq\left(\frac{6(\epsilon^{2}+t)}{\epsilon^{2}+t}\right)^{d/2}p_ {6(\epsilon^{2}+t)}(x,y)\] \[=6^{d/2}p_{6(\epsilon^{2}+t)}(x,y). \tag{7.21}\] Moreover, since \(ue^{-u}\leq e^{-1}\) for all \(u\geq 0\), \[\frac{\|x-y\|^{2}}{s}p_{s}(x,y) =\frac{4}{(2\pi s)^{d/2}}e^{-\frac{\|x-y\|^{2}}{4s}}\frac{\|x-y\| ^{2}}{4s}e^{-\frac{\|x-y\|^{2}}{4s}}\] \[\leq Cp_{2s}(x,y). \tag{7.22}\] Now, by the Intermediate Value Theorem, \[\left|p_{\epsilon^{2}+t+\tau(t)}(x,y)-p_{\epsilon^{2}+t}(x,y)\right|=|\tau(t) |\left|\frac{\partial p_{s}(x,y)}{\partial s}\right| \tag{7.23}\] for some \(s\) between \(\epsilon^{2}+t+\tau(t)\) and \(\epsilon^{2}+t\). 
Since \[\partial_{s}p_{s}(x,y) =\partial_{s}\left(\frac{1}{(2\pi s)^{d/2}}\exp\left(-\frac{\|x-y \|^{2}}{2s}\right)\right)\] \[=-\frac{d}{2s}p_{s}(x,y)+\frac{\|x-y\|^{2}}{2s^{2}}p_{s}(x,y),\] applying the inequality (7.22), using the fact that \(p_{s}(x,y)\leq 2^{d/2}p_{2s}(x,y)\), and then (7.21), we have that for any \(s\in((\epsilon^{2}+t)/2,3(\epsilon^{2}+t))\), \[\left|\frac{\partial}{\partial s}p_{s}(x,y)\right|\leq\frac{C}{s}p_{2s}(x,y) \leq\frac{C}{\epsilon^{2}+t}p_{6(\epsilon^{2}+t)}(x,y).\] Therefore, recalling that \(\mathbb{E}[\tau(t)^{2}]=t/\theta\), substituting into (7.23), \[\mathbb{E}\left[1_{A_{3}}\left|p_{\epsilon^{2}+t+\tau(t)}(x,y)-p _{\epsilon^{2}+t}(x,y)\right|\right] \leq\frac{C}{\epsilon^{2}+t}p_{6(\epsilon^{2}+t)}(x,y)\mathbb{E} \left[|\tau(t)|\right]\] \[\leq\frac{C}{\epsilon^{2}+t}p_{6(\epsilon^{2}+t)}(x,y)\mathbb{E} \left[\tau(t)^{2}\right]^{1/2}\] \[=\left(\frac{Ct}{\theta(\epsilon^{2}+t)^{2}}\right)^{1/2}p_{6( \epsilon^{2}+t)}(x,y)\] \[\leq\frac{C}{\sqrt{\theta\epsilon^{2}}}p_{6(\epsilon^{2}+t)}(x,y),\] where the last inequality follows from \(2\epsilon^{2}t\leq(\epsilon^{2}+t)^{2}\). Finally, on the event \(A_{2}=\{\tau(t)>2(\epsilon^{2}+t)\}\), we simply use \[\left|p_{\epsilon^{2}+t+\tau(t)}(x,y)-p_{\epsilon^{2}+t}(x,y)\right|\leq\frac {C}{(\epsilon^{2}+t)^{d/2}},\] so that \[\mathbb{E}\left[1_{A_{2}}\left|p_{\epsilon^{2}+t+\tau(t)}(x,y)-p_{\epsilon^{2 }+t}(x,y)\right|\right]\leq\frac{C}{(\epsilon^{2}+t)^{d/2}}\mathbb{P}\left\{ \tau(t)>2(\epsilon^{2}+t)\right\},\] and apply Lemma 7.5 with \(A=2\). \(\square\) The last result will be useful when combined with the next bound for the heat kernel. **Lemma 7.7**: _Let \(s>0\), and \(x,y,z\in\mathbb{R}^{d}\). The following estimate holds:_ \[\left|p_{s}(x,z)-p_{s}(y,z)\right|\leq\frac{C\|x-y\|}{\sqrt{s}}\left(p_{2s}(x,z)+p_{2s}(y,z)\right),\] _where the constant \(C\) does not depend on \(x,y,z\) or \(s\)._ **Proof** Expanding the difference of two squares, \[e^{-\frac{\|y-z\|^{2}}{2s}}-e^{-\frac{\|x-z\|^{2}}{2s}}=\left(e^{-\frac{\|y-z \|^{2}}{4s}}-e^{-\frac{\|x-z\|^{2}}{4s}}\right)\left(e^{-\frac{\|y-z\|^{2}}{4 s}}+e^{-\frac{\|x-z\|^{2}}{4s}}\right).\] Now, thinking of the first term in brackets as a function of a single variable \(x\) on the line segment \([y,z]\) connecting \(y\) to \(z\), we can apply the Intermediate Value Theorem and take the modulus to bound this expression by \[\|y-x\|\left(\frac{2\|w-z\|}{4s}\exp\left(-\frac{\|w-z\|^{2}}{4s}\right)\right) (4\pi s)^{d/2}\left(p_{2s}(y,z)+p_{2s}(x,z)\right)\] for some \(w\in[y,z]\). Using the fact that \(xe^{-x^{2}}\) is uniformly bounded, we can bound the first bracket in the last equation by \(C/\sqrt{s}\), and the result follows. We now have the ingredients that we need to write down a continuity estimate for \(\rho_{\epsilon}*\eta\). We fix \(\delta>0\) and suppose that \(s>\delta\). Let us write \[\widehat{\epsilon}(\delta,\epsilon,\theta):=\frac{1}{(\epsilon^{2}+\delta)^{d /2}}e^{-\epsilon^{2}\theta/2},\] and note that under the assumption that \(\epsilon^{2}\theta\to\infty\), for each fixed \(\delta>0\), \(\lim_{\epsilon\to 0,\theta\to\infty}\widehat{\epsilon}(\delta,\epsilon, \theta)=0\). 
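The estimate of Lemma 7.7 is also easy to test numerically. Tracking the constants in the proof above gives the admissible (non-optimal) value \(C=2^{d/2}/\sqrt{2e}\); this explicit constant is our own bookkeeping and is not needed anywhere in the argument. The following sketch (illustrative only, with \(d=2\) and random configurations) confirms that the bound holds with this constant:

```python
# Check of Lemma 7.7:  |p_s(x,z) - p_s(y,z)| <= C * (|x - y|/sqrt(s)) * (p_{2s}(x,z) + p_{2s}(y,z)),
# with the explicit constant C = 2^{d/2} / sqrt(2e) obtained from the proof above.
import numpy as np

def p(t, x, y):
    d = x.shape[-1]
    return (2 * np.pi * t) ** (-d / 2) * np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * t))

rng = np.random.default_rng(1)
d = 2
C = 2 ** (d / 2) / np.sqrt(2 * np.e)
worst = 0.0
for _ in range(20000):
    s = rng.uniform(0.05, 5.0)
    x, y, z = rng.normal(scale=2.0, size=(3, d))
    lhs = abs(p(s, x, z) - p(s, y, z))
    rhs = C * np.linalg.norm(x - y) / np.sqrt(s) * (p(2 * s, x, z) + p(2 * s, y, z))
    if rhs > 0:
        worst = max(worst, lhs / rhs)
assert worst <= 1.0
print(f"largest observed ratio lhs/rhs = {worst:.3f}; the bound holds with margin")
```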
Using the semimartingale decomposition (7.4), and Lemma 7.6, we have \[|\rho_{\epsilon}*\eta_{s}(y)-\rho_{\epsilon}*\eta_{s}(w)|=|\langle p _{\epsilon^{2}}(y,z)-p_{\epsilon^{2}}(w,z),\eta_{s}(dz)\rangle|\] \[\qquad\leq\big{\langle}|p_{\epsilon^{2}+s}(y,z)-p_{\epsilon^{2}+s }(w,z)|,\eta_{0}(dz)\big{\rangle}\] \[\qquad\qquad\qquad+\int_{0}^{s-\delta}\big{\langle}|p_{s-r+ \epsilon^{2}}(y,z)-p_{s-r+\epsilon^{2}}(w,z)|||F(\rho_{\epsilon}*\eta_{r}(z) |,\eta_{r}(dz)\big{\rangle}dr\] \[\qquad\qquad+\Big{\langle}\frac{C}{(\theta\epsilon^{2})^{1/2}} \Big{(}p_{6(\epsilon^{2}+s)}(y,z)+p_{6(\epsilon^{2}+s)}(w,z)\Big{)}+C \widehat{\epsilon}(\delta,\epsilon,\theta),\eta_{0}(dz)\Big{\rangle}\] \[\qquad\qquad+\int_{0}^{s-\delta}\big{\langle}\Big{\{}\frac{C}{( \epsilon^{2}\theta)^{1/2}}|p_{6(s-r+\epsilon^{2})}(y,z)+p_{6(s-r+\epsilon^{2} )}(w,z)|+\widehat{\epsilon}(\delta,\epsilon,\theta)\Big{\}}|F(\rho_{\epsilon}* \eta_{r}(z)|,\eta_{r}(dz)\big{\rangle}dr\] \[\qquad\qquad+\int_{s-\delta}^{s}\big{\langle}|\psi_{s-r}^{ \epsilon,y}(z)+\psi_{s-r}^{\epsilon,w}(z)|||F(\rho_{\epsilon}*\eta_{r}(z)|, \eta_{r}(dz)\big{\rangle}dr\] \[\qquad\qquad+|M_{s}(y)|+|M_{s}(w)|\] \[\leq\Big{\langle}\frac{\|y-w\|}{\sqrt{s+\epsilon^{2}}}\big{(}p_{ 2(s+\epsilon^{2})}(y,z)+p_{2(s+\epsilon^{2})}(w,z)\big{)},\eta_{0}(dz)\Big{\rangle}\] \[\qquad\qquad+\int_{0}^{s-\delta}\Big{\langle}\frac{\|y-w\|}{ \sqrt{s-r+\epsilon^{2}}}\big{(}p_{2(s-t+\epsilon^{2})}(y,z)+p_{2(s-r+\epsilon ^{2})}(w,z)\big{)}|F(\rho_{\epsilon}*\eta_{r}(z)|,\eta_{r}(dz)\Big{\rangle}dr\] \[\qquad\qquad+\Big{\langle}\frac{C}{(\theta\epsilon^{2})^{1/2}} \Big{(}p_{6(\epsilon^{2}+s)}(y,z)+p_{6(\epsilon^{2}+s)}(w,z)\Big{)}+C\widehat {\epsilon}(\delta,\epsilon,\theta),\eta_{0}(dz)\Big{\rangle}\] \[\qquad\qquad+\int_{0}^{s-\delta}\big{\langle}\Big{\{}\frac{C}{( \epsilon^{2}\theta)^{1/2}}(p_{6(s-r+\epsilon^{2})}(y,z)+p_{6(s-r+\epsilon^{2} )}(w,z))+\widehat{\epsilon}(\delta,\epsilon,\theta)\Big{\}}|F(\rho_{\epsilon}* \eta_{r}(z)|,\eta_{r}(dz)\big{\rangle}dr\] \[\qquad\qquad+\int_{s-\delta}^{s}\big{\langle}|\psi_{s-r}^{ \epsilon,y}(z)+\psi_{s-r}^{\epsilon,w}(z)|||F(\rho_{\epsilon}*\eta_{r}(z)|, \eta_{r}(dz)\big{\rangle}dr\] \[\qquad\qquad+|M_{s}(y)|+|M_{s}(w)|. \tag{7.24}\] Although this expression is lengthy, we have successfully isolated the terms involving \(\|y-w\|\), which will control the regularity as we pass to the limit. Asymptotically, we don't expect the martingale terms to contribute, since their quadratic variation scales with \(\theta/(N\epsilon^{d})\); under the assumption that \(\epsilon^{2}\theta\to\infty\), for any fixed \(\delta>0\), the terms arising from approximating the transition density \(\psi_{s-r}^{\epsilon,\cdot}(\cdot)\) of the Gaussian walk by \(p_{s-r+\epsilon^{2}}(\cdot,\cdot)\) at times with \(s-r>\delta\) will tend to zero; and the moment bounds of Lemma 7.2 will allow us to control the integral over \([s-\delta,s]\). There is some technical work to be done to rigorously identify the limit points of \(\rho_{\epsilon}*\eta^{N}\), but it really amounts to applying the tower property and our moment bounds from Lemma 7.2 and Corollary 7.3. ### Identification of the limit We now turn to the identification of the limit points of the sequence \(\big{\{}\big{(}\rho_{\epsilon}*\eta^{N}_{t}(x)dx\big{)}_{t\geq 0}\big{\}}_{N \geq 1}\). We would like to show that any limit point solves (2.16) in the limit, i.e., \[\langle f(x),\varphi(t,x)dx\rangle=\int_{0}^{t}\Big{\langle}\frac{1}{2}\Delta f (x)+f(x)F(\varphi(s,x)),\varphi(s,x)dx\Big{\rangle}ds. 
\tag{7.25}\] Since \(\langle f,\rho_{\epsilon}*\eta^{N}_{t}(x)dx\rangle=\langle\rho_{\epsilon}*f(x ),\eta^{N}_{t}(dx)\rangle\), and the limit is deterministic, this will follow if we can show that each of the terms in the semimartingale decomposition (2.3), with the test function \(f\) replaced by \(\rho_{\epsilon}*f(\cdot)\), converges to the corresponding term in (7.25). The linear term is straightforward. Write \(T.\) for the heat semigroup, so that \(\rho_{\epsilon}*f(x)=T_{\epsilon^{2}}f(x)\). By a Taylor expansion, \[\int_{0}^{t}\big{\langle}\mathcal{L}^{\theta}T_{\epsilon^{2}}f, \eta^{N}_{s}(dx)\big{\rangle}ds =\int_{0}^{t}\big{\langle}\frac{1}{2}\Delta T_{\epsilon^{2}}f(x),\eta^{N}_{s}(dx)\big{\rangle}+\mathcal{O}\Big{(}\frac{1}{\theta}\Big{)}\] \[=\int_{0}^{t}\big{\langle}\frac{1}{2}\Delta f(x),\rho_{\epsilon}* \eta^{N}_{s}(x)dx\big{\rangle}ds+\mathcal{O}\Big{(}\frac{1}{\theta}\Big{)}.\] Thus, from weak convergence we can deduce that under our scaling, for any (weakly) convergent subsequence \(\{\rho_{\epsilon}*\eta^{N}(x)dx\}_{N\geq 1}\), \[\int_{0}^{t}\big{\langle}\mathcal{L}^{\theta}T_{\epsilon^{2}}f,\eta^{N}_{s}( dx)\big{\rangle}ds\to\int_{0}^{t}\big{\langle}\frac{1}{2}\Delta f(x),\varphi(s,x)dx \big{\rangle}ds.\] The nonlinear term in the semimartingale decomposition is more intricate. It takes the form \[\mathbb{E}\Big{[}\int_{0}^{t}\big{\langle}T_{\epsilon^{2}}f(y)F\big{(}\rho_{ \epsilon}*\eta_{s}(y)\big{)},\eta_{s}(dy)\big{\rangle}ds\Big{]}\] and we should like to show that this converges to \[\int_{0}^{t}\int f(y)F(\varphi(s,y))\varphi(s,y)dyds.\] We proceed in stages. First we should like to transfer the heat semigroup from \(T_{\epsilon^{2}}f\) onto \(\eta_{s}\). Since \(f\) is smooth, this will follow easily if we can show that \[\mathbb{E}\Big{[}\int_{0}^{t}\big{\langle}T_{\epsilon^{2}}f(y)F\big{(}\rho_{ \epsilon}*\eta_{s}(y)\big{)},\eta_{s}(dy)\big{\rangle}ds\Big{]}\sim\mathbb{E} \Big{[}\int_{0}^{t}\big{\langle}T_{\epsilon^{2}}f(y)F\big{(}\rho_{\epsilon}* \eta_{s}(y)\big{)},\rho_{\epsilon}*\eta_{s}(y)dy\big{\rangle}ds\Big{]}.\] This is the content of Proposition 7.8. **Proposition 7.8**: _Under the conditions of Theorem 2.20,_ \[\lim_{\epsilon\to 0}\mathbb{E}\Big{[}\Big{|}\int_{0}^{t}\big{\langle}T_{ \epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\eta_{s}(y)\big{\rangle}-\big{ \langle}T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\rho_{\epsilon}* \eta_{s}(y)dy\big{\rangle}ds\Big{|}\Big{]}=0. \tag{7.26}\] **Proof** In fact we are going to fix \(\delta>0\), with \(t>\delta\), and show that the expression on the left hand side of (7.26) is less than a constant times \(\delta\), with a constant independent of \(\delta\), \(N\), and \(\epsilon\). Since \(\delta\) is arbitrary, the result will follow. 
We first note that, \[\langle T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\rho_{ \epsilon}*\eta_{s}(y)dy\rangle-\langle T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}* \eta_{s}(dy)),\eta_{s}(dy)\rangle\] \[=\langle\int T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)) \rho_{\epsilon}(y-w)dy,\eta_{s}(dw)\rangle-\langle T_{\epsilon^{2}}f(w)F(\rho _{\epsilon}*\eta_{s}(w)),\eta_{s}(dw)\rangle\] \[=\langle\int\big{\{}T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s }(y))-T_{\epsilon^{2}}f(w)F(\rho_{\epsilon}*\eta_{s}(w))\big{\}}\rho_{ \epsilon}(w-y)dy,\eta_{s}(dw)\rangle.\] Let us denote the integral against \(dy\) in the last expression by \(I\), that is \[I:=\int\{T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y))-T_{\epsilon^{2}}f (w)F(\rho_{\epsilon}*\eta_{s}(w))\}\rho_{\epsilon}(w-y)dy,\] and note that \(|I|\) is bounded by \[\int\Big{\{}|F(\rho_{\epsilon}*\eta_{s}(y))-F(\rho_{\epsilon}* \eta_{s}(w))|T_{\epsilon^{2}}f(y)+F(\rho_{\epsilon}*\eta_{s}(w))|T_{\epsilon^ {2}}f(y)-T_{\epsilon^{2}}f(w))|\Big{\}}\rho_{\epsilon}(w-y)dy\] \[\leq\int\|f\|_{\infty}\big{|}F(\rho_{\epsilon}*\eta_{s}(y))-F( \rho_{\epsilon}*\eta_{s}(w))\big{|}\rho_{\epsilon}(w-y)dy+C\epsilon\|f^{ \prime}\|_{\infty}|F(\rho_{\epsilon}*\eta_{s}(w))|, \tag{7.27}\] where we have used that \[\int|T_{\epsilon^{2}}f(y)-T_{\epsilon^{2}}f(w)|p_{\epsilon^{2}}(w,y)dy\leq \|f^{\prime}\|_{\infty}\int|y-w|p_{\epsilon^{2}}(w,y)dy.\] Now recall that \(F\) is a polynomial of degree \(n\), and so there exist real numbers \(b_{k}\) such that \(F(a)-F(b)=(a-b)\sum_{k=1}^{n-1}b_{k}a^{k}b^{n-1-k}\) and so \[|F(\rho_{\epsilon}*\eta_{s}(y))-F(\rho_{\epsilon}*\eta_{s}(w))|\leq|\rho_{ \epsilon}*\eta_{s}(y)-\rho_{\epsilon}*\eta_{s}(w)|\sum_{k=1}^{n-1}|b_{k}|\left( \rho_{\epsilon}*\eta_{s}(y)^{n-1}+\rho_{\epsilon}*\eta_{s}(w)^{n-1}\right).\] Combining the above, we have reduced the problem to showing that for any \(k\geq 0\), \[\lim_{\epsilon\to 0}\mathbb{E}\Big{[}\int_{0}^{t}\big{\langle}\int|\rho_{ \epsilon}*\eta_{s}(y)-\rho_{\epsilon}*\eta_{s}(w)|\left(\rho_{\epsilon}*\eta_ {s}(y)\right)^{k}+\rho_{\epsilon}*\eta_{s}(w)^{k}\big{)}\,p_{\epsilon^{2}}(w, y)dy,\eta_{s}(dw)\big{\rangle}ds\Big{]}=0. \tag{7.28}\] We are going to use the estimate (7.24). First note that by Lemma 7.2 the contribution to (7.28) from the integral over the time interval \([0,\delta]\) is \(\mathcal{O}(\delta)\). We focus instead on the interval \((\delta,t]\). The first term in (7.24) gives \[\int_{\delta}^{t}\mathbb{E}\Big{[}\Big{\langle}\int\big{\langle} \frac{\|y-w\|}{\sqrt{s+\epsilon^{2}}}\big{(}p_{2(s+\epsilon^{2})}(y,z)+p_{2(s+ \epsilon^{2})}(w,z)\big{)},\eta_{0}(dz)\big{\rangle}\\ \big{(}\rho_{\epsilon}*\eta_{s}(y)^{k}+\rho_{\epsilon}*\eta_{s}(w )^{k}\big{)}p_{\epsilon^{2}}(w,y)dy,\eta_{s}(dw)\Big{\rangle}\Big{]}ds.\] We "borrow" from the exponential term to see that \(\|y-w\|p_{\epsilon^{2}}(w,y)\leq C\epsilon p_{2\epsilon^{2}}(w,y)\) and so bound this by \[C\int_{\delta}^{t}\frac{\epsilon}{\sqrt{s+\epsilon^{2}}} \mathbb{E}\Big{[}\Big{\langle}\int\big{\langle}\big{(}p_{2(s+\epsilon^{2})}(y,z)+p_{2(s+\epsilon^{2})}(w,z)\big{)},\eta_{0}(dz)\big{\rangle}\\ \big{(}\rho_{\epsilon}*\eta_{s}(y)^{k}+\rho_{\epsilon}*\eta_{s}(w )^{k}\big{)}p_{2\epsilon^{2}}(w,y)dy,\eta_{s}(dw)\Big{\rangle}ds\Big{]}. \tag{7.29}\] The four terms in the product are taken separately, according to the combinations of \(w\) and \(y\) appearing. 
First, \[\int_{\delta}^{t}\frac{\epsilon}{\sqrt{s+\epsilon^{2}}}\mathbb{E}\Big{[} \Big{\langle}\int\big{\langle}p_{2(s+\epsilon^{2})}(y,z),\eta_{0}(dz)\big{ \rangle}\rho_{\epsilon}*\eta_{s}(y)^{k}p_{2\epsilon^{2}}(w,y)dy,\eta_{s}(dw) \Big{\rangle}\Big{]}ds\] can be rewritten as \[\int_{\delta}^{t}\frac{\epsilon}{\sqrt{s+\epsilon^{2}}} \mathbb{E}\Big{[}\int\int\big{\langle}p_{2(s+\epsilon^{2})}(y,z),\eta_{0}(dz) \big{\rangle}\rho_{\epsilon}*\eta_{s}(y)^{k}p_{\epsilon^{2}}(x,y)\rho_{ \epsilon}*\eta_{s}(x)dydxds\Big{]}\\ \leq\int_{\delta}^{t}\frac{\epsilon}{\sqrt{s+\epsilon^{2}}} \mathbb{E}\Big{[}\int\int\big{\langle}p_{2(s+\epsilon^{2})}(y,z),\eta_{0}(dz) \big{\rangle}\big{(}\rho_{\epsilon}*\eta_{s}(y)^{k+1}+\rho_{\epsilon}*\eta_{s} (x)^{k+1}\big{)}p_{\epsilon^{2}}(x,y)dydx\Big{]}ds,\] and using Lemma 7.2 and the tower property, and integrating with respect to \(s\), under our assumptions on \(\eta_{0}\), this is bounded by \[C\epsilon\int_{\delta}^{t}\frac{1}{\sqrt{s+\epsilon^{2}}} \mathbb{E}\Big{[}\int\int\big{\langle}p_{2(s+\epsilon^{2})}(y,z),\eta_{0}(dz) \big{\rangle}\\ \big{(}\rho_{\epsilon}*\eta_{0}(y)+\rho_{\epsilon}*\eta_{0}(y)^{ k+1}+\rho_{\epsilon}*\eta_{0}(x)+\rho_{\epsilon}*\eta_{0}(x)^{k+1}\big{)}p_{ \epsilon^{2}}(x,y)dydx\Big{]}ds\\ \leq C^{\prime}\epsilon\int_{\delta}^{t}\frac{1}{(s+\epsilon^{2} )^{d/2}}ds.\] For fixed \(\delta\), this bound tends to zero as \(\epsilon\to 0\). The term involving \(\big{\langle}p_{2(s+\epsilon^{2})}(w,z),\eta_{0}(dz)\big{\rangle}\rho_{ \epsilon}*\eta_{s}(w)^{k}\) is handled similarly. On the other hand \[\big{\langle}\int\langle p_{2(s+\epsilon^{2})}(y,z),\eta_{0}(dz) \rangle\rho_{\epsilon}*\eta_{s}(w)^{k}p_{2\epsilon^{2}}(w,y)dy,\eta_{s}(dw) \big{\rangle}\\ \leq\frac{C}{(s+\epsilon^{2})^{d/2}}\langle 1,\eta_{0} \rangle\big{\langle}\rho_{\epsilon}*\eta_{s}(w)^{k},\eta_{s}(dw)\big{\rangle},\] and since \(\langle 1,\eta_{0}\rangle\) is uniformly bounded we apply Corollary 7.3 to obtain a bound on the contribution to (7.29) from this term of the same form as the others. Now consider the contribution to the left hand side of (7.28) from the second term in (7.24). Since \(F\) is a polynomial, it is bounded by a sum of terms of the form \[\int_{\delta}^{t}\int_{0}^{s-\delta}\Big{\langle}\int\frac{\|y-w \|}{\sqrt{s-r+\epsilon^{2}}}\big{\langle}\big{(}p_{2(s-r+\epsilon^{2})}(y,z)+p_ {2(s-r+\epsilon^{2})}(w,z)\big{)}\rho_{\epsilon}*\eta_{r}(z)^{j},\eta_{r}(dz) \big{\rangle}\\ \rho_{\epsilon}*\eta_{s}(y)^{k}p_{\epsilon^{2}}(y,w)dy,\eta_{s}( dw)\Big{\rangle}drds\\ \leq C\epsilon\int_{\delta}^{t}\int_{0}^{s-\delta}\frac{1}{\sqrt{ s-r+\epsilon^{2}}}\Big{\langle}\int\big{\langle}\big{(}p_{2(s-r+\epsilon^{2})}(y,z)+p_ {2(s-r+\epsilon^{2})}(w,z)\big{)}\rho_{\epsilon}*\eta_{r}(z)^{j},\eta_{r}(dz) \big{\rangle}\\ \rho_{\epsilon}*\eta_{s}(y)^{k}p_{2\epsilon^{2}}(y,w)dy,\eta_{s }(dw)\Big{\rangle}drds,\] where as usual we have "borrowed" from the exponential term in \(p_{\epsilon}^{2}(y,w)\) to replace \(\|y-w\|\) by a constant times \(\epsilon\). Once again, our approach is to rearrange terms so that we can apply Lemma 7.2 or Corollary 7.3 to obtain a bound on the contribution to (7.28) from these terms of the form \(C\epsilon\) (where \(C\) may depend on \(\delta\) but not \(\epsilon\)). 
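Several of the bounds above "borrow" from the exponential in \(p_{\epsilon^{2}}(y,w)\) to replace \(\|y-w\|\) by a constant multiple of \(\epsilon\). A short symbolic sketch (illustrative only; the explicit constant it produces is our own and plays no role in the argument) recovers an admissible constant:

```python
# After cancelling the Gaussian normalisations, the ratio
#   |y - w| p_{eps^2}(y, w) / (eps * p_{2 eps^2}(y, w)) = 2**(d/2) * g(r),
# with r = |y - w| and g(r) = (r/eps) * exp(-r**2 / (4*eps**2)),
# so it suffices to maximise g over r > 0.
import sympy as sp

r, eps = sp.symbols('r epsilon', positive=True)
g = (r / eps) * sp.exp(-r**2 / (4 * eps**2))
rstar = [s for s in sp.solve(sp.diff(g, r), r) if s.is_positive][0]   # r = sqrt(2)*eps
gmax = sp.simplify(g.subs(r, rstar))                                  # sqrt(2/e)
print("maximiser r =", rstar, ", maximum value =", gmax)
assert abs(float(gmax) - (2 / float(sp.E)) ** 0.5) < 1e-12
# hence |y - w| p_{eps^2}(y, w) <= 2**(d/2) * sqrt(2/e) * eps * p_{2 eps^2}(y, w)
```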
For example, using the Chapman-Kolmogorov equation to rewrite \[\int\Big{\langle}\big{\langle}p_{2(s-r+\epsilon^{2})}(y,z)\rho_{\epsilon}* \eta_{r}(z)^{j},\eta_{r}(dz)\big{\rangle}\rho_{\epsilon}*\eta_{s}(y)^{k}p_{2 \epsilon^{2}}(y,w),\eta_{s}(dw)\Big{\rangle}dy\] as \[\int\int\big{\langle}p_{2(s-r+\epsilon^{2})}(y,z)\rho_{\epsilon}*\eta_{r}(z)^{ j},\eta_{r}(dz)\big{\rangle}\rho_{\epsilon}*\eta_{s}(y)^{k}p_{\epsilon^{2}}(y,x) \rho_{\epsilon}*\eta_{s}(x)dxdy,\] and using Lemma 7.2 and the tower property, we are led to control terms of the form \[\mathbb{E}\Big{[}\int\big{\langle}p_{2(s-r+\epsilon^{2})}(y,z)\rho_{\epsilon}* \eta_{r}(z)^{j},\eta_{r}(dz)\big{\rangle}\rho_{\epsilon}*\eta_{r}(y)^{k+1}dy \Big{]}.\] This, in turn, is at most \[\mathbb{E}\Big{[}\big{\langle}\rho_{\epsilon}*\eta_{r}(z)^{j+k+1 },\eta_{r}(dz)\big{\rangle}\Big{]}+\mathbb{E}\Big{[}\int\int p_{2(s-r)}(y,x) \rho_{\epsilon}*\eta_{r}(x)\rho_{\epsilon}*\eta_{r}(y)^{j+k+1}dydx\Big{]}\\ \leq\mathbb{E}\Big{[}\big{\langle}\rho_{\epsilon}*\eta_{r}(z)^{j +k+1},\eta_{r}(dz)\big{\rangle}\Big{]}+2\int\mathbb{E}\Big{[}\rho_{\epsilon}* \eta_{r}(x)^{j+k+2}\Big{]}dx,\] which is bounded by Lemma 7.2. We now turn to the contribution arising from the martingale terms in (7.24): \[\mathbb{E}\Big{[}\int_{\delta}^{t}\Big{\langle}\int\big{(}|M_{s}(y)|+|M_{s}(w) |\big{)}\big{(}\rho_{\epsilon}*\eta_{s}(y)^{k}+\rho_{\epsilon}*\eta_{s}(w)^{k }\big{)}p_{\epsilon^{2}}(w,y)dy,\eta_{s}(dw)\Big{\rangle}ds\Big{]}.\] Since \(\psi_{t-s}^{\epsilon,x}(y)=\mathbb{E}[p_{T(t-s)+\epsilon^{2}}(x,y)]\), rearranging (7.4) we see that we can pull a convolution with \(p_{\epsilon^{2}/2}\) out of our expressions for \(M_{s}(y)\) and \(M_{s}(w)\) and so all the manipulations that we used to control terms above with still be valid. To deal with the two terms in the product involving \(|M_{s}(y)|\), we write the first as \(\int|M_{s}(y)|\rho_{\epsilon}*\eta_{s}(y)^{k+1}dy\) and then use Holder's inequality, Lemma 7.2, and the fact that \(\mathbb{E}\big{[}|M_{s}(y)|^{2}\big{]}\) is \(\mathcal{O}\big{(}\theta/(N\epsilon^{d})\big{)}\) to see that the contribution from this term tends to zero in the limit. For the second, we use the idea of the proof of Corollary 7.3 to reduce to a form to which we can apply Holder's inequality. Control of the terms arising from approximating \(\psi^{\epsilon,x}\) by the heat kernel follows in an entirely analogous way. Combining the above, we see that given \(\delta>0\), \[\lim_{\epsilon\to 0}\mathbb{E}\Big{[}\Big{|}\int_{0}^{t}\big{\langle}T_{ \epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\eta_{s}(y)\big{\rangle}ds- \int_{0}^{t}\big{\langle}T_{\epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)), \rho_{\epsilon}*\eta_{s}(y)dy\big{\rangle}ds\Big{|}\Big{]}<C\delta,\] where the constant \(C\) is independent of \(\delta\). Since \(\delta\) was arbitrary, the proof is complete. \(\Box\) Since \(f\) is smooth, \(T_{\epsilon}^{2}f-f\) is \(\mathcal{O}(\epsilon)\), with an application of the triangle inequality, \[\lim_{\epsilon\to 0}\mathbb{E}\Big{[}\Big{|}\int_{0}^{t}\big{\langle}T_{ \epsilon^{2}}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\eta_{s}(y)\big{\rangle}ds- \int_{0}^{t}\big{\langle}f(y)F(\rho_{\epsilon}*\eta_{s}(y)),\rho_{\epsilon}* \eta_{s}(y)dy\big{\rangle}ds\Big{|}\Big{]}<C\delta,\] now follows immediately. 
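The last step uses only that \(T_{\epsilon^{2}}f-f\) is small for smooth \(f\). As a purely numerical illustration (not part of the argument; the one-dimensional setting and the test function \(f(x)=\cos x\), for which the Gaussian smoothing acts explicitly as \(T_{\epsilon^{2}}\cos=e^{-\epsilon^{2}/2}\cos\), are our own convenient choices):

```python
# Numerical illustration that ||T_{eps^2} f - f|| is O(eps^2), hence certainly O(eps).
import numpy as np

def smooth_with_gaussian(f_vals, x, eps):
    """Numerical convolution of f with the centred Gaussian of variance eps**2."""
    dx = x[1] - x[0]
    kernel = np.exp(-x**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)
    return np.convolve(f_vals, kernel, mode="same") * dx

x = np.linspace(-20, 20, 8001)
f = np.cos(x)
mid = slice(3000, 5001)              # keep away from the edge of the finite grid
for eps in (0.4, 0.2, 0.1, 0.05):
    err = np.max(np.abs(smooth_with_gaussian(f, x, eps)[mid] - f[mid]))
    exact = 1 - np.exp(-eps**2 / 2)
    print(f"eps = {eps:4.2f}   ||T_(eps^2)f - f|| ~ {err:.2e}   (exact value: {exact:.2e})")
```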
Thus to complete the characterisation of the limit, it remains to show that if we take a convergent subsequence \(\big{\{}\big{(}\rho_{\epsilon}*\eta_{t}^{N}(dx)\big{)}_{t\geq 0}\big{\}}\) converging to a limit point \(\big{(}\varphi(t,x)dx\big{)}_{t\geq 0}\), then \[\int_{0}^{t}\int f(x)\rho_{\epsilon}*\eta_{s}(x)F(\rho_{\epsilon}*\eta_{s}^{N }(x))dxds\to\int_{0}^{t}\int f(x)\varphi(s,x)F(\varphi(s,x))dxds.\] Since \(F\) is a polynomial, we consider powers of \(\rho_{\epsilon}*\eta\). To illustrate the approach, we first prove that \[\int_{0}^{t}\int f(x)\rho_{\epsilon}*\eta_{s}^{N}(x)^{2}dxds\to\int_{0}^{t} \int f(x)\varphi(s,x)^{2}dxds. \tag{7.30}\] The convergence of higher powers will follow in an entirely analogous manner, but with more complex expressions. The approach is standard. We fix \(\tau>0\) and, in keeping with our notation \(\rho_{\epsilon}\), in this subsection, use \(\rho_{\tau}\) to denote the symmetric Gaussian kernel with variance parameter \(\tau^{2}\). Our strategy is to show that, up to an error that tends to zero as \(\tau\to 0\), \[\int_{0}^{t}\int f(z)\rho_{\epsilon}*\eta_{s}(z)^{2}dzds\sim\int_{0}^{t}\int \int f(z)(\rho_{\epsilon}*\eta_{s})(z)\rho_{\tau}(z-y)(\rho_{\epsilon}*\eta_{ s})(y)dzdyds. \tag{7.31}\] Analogously, also up to an error that vanishes as \(\tau\to 0\), \[\int_{0}^{t}\int f(z)\varphi(s,z)^{2}dzds\sim\int_{0}^{t}\int\int f(z)\varphi (s,z)\rho_{\tau}(z-y)\varphi(s,y)dzdyds. \tag{7.32}\] On the other hand, weak convergence of \(\rho_{\epsilon}*\eta\) (plus continuity of the mapping \((z,y)\to f(z)\rho_{\tau}(z-y)\)) gives that \[\int_{0}^{t}\int\int f(z)(\rho_{\epsilon}*\eta_{s})(z)\rho_{\tau}(z-y)(\rho_{ \epsilon}*\eta_{s})(y)dzdyds\to\int_{0}^{t}\int\int f(z)\varphi(s,z)\rho_{\tau} (z-y)\varphi(s,y)dzdyds. \tag{7.33}\] Since \(\tau\) is arbitrary, the convergence (7.30) will follow. **Proposition 7.9**: _Under the conditions of Theorem 2.20, we have that along any convergent subsequence,_ \[\limsup_{\epsilon\to 0}\mathbb{E}\Big{[}\Big{|}\int_{0}^{t}\int f (y)\rho_{\epsilon}*\eta_{s}(y)^{2}dyds\\ -\int_{0}^{t}\int\int f(z)(\rho_{\epsilon}*\eta_{s})(z)\rho_{ \tau}(z-y)(\rho_{\epsilon}*\eta_{s})(y)dzdyds\Big{|}\Big{]}\leq C\tau, \tag{7.34}\] _where \(C\) is independent of \(\tau\)._ **Proof** First note, \[\int_{0}^{t}\mathbb{E}\left[\left|\langle f(y),(\rho_{\epsilon}* \eta_{s}(y))^{2}\rangle dy\right\rangle-\int\int f(y)(\rho_{\epsilon}*\eta_{s })(z)\rho_{\tau}(z-y)(\rho_{\epsilon}*\eta_{s})(y)dzdy\right|\right]ds\\ \leq\|f\|_{\infty}\int\int_{0}^{t}\mathbb{E}\left[\int\left\{ \left|(\rho_{\epsilon}*\eta_{s})(y)-(\rho_{\epsilon}*\eta_{s})(z)\right|\rho _{\tau}(z-y)dz\right\}(\rho_{\epsilon}*\eta_{s})(y)\right]dsdy. \tag{7.35}\] Now proceed exactly as in the proof of Proposition 7.8. The only distinction is that \(|p_{\epsilon^{2}}(y,z)-p_{\epsilon^{2}}(w,z)|\) is replaced by \(|p_{\tau}(y,z)-p_{\tau}(w,z)|\) and the estimate \(\|y-w\|p_{\tau^{2}}(y,w)\leq C\tau p_{2\tau^{2}}(y,w)\) replaces the corresponding statement with \(\epsilon^{2}\) replacing \(\tau^{2}\) in our previous argument. \(\square\) The extension of Proposition 7.9 to higher moments is straightforward, if notationally messy. 
For fixed (but arbitrary) \(\tau\), one shows that \[\limsup_{\epsilon\to 0}\mathbb{E}\Big{[}\Big{|}\int_{0}^{t}\int f(y)\rho_{\epsilon}*\eta_{s}^{N}(y)^{k}dyds\\ -\int_{0}^{t}\int\cdots\int f(y_{1})\rho_{\epsilon}*\eta_{s}^{N}(y_{1})\prod_{i=2}^{k}\rho_{\tau}(y_{i}-y_{i-1})\rho_{\epsilon}*\eta_{s}^{N}(y_{i})dy_{k}\ldots dy_{1}ds\Big{|}\Big{]}\leq C\tau,\] as well as a corresponding statement with \(\rho_{\epsilon}*\eta_{s}^{N}(x)\) replaced by \(\varphi(s,x)\), and then use weak convergence to see that, up to an error of order \(\tau\), any limit point of the sequence \(\{\rho_{\epsilon}*\eta^{N}(x)dx\}\) solves (the weak form of) equation (2.16). Since \(\tau\) was arbitrary, the proof of Theorem 2.20 is complete.

## Proofs of results for the lookdown process and ancestral lineages

Now we turn to results about the lookdown process, first establishing the basic connection between the population process \(\eta^{N}\) and the lookdown process \(\xi^{N}\), Proposition 5.3, and then in the next section, convergence of the lookdown process itself.

**Proof** [of Proposition 5.3] This proposition is the content of the Markov Mapping Theorem, reproduced from Etheridge and Kurtz (2019) as Theorem A.1, applied to our situation. The function \(\gamma\) of that theorem is what we have called \(\kappa\) above, and the kernel \(\alpha\) of that theorem is the transition function that assigns levels uniformly on \([0,N]\) (in the first case) or as a Poisson process with Lebesgue intensity (in the limiting case). We need a continuous \(\psi^{N}(\xi)\geq 1\) such that \(|A^{N}f(\xi)|\leq c_{f}\psi^{N}(\xi)\) for all \(f\) in the domain of \(A^{N}\) (and similarly a function \(\psi\) for \(A\)). We also need that applying the lookdown generator to a function and averaging over levels is equivalent to applying the population process generator to the function whose dependence on levels has been averaged out, a condition which we precisely state, and verify, in Lemmas A.2 and A.3 of the Appendix. For finite \(N\), taking \(f(\xi)\) of the form (5.6), we can use \(\psi^{N}(\xi)=\langle C(1+u|F(x,\eta)|),\xi(dx,du)\rangle\) for an appropriate constant \(C\). For the scaling limit, recall that the test functions \(f\) are of the form \(f(\xi)=\prod_{(x,u)\in\xi}g(x,u)\) with \(g(x,u)=1\) for \(u\geq u_{0}\), and consulting (5.11), we see that most terms in \(Af(\xi)\) can be bounded as above by constant multiples of \(\langle 1,\eta\rangle\). However, the term involving \(F\) is, as usual, more troublesome. Since \(0\leq f(\xi)/g(x,u)\leq 1\) for any \((x,u)\in\xi\), \[\big{|}f(\xi)\sum_{(x,u)\in\xi}F(x,\eta)u\frac{\partial_{u}g(x,u)}{g(x,u)}\big{|}\leq\|\partial_{u}g\|_{\infty}\sum_{(x,u)\in\xi}|F(x,\eta)u\mathbf{1}_{u\leq u_{0}}|\] \[\leq\|\partial_{u}g\|_{\infty}e^{u_{0}}\sum_{(x,u)\in\xi}|F(x,\eta)ue^{-u}|.\] The first line would be just what we want, except that \(\psi(\xi)\) cannot depend on \(f\), and hence neither on \(u_{0}\). So, the second line provides us with the required bound: we absorb \(\|\partial_{u}g\|_{\infty}e^{u_{0}}\) into \(c_{f}\) and take \(\psi(\xi)=1+\langle 1+F(x,\eta)ue^{-u},\xi(dx,du)\rangle\). \(\Box\)

### Tightness of the Lookdown Process

Now we turn to the main theorem on convergence of the lookdown process, Theorem 5.4, whose proof follows a similar pattern to that of convergence for the population processes in Section 6.2. We first give a description of the lookdown process \(\xi^{N}\) in terms of the lines of descent introduced in Section 5.2.
Each line of descent gives birth to lines at higher levels at rate \(2(N-u)c_{\theta}(x,\eta)\), and each such new line chooses a level uniformly from \([u,N]\), a spatial location \(y\) from the kernel \[q^{m}(x,dy,\eta)=r(y,\eta)q(x,dy)/\int_{\mathbb{R}^{d}}r(y,\eta)q(x,dy), \tag{8.1}\] and the two lines swap spatial locations with probability \(1/2\); the level of each line of descent evolves according to equation (5.21). It is evident from the description of the process (or, by differentiating in Definition 5.1) that \[\begin{split}\left\langle f,\xi_{t}^{N}\right\rangle& =\left\langle f,\xi_{0}^{N}\right\rangle+M_{t}^{f}\\ &\qquad+\int_{0}^{t}\bigg{\langle}c_{\theta}(x,\eta_{s}^{N}) \int_{u}^{N}\int_{\mathbb{R}^{d}}\left(f(y,u_{1})+f(x,u_{1})+f(y,u)-f(x,u) \right)q^{m}(x,dy,\eta_{s}^{N})du_{1}\\ &\qquad\qquad\qquad\qquad+\left(c_{\theta}(x,\eta_{s}^{N})u^{2}- b_{\theta}(x,\eta_{s}^{N})u\right)\frac{d}{du}f(x,u),\xi_{s}^{N}(dx,du) \bigg{\rangle}ds,\end{split} \tag{8.2}\] where \(M^{f}\) is a martingale with angle bracket process \[\begin{split}\left\langle M^{f}\right\rangle_{t}& =\int_{0}^{t}\bigg{\langle}c_{\theta}(x,\eta_{s}^{N})\int_{u}^{N} \int_{\mathbb{R}^{d}}\big{[}f(y,u_{1})^{2}\\ &\qquad\qquad+\left(f(x,u_{1})+f(y,u)-f(x,u)\right)^{2}\big{]}du _{1}q^{m}(x,dy,\eta_{s}^{N}),\xi_{s}^{N}(dx,du)\bigg{\rangle}ds.\end{split} \tag{8.3}\] **Remark 8.1**: _In addition to tightness of the measure-values processes \(\xi^{N}\), the bounds used in the proofs below also imply tightness of the number of lines of descent and the number of births below a fixed level, and of the motion of individual lines of descent. In other words, the limiting "line of descent" construction of Section 5.2 holds._ **Proof** [Proof of Theorem 5.4] As in Section 6.2, the theorem will follow from tightness and characterization of the limit points. This time, the processes \(\xi^{N}\) take values in \(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\), the space of locally finite measures on space \(\times\) levels. (They will in fact be point measures, including the limit, but that is a consequence of this theorem.) Again, tightness follows from a compact containment condition, tightness of one-dimensional distributions, and an application of Ethier and Kurtz (1986) Theorem 3.9.1. Lines of descent can escape to infinite level in finite time, and so we endow \(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\) with the vague topology "in the level coordinate", induced by test functions on \(\overline{\mathbb{R}^{d}}\times[0,\infty)\) of the form \(g(x)h(u)\), where \(g\in C_{b}(\overline{\mathbb{R}^{d}})\) is bounded and continuous and \(h\in C_{c}([0,\infty))\) is compactly supported (following, e.g., Etheridge and Kurtz (2019), Condition 2.1). In several places below we require a dense subset of \(C_{b}(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty)))\), the bounded, continuous functions on \(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\). The functions \(\xi\mapsto\left\langle f,\xi\right\rangle\) do not form not a dense subset of \(C_{b}(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty)))\), but they do separate points and vanish nowhere, i.e., for any \(\xi_{1}\) and \(\xi_{2}\) there is an \(f\) with \(\left\langle f,\xi_{1}\right\rangle\neq\left\langle f,\xi_{2}\right\rangle\), and a \(g\) such that \(\left\langle g,\xi_{1}\right\rangle\neq 0\). 
Therefore, by the Stone-Weierstrass theorem, the algebra they generate is dense in \(C_{b}(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty)))\) (with respect to uniform convergence on compact subsets). Topologized in this way, the space \(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\) is completely metrizable, and we may choose a countable set of nonnegative \(f_{k}\), each supported on \(\mathbb{R}^{d}\times[0,u_{k}]\) for some \(u_{k}<\infty\), such that a subset \(K\subset\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\) is relatively compact if and only if \(\sup_{\xi\in K}\langle f_{k},\xi\rangle<\infty\) for each \(k\). (To see this, use Theorem A.2.3 of Kallenberg [1997] and the first argument made in the proof of Lemma B.3.) Below, Lemma 8.4 proves exactly this, and therefore compact containment. Here we have compactified \(\mathbb{R}^{d}\) for convenience (since it turned out to be straightforward to show that mass does not escape to infinity in space); however, we need to use the vague topology "in the level direction" because _levels_ may escape to infinity in finite time in the limit. In order to apply Ethier and Kurtz [1986] Theorem 3.9.1 we require that \(\{(F(\underline{\xi}_{t}^{N}))_{t\geq 0}\}_{N}\) is tight as a sequence of real-valued cadlag processes, for all \(F\) in a subset of \(C_{b}(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty)))\) that is dense with respect to uniform convergence on compact subsets. Lemma 8.5 shows that \(\{\langle f,\xi_{t}^{N}\rangle\}_{N}\) is a tight sequence for any \(f:\overline{\mathbb{R}^{d}}\times[0,\infty)\to\mathbb{R}\) with compact support in the level direction. Since as above the algebra generated by the functions \(\xi\mapsto\langle f,\xi\rangle\) is dense in \(C_{b}(\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty)))\), it suffices to show that tightness for the processes \(\langle f,\xi_{t}^{N}\rangle\) extends to finite sums and products of these processes, which is shown in Lemma B.3. The fact that martingale properties are preserved under passage to the limit is straightforward, and can be proved in a way analogous to Lemma 6.6; we omit the proof. Finally, we must show that the limiting lookdown process \(\xi\) projects to the limiting process \(\eta\), i.e., a solution of the martingale problem in Theorem 2.10. Let \(N_{k}\to\infty\) be a sequence along which \(\xi^{N_{k}}\) converges. By Theorem 2.10, there is a subsequence \(N_{k(j)}\) along which the projected population processes \(\eta^{N_{k(j)}}\) converge, and the limit solves the martingale problem. Thus any limit point of \(\xi^{N}\) projects to a population process \(\eta\) solving the martingale problem of Theorem 2.10. \(\Box\) What we need for compact containment will come from the following Lemma. The generality is unimportant; for concreteness one may take \(h(u)=e^{-u}\). **Lemma 8.2**: _Let \(h\) be a positive, continuous, nonincreasing, differentiable function on \([0,\infty)\) such that \(\int_{0}^{\infty}\int_{u}^{\infty}h(v)dvdu\), \(\int_{0}^{\infty}u^{2}|h^{\prime}(u)|du\), and \(\int_{0}^{\infty}h(u)^{2}du\) are all finite. Suppose that Assumptions 2.8 hold, and that \(\theta/N\to\alpha\) and \(\xi_{0}^{N}\to\xi_{0}\) weakly as \(N\to\infty\), where each \(\xi_{0}^{N}\) is conditionally uniform given \(\eta_{0}^{N}\) in the sense of (5.12) and \(\xi_{0}\) is conditionally Poisson given \(\eta_{0}\) in the sense of (5.13)._
Then for any \(T\) there exists a constant \(K(T)\) such that for all \(M>0\),_ \[\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle h,\xi_{t}^{N }\rangle>M\right\}<\frac{K(T)}{M}.\] We postpone the proof of this Lemma until we have shown how it yields compact containment. First, we show that this implies compact containment of the processes \((\langle f,\xi_{t}^{N}\rangle)_{0\leq t\leq T}\) for arbitrary compactly supported \(f\). **Lemma 8.3**: _Suppose \(f\in C(\overline{\mathbb{R}}^{d}\times[0,\infty))\) and there is a \(u_{f}\) such that if \(u\geq u_{f}\) then \(\sup_{x}f(x,u)=0\). Under the assumptions of Lemma 8.2, for any \(T\) there exists a constant \(K(f,T)\) such that for all \(M>0\),_ \[\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle f,\xi_{t}^{N} \rangle>M\right\}<\frac{K(f,T)}{M}.\] **Proof** [of Lemma 8.3] Let \(h\) be as in Lemma 8.2, so there is a \(c_{f}<\infty\) such that \(f(x,u)\leq c_{f}h(u)\) for all \(x\) and \(u\). Therefore, \(\langle f,\xi\rangle\leq\langle h,\xi\rangle\), and so by Lemma 8.2, \[\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle f,\xi_{t}^{N} \rangle>M\right\}\leq\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{0\leq t\leq T} \langle h,\xi_{t}^{N}\rangle>M/c_{f}\right\}<\frac{K(T)c_{f}}{M}.\] \(\Box\) **Lemma 8.4** (Compact containment for \(\xi\)): _Let \(f_{1},f_{2},\ldots\) be a sequence of functions each satisfying the conditions of Lemma 8.3. Under the assumptions of Lemma 8.2, for any \(T\) and \(\delta>0\) there exists a sequence \((C_{1},C_{2},\ldots)\) of finite constants such that_ \[\limsup_{N\to\infty}\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle f_{k},\xi_{t }^{N}\rangle>C_{k}\text{ for some }k\geq 1\right\}<\delta. \tag{8.4}\] In other words, the processes \(\xi^{N}\) stay in the set \[\left\{\xi\in\mathcal{M}(\overline{\mathbb{R}^{d}}\times[0,\infty))\ :\ \langle f_{k},\xi \rangle\leq C_{k}\text{ for all }k\geq 1\right\},\] for all \(0\leq t\leq T\) with uniformly high probability, a set which (as discussed in the proof of Theorem 5.4) is relatively compact for an appropriate choice of \(\{f_{k}\}_{k\geq 1}\). **Proof** [of Lemma 8.4] By a union bound, \[\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle f_{k},\xi_{t}^{N}\rangle>C_{k} \text{ for some }k\geq 1\right\}\leq\sum_{k\geq 1}\mathbb{P}\left\{\sup_{0\leq t \leq T}\langle f_{k},\xi_{t}^{N}\rangle>C_{k}\right\},\] so (8.4) follows by taking \(C_{k}=2^{k-1}K(f_{k},T)/\delta\) and using Lemma 8.3. \(\Box\) Finally, we prove the key lemma. 
**Proof** [of Lemma 8.2] Applied to \(f(x,u)=h(u)\), the martingale representation (8.2) is \[\langle h,\xi_{t}^{N}\rangle =\langle h,\xi_{0}^{N}\rangle+M_{t}^{h}\] \[\qquad+\int_{0}^{t}\big{\langle}2c_{\theta}(x,\eta_{s}^{N})\int_ {u}^{N}h(v)dv,\xi_{s}^{N}(dx,du)\big{\rangle}ds\] \[\qquad\qquad+\int_{0}^{t}\big{\langle}\left(c_{\theta}(x,\eta_{s} ^{N})u^{2}-b_{\theta}(x,\eta_{s}^{N})u\right)h^{\prime}(u),\xi_{s}^{N}(dx,du) \big{\rangle}ds,\] where \(M_{t}^{h}\) is a martingale with angle bracket process \[\big{\langle}M^{h}\big{\rangle}_{t}=\int_{0}^{t}\langle 2c_{\theta}(x,\eta_{s} ^{N})\int_{u}^{N}h(v)^{2}dv,\xi_{s}^{N}(dx,du)\big{\rangle}ds.\] Now, note that \(0\leq c_{\theta}(x,\eta_{x}^{N})\leq C_{a}<\infty\) and \(b_{\theta}(x,\eta_{s}^{N})\leq C_{b}<\infty\), and we have assumed that \(h^{\prime}(u)\leq 0\) (since \(h\) is nonincreasing), so we may bound \[\begin{split}\langle h,\xi_{t}^{N}\rangle&\leq \langle h,\xi_{0}^{N}\rangle+M_{t}^{h}\\ &\qquad+\int_{0}^{t}\left\langle 2C_{a}\int_{u}^{\infty}h(v)dv+ \left(C_{a}u^{2}+C_{b}u\right)|h^{\prime}(u)|,\xi_{s}^{N}(dx,du)\right\rangle ds.\end{split} \tag{8.5}\] Now, since \(\xi_{t}^{N}\) is conditionally uniform given \(\eta_{t}^{N}\) in the sense of (5.12), we know that for compactly supported \(f\), \(\mathbb{E}[\langle f,\xi_{t}^{N}\rangle]=\mathbb{E}[\langle\widetilde{f}_{N}, \eta_{t}^{N}\rangle]\), where \(\widetilde{f}_{N}(x)=\int_{0}^{N}f(x,u)du\). By our assumptions on \(h\), we know that \[\int_{0}^{\infty}\left(2C_{a}\int_{u}^{\infty}h(v)dv+\left(C_{a}u^{2}+C_{b}u \right)|h^{\prime}(u)|\right)du<C\] for some \(C<\infty\), and so (by dominated convergence) \[\mathbb{E}\left[\langle h,\xi_{t}^{N}\rangle\right]\leq\mathbb{E}\left[ \langle h,\xi_{0}^{N}\rangle\right]+C\int_{0}^{t}\mathbb{E}\bigg{[}\langle 1, \eta_{s}^{N}\rangle\bigg{]}ds,\] which we know by Lemma 6.1 is bounded by \(C_{0}e^{C_{1}t}\) for some other constants \(C_{0}\) and \(C_{1}\). Now consider the maximum. By (8.5), using that the integrand is nonnegative, \[\sup_{0\leq t\leq T}\langle h,\xi_{t}^{N}\rangle \leq\langle h,\xi_{0}^{N}\rangle+\sup_{0\leq t\leq T}M_{t}^{h}\] \[\qquad+\int_{0}^{T}\left\langle 2C_{a}\int_{u}^{\infty}h(v)dv+ \left(C_{a}u^{2}+C_{b}u\right)|h^{\prime}(u)|,\xi_{s}^{N}(dx,du)\right\rangle ds.\] Since \(\sqrt{x}\leq 1+x\) for \(x\geq 0\), the Burkholder-Davis-Gundy inequality tells us that there is a \(C^{\prime}\) such that \[\begin{split}\mathbb{E}\left[\sup_{0\leq t\leq T}M_{t}^{h}\right] &\leq C^{\prime}\left(1+\mathbb{E}\left[[M^{h}]_{T}\right]\right) \\ &\leq C^{\prime}\left(1+\int_{0}^{T}\mathbb{E}\left[\langle 2c_{ \theta}(x,\eta_{s}^{N})\int_{u}^{\infty}h(v)^{2}dv,\xi_{s}^{N}(dx,du)\rangle \right]ds\right)\\ &\leq C^{\prime}\left(1+2C_{a}\int_{0}^{\infty}h(v)^{2}dv\int_{0} ^{T}\mathbb{E}\left[\langle 1,\xi_{s}^{N}(dx,du)\rangle\right]ds\right)\\ &\leq C_{2}e^{C_{1}T},\end{split}\] for a constant \(C_{2}\) which is finite by our assumption that \(\int_{0}^{\infty}h(v)^{2}dv<\infty\). 
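For the concrete choice \(h(u)=e^{-u}\) suggested before Lemma 8.2, the integrability hypotheses and the constant \(C\) appearing in the display above can be checked explicitly. The following sketch is a purely illustrative symbolic verification (the symbols \(C_{a}\), \(C_{b}\) are the bounds on \(c_{\theta}\) and \(b_{\theta}\) introduced in this proof):

```python
# Check that h(u) = exp(-u) satisfies the hypotheses of Lemma 8.2, and compute the
# constant C with  int_0^infty ( 2*C_a*int_u^infty h(v)dv + (C_a*u^2 + C_b*u)|h'(u)| ) du = C.
import sympy as sp

u, v = sp.symbols('u v', nonnegative=True)
Ca, Cb = sp.symbols('C_a C_b', positive=True)
h = sp.exp(-u)
hprime_abs = -sp.diff(h, u)                # h is decreasing, so |h'(u)| = exp(-u)

tail = sp.integrate(sp.exp(-v), (v, u, sp.oo))                    # int_u^infty h(v) dv = exp(-u)
assert sp.integrate(tail, (u, 0, sp.oo)) == 1                     # first hypothesis
assert sp.integrate(u**2 * hprime_abs, (u, 0, sp.oo)) == 2        # second hypothesis
assert sp.integrate(h**2, (u, 0, sp.oo)) == sp.Rational(1, 2)     # third hypothesis

C = sp.integrate(2 * Ca * tail + (Ca * u**2 + Cb * u) * hprime_abs, (u, 0, sp.oo))
print("C =", sp.simplify(C))               # 4*C_a + C_b
```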
Therefore, \[\mathbb{E}\left[\sup_{0\leq t\leq T}\langle h,\xi_{t}^{N}\rangle\right]\leq \mathbb{E}\left[\langle h,\xi_{0}^{N}\rangle\right]+(C_{2}+C_{0}/C_{1})e^{C_{ 1}T},\] and so \[\mathbb{P}\left\{\sup_{0\leq t\leq T}\langle h,\xi_{t}^{N}\rangle>K\right\}\leq \frac{\mathbb{E}\left[\langle h,\xi_{0}^{N}\rangle\right]+(C_{2}+C_{0}/C_{1})e^{C _{1}T}}{K}.\] \(\Box\) **Lemma 8.5**: _Let \(f\) be a bounded, continuous real-valued function on \(\mathbb{R}^{d}\times[0,\infty)\) with uniformly bounded first and second derivatives for which there exists a \(u_{0}\) such that if \(u>u_{0}\) then \(f(x,u)=0\). Then, the sequence of real-valued processes \((\langle f,\xi_{t}^{N}\rangle)_{t\geq 0}\) for \(N\geq 1\) is tight in \(\mathcal{D}_{[0,\infty)}(\mathbb{R})\)._ **Proof** [of Lemma 8.5] Again, we use the Aldous-Rebolledo criterion. Tightness of \(\langle f,\xi_{t}\rangle\) for a fixed \(t\) follows from Lemma 8.3, so we need only prove conditions analogous to (6.9) and (6.10) applied to the martingale representation of equations (8.2) and (8.3). Rewriting (8.2) with \(c_{\theta}=c_{\theta}(x,\eta_{s})\), \[\langle f,\xi_{t}\rangle=\langle f,\xi_{0}\rangle+M_{t}^{f}+\int _{0}^{t}\Big{\langle}c_{\theta}\int_{u}^{N}\int(f(y,u_{1})+f(x,u_{1}))q^{m}(x, dy,\eta)du_{1}\] \[\qquad\qquad+c_{\theta}(N-u)\int_{0}^{t}(f(y,u)-f(x,u))q^{m}(x, dy,\eta)+(c_{\theta}u^{2}-b_{\theta}u)\frac{d}{du}f,\xi_{s}\Big{\rangle}ds.\] The bounds analogous to (6.9) and (6.10) follow as in the proof of Lemma 6.3: for instance, observe that using that \(c_{\theta}\leq C_{a}\) for some \(C_{a}\), the predictable part of this semimartingale decomposition is bounded by \[\Big{\langle}2C_{a}\|f\|_{\infty}u_{f}+(1-u/N)\gamma B_{f}^{\theta}+(C_{a}u^{ 2}-b_{\theta}u)\frac{d}{du}f,\xi_{s}\Big{\rangle},\] the last term of which is bounded by \[\langle C_{a}u_{f}^{2}+\sup_{x}|b_{\theta}(x,\eta_{s})|u_{f}\|\frac{d}{du}f\| _{\infty}\rangle,\] which can be bounded as we did for (6.9). \(\Box\) ### Motion of ancestral lineages In this section we prove Theorem 2.23. The argument follows directly from the discussion in Section 5.3. **Proof** [of Theorem 2.23] For brevity, in the proof we write \(\gamma(x)\) or \(\gamma\) for \(\gamma(x,\eta)\). Here we have taken the high-density, deterministic limit (so, \(\theta,N\to\infty\) and \(\theta/N\to 0\)). We first proceed informally, as if the limiting process has a density \(\varphi_{t}(x)\) at location \(x\) and time \(t\) (which it may not), and follow this with an integration against test functions to make the argument rigorous. Let \(Y\) denote the spatial motion followed by a single line of descent. Above equation (5.19), we showed that \(Y\) is a diffusion with generator at time \(s\) \[\mathcal{L}^{Y}_{s}g(x)=\gamma(x,\eta_{s})(\mathcal{B}(r(\cdot,\eta_{s})g(\cdot ))(x)-g(x)\mathcal{B}r(x,\eta_{s})).\] The diffusion is time-inhomogeneous if the density is not constant in time. Let \(\varphi_{t}(x)\) be the limiting density, which is a weak solution to (1.1), \(\partial_{t}\varphi_{t}=r\mathcal{B}^{*}[\varphi_{t}\gamma]+\varphi_{t}F\). Formally, the intensity of individuals at \(y\) at time \(t\) that are descended from individuals that were at \(x\) at time \(s\) (with \(s<t\)) is \[\varphi_{s}(x)\mathbb{E}_{s,x}\left[\exp\left(\int_{s}^{t}(F+\gamma\mathcal{B} r)(Y_{u})du\right)\mathbf{1}_{Y_{t}=y}\right]dy, \tag{8.6}\] where the subscript \(s,x\) in the expectation indicates that \(Y_{s}=x\). To see why this should be true, suppose that an ancestor at time \(s\) has level \(v\). 
Conditional on its spatial motion \(\{Y_{u}\}_{s\leq u\leq t}\), its level at time \(t\) will be \(v\exp(-\int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du)\). This will be less than a given level \(\lambda\) if \(v<\lambda\exp(\int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du)\). The intensity of levels at \(y\) that are descended from individuals at \(x\) can therefore be obtained as the limit as \(\lambda\to\infty\) of \(1/\lambda\) times the number of levels at \(x\) at time \(s\) with \(u<\lambda\exp(\int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du)\) and for which the corresponding individual is at \(y\) at time \(t\), which is precisely the quantity in (8.6). By our construction in Section 5.3, when we integrate (8.6) with respect to \(x\) we recover \(\varphi_{t}(y)dy\). Consider an individual sampled at location \(y\) at time \(t\), and write \(p(t,s,y,x)\) for the probability density that their ancestor at time \(s\) was at \(x\). As a consequence of (8.6), still formally, \[p(t,s,y,x)=\frac{\varphi_{s}(x)}{\varphi_{t}(y)}\mathbb{E}_{s,x}\left[\exp \left(\int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du\right)\mathbf{1}_{Y_{t}=y} \right]\qquad\text{ for }s<t. \tag{8.7}\] To make (8.7) meaningful, we multiply by suitable test functions \(f\) and \(g\) and integrate. \[\int\int f(y)\varphi_{t}(y)p(t,s,y,x)g(x)dydx\] \[\qquad=\int g(x)\varphi_{s}(x)\mathbb{E}_{x,s}\left[\exp\left( \int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du\right)f(Y_{t})\right]dx.\] Writing \(\widehat{T}_{t,s}\) for the time-inhomogeneous semigroup corresponding to the motion of ancestral lineages backwards in time (that is, \(\widehat{T}_{t,s}f(y)=\int p(t,s,x,y)f(x)dy\)), we can write this as \[\int f(y)\varphi_{t}(y)\widehat{T}_{t,s}g(y)dy=\int g(x)\varphi_{s}(x)\mathbb{ E}_{s,x}\left[\exp\left(\int_{s}^{t}(F+\gamma\mathcal{B}r)(Y_{u})du\right)f(Y_{t}) \right]dx. \tag{8.8}\] Next, we will differentiate this equation with respect to \(t\). There are two terms in the product on the left-hand side that depend on \(t\), so if we use that \(\partial_{t}\varphi_{t}=r\mathcal{B}^{*}[\varphi_{t}\gamma]+\varphi F\) (in a weak sense), and write \({\cal L}_{u}\) for the generator of \(\widehat{T}_{t,s}\) at time \(t=u\) so that \(\partial_{t}\widehat{T}_{t,s}g(y)\Big{|}_{t=s}={\cal L}_{s}g(y)\), then \[\frac{d}{dt}\int f(y)\varphi_{t}(y)\widehat{T}_{t,s}g(y)dy\Big{|}_ {t=s}\] \[=\int f(y)\left\{\varphi_{s}(y){\cal L}_{s}g(y)+\left[r(y){\cal B} ^{*}(\gamma\varphi_{s})(y)+\varphi_{s}(y)F(y)\right]g(y)\right\}dy.\] As for the right-hand side, since \(Y_{s}=x\) under \(\mathbb{E}_{x,s}\), \[\frac{d}{dt}\mathbb{E}_{x,s}\left[\exp\left(\int_{s}^{t}(F+\gamma{\cal B}r)(Y _{u})du\right)f(Y_{t})\right]\Bigg{|}_{t=s} = \left[F(x)+\gamma(x){\cal B}r(x)\right]f(x)\:+\:{\cal L}_{s}^{Y}f (x).\] Therefore, the derivative of (8.8) (with respect to \(t\), evaluated at \(t=s\)) is \[\int f(y)\left\{\varphi_{s}(y){\cal L}_{s}g(y)+\left(r(y){\cal B} ^{*}(\gamma\varphi_{s})(y)+\varphi_{s}(y)F(y)\right)g(y)\right\}dy\] \[=\int g(x)\varphi_{s}(x)\left({\cal L}_{s}^{Y}f(x)+\left[F(x)+ \gamma(x){\cal B}r(x)\right]f(x)\right)dx\] \[=\int f(x)\left(({\cal L}_{s}^{Y})^{*}(\varphi_{s}g)(x)+\left[F( x)+\gamma(x){\cal B}r(x)\right]\varphi_{s}(x)g(x)\right)dx,\] where \(({\cal L}_{s}^{Y})^{*}\) is the adjoint of \({\cal L}_{s}^{Y}\). Since \(f\) was arbitrary, \[{\cal L}_{s}g = \frac{1}{\varphi_{s}}\left[({\cal L}_{s}^{Y})^{*}(\varphi_{s}g) +\gamma\varphi_{s}g{\cal B}(r)-rg{\cal B}^{*}(\gamma\varphi_{s})\right].\] (Note that the \(\varphi_{s}Fg\) terms have cancelled.) 
Since the adjoint of \({\cal L}_{s}^{Y}\) is \[({\cal L}_{s}^{Y})^{*}f=r{\cal B}^{*}(\gamma f)-\gamma f{\cal B}r,\] we can rewrite the generator of a lineage as \[{\cal L}_{s}g = \frac{r}{\varphi_{s}}\left[{\cal B}^{*}(\gamma\varphi_{s}g)-g{ \cal B}^{*}(\gamma\varphi_{s})\right].\] This is equation (2.17). To simplify to equation (2.18), first define \({\cal D}f(x)=\sum_{ij}{\bf C}_{ij}\partial_{ij}f(x)\), and so the adjoint of \({\cal D}\) is \[{\cal D}^{*}f(x)=\sum_{ij}\partial_{ij}({\bf C}_{ij}f(x)).\] Note that \({\cal D}^{*}\) satisfies the following identity: \[{\cal D}^{*}(fg) =\sum_{ij}\left\{g\partial_{ij}({\bf C}_{ij}f)+2f\partial_{i}({ \bf C}_{ij})\partial_{j}(g)+2{\bf C}_{ij}\partial_{i}(f)\partial_{j}(g)+{\bf C }_{ij}f\partial_{ij}g\right\}\] \[=g{\cal D}^{*}f+2f\vec{c}\cdot\nabla g+2({\bf C}\nabla f)\cdot \nabla g+f{\cal D}g,\] where \(\vec{c}_{j}=\sum_{i}\partial_{j}{\bf C}_{ij}\). So, with \(f=\gamma\varphi_{s}\), \[{\cal L}_{s}g = \frac{r}{\varphi_{s}}\left[{\cal D}^{*}(\gamma\varphi_{s}g)- \nabla\cdot(\gamma\varphi_{s}g\vec{b})-g{\cal D}^{*}(\gamma\varphi_{s})+g\nabla \cdot(\gamma\varphi_{s}\vec{b})\right]\] \[= \frac{r}{\varphi_{s}}\left[\gamma\varphi_{s}{\cal D}g+2\gamma \varphi_{s}\vec{c}\cdot\nabla g+2({\bf C}\nabla(\gamma\varphi_{s}))\cdot \nabla g-\gamma\varphi_{s}\vec{b}\cdot\nabla g\right]\] \[= r\gamma\left[{\cal D}g+2\vec{c}\cdot\nabla g+2({\bf C}\nabla \log(\gamma\varphi_{s}))\cdot\nabla g-\vec{b}\cdot\nabla g\right],\] which is equation (2.18). \(\Box\) **Proof** [of Corollary 2.26] For the moment, we will write \(r(x)\) for \(r(x,\eta)\) and \(\gamma(x)\) for \(\gamma(x,\eta)\). First note that since in this case the semigroup does not depend on time, we can write \({\cal L}={\cal L}_{s}\), and \[{\cal L}f=\sigma^{2}r\gamma\left(\Delta f+\nabla(2\log(\gamma\varphi)-h/\sigma ^{2})\cdot\nabla f\right).\] Now, observe that \[\int_{\mathbb{R}^{d}}e^{H(x)}f(x)(\Delta+\nabla H(x)\cdot\nabla)g(x)dx=-\int_{ \mathbb{R}^{d}}e^{H(x)}\left\{\nabla f(x)\cdot\nabla g(x)\right\}dx,\] so that by choosing \(H(x)=2\log(\gamma(x)\varphi(x))-h(x)/\sigma^{2}\) and \[\pi(x)=e^{H(x)}/(\sigma^{2}r(x)\gamma(x))=\frac{\gamma(x)\varphi(x)^{2}e^{-h( x)/\sigma^{2}}}{\sigma^{2}r(x)},\] we have that \[\int_{\mathbb{R}^{d}}\pi(x)f(x){\cal L}g(x)dx=-\int_{\mathbb{R}^{d}}e^{H(x)} \nabla f(x)\cdot\nabla g(x)dx.\] Since this is symmetric in \(f\) and \(g\), the process \(Y\) is reversible with respect to \(\pi\); the constant factor of \(\sigma^{2}\) does not affect the result. \(\Box\) Markov Mapping Theorem The following appears as Theorem A.2 in Etheridge and Kurtz (2019), specialized slightly here to the case that the processes are cadlag and have no fixed points of discontinuity. For an \(S_{0}\)-valued, measurable process \(Y\), \(\widehat{\mathcal{F}}_{t}^{Y}\) denotes the completion of the \(\sigma\)-algebra generated by \(Y(0)\) and \(\{\int_{0}^{r}h(Y(s))ds,r\leq t,h\in B(S_{0})\}\). Also, let \(D_{S}[0,\infty)\) denote the space of cadlag, \(S\)-valued functions with the Skorohod topology, and \(M_{S}[0,\infty)\) the space of Borel measurable functions from \([0,\infty)\) to \(S\), topologized by convergence in Lesbegue measure. For other definitions see Etheridge and Kurtz (2019). **Theorem A.1** (Markov Mapping Theorem): _Let \((S,d)\) and \((S_{0},d_{0})\) be complete, separable metric spaces. Let \(A\subset C_{b}(S)\times C(S)\) and \(\psi\in C(S)\), \(\psi\geq 1\). 
Suppose that for each \(f\in\mathcal{D}(A)\) there exists \(c_{f}\) such that_ \[|Af(x)|\leq c_{f}\psi(x),\qquad x\in A,\] _and define \(A_{0}f(x)=Af(x)/\psi(x)\)._ _Suppose that \(A_{0}\) is a countably determined pre-generator, and suppose that \(\mathcal{D}(A)=\mathcal{D}(A_{0})\) is closed under multiplication and is separating. Let \(\gamma:S\to S_{0}\) be Borel measurable, and let \(\alpha\) be a transition function from \(S_{0}\) into \(S\) (\(y\in S_{0}\to\alpha(y,\cdot)\in\mathcal{P}(S)\) is Borel measurable) satisfying \(\int h\circ\gamma(x)\alpha(y,dx)=h(y)\) for \(y\in S_{0}\) and \(h\in B(S_{0})\), that is, \(\alpha(y,\gamma^{-1}(y))=1\). Assume that \(\widetilde{\psi}(y)\equiv\int_{S}\psi(z)\alpha(y,dz)<\infty\) for each \(y\in S_{0}\) and define_ \[C=\{\int_{S}f(z)\alpha(\cdot,dz),\int_{S}Af(z)\alpha(\cdot,dz)\;:\;f\in \mathcal{D}(A)\}.\] (A.1) _Let \(\mu_{0}\in\mathcal{P}(S_{0})\) and define \(\nu_{0}=\int\alpha(y,\cdot)\mu_{0}(dy)\)._ * _If_ \(\widetilde{Y}\) _satisfies_ \(\int_{0}^{t}\mathbb{E}[\widetilde{\psi}(\widetilde{Y}(s))]ds<\infty\) _for all_ \(t\geq 0\) _and_ \(\widetilde{Y}\) _is a solution of the martingale problem for_ \((C,\mu_{0})\)_, then there exists a solution_ \(X\) _of the martingale problem for_ \((A,\nu_{0})\) _such that_ \(\widetilde{Y}\) _has the same distribution on_ \(M_{S_{0}}[0,\infty)\) _as_ \(Y=\gamma\circ X\)_. If_ \(Y\) _and_ \(\widetilde{Y}\) _are cadlag, then_ \(Y\) _and_ \(\widetilde{Y}\) _have the same distribution on_ \(D_{S_{0}}[0,\infty)\)_._ * _For_ \(t\geq 0\)_,_ \[\mathbb{P}\{X(t)\in\Gamma\mid\widehat{\mathcal{F}}_{t}^{Y}\}=\alpha(Y(t),\Gamma ),\qquad\text{for $\Gamma\in\mathcal{B}(S)$}.\] * _If, in addition, uniqueness holds for the martingale problem for_ \((A,\nu_{0})\)_, then uniqueness holds for the_ \(M_{S_{0}}[0,\infty)\)_-martingale problem for_ \((C,\mu_{0})\)_. If_ \(\widetilde{Y}\) _has sample paths in_ \(D_{S_{0}}[0,\infty)\) _then uniqueness holds for the_ \(D_{S_{0}}[0,\infty)\)_-martingale problem for_ \((C,\mu_{0})\)_._ * _If uniqueness holds for the martingale problem for_ \((A,\nu_{0})\) _then_ \(Y\) _is a Markov process._ In our application, we have taken \(S\) to be the space of locally finite counting measures on \(\mathbb{R}^{d}\times[0,N)\) or on \(\mathbb{R}^{d}\times[0,\infty)\), and \(S_{0}\) the space of finite measures on \(\overline{\mathbb{R}}^{d}\). Then, \(A\) corresponds to the generator for the lookdown process (i.e., either \(A^{N}\) or \(A\)), and \(C\) corresponds to the generator for the spatial population process (i.e., either \({\cal P}^{N}\) or \({\cal P}\)). The "\(\gamma\)" of the theorem is our spatial projection operator that we have called \(\kappa^{N}\) or \(\kappa\), and the "\(\alpha\)" of the theorem will be named \(\Gamma^{N}\) or \(\Gamma\) below. Finally, "\(X\)" of the theorem is our lookdown process, \(\xi\), and "\(Y\)" is our spatial process, \(\eta\). ### Lookdown Generators In this section we verify one of the conditions of the Markov Mapping Theorem, namely, that "integrating out levels" in the generator of the lookdown process we obtain the generator of the projected process. In the notation of the theorem, we are verifying that \(C\) defined in (A.1) is in fact \({\cal P}^{N}\) (if defined with \(A^{N}\)) or \({\cal P}^{\infty}\) (if defined with \(A\)). 
We will work with test functions of the form \[f(\xi)=\prod_{(x,u)\in\xi}g(x,u)=\exp\left(\left\langle\log g,\xi\right\rangle \right),\] (A.2) where \(0\leq g\leq 1\) and \(g(x,u)=1\) for all \(u\geq u_{g}\) for some \(u_{g}<\infty\). Furthermore, recall that \(\kappa^{N}(\xi)(\cdot)=\xi(\cdot\times[0,N))/N\) is the "spatial projection operator", and define the transition function \(\Gamma^{N}:{\cal M}_{F}(\mathbb{R}^{d})\to{\cal M}(\mathbb{R}^{d}\times[0,N))\) so that for \(\eta\in{\cal M}_{F}(\mathbb{R}^{d})\), if \(\widehat{g}_{N}(x)=\int_{0}^{N}g(x,u)du/N\), then \[F_{g}^{N}(\eta) :=\int f(\xi)\Gamma^{N}(\eta,d\xi)\] \[=\exp\left(N\left\langle\log\frac{1}{N}\int_{0}^{N}g(x,u)du,\eta (dx)\right\rangle\right)\] \[=\exp\left(N\left\langle\log\widehat{g}_{N}(x),\eta(dx)\right\rangle \right),\] i.e., \(\Gamma^{N}\) assigns independent labels on \([0,N]\) to each of the points in \(\eta\). It follows from Lemma 6.5 that for test functions of this form the generator of \(\eta_{t}^{N}\) is \[\begin{split}{\cal P}^{N}F_{g}^{N}(\eta)=F_{g}^{N}(\eta)N\theta \Bigg{\langle}\gamma(x,\eta)&\int r(z,\eta)\left(\widehat{g}_{N} (z)-1\right)q_{\theta}(x,dz)\\ &+\mu_{\theta}(x,\eta)\left(\frac{1}{\widehat{g}_{N}(x)}-1 \right),\eta(dx)\Bigg{\rangle}.\end{split}\] (A.3) (Note that \(f\) here differs from the \(f\) used in Lemma 6.5 so as to agree with standard usage in the literature on lookdown processes.) The generator of \(\xi_{t}^{N}\) is \(A^{N}\), defined in equation (5.10). **Lemma A.2**: _For all finite counting measures \(\eta\) on \(\mathbb{R}^{d}\), if \(f\) is of the form (A.2), then_ \[\int A^{N}f(\xi)\Gamma^{N}(\eta,d\xi)={\cal P}^{N}F_{g}^{N}(\eta).\] (A.4) For the limiting process, recall that \(\kappa(\xi)(\cdot)=\lim_{u\to\infty}\xi(\cdot\times[0,u))/u\) is the "spatial projection operator", and define the probability kernel \(\Gamma:{\cal M}_{F}(\mathbb{R}^{d})\to{\cal M}(\mathbb{R}^{d}\times[0,\infty))\) so that for \(\eta\in{\cal M}_{F}(\mathbb{R}^{d})\), defining \(\widetilde{g}(x)=\int_{0}^{\infty}(g(x,u)-1)du\), \[F_{g}(\eta) :=\int f(\xi)\Gamma(\eta,d\xi)\] \[=\exp\left(\big{\langle}\int_{0}^{\infty}(g(x,u)-1)du,\eta(dx) \big{\rangle}\right)\] \[=e^{\langle\widetilde{g}(x),\eta(dx)\rangle}.\] i.e., \(\Gamma(\eta,\cdot)\) is the distribution of a conditionally Poisson process with intensity a product of \(\eta\) and Lebesgue measure. It again follows from Lemma 6.5 that for test functions of this form the generator of \(\eta_{t}\) is \[{\cal P}^{\infty}F_{g}(\eta)=F_{g}(\eta)\left\langle\gamma(x,\eta){\cal B}( \widetilde{g}(\cdot)r(\cdot))(x)+F(x,\eta)\widetilde{g}(x)+\alpha\gamma(x, \eta)r(x,\eta)\widetilde{g}^{2}(x),\eta(dx)\right\rangle.\] (A.5) The generator of \(\xi_{t}\) is \(A\), defined in equation (5.11). 
**Lemma A.3**: _For all \(\eta\in{\cal M}_{F}(\mathbb{R}^{d})\), if \(f\) is of the form (A.2), then_ \[\int Af(\xi)\Gamma(\eta,d\xi)={\cal P}^{\infty}F_{g}(\eta).\] (A.6) **Proof** [of Lemma A.2] First, break the generator \(A^{N}\) into three parts, \[A^{N}_{1}f(\xi) =f(\xi)\sum_{(x,u)\in\xi}2c_{\theta}(x,\eta)\int_{u}^{N}\Bigg{(} \frac{1}{2}\frac{g(x,v_{1})}{g(x,u)}\int_{\mathbb{R}^{d}}(g(y,u)-g(x,u))q^{m} _{\theta}(x,dy,\eta)\Bigg{)}dv_{1},\] \[A^{N}_{2}f(\xi) =f(\xi)\sum_{(x,u)\in\xi}2c_{\theta}(x,\eta)\int_{u}^{N}\Bigg{(} \frac{1}{2}\int_{\mathbb{R}^{d}}\Bigg{(}\frac{g(y,v_{1})+g(x,v_{1})}{2}-1 \Bigg{)}\,q^{m}_{\theta}(x,dy,\eta)\Bigg{)}dv_{1},\] \[A^{N}_{3}f(\xi) =f(\xi)\sum_{(x,u)\in\xi}\,\big{(}c_{\theta}(x,\eta)u^{2}-b_{ \theta}(x,\eta)u\big{)}\,\frac{\partial_{u}g(x,u)}{g(x,u)},\] where \(q^{m}\) was defined in equation (8.1), so that \[A^{N}f(\xi)=A^{N}_{1}f(\xi)+A^{N}_{2}f(\xi)+A^{N}_{3}f(\xi).\] We now integrate each piece against \(\Gamma^{N}\). First note that by the product form of \(f\), \[\int f(\xi)\sum_{(x,u)\in\xi}\frac{\ell(x,u)}{g(x,u)}\Gamma^{N}(\eta,d\xi)=F^{ N}_{g}(\eta)\bigg{\langle}\frac{1}{N\widehat{g}_{N}(x)}\int_{0}^{N}\ell(x,u)du,N \eta(dx)\bigg{\rangle}.\] Therefore, \[\int A^{N}_{1}f(\xi)\Gamma^{N}(\eta,d\xi)\] \[\qquad=F^{N}_{g}(\eta)\bigg{\langle}\frac{c_{\theta}(x,\eta)}{ \widehat{g}_{N}(x)}\Bigg{\{}\int_{\mathbb{R}^{d}}\Bigg{\{}\int_{0}^{N}\int_{u }^{N}g(x,v_{1})(g(y,u)-g(x,u))dv_{1}du\Bigg{\}}\,q^{m}_{\theta}(x,dy,\eta), \eta(dx)\bigg{\rangle}.\] For the second generator, we have \[\int A_{2}^{N}f(\xi)\Gamma^{N}(\eta,d\xi)\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}\frac{c_{\theta}(x,\eta)}{ \widehat{g}_{N}(x)}\int_{0}^{N}g(x,u)\Bigg{\{}\int_{u}^{N}\left(\int_{\mathbb{ R}^{d}}\left(\frac{g(y,v_{1})+g(x,v_{1})}{2}-1\right)q_{\theta}^{m}(x,dy,\eta) \right)\!dv_{1}\Bigg{\}},\eta(dx)\bigg{\rangle}\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}\frac{c_{\theta}(x,\eta)}{ \widehat{g}_{N}(x)}\int_{\mathbb{R}^{d}}\left\{\int_{0}^{N}\int_{u}^{N}g(x,u) \left(g(y,v_{1})+g(x,v_{1})-2\right)dv_{1}du\right\}\!q_{\theta}^{m}(x,dy,\eta ),\eta(dx)\bigg{\rangle}.\] For the third generator we have that \[\int A_{3}^{N}f(\xi)\Gamma^{N}(\eta,d\xi)\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}\frac{1}{\widehat{g}_{N}(x) }\int_{0}^{N}\left(c_{\theta}(x,\eta)u^{2}-b_{\theta}(x,\eta)u\right)\partial_ {u}g(x,u)du,\eta(dx)\bigg{\rangle}\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}\frac{1}{\widehat{g}_{N}(x) }\int_{0}^{N}\left(b_{\theta}(x,\eta)-2c_{\theta}(x,\eta)u\right)(g(x,u)-1)du,\eta(dx)\bigg{\rangle}.\] Note that \(2\int_{0}^{N}\int_{u}^{N}g(x,v_{1})g(y,u)dv_{1}du=N^{2}\widehat{g}_{N}(x) \widehat{g}_{N}(y)\), and so \[\int_{0}^{N}\int_{u}^{N}g(x,v_{1})(g(y,u)-g(x,u))dv_{1}du+\int_{0 }^{N}\int_{u}^{N}g(x,u)\left(g(y,v_{1})+g(x,v_{1})-2\right)dv_{1}du\] \[\qquad=N^{2}\widehat{g_{N}}(x)(\widehat{g_{N}}(y)-2)+2\int_{0}^{ N}ug(x,u)du.\] Combining the last equations, and using the fact that \(Nc_{\theta}(x,\eta)-b_{\theta}(x,\eta)=\theta\mu_{\theta}(x,\eta)\), we have \[\int\Bigg{(}A_{1}^{N}f(\xi)+A_{2}^{N}f(\xi)+A_{3}^{N}f(\xi) \Bigg{)}\Gamma^{N}(\eta,d\xi)\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}c_{\theta}(x,\eta)N^{2}\int_ {\mathbb{R}^{d}}(\widehat{g}_{N}(y)-2)q^{m}(x,dy,\eta)+\frac{1}{\widehat{g}_{N }(x)}c_{\theta}(x,\eta)\int_{0}^{N}2udu\] \[\qquad\qquad\qquad+\frac{1}{\widehat{g}_{N}(x)}b_{\theta}(x,\eta )\int_{0}^{N}(g(x,u)-1)du,\eta(dx)\bigg{\rangle}\] \[\qquad=F_{g}^{N}(\eta)\bigg{\langle}c_{\theta}(x,\eta)N^{2}\int_ 
{\mathbb{R}^{d}}(\widehat{g}_{N}(y)-1)q^{m}(x,dy,\eta)+N^{2}c_{\theta}(x,\eta )\left(\frac{1}{\widehat{g}_{N}(x)}-1\right)\] \[\qquad\qquad\qquad+Nb_{\theta}(x,\eta)\left(1-\frac{1}{\widehat{ g}_{N}(x)}\right),\eta(dx)\bigg{\rangle}\] \[\qquad=F_{g}^{N}(\eta)N\bigg{\langle}Nc_{\theta}(x,\eta)\int_{ \mathbb{R}^{d}}(\widehat{g}_{N}(y)-1)q^{m}(x,dy,\eta)+\theta\mu_{\theta}(x, \eta)\left(\frac{1}{\widehat{g}_{N}(x)}-1\right),\eta(dx)\bigg{\rangle}.\] This matches equation (A.3), as desired, because \(Nc_{\theta}(x,\eta)q_{m}(x,dy,\eta)=\theta\gamma(x,\eta)q_{\theta}(x,dy)\). \(\square\) Before proving Lemma A.3, we recall an important equality for conditionally Poisson point processes (Kurtz and Rodrigues [2011] Lemma A.3). **Lemma A.4**: _If \(\xi=\sum_{i}\delta_{Z_{i}}\) is a Poisson random measure with mean measure \(\nu\), then for \(\ell\in L^{1}(\nu)\) and \(g\geq 0\) with \(\log g\in L^{1}(\nu)\),_ \[\mathbb{E}\left[\sum_{j}\ell(Z_{j})\prod_{i}g(Z_{i})\right]=\int \ell gd\nu e^{\int(g-1)d\nu}.\] (A.7) **Proof** [of Lemma A.3] By Lemma A.4, \[\int f(\xi)\sum_{(x,u)\in\xi}\frac{\ell(x,u)}{g(x,u)}\Gamma(\eta, \xi)=F_{g}(\eta)\bigg{\langle}\int_{0}^{\infty}\ell(x,u)du,\eta(dx)\bigg{\rangle}.\] Comparing this to the definition of \(A\) (equation (5.11)), we see that \[\int Af(\xi)\Gamma(\eta,d\xi)=F_{g}(\eta)\bigg{\langle}\int_{0}^{ \infty}(\ell_{1}(x,u)+\ell_{2}(x,u)+\ell_{3}(x,u))du,\eta(dx)\bigg{\rangle},\] where \[\ell_{1}(x,u) =\gamma(x,\eta)\left(\mathcal{B}(g(\cdot,u)r(\cdot,\eta))(x)-g(x, u)\mathcal{B}r(x,\eta)\right)\] \[=\gamma(x,\eta)\left(\mathcal{B}((g(\cdot,u)-1)r(\cdot,\eta))(x) -(g(x,u)-1)\mathcal{B}r(x,\eta)\right)\] and \[\ell_{2}(x,u)=2g(x,u)\alpha\gamma(x,\eta)r(x,\eta)\int_{u}^{ \infty}(g(x,v)-1)dv\] and \[\ell_{3}(x,u)=\left(\alpha\gamma(x,\eta)r(x,\eta)u^{2}-\{\gamma(x, \eta)\mathcal{B}r(x,\eta)+F(x,\eta)\}u\right)\partial_{u}g(x,u).\] First note that since \(\mathcal{B}\) acts on space, it commutes with the integral over levels, and so \[\int_{0}^{\infty}\ell_{1}(x,u)du=\gamma(x,\eta)\left(\mathcal{B} (\widetilde{g}(\cdot)r(\cdot,\eta))(x)-\widetilde{g}(x)\mathcal{B}r(x,\eta) \right),\] since \(\widetilde{g}(x)=\int_{0}^{\infty}(g(x,u)-1)du\). Next, \[\int_{0}^{\infty}\ell_{2}(x,u)du=\alpha\gamma(x,\eta)r(x,\eta)2 \int_{0}^{\infty}g(x,u)\int_{u}^{\infty}(g(x,v)-1)dvdu.\] Finally, integrating by parts, \[\int_{0}^{\infty}\ell_{3}(x,u)du =-\alpha\gamma(x,\eta)r(x,\eta)\int_{0}^{\infty}2u(g(x,u)-1)du\] \[\qquad+\{\gamma(x,\eta)\mathcal{B}r(x,\eta)+F(x,\eta)\} \widetilde{g}(x)\] Now, note that \[\int_{0}^{\infty}g(x,u)\int_{u}^{\infty}(g(x,v)-1)dvdu-\int_{0}^{ \infty}u(g(x,u)-1)du\] \[\qquad=\int_{0}^{\infty}g(x,u)\int_{u}^{\infty}(g(x,v)-1)dvdu-\int _{0}^{\infty}\int_{v}^{\infty}(g(x,u)-1)dudv\] \[\qquad=\int_{0}^{\infty}(g(x,u)-1)\int_{u}^{\infty}(g(x,v)-1)dvdu\] \[\qquad=\widetilde{g}(x)^{2}/2.\] Adding these together, we get that \[\int_{0}^{\infty}(\ell_{1}(x,u)+\ell_{2}(x,u)+\ell_{3}(x,u))du\] \[\qquad=\gamma(x,\eta)\mathcal{B}(\widetilde{g}(\cdot)r(\cdot, \eta))(x)+F(x,\eta)\widetilde{g}(x)+\alpha\gamma(x,\eta)r(x,\eta)\widetilde{g }(x)^{2},\] which agrees with (A.5), as desired. \(\square\) ## Appendix B Technical Lemmas ### Constraints on kernel widths **Lemma B.1**: _Suppose the first three conditions of Assumptions 2.8 hold, and furthermore the kernels \(\rho_{r}=p_{\epsilon_{r}^{2}}\) and \(\rho_{\gamma}=p_{\epsilon_{\gamma}^{2}}\) are each Gaussian with standard deviations \(\epsilon_{r}\) and \(\epsilon_{\gamma}\) respectively. 
Let \(\lambda=\sup_{x}\sup_{y:\|y\|=1}y^{T}\mathbf{C}(x)y\) be the largest eigenvalue of \(\mathbf{C}(x)\) across all \(x\). If \(\epsilon_{r}^{2}+\frac{2\lambda}{\theta}<\epsilon_{\gamma}^{2}\), then there is a \(C<\infty\) such that for all \(x\in\mathbb{R}^{d}\), \(\eta\in\mathcal{M}_{F}(\mathbb{R}^{d})\),_ \[\left|\theta\int_{\mathbb{R}^{d}}(\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x))q_ {\theta}(x,dy)\right|\leq C\rho_{\gamma}\!*\!\eta(x)\] (B.1) _and_ \[\theta\int_{\mathbb{R}^{d}}\left(\rho_{r}\!*\!\eta(y)-\rho_{r}\!*\!\eta(x) \right)^{2}q_{\theta}(x,dy)\leq C\left(\rho_{\gamma}\!*\!\eta(x)\right)^{2}.\] (B.2) Note that the right hand side of each is the average density over a wider region (since \(\epsilon_{\gamma}>\epsilon_{r}\)). The key assumption here is that the spatial scale over which local density affects birth rate is larger than the scale over which it affects establishment. In the simple case of \(\vec{b}=0\) and \(\mathbf{C}=\sigma^{2}I\), the condition is simply that \(\epsilon_{r}^{2}+2\sigma^{2}/\theta<\epsilon_{\gamma}^{2}\). This gives a yet more concrete situation in which Condition 2 of Lemma 2.9 holds. **Proof** [of Lemma B.1] First we prove (B.1). Recall that \(\rho_{r}\!*\!\eta(x)=\int p_{\epsilon_{r}^{2}}(x-w)\eta(dw)\), where \(p_{t}\) is the density of a Gaussian with mean \(0\) and variance \(t\), so that applying Fubini, (B.1) is \[\left|\int_{\mathbb{R}^{d}}\theta\int_{\mathbb{R}^{d}}(p_{\epsilon_{r}^{2}}(y -w)-p_{\epsilon_{r}^{2}}(x-w))q_{\theta}(x,dy)\eta(dw)\right|.\] Write \(p_{s,x}(\cdot)\) for the density of a Gaussian with mean \(s\vec{b}(x)\) and covariance \(\epsilon_{r}^{2}I+s\mathbf{C}(x)\), so that \(\int p_{\epsilon_{r}^{2}}(y-w)q_{\theta}(x,dy)=p_{1/\theta,x}(w-x)\). It therefore suffices to show that for all \(x\) and \(w\in\mathbb{R}^{d}\), there exists \(K\) such that \[\left|\theta\int_{\mathbb{R}^{d}}(p_{\epsilon_{r}^{2}}(y-w)-p_{\epsilon_{r}^{2} }(x-w))q_{\theta}(x,dy)\right|=\theta\left|p_{1/\theta,x}(w-x)-p_{0,x}(w-x) \right|\leq Kp_{\epsilon_{\gamma}^{2}}(w-x).\] However, \(\theta(p_{1/\theta,x}(z)-p_{0,x}(z))=\partial_{s}p_{s,x}(z)\) for some \(0\leq s\leq 1/\theta\). Write \(\Gamma(s,x)=\epsilon_{r}^{2}I+s\mathbf{C}(x)\), so that \[p_{s,x}(z)=\frac{1}{(2\pi|\Gamma(s,x)|)^{d/2}}\exp\left(-\frac{1}{2}(z-s\vec{ b}(x))^{T}\Gamma(s,x)^{-1}(z-s\vec{b}(x))\right),\] and note that if \(\lambda_{i}\) are the eigenvalues of \(\mathbf{C}(x)\) then \(|\Gamma(s,x)|=\prod_{i}(\epsilon_{r}^{2}+s\lambda_{i})\), and \(\partial_{s}|\Gamma(s,x)|=\sum_{i}\lambda_{i}|\Gamma(s,x)|/(\epsilon_{r}^{2}+s \lambda_{i})\). Therefore, \[\partial_{s}p_{s,x}(z)=\bigg{(}\vec{b}(x)^{T}\Gamma(s,x)^{-1}(z- s\vec{b}(x))+(z-s\vec{b}(x))^{T}\Gamma(s,x)^{-1}\mathbf{C}(x)\Gamma(s,x)^{-1}(z-s \vec{b}(x))\\ -\sum_{i}\frac{\lambda_{i}}{\epsilon_{r}^{2}+s\lambda_{i}}\bigg{)} p_{s,x}(z),\] where \(z^{T}\) is the transpose of \(z\). 
This implies that \[\frac{\theta\int_{\mathbb{R}^{d}}(p_{\epsilon_{r}^{2}}(y-w)-p_{\epsilon_{r}^{2 }}(x-w))q_{\theta}(x,dy)}{p_{\epsilon_{\gamma}^{2}}(x-w).}=h(x-w)e^{k(x-w)},\] where \(h(z)\) and \(k(z)\) are quadratic polynomials in \(z\) whose coefficients depend on \(s\) and \(x\) but are uniformly bounded, and \[k(z)=\frac{1}{2\epsilon_{\gamma}^{2}}\|z\|^{2}-\frac{1}{2}(z-s\vec{b}(x))^{T} \Gamma(s,x)^{-1}(z-s\vec{b}(x)),\] Since \(\inf_{z}z^{T}\Gamma(s,x)^{-1}z/\|z\|^{2}=1/(s\lambda(x)+\epsilon_{r}^{2})\), where \(\lambda(x)=\sup z^{T}\mathbf{C}(x)z/\|z\|^{2}\) is the largest eigenvalue of \(\mathbf{C}(x)\), this is negative for all \(z\) outside a bounded region, and so equation (B.1) follows from the assumption that \(\epsilon_{r}^{2}+2\sup_{x}\lambda(x)/\theta<\epsilon_{\gamma}^{2}\). (Note that we do not yet need the factor of 2.) Next we prove equation (B.2), in a similar way. Again applying Fubini, \[\theta\int_{\mathbb{R}^{d}}\left(\rho_{r}\!*\!\eta(y)-\rho_{r}\!* \!\eta(x)\right)^{2}q_{\theta}(x,dy)\\ =\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\theta\int_{\mathbb{R} ^{d}}(p_{\epsilon_{r}^{2}}(y-w)-p_{\epsilon_{r}^{2}}(x-w))(p_{\epsilon_{r}^{2 }}(y-v)-p_{\epsilon_{r}^{2}}(x-v))q_{\theta}(x,dy)\eta(dv)\eta(dw),\] and so as before, equation (B.2) will follow if the integrand is bounded by \(Kp_{\gamma}(x-w)p_{\gamma}(x-v)\). Now, let \(Y_{1}\), \(Y_{2}\), and \(Z\) be independent \(d\)-dimensional Gaussians with mean zero, where and \(Y_{2}\) have covariance \(\epsilon_{r}^{2}I\), and \(Z\) has covariance \(\mathbf{C}(x)\). Write \(p_{s,t,x}(\cdot,\cdot)\) for the joint density of \(Y_{1}+\sqrt{s}Z+s\vec{b}(x)\) and \(Y_{2}+\sqrt{t}Z+t\vec{b}(x)\). Then, observe that \[\theta\int_{\mathbb{R}^{d}} (p_{\epsilon_{r}^{2}}(y-w)-p_{\epsilon_{r}^{2}}(x-w))(p_{\epsilon _{r}^{2}}(y-v)-p_{\epsilon_{r}^{2}}(x-v))q_{\theta}(x,dy)\] \[=\theta\left(p_{1/\theta,1/\theta,x}(x-w,x-v)-p_{0,1/\theta,x}(x- w,x-v)\right.\] \[\qquad\left.-p_{1/\theta,0,x}(x-w,x-v)+p_{0,0,x}(x-w,x-v)\right)\] \[=\partial_{s}p_{s,1/\theta,x}(x-w,x-v)-\partial_{t}p_{0,t,x}(x- w,x-v),\] for some \(0\leq s,t\leq 1/\theta\). As before, \[\frac{\theta\int_{\mathbb{R}^{d}}(p_{\epsilon_{r}^{2}}(y-w)-p_{ \epsilon_{r}^{2}}(x-w))(p_{\epsilon_{r}^{2}}(y-v)-p_{\epsilon_{r}^{2}}(x-v))q _{\theta}(x,dy)}{p_{\epsilon_{\gamma}^{2}}(x-w)p_{\epsilon_{\gamma}^{2}}(x-v)} =h(x-w,x-v)e^{k(x-w,x-v)},\] where \(h(z_{1},z_{2})\) is a polynomial with uniformly bounded coefficients and \[k(z_{1},z_{2})=(\|z_{1}\|^{2}+\|z_{2}\|^{2})/(2\epsilon_{\gamma}^{2})-\frac{1 }{2}[z_{1},z_{2}]^{T}\Gamma(s,t,x)^{-1}[z_{1},z_{2}],\] where \([z_{1},z_{2}]\) is the \(\mathbb{R}^{2d}\) vector formed by concatenating \(z_{1}\) and \(z_{2}\), and \(\Gamma(s,t,x)\) is the block matrix \[\Gamma(s,t,x)=\left[\begin{array}{cc}\epsilon_{r}^{2}I+s\mathbf{C}(x)&\sqrt {st}\mathbf{C}(x)\\ \sqrt{st}\mathbf{C}(x)&\epsilon_{r}^{2}I+t\mathbf{C}(x)\end{array}\right].\] If \(\mathbf{C}(x)u=au\) for some \(a\in\mathbb{R}\), then \([u\sqrt{s},u\sqrt{t}]\) is an eigenvector of \(\Gamma(s,t,x)\) with eigenvalue \(\epsilon_{r}^{2}+(s+t)a\), and \([u\sqrt{t},-u\sqrt{s}]\) is an eigenvector of \(\Gamma(s,t,x)\) with eigenvalue \(0\). This implies the largest eigenvalue of \(\Gamma(s,t,x)\) is equal to \(\epsilon_{r}^{2}+(s+t)\lambda(x)\), where \(\lambda(x)\) is again the largest eigenvalue of \(\mathbf{C}(x)\). 
Therefore, if \(s+t\leq 2/\theta\), \[(\|z_{1}\|^{2}+\|z_{2}\|^{2})/\epsilon_{\gamma}^{2}-[z_{1},z_{2}] ^{T}\Gamma(s,t,x)^{-1}[z_{1},z_{2}]\] \[\qquad\leq(\|z_{1}\|^{2}+\|z_{2}\|^{2})\left(\frac{1}{\epsilon_{ \gamma}^{2}}-\frac{1}{\epsilon_{r}^{2}+2\lambda(x)/\theta}\right),\] which is negative by assumption. Therefore, there is a \(K\) such that \[\frac{\left|\partial_{s}p_{s,1/\theta,x}(x-w,x-v)-\partial_{t}p_{0,t,x}(x-w,x- v)\right|}{p_{\epsilon_{\gamma}^{2}}(x-w)p_{\epsilon_{\gamma}^{2}}(x-v)}\leq K\] for all \(\theta>1\) and all \(x\), \(v\), and \(w\in\mathbb{R}^{d}\), proving equation (B.2) and hence the lemma. \(\Box\) ### Tightness of processes Here we record, for completeness, the fact used above that tightness for a family of processes, if determined by the Aldous-Rebolledo criterion, extends to sums and products of those processes. We first record for reference one version of the Aldous-Rebolledo criteria for tightness of a sequence real-valued processes (as it appears in Etheridge [2000]): **Theorem B.2** (Rebolledo [1980]): _Let \(\{Y^{(n)}\}_{n\geq 1}\) be a sequence of real-valued processes with cadlag paths. Suppose that the following conditions are satisfied._ 1. _For each fixed_ \(t\in[0,T]\)_,_ \(\{Y^{(n)}_{t}\}_{n\geq 1}\) _is tight._ 2. _Given a sequence of stopping times_ \(\tau_{n}\)_, bounded by_ \(T\)_, for each_ \(\epsilon>0\) _there exists_ \(\delta>0\) _and_ \(n_{0}\) _such that_ \[\sup_{n\geq n_{0}}\sup_{\theta\in[0,\min(\delta,T-\tau_{n})]}\mathbb{P}\left\{ \left|Y^{(n)}_{\tau_{n}+\theta}-Y^{(n)}_{\tau_{n}}\right|>\epsilon\right\} \leq\epsilon.\] _Then the sequence \(\{(Y^{(n)}_{t})_{t=0}^{T}\}_{n\geq 1}\) is tight._ **Lemma B.3**: _Let \(\{X^{(n)}\}_{n\geq 1}\) and \(\{Y^{(n)}\}_{n\geq 1}\) be sequences of jointly defined real-valued processes with cadlag paths satisfying the conditions of Theorem B.2. Then \(\{X^{(n)}Y^{(n)}\}_{n\geq 1}\) and \(\{X^{(n)}+Y^{(n)}\}_{n\geq 1}\) also satisfy the conditions of Theorem B.2._ By "jointly defined" we mean that \(X^{(n)}\) and \(Y^{(n)}\) are defined on the same probability space, so that the products and sums make sense. **Proof** [of Lemma B.3] The proof for \(X^{(n)}+Y^{(n)}\) is similar to but more straightforward than for \(X^{(n)}Y^{(n)}\), so on only prove the Lemma for the latter. First, note that for any \(\epsilon>0\), by tightness of \(X^{(n)}_{t}\) and \(Y^{(n)}_{t}\) there is a \(K\) such that \(\mathbb{P}\{X^{(n)}_{t}>\sqrt{K}\}\) and \(\mathbb{P}\{Y^{(n)}_{t}>\sqrt{K}\}\) are both less than \(\epsilon/2\), and hence \[\mathbb{P}\{X^{(n)}_{t}Y^{(n)}_{t}>K\}\leq\mathbb{P}\{X^{(n)}_{t}>\sqrt{K}\}+ \mathbb{P}\{Y^{(n)}_{t}>\sqrt{K}\}\leq\epsilon.\] Therefore, \(X^{(n)}_{t}Y^{(n)}_{t}\) is tight. 
Next, note that for \(0\leq\tau_{n}\leq T\), \[\sup_{0\leq\theta\leq\min(\delta,T-\tau_{n})}\left|X^{(n)}_{\tau_ {n}+\theta}Y^{(n)}_{\tau_{n}+\theta}-X^{(n)}_{\tau_{n}}Y^{(n)}_{\tau_{n}}\right|\] \[\qquad\leq\sup_{0\leq\theta\leq\min(\delta,T-\tau_{n})}\left|X^{(n )}_{\tau_{n}+\theta}\right|\left|Y^{(n)}_{\tau_{n}+\theta}-Y^{(n)}_{\tau_{n}} \right|+\left|X^{(n)}_{\tau_{n}+\theta}-X^{(n)}_{\tau_{n}}\right|\left|Y^{(n) }_{\tau_{n}}\right|\] \[\qquad\leq\sup_{0\leq t\leq T}\left|X^{(n)}_{t}\right|\sup_{0 \leq\theta\leq\min(\delta,T-\tau_{n})}\left|Y^{(n)}_{\tau_{n}+\theta}-Y^{(n)} _{\tau_{n}}\right|+\sup_{0\leq\theta\leq\min(\delta,T-\tau_{n})}\left|X^{(n)} _{\tau_{n}+\theta}-X^{(n)}_{\tau_{n}}\right|\sup_{0\leq t\leq T}\left|Y^{(n)} _{t}\right|,\] so that for any \(C\), \[\begin{split}\mathbb{P}\left\{\sup_{0\leq\theta\leq\min(\delta,T- \tau_{n})}\left|X^{(n)}_{\tau_{n}+\theta}Y^{(n)}_{\tau_{n}+\theta}-X^{(n)}_{ \tau_{n}}Y^{(n)}_{\tau_{n}}\right|>\epsilon\right\}\\ \qquad\leq\mathbb{P}\left\{\sup_{0\leq t\leq T}\left|X^{(n)}_{t} \right|>C\right\}+\mathbb{P}\left\{\sup_{0\leq\theta\leq\min(\delta,T-\tau_{n} )}\left|Y^{(n)}_{\tau_{n}+\theta}-Y^{(n)}_{\tau_{n}}\right|>\epsilon/C\right\} \\ \qquad+\mathbb{P}\left\{\sup_{0\leq\theta\leq\min(\delta,T-\tau_{ n})}\left|X^{(n)}_{\tau_{n}+\theta}-X^{(n)}_{\tau_{n}}\right|>\epsilon/C\right\}+ \mathbb{P}\left\{\sup_{0\leq t\leq T}\left|Y^{(n)}_{t}\right|>C\right\}\end{split}\] (B.3) Now, since \(\max_{0\leq t\leq T}X^{(n)}_{t}\) is tight (and likewise for \(Y\)) (see, e.g., Remark 3.7.3 in Ethier and Kurtz (1986)), we may choose a \(C\geq 4\) for which \[\mathbb{P}\left\{\sup_{0\leq t\leq T}\left|X^{(n)}_{t}\right|>C\right\}\leq \frac{\epsilon}{4}.\] Similarly, by assumption we can choose a \(\delta\) for which \[\mathbb{P}\left\{\sup_{0\leq\theta\leq\min(\delta,T-\tau_{n})}\left|X^{(n)}_{ \tau_{n}+\theta}-X^{(n)}_{\tau_{n}}\right|>\epsilon/C\right\}\leq\frac{ \epsilon}{C}.\] If we choose \(C\) and \(\delta\) that do this for both \(X^{(n)}\) and \(Y^{(n)}\), then each of the terms in equation (B.3) are bounded by \(\epsilon/4\), and condition (2) is satisfied for the product process.
2310.08217
TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion
Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks. Several techniques such as weight regularization, experience rehearsal, and parameter isolation have been proposed to alleviate CF. Despite their relative success, these research directions have predominantly remained orthogonal and suffer from several shortcomings, while missing out on the advantages of competing strategies. On the contrary, the brain continually learns, accommodates, and transfers knowledge across tasks by simultaneously leveraging several neurophysiological processes, including neurogenesis, active forgetting, neuromodulation, metaplasticity, experience rehearsal, and context-dependent gating, rarely resulting in CF. Inspired by how the brain exploits multiple mechanisms concurrently, we propose TriRE, a novel CL paradigm that encompasses retaining the most prominent neurons for each task, revising and solidifying the extracted knowledge of current and past tasks, and actively promoting less active neurons for subsequent tasks through rewinding and relearning. Across CL settings, TriRE significantly reduces task interference and surpasses different CL approaches considered in isolation.
Preetha Vijayan, Prashant Bhat, Elahe Arani, Bahram Zonooz
2023-10-12T11:05:34Z
http://arxiv.org/abs/2310.08217v1
# TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion ###### Abstract Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks. Several techniques such as weight regularization, experience rehearsal, and parameter isolation have been proposed to alleviate CF. Despite their relative success, these research directions have predominantly remained orthogonal and suffer from several shortcomings, while missing out on the advantages of competing strategies. On the contrary, the brain continually learns, accommodates, and transfers knowledge across tasks by simultaneously leveraging several neurophysiological processes, including neurogenesis, active forgetting, neuromodulation, metaplasticity, experience rehearsal, and context-dependent gating, rarely resulting in CF. Inspired by how the brain exploits multiple mechanisms concurrently, we propose TriRE, a novel CL paradigm that encompasses _retaining_ the most prominent neurons for each task, _revising_ and solidifying the extracted knowledge of current and past tasks, and actively promoting less active neurons for subsequent tasks through _rewinding_ and relearning. Across CL settings, TriRE significantly reduces task interference and surpasses different CL approaches considered in isolation.1 Footnote 1: Code is available at [https://github.com/NeurAI-Lab/TriRE](https://github.com/NeurAI-Lab/TriRE) ## 1 Introduction Continual learning (CL) over a sequence of tasks remains an uphill task for deep neural networks (DNNs) due to catastrophic forgetting of older tasks, often resulting in a rapid decline in performance and, in the worst-case scenario, complete loss of previously learned information [38]. Several approaches, such as parameter isolation [43; 4], weight regularization [56; 45], and experience rehearsal [41; 42; 9] have been proposed in the literature to address the problem of catastrophic forgetting in DNNs. Despite their relative success, these research directions have predominantly remained orthogonal and suffer from several shortcomings. Parameter isolation approaches suffer from capacity saturation and scalability issues in longer task sequences, while weight regularization approaches cannot discriminate classes from different tasks, thus failing miserably in scenarios such as class-incremental learning (Class-IL) [28]. In scenarios where buffer size is limited due to memory constraints (e.g., edge devices), rehearsal-based approaches are prone to overfitting on the buffered data [7]. As these research directions have rarely crossed paths, there is a need for an integrated approach to leverage the advantages of competing methods to effectively mitigate catastrophic forgetting in CL. Catastrophic forgetting is a direct consequence of a more general problem, namely the stability-plasticity dilemma [2; 36]: the extent to which the CL model must be plastic to accommodate newly acquired knowledge and stable to not interfere with previously learned information [38]. In stark contrast to DNNs, biological systems manage this dilemma better and are able to learn continually throughout their lifetime with minimal interference. CL in the brain is administered by a rich set of neurophysiological processes that encompass different kinds of knowledge, and conscious processing that integrates them coherently [17]. 
Empirical studies suggest that metaplasticity [27] and experience replay play a prominent role in memory consolidation in the brain [40; 15]. In addition, neurogenesis in the brain is crucial for the growth and restructuring necessary to accommodate new skills [3]. Neuromodulatory systems facilitate swift learning and adaptability in response to contextual changes induced by new stimuli or shifts in motivation [33]. Whereas context-dependent gating [34] and active forgetting [19] improve the separation between the representations of patterns belonging to different tasks. By simultaneously leveraging these processes, the brain exploits task similarity and exhibits positive forward transfer, rarely resulting in catastrophic forgetting. Inspired by the biological underpinnings of the CL mechanisms in the brain, we propose _'REtain, REvise & REwind' (TriRE)_, a novel CL paradigm to mitigate catastrophic forgetting. Specifically, TriRE involves experience rehearsal, scalable neurogenesis, selective forgetting, and relearning to effectively mitigate catastrophic forgetting. Within each task, the proposed method consists of three stages: (i) _Retain_, where the most active neurons and their corresponding most important weights of the task are extracted and retained to avoid task interference and drastic weight changes, (ii) _Revise_, where the extracted network is finetuned to revise and solidify the current task as well as the joint distribution of the past tasks, (iii) _Rewind_, where the free neurons undergo active forgetting and relearning to promote the less active neurons back into the learning circuit for the next task as illustrated in Figure 1. TriRE effectively combines multiple mechanisms and leverages the advantages offered by different CL approaches. We find that TriRE significantly reduces task interference and surpasses the aforementioned CL approaches across various CL scenarios. Specifically, TriRE outperforms rehearsal-based approaches in Seq-TinyImageNet for Class-IL scenario by almost 14%, even under low-buffer regimes, by promoting generalization through weight and function space regularization. Experience rehearsal enables discrimination of classes belonging to different tasks in TriRE, resulting in at least twice as good performance over weight regularization methods for the same dataset. Unlike parameter isolation approaches, TriRE is scalable and produces at least a 7% relative improvement compared to parameter isolation approaches in Seq-CIFAR100 Task-IL setting without requiring access to task identity at inference time. ## 2 Related works **Rehearsal-based Approaches:** Prior works attempted to address the problem of catastrophic forgetting by explicitly storing and replaying previous task samples, akin to experience rehearsal in the brain. Experience rehearsal (ER) approaches [41; 42] maintain a fixed capacity memory buffer to Figure 1: _TriRE_ consists of a three-phase learning paradigm that reduces task interference and drastic weight changes by using task modularity. In the _Retain_ stage, the method selects and preserves the most active neurons and weights in a mask \(\mathcal{S}_{t}\), which is used in the subsequent _Revise_ stage to finetune the joint distribution of current and past tasks along with a cumulative subnetwork mask \(\mathcal{S}\). The _Rewind_ stage is responsible for reintroducing less active neurons to the learning process for future tasks by actively forgetting and relearning the non-sparse subnetwork. 
store data sampled from previous task distributions. Several approaches are built on top of ER to better preserve the previous task information: GCR [49] proposed a coreset selection mechanism that approximates the gradients of the data seen so far to select and update the buffer. DER++ [9] and CLS-ER [6] enforce consistency in predictions using soft targets in addition to ground-truth labels. DRI [52] uses a generative model to further support experience rehearsal in low buffer regimes. More recent works like TARC [8], ER-ACE [11] and Co\({}^{2}\)L [12] focus on reducing representation drift right after task switch to mitigate forgetting through asymmetric update rules. Under low-buffer regimes and longer task sequences, however, these approaches suffer from overfitting on the buffered samples. **Weight Regularization Approaches:** Catastrophic forgetting mainly emanates from large weight changes in DNNs when learning a new task. Therefore, weight-regularization methods seek to penalize the sudden changes to model parameters that are crucial for the previous tasks. Depending on the type of regularization, these approaches can be broadly categorized into prior- and data-focused approaches. Prior-focused approaches, such as elastic weight consolidation (EWC) [25], online EWC (oEWC) [45], memory-aware synapses (MAS) [5], and synaptic intelligence (SI) [56] employ prior information on model parameters and estimate the importance of parameters associated with previous tasks based either on the gradients of the learned function output or through Fisher's information matrix. On the other hand, data-focused methods, such as Learning without Forgetting (LwF) [30] instead perform knowledge distillation from models trained on previous tasks when learning on new data. Although weight regularization approaches do not require a memory buffer and are scalable, they only impose a soft penalty, thus failing to entirely prevent forgetting of previous task information. **Parameter Isolation Approaches:** Parameter isolation has been predominantly done in two ways: either within a fixed capacity or by growing in size. In the former, dynamic sparse methods such as PackNet [32], CLNP [16], PAE [23], and NISPA [18] make use of DNN's over-parameterization to learn multiple tasks within a fixed model capacity. Similar to the brain, these models simultaneously learn both connection strengths and a sparse architecture for each task, thereby isolating the task-specific parameters. However, these methods suffer from capacity saturation in longer task sequences, limiting their ability to accommodate new tasks. In contrast, the latter methods, such as PNNs [39], Expert-gate [4] and DEN [55] expand in size, either naively or intelligently, to accommodate new tasks while minimizing forgetting. Although these approaches are extremely efficient in mitigating catastrophic forgetting, they do not scale well with longer task sequences, rendering them inapplicable in real-world scenarios. Contrary to DNNs, the brain simultaneously exploits multiple neurophysiological processes, including neurogenesis [3], active forgetting [19], metaplasticity [27], experience rehearsal [40], and context-dependent gating [34] to continually acquire, assimilate, and transfer knowledge across tasks without catastrophic forgetting [26]. 
Inspired by how the brain exploits multiple mechanisms concurrently, we propose a novel CL paradigm, TriRE, that leverages the advantages of the aforementioned mechanisms to effectively mitigate catastrophic forgetting in CL.

## 3 Method

CL problems typically comprise \(t\in\{1,2,..,T\}\) sequential tasks, with \(c\) classes per task, and data that appear gradually over time. Each task has a task-specific data distribution associated with it, \((x_{t},y_{t})\in D_{t}\). We take into account two well-known CL scenarios, class-incremental learning (Class-IL) and task-incremental learning (Task-IL). Our working model consists of a feature extractor network \(f_{\theta}\) and a single-head classifier \(g_{\theta}\) that represents all classes of all tasks. Sequential learning through DNNs has remained a challenging endeavor, since learning new information tends to dramatically degrade performance on previously learned tasks. As a result, to better retain information from past tasks, we maintain a memory buffer \(D_{m}\) that contains data from previously seen tasks. Considering the desiderata of CL, we assume that the model does not have infinite storage for previous experience and thus \(|D_{m}|\ll|D_{t}|\). To this end, we use _loss-aware balanced reservoir sampling_ [10] to maintain the memory buffer. We update the working model, \(\Phi_{\theta}=g_{\theta}(f_{\theta}(.))\), using experience rehearsal at each iteration by sampling a mini-batch from both \(D_{t}\) and \(D_{m}\) as follows: \[\mathcal{L}=\underbrace{\mathbb{E}_{(x_{i},y_{i})\sim D_{t}}[\mathcal{L}_{ce}(\sigma(\Phi_{\theta}(x_{i})),y_{i})]}_{\mathcal{L}_{t}}\;+\;\lambda\,\underbrace{\mathbb{E}_{(x_{j},y_{j})\sim D_{m}}[\mathcal{L}_{ce}(\sigma(\Phi_{\theta}(x_{j})),y_{j})]}_{\mathcal{L}_{er}}, \tag{1}\] where \(\mathcal{L}_{ce}\) is the cross-entropy loss, \(\sigma(.)\) is the softmax function, \(\mathcal{L}_{t}\) is the task-wise loss on the current data, and \(\mathcal{L}_{er}\) is the rehearsal-based loss on the buffered data. The objective in Eq. 1 encourages plasticity through the supervisory signal from \(D_{t}\) and increases stability through \(D_{m}\). However, as CL training advances, model predictions carry more information per training sample than ground truth [7]. Hence, soft targets can be utilized in addition to ground-truth labels to better preserve the knowledge of the earlier tasks. Traditional methods to enforce consistency in predictions include using an exponential moving average (EMA) of the weights of the working model [6] or holding previous predictions in a buffer [9]. As the former results in better knowledge consolidation and decision boundaries, we use an EMA of the working model weights to ensure consistency in predictions: \[\mathcal{L}_{cr}\triangleq\underset{(x_{j},y_{j})\sim D_{m}}{\mathbb{E}}\|\Phi_{\theta_{EMA}}(x_{j})-\Phi_{\theta}(x_{j})\|_{F}^{2}, \tag{2}\] where \(\Phi_{\theta_{EMA}}\) is the EMA of the working model \(\Phi_{\theta}\) and \(\|\cdot\|_{F}\) is the Frobenius norm. We update the EMA model as follows: \[\theta_{EMA}=\begin{cases}\mu\,\theta_{EMA}+(1-\mu)\;\theta,&\text{if }\zeta\geq\mathcal{U}(0,1)\\ \theta_{EMA},&\text{otherwise}\end{cases} \tag{3}\] where \(\mu\) is the decay parameter and \(\zeta\) is the update rate. Finally, the EMA model, which acts as a self-ensemble of models with distinct task specializations, is used for inference rather than the working model.
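To make these updates concrete, the following is a minimal PyTorch-style sketch of a single training iteration combining Eqs. 1-3. It is an illustrative approximation rather than the authors' released implementation; the names (`rehearsal_step`, `ema_update`) and the weighting `lambda_cr` of the consistency term are assumptions.

```python
import random
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(working_model, ema_model, mu=0.999, zeta=0.1):
    """Stochastic EMA update of Eq. 3: with probability zeta, move the EMA
    weights towards the working weights using decay mu."""
    if random.random() <= zeta:
        for p_ema, p in zip(ema_model.parameters(), working_model.parameters()):
            p_ema.mul_(mu).add_(p, alpha=1.0 - mu)

def rehearsal_step(model, ema_model, optimizer, task_batch, buffer_batch,
                   lambda_er=1.0, lambda_cr=1.0):
    """One update of the working model: cross-entropy on the current task (L_t),
    cross-entropy on buffered samples (L_er, Eq. 1), and a consistency term to the
    EMA teacher on buffered samples (L_cr of Eq. 2, up to a constant factor)."""
    x_t, y_t = task_batch      # mini-batch from the current task D_t
    x_m, y_m = buffer_batch    # mini-batch from the memory buffer D_m

    loss = F.cross_entropy(model(x_t), y_t)                         # L_t
    logits_m = model(x_m)
    loss = loss + lambda_er * F.cross_entropy(logits_m, y_m)        # + lambda * L_er
    with torch.no_grad():
        teacher_logits = ema_model(x_m)                             # soft targets
    loss = loss + lambda_cr * F.mse_loss(logits_m, teacher_logits)  # + L_cr

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(model, ema_model)
    return loss.detach()
```

At test time, the EMA model (the self-ensemble described above) would be the one queried for predictions, rather than the working model.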
**Notations:** Let \(\mathcal{S}_{t}\) be the extracted subnetwork mask corresponding exclusively to the current task and \(\mathcal{S}\) be the cumulative dynamic network mask corresponding to the tasks learned so far. At the end of the training, \(\mathcal{S}\) would contain the most active neurons and the best corresponding weights across all tasks. In the following, we describe in detail various components of TriRE learning paradigm. ### Retain In CL, typically, models from previous tasks are seen as initialization and are "washed out" by new updates from the current task, which causes CF [35]. However, the brain uses context-dependent gating [26] to selectively filter neural information based on the context in which it is presented, allowing the development of specialized modules that can be added or removed from the network without disrupting previously learned skills [37; 50]. Inspired by this, the _Retain_ phase induces modularity in the model by training a hyper-network first and then extracting a subnetwork that is equivalently representational of the current task knowledge. This extracted subnetwork not only helps in creating task-wise specialized modules, but also helps the model preserve capacity for future tasks. Retention of this subnetwork is done using heterogeneous dropout of activations and weight pruning. The _Retain_ stage appears at the beginning of each task. At this stage, initially \(\{f_{\theta}\mid\theta\notin\mathcal{S}\}\) is trained using a mini-batch of \(D_{t}\) and \(\{f_{\theta}\mid\theta\in\mathcal{S}\}\) is trained using a mini-batch of \(D_{m}\). This is to ensure that the weights not in the cumulative network learn the new task to maintain plasticity, while the weights in the cumulative network learn a combination of old and new tasks to maintain stability. At the convergence of this training, we perform activation pruning followed by weight pruning to extract \(\mathcal{S}_{t}\) as shown in Figure 2. **Activation Pruning:** This involves identifying and extracting neurons that contribute the most to the overall activation or output of the network. We monitor the frequency of neuron activations when a network is trained on a task. In essence, each neuron is given an activation counter that increases when a neuron's activation is among the top-k activations in its layer. We use these activation counts to extract the k-winner activations and retain them as the knowledge base for the current task. Heterogeneous dropout [1] is used to map the activation counts of each neuron to a Bernoulli variable, indicating whether the said neuron is extracted or dropped. This, essentially, leaves the less activated neurons free to learn the next tasks. **Weight Pruning:** After retaining the most activated neurons for the task, we prune the less important connections corresponding to these neurons. In contrast to conventional methods, which only leverage weight magnitude or Fisher information for pruning, our method also takes into account the significance of weights with respect to data saved in the rehearsal buffer. Continual Weight Importance (CWI) [53] criteria ensure that we maintain: (1) weights of greater magnitude for output stability, (2) weights significant for the current task for learning capacity, and (3) weights significant for past data to prevent catastrophic forgetting. 
For the working model, the CWI of weight \(\theta\) is defined as follows, \[CWI(\theta)=\|\theta\|_{1}+\alpha\|\frac{\delta\tilde{\mathcal{L}_{ce}}(D_{t}; \theta)}{\delta\theta}\|_{1}+\beta\|\frac{\delta\mathcal{L}_{ce}(D_{m};\theta) }{\delta\theta}\|_{1} \tag{4}\] where \(\alpha\) and \(\beta\), are coefficients that regulate the weight of current and buffered data, respectively. In addition, \(\tilde{\mathcal{L}_{ce}}\) denotes the single-head form of the cross-entropy loss, which only takes into account the classes relevant to the current task by masking out the logits of other classes. At the end of this stage, we end up with a mask of the most activated neurons and their most important weights for the current task, \(\mathcal{S}_{t}\). ### Revise Although context-dependent gating is a helpful phenomenon for CL in tackling CF, there are other biological phenomena such as neuromodulation, metaplasticity, and neurogenesis among others which also contribute to the brain's ability to combat CF [26]. Neuromodulation, for instance, is used to decide whether a new feature is novel and unfamiliar (that is, creates a new memory) or common and familiar (that is, consolidates into an existing memory). Taking the cue from that, _Revise_ stage mainly focuses on revising and solidifying the knowledge that the extracted subnetwork of the current task, \(\mathcal{S}_{t}\), and the cumulative subnetwork of past tasks, \(\mathcal{S}\), currently possess. It is clear that \(\mathcal{S}_{t}\) is specialized on the current task and \(\mathcal{S}\) is specialized on the past tasks. However, finetuning these two networks jointly can improve the knowledge overlap, and the features thus generated would be a better approximation of all the seen classes. Firstly, \(\{f_{\theta}\mid\theta\notin(\mathcal{S}\cap\mathcal{S}_{t})\}\) is trained with a mini-batch of \(D_{t}\) so that a part of the extracted network, \(\mathcal{S}_{t}\), can be utilized to regain the performance for the current task. Subsequently, \(\{f_{\theta}\mid\theta\in(\mathcal{S}\cap\mathcal{S}_{t})\}\) is trained with a mini-batch of \(D_{m}\) to preserve forward transfer and knowledge overlap between the past and the current tasks. The optimization learning rate is also considerably reduced at this stage compared to the _Retain_ stage to prevent drastic weight changes in subnetworks, which in turn decreases the forgetting. This could be seen as an adaptation of the metaplasticity in the brain [26; 20] which refers to the ability to adjust the amount of plasticity in the network based on the current and future needs of the organism. At the end of this finetuning, the currently extracted subnetwork \(\mathcal{S}_{t}\) is merged with the cumulative extracted mask from past tasks, \(\mathcal{S}\). The merging of the subnetworks can be seen as a process of integrating new neurons (\(\mathcal{S}_{t}\)) into the existing neural network (\(\mathcal{S}\)), similar to the way neurogenesis allows for the integration of new neurons into existing neural circuits to accommodate new memories. ### Rewind Evidence implies that the brain uses active forgetting as a tool for learning due to its limited capacity [47; 29]. Studies also suggest that although learning and memory become more difficult to access during the forgetting process, they still exist [54; 48]. There are still memory remnants in the brain that can be reactivated [31]. In this work, we define the rewinding of weights to an earlier state as active forgetting. 
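Before turning to the choice of the rewind point, the sketch below illustrates the Retain-stage extraction described above: per-neuron activation counting with a top-k (k-WTA) criterion, followed by per-weight scoring with the CWI of Eq. 4. It is a simplified, layer-wise sketch; the function names and the `keep_frac` argument are illustrative assumptions, not the reference implementation.

```python
import torch

def update_activation_counts(counts, activations, k):
    """k-WTA bookkeeping: increment the counter of every neuron whose
    batch-averaged activation is among the top-k of its layer."""
    mean_act = activations.abs().mean(dim=0)   # one value per neuron
    winners = torch.topk(mean_act, k).indices
    counts[winners] += 1
    return counts

def cwi_score(theta, grad_task, grad_buffer, alpha=1.0, beta=1.0):
    """Per-weight Continual Weight Importance (Eq. 4): weight magnitude plus
    gradient magnitudes on the current task and on the buffered data."""
    return theta.abs() + alpha * grad_task.abs() + beta * grad_buffer.abs()

def weight_mask_from_cwi(theta, grad_task, grad_buffer, keep_frac=0.1):
    """Keep the top `keep_frac` fraction of weights by CWI score (binary mask)."""
    scores = cwi_score(theta, grad_task, grad_buffer)
    n_keep = max(1, int(keep_frac * scores.numel()))
    threshold = torch.topk(scores.flatten(), n_keep).values.min()
    return (scores >= threshold).float()
```

The most frequently winning neurons (from the counts) together with their highest-CWI weights define the task mask \(\mathcal{S}_{t}\).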
We also make sure that, rather than reinitializing the model back to random or very early weights, it is rewound to a point where the model has learned some features and has a generic perception of the objective closer to convergence (but not absolute convergence). To aid this, the weights from a later epoch \(k\) of the _Retain_ phase are saved to be used in the _Rewind_ stage. Specifically, after the _Retain_ and _Revise_ steps, we rewind the weights belonging to the non-cumulative subnetwork \(\{f_{\theta}\mid\theta\notin\mathcal{S}\}\) back to the epoch \(k\) weights. The rewound weights are then finetuned for a few epochs using mini-batches of \(D_{t}\). This is helpful because studies [46, 13] show that in the human brain, less active neurons follow a _'use-it-or-lose-it'_ philosophy. Therefore, forgetting and relearning act as a warm-up for these less active neurons, making them relevant again for the learning circuit and more receptive to learning the next task.

Figure 2: Schematic representation of the extraction of the subnetwork at the end of the _Retain_ stage. The dense network is first pruned using the k-WTA criteria, resulting in a subnetwork of the most activated neurons. This subnetwork is then pruned using the CWI criteria, resulting in a final extracted subnetwork, \(\mathcal{S}_{t}\).

In summary, TriRE (Retain-Revise-Rewind) involves iteratively applying the three phases mentioned above to each task within a lifelong learning setting. Our method effectively combines multiple biological phenomena and harnesses the advantageous characteristics provided by popular CL approaches. The step-by-step procedure is given in Algorithm 1. ``` input: Data streams \(\mathcal{D}_{t}\), working model \(\Phi_{\theta}=g_{\theta}(f_{\theta}(.))\), EMA model \(\Phi_{\theta_{EMA}}=g_{\theta_{EMA}}(f_{\theta_{EMA}}(.))\), sparsity factor \(\gamma\), learning rates \(\eta\gg\eta^{\prime}\), Retain epochs \(E_{1}\), Revise epochs \(E_{2}\), Rewind epochs \(E_{3}\). 1:\(\mathcal{S}\leftarrow\{\}\), \(\mathcal{M}\leftarrow\{\}\) 2:for all tasks \(t\in\{1,2,..,T\}\)do 3:for epochs \(e_{1}\in\{1,2,...E_{1}\}\)do\(\triangleright\)Retain 4:for minibatch \(\{(x_{i},y_{i})\}_{i=1}^{B}\in\mathcal{D}_{t}\) and \(\{(x_{m},y_{m})\}_{m=1}^{B}\in\mathcal{M}\)do 5: Update \(\{f_{\theta}\mid\theta\notin\mathcal{S}\},g_{\theta}\) with \(\eta\) on \(\{(x_{i},y_{i})\}_{i=1}^{B}\) using Eq. 1 (\(\mathcal{L}_{t}\)) 6: Update \(\{f_{\theta}\mid\theta\in\mathcal{S}\},g_{\theta}\) with \(\eta\) on \(\{(x_{m},y_{m})\}_{m=1}^{B}\) using Eqs. 1 (\(\mathcal{L}_{er}\)) and 2 7:if \(e_{1}==k\) then 8: Save the weights \(\theta_{k}\) 9: Update \(\theta_{EMA}\) using Eq. 3 10: Extract new subnetwork \(\mathcal{S}_{t}\) with \(\gamma\) sparsity based on CWI from Eq.
4 11:for epochs \(e_{2}\in\{1,2,...E_{2}\}\)do\(\triangleright\)Revise 12:for minibatch \(\{(x_{i},y_{i})\}_{i=1}^{B}\in\mathcal{D}_{t}\) and \(\{(x_{m},y_{m})\}_{m=1}^{B}\in\mathcal{M}\)do 13: Finetune \(\{f_{\theta}\mid\theta\notin(\mathcal{S}\cap\mathcal{S}_{t})\},g_{\theta}\) with \(\eta^{\prime}\) on \(\{(x_{i},y_{i})\}_{i=1}^{B}\) 14: Finetune \(\{f_{\theta}\mid\theta\in(\mathcal{S}\cap\mathcal{S}_{t})\},g_{\theta}\) with \(\eta^{\prime}\) on \(\{(x_{m},y_{m})\}_{m=1}^{B}\) 15: Update \(\theta_{EMA}\) 16: Update cumulative set \(\mathcal{S}=\mathcal{S}\cup\mathcal{S}_{t}\) 17: Reinitialize non-cumulative weights \(\{f_{\theta}\mid\theta\notin\mathcal{S}\}\) with \(\theta_{k}\) 18:for epochs \(e_{3}\in\{1,2,...E_{3}\}\)do\(\triangleright\)Rewind 19:for minibatch \(\{(x_{i},y_{i})\}_{i=1}^{B}\in\mathcal{D}_{t}\)do 20: Update \(\{f_{\theta}\mid\theta\notin\mathcal{S}\},g_{\theta}\) with \(\eta\) on \(\{(x_{i},y_{i})\}_{i=1}^{B}\) 21: Update \(\theta_{EMA}\) 22: Update buffer \(\mathcal{M}\) 23:return model \(\Phi_{\theta}\), model \(\Phi_{EMA}\) ``` **Algorithm 1** Proposed Approach - TriRE ## 4 Results **Experimental Setup:** We expand the Mammoth CL repository in PyTorch [9]. On the basis of Class-IL and Task-IL scenarios, we assess the existing CL techniques against the proposed one. Although the training procedure for Class-IL and Task-IL is the same, during inference, Task-IL has access to the task-id. We consider a number of rehearsal-based, weight regularization, and parameter-isolation approaches as useful baselines because TriRE necessitates experience rehearsal and model modularity. We use ResNet-18 [21] as the feature extractor for all of our investigations. In order to reduce catastrophic forgetting, we additionally offer a lower bound SGD without any support and an upper bound Joint where the CL model is trained using the full dataset. **Experimental Results:** We compare TriRE with contemporary rehearsal-based and weight regularization methods in Class-IL and Task-IL settings. As shown in Table 1, TriRE consistently outperforms rehearsal-based methods across most datasets, highlighting the significance of dynamic masking in CL. Although methods like Co\({}^{2}\)L and ER-ACE excel on simpler datasets, they struggle with more challenging ones. The same applies to methods like DRI and GCR, which augment memory buffers through core-set and generative replay. Their performance, for instance, lags behind TriRE in Seq-TinyImageNet, where the buffer-to-class ratio is low. Retaining task-wise dynamic subnetworks and revising extracted subnetworks to preserve past knowledge significantly reduces task interference in TriRE. In essence, TriRE boosts generalization through weight and function space regularization, selective forgetting, and relearning, yielding superior performance across tasks. As evident from Table 1, weight regularization methods such as LWF, oEWC, and SI perform miserably in both Class-IL and Task-IL settings across datasets. The reason being, these approaches encounter classes solely from the current task at any point in CL training. Therefore, they fail to discriminate between classes from different classes resulting in subpar performance. On the other hand, TriRE leverages samples from previous classes through experience rehearsal to learn discriminatory features across tasks. In addition, TriRE entails forming subnetworks through _Retain_ and _Revise_ resulting in modularity and reduced task interference between tasks. 
Parameter isolation approaches minimize task interference in CL by creating distinct sub-networks either within a given model capacity or by dynamically growing the network. Figure 3 illustrates a comparison between methods trained on Seq-CIFAR100 with 20 tasks i.e., it depicts final accuracies \begin{table} \begin{tabular}{c l|c c|c c|c c} \hline \multirow{2}{*}{ \begin{tabular}{c} Buffer \\ size \\ \end{tabular} } & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Seq-CIFAR10} & \multicolumn{2}{c|}{Seq-CIFAR100} & \multicolumn{2}{c}{Seq-TinyImageNet} \\ \cline{3-8} & & Class-IL & Task-IL & Class-IL & Task-IL & Class-IL & Task-IL \\ \hline \multirow{3}{*}{-} & SGD & 19.62\(\pm\)0.05 & 61.02\(\pm\)3.33 & 17.49\(\pm\)0.28 & 40.46\(\pm\)0.99 & 07.92\(\pm\)0.26 & 18.31\(\pm\)0.68 \\ & Joint & 92.20\(\pm\)0.15 & 98.31\(\pm\)0.12 & 70.56\(\pm\)0.28 & 86.19\(\pm\)0.43 & 59.99\(\pm\)0.19 & 82.04\(\pm\)0.10 \\ \hline \multirow{3}{*}{-} & LwF & 19.61\(\pm\)0.05 & 63.29\(\pm\)2.35 & 18.47\(\pm\)0.14 & 26.45\(\pm\)0.22 & 8.46\(\pm\)0.22 & 15.85\(\pm\)0.58 \\ & oEWC & 19.49\(\pm\)0.12 & 68.29\(\pm\)3.92 & - & - & 7.58\(\pm\)0.10 & 19.20\(\pm\)0.31 \\ & SI & 19.48\(\pm\)0.17 & 68.05\(\pm\)5.91 & - & - & 6.58\(\pm\)0.31 & 36.32\(\pm\)0.13 \\ \hline \multirow{3}{*}{200} & ER & 44.79\(\pm\)1.86 & 91.19\(\pm\)0.94 & 21.40\(\pm\)0.22 & 61.36\(\pm\)0.35 & 8.57\(\pm\)0.04 & 38.17\(\pm\)2.00 \\ & DER++ & 64.88\(\pm\)1.17 & 91.92\(\pm\)0.60 & 29.60\(\pm\)1.14 & 62.49\(\pm\)1.02 & 10.96\(\pm\)1.17 & 40.87\(\pm\)1.16 \\ & CLS-ER\({}^{\dagger}\) & 61.88\(\pm\)2.43 & **93.59\(\pm\)**0.87 & 43.38\(\pm\)1.06 & **72.01\(\pm\)**0.97 & 17.68\(\pm\)1.65 & 52.60\(\pm\)1.56 \\ & ER-ACE & 62.08\(\pm\)1.14 & 92.20\(\pm\)0.57 & 35.17\(\pm\)1.17 & 63.09\(\pm\)1.23 & 11.25\(\pm\)0.54 & 44.17\(\pm\)1.02 \\ & Co\({}^{2}\)L & 65.57\(\pm\)1.37 & 93.43\(\pm\)0.78 & 31.90\(\pm\)0.38 & 55.02\(\pm\)0.36 & 13.88\(\pm\)0.40 & 42.37\(\pm\)0.74 \\ & GCR & 64.84\(\pm\)1.63 & 90.8\(\pm\)1.05 & 33.69\(\pm\)1.40 & 64.24\(\pm\)0.83 & 13.05\(\pm\)0.91 & 42.11\(\pm\)1.01 \\ & DRI & 65.16\(\pm\)1.13 & 92.87\(\pm\)0.71 & - & - & 17.58\(\pm\)1.24 & 44.28\(\pm\)1.37 \\ & TriRE & **68.17\(\pm\)**0.33 & 92.45\(\pm\)0.18 & **43.91\(\pm\)**0.18 & 71.66\(\pm\)0.44 & **20.14\(\pm\)**0.19 & **55.95\(\pm\)**0.78 \\ \hline \end{tabular} \end{table} Table 1: Comparison of prior methods across various CL scenarios. We provide the average top-1 (\(\%\)) accuracy of all tasks after training. \({}^{\dagger}\) Results of the single EMA model. Figure 3: Comparison of TriRE against evolving architectures in terms of Task-IL accuracy on Seq-CIFAR100 dataset divided into 20 tasks. The graph reports the accuracy of individual tasks at the end of CL training. on \(1^{st}\), \(2^{nd}\).. \(20^{th}\) task after training. Upon completing all tasks, TriRE achieves an average accuracy of 80.85%, surpassing the performance of the baselines considered. TriRE leverages the benefits of experience rehearsal, weight regularization, and function space regularization to learn compact and overlapping subnetworks, resulting in reduced task interference while maintaining scalability. In line with biological principles, TriRE incorporates selective forgetting and relearning mechanisms to activate less active neurons and enhance their receptiveness to learning subsequent tasks, thereby mitigating the risk of capacity saturation. 
## 5 Model Analysis

**Task Interference:** Figure 4 shows the changes in neuronal activity for the subset of neurons in the last convolutional layer (top) and the last shortcut layer (bottom) of ResNet-18 trained on 5 tasks in the Seq-CIFAR100 dataset. The neurons that are blacked out are the most active neurons for each particular task after the _Retain_ phase. For each task, it is observed that there are exclusive subnetworks forming on their own (horizontally) that capture task-specific information. However, with CL training, several neurons become generalizable by capturing information that can be shared across tasks. Therefore, there is neuron overlap between the extracted neurons across tasks (vertically). More so in the shortcut layer, since the number of parameters in these layers is low. TriRE manages the model capacity vs. task modularity trade-off better by re-using neurons that can be shared across tasks while keeping the modularity of knowledge in each task intact.

Figure 4: (Top) depicts the distribution of neuron activations across tasks in the last convolutional layer of the feature extractor and (Bottom) depicts the same for the last shortcut layer. The black cubes represent the extracted ones after the _Retain_ stage.

**How Much to Rewind?** In Figure 5, we examine Class-IL accuracy when the model is rewound to different points in the _Retain_ stage in order to comprehend how rewinding affects the accuracy of the model. We observe that the accuracy of the inference model decreases if the model forgets too much and is rewound to an early stage in the training. This is in alignment with the observations made by [14] that rewinding to extremely early stages is not recommended for DNNs because the network has not learned enough meaningful features by then to regain the lost accuracy. Additionally, we notice that accuracy also suffers when the model is rewound to a point extremely close to the end of training time. Rewinding to a very late point in the training phase close to convergence is not ideal because there is not enough time for relearning. Our experiments indicate that rewinding to between 70% and 90% of the training time results in the best accuracy.

Figure 5: The effect of rewinding on Class-IL accuracy for all three datasets. The region from 70% to 90% of all epochs gives the best results consistently across datasets.

**Ablation Study:** As explained previously, TriRE employs a three-stage learning paradigm to reduce task interference and improve weight reuse in CL. We seek to uncover how each of _Retain_, _Revise_, and _Rewind_ in TriRE influences Class-IL and Task-IL accuracies in the Seq-CIFAR100 and Seq-TinyImageNet datasets through Table 2. It can be seen that although _Retain_ alone can extract the subnetworks containing the most active neurons and decrease task interference, it falls short in aspects of forward transfer and weight reuse. Similarly, _Retain_ and _Revise_ together can solidify the knowledge extracted from current and past tasks, but such a model suffers from capacity issues without the reactivation of less active neurons for future tasks. Likewise, _Retain_ and _Rewind_ together can encourage task-wise delimitation of knowledge and promote efficient usage of available networks, but lose out on the forward transfer introduced by the learning of joint distributions. Finally, analogous to the brain, it is evident that the harmony of all components is what achieves the best results in both datasets.
\begin{table} \begin{tabular}{c c c|c c|c c} \hline \hline \multirow{2}{*}{Retain} & \multirow{2}{*}{Revise} & \multirow{2}{*}{Rewind} & \multicolumn{2}{c|}{Seq-CIFAR100} & \multicolumn{2}{c}{Seq-TinyImageNet} \\ \cline{4-7} & & & Class-IL & Task-IL & Class-IL & Task-IL \\ \hline ✓ & ✗ & ✗ & 38.01 & 66.23 & 11.54 & 40.22 \\ ✓ & ✓ & ✗ & 33.08 & 60.03 & 8.44 & 31.90 \\ ✓ & ✗ & ✓ & 43.03 & **72.09** & 16.25 & 48.89 \\ ✓ & ✓ & ✓ & **43.91** & 71.66 & **20.14** & **55.95** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the contribution of each phase in TriRE. Note that the combination of _Revise_ alone or _Revise & Rewind_ has not been considered, as it is not feasible without the _Retain_ phase.

**Memory and Computational Cost:** We conduct a comparative analysis of the computational and memory overhead of TriRE in contrast to related works. Table 3 provides an analysis of the learnable parameters and memory required by TriRE in contrast to those of DER++, EWC, and PNNs (choosing one method from each family of CL methods). Firstly, similar to DER++ and EWC, TriRE does not add any learnable parameters to the model. However, it is evident that PNNs have an infeasible number of learnable parameters, which gets progressively worse with longer task sequences. Secondly, the observed increase in memory consumption in TriRE can be attributed to several factors: (1) the application of multiple masking mechanisms for parameter isolation; (2) the incorporation of the Rewind phase necessitating weight retention from a previous epoch; and (3) the utilization of the Exponential Moving Average (EMA) model to enhance knowledge consolidation. All of these factors hold memory but do not add any learnable parameter to the training.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Learnable Parameters (Million)} & \multicolumn{3}{c}{Memory Consumption (Million)} \\ \cline{2-7} & 5 Tasks & 10 Tasks & 20 Tasks & 5 Tasks & 10 Tasks & 20 Tasks \\ \hline DER++ & \(1\)x & \(1\)x & \(1\)x & \(1\)x & \(1\)x & \(1\)x \\ EWC & \(1\)x & \(1\)x & \(1\)x & \(3\)x & \(3\)x & \(3\)x \\ TriRE & \(1\)x & \(1\)x & \(1\)x & \(6\)x & \(6\)x & \(6\)x \\ PNNs & \(27\)x & \(79\)x & \(240\)x & \(27\)x & \(79\)x & \(240\)x \\ \hline \hline \end{tabular} \end{table} Table 3: Relative number of learnable parameters and corresponding memory footprint in Seq-CIFAR100 with a varying number of tasks.

## 6 Conclusion

We introduced TriRE, a novel biologically inspired CL paradigm that entails experience rehearsal, scalable neurogenesis, and selective forgetting and relearning to effectively mitigate catastrophic forgetting in CL. Within each task, TriRE entails retaining the most prominent neurons for each task, revising the extracted knowledge of current and past tasks, and actively promoting less active neurons for subsequent tasks through rewinding and relearning. Analogous to multiple neurophysiological mechanisms in the brain, TriRE leverages the advantages of different CL approaches, thus significantly lowering task interference and surpassing different CL approaches when considered in isolation. For Seq-TinyImageNet, TriRE outperforms the closest rival in rehearsal-based baselines by 14%, surpasses the best parameter isolation baseline by 7%, and nearly doubles the performance of the best weight regularization method.
Extending our method to CL scenarios oblivious to task boundaries and to few- and zero-shot learning settings are some of the future research directions for this work. ## 7 Limitations and Future Work We proposed TriRE, a novel paradigm that leverages multiple orthogonal CL approaches to effectively reduce catastrophic forgetting in CL. As orthogonal CL approaches may not always be complementary, the selection of such approaches needs careful consideration in TriRE. In addition, having multiple objective functions naturally expands the number of hyperparameters, thereby requiring more tuning to achieve optimal performance. Therefore, additional computational complexity and memory overhead due to the staged approach and extensive hyperparameter tuning are some of the major limitations of the proposed method. For the same reason, we highlight that TriRE is not directed toward compute-intensive architectures such as vision transformers. TriRE involves different stages of training within each task, requiring knowledge of task boundaries. In line with state-of-the-art methods in CL, each task entails a non-overlapping set of classes, and data within each task is shuffled to guarantee i.i.d. data. However, in the case of online learning where data streams and the distribution gradually shift, TriRE cannot be applied in its current form. Therefore, additional measures such as task-boundary approximation and modification to learning objectives are necessary to enable TriRE to work in such scenarios. Furthermore, traditional CL datasets considered in this work entail independent tasks and data points without intrinsic cumulative structure. As TriRE does not leverage structures learned in previously encountered tasks, structure learning forms another limitation of this proposed method. Reducing computational and memory overhead, extending to task-free CL scenarios with recurring classes, and leveraging intrinsic structures within underlying data are some of the future research directions for this work. ## 8 Broader Impacts Inspired by the multifaceted learning mechanisms of the brain, we propose TriRE, which replicates the brain's ability to leverage multiple mechanisms for CL, enhancing the generalization capabilities of CL models across tasks. Its success not only encourages the exploration of neuro-inspired methods for deep neural networks, but also opens up opportunities to augment existing CL approaches by leveraging the advantages of competing strategies. By enabling models to learn continuously and adapt to new tasks, TriRE contributes to the responsible and ethical deployment of AI technologies, as models can improve and update their knowledge without requiring extensive retraining. This advancement has significant implications for various real-world applications and promotes the development of AI systems that can continually improve and adapt their performance. Acknowledgement:The work was conducted while all the authors were affiliated with NavInfo Europe.
2304.05874
Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data
Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a correlation-based measure of power spectral density similarity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.
Dominik Klepl, Fei He, Min Wu, Daniel J. Blackburn, Ptolemaios G. Sarrigiannis
2023-04-12T14:13:09Z
http://arxiv.org/abs/2304.05874v3
Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data ###### Abstract Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a well-known correlation-based measure of functional connectivity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks. _Keywords:_ Alzheimer's disease, graph neural network, classification, EEG ## I Introduction The brain is a complex, densely connected system that operates across multiple spatial and temporal scales. Neurological diseases, such as Alzheimer's disease (AD), can alter the connectivity of the brain and thus disrupt brain function [1; 2; 3; 4]. AD is the most common cause of dementia and affects millions of patients worldwide. Currently, the diagnosis of AD is typically made using a combination of cognitive and neurological assessments, as well as neuroimaging techniques, such as positron emission tomography (PET) or magnetic resonance imaging (MRI), which can be time-consuming and expensive. The development of rapid, economical, and explainable diagnosis methods is becoming increasingly important. Electroencephalography (EEG) is an economical and non-invasive neuroimaging method that records the sum of electrical potentials generated by various brain areas. EEG is extensively used in the research of AD-related alterations in brain function and functional connectivity. Although EEG is not yet widely used in clinical settings, numerous studies have demonstrated the high effectiveness of an EEG-based diagnosis of AD [5; 6; 7; 8; 9; 10]. AD causes disruption of synaptic connections across multiple scales [11; 3; 12] and can thus be viewed as a network disorder [1]. The synaptic disconnection can be observed in EEG signals as alterations of synchronisation and functional connectivity (FC) [3; 13]. Furthermore, the slowing of EEG signals is a reliable characteristic of AD [11; 14], observed as a shift of spectral power towards low-frequency components. Graph-theoretic studies of AD also report reduced complexity, disruption of small-world properties, decreased integration, and increased segregation [15; 16; 7; 17; 12]. However, one of the challenges in EEG-based predictive models is the efficient utilisation of the information collected over multiple electrodes, since there is information to be gained both at the electrode level, e.g. frequency spectrum, and the cross-electrode level, e.g. FC. Machine learning-based approaches often require domain knowledge and rely on manual feature extraction. For example, Oltu et al. 
[19] calculate power spectrum density (PSD) and coherence across multiple EEG electrodes, and then use descriptive statistics, such as sum and variance, as input features. Other feature-based methods use FC [20; 21; 9]. These methods first reconstruct the brain graph via measures of FC, such as phase lagging index [9], generalised composite multiscale entropy vector [20], or phase synchronisation index [21]. The features can then be extracted via statistics [20] or graph-theoretic measures [21; 9]. In contrast, deep learning methods can extract features automatically from the input. However, utilising the information from multiple electrodes with classical deep learning methods is challenging. To overcome this issue, several studies have transformed EEG signals into images to make use of convolutional neural networks (CNN) [22; 23; 24; 10; 25], which are efficient in image classification. For instance, Ieracitano et al. [22] compute the PSD across channels and compose them to form a channel by PSD image. Bi et al. [24] use spectral topology images and leverage the colour channels of an image to represent three frequency bands. Finally, Huggins et al. [25] create tiled images where each tile contains the continuous wavelet transform of an EEG electrode. Although these methods utilise multiple channels, the cross-electrode information is still omitted. To address this limitation, a CNN is utilized on FC-based adjacency matrices has been proposed [26]. However, CNN is not well suited for such input since the adjacency matrix is irregular and non-euclidean. Graph neural network (GNN) is an extension of CNN to process graph-structured inputs. Multiple studies propose GNN-based architectures to process EEG. However, GNN methods for EEG-based diagnosis of AD are limited [5; 8]. GNN-EEG implementations often include several steps: (1) input construction, i.e. graph structure and node features; (2) GNN encoder to learn node embeddings; and (3) aggregation of node embeddings to a graph embedding, which can be used in the final classification step. There are various approaches to realise the graph construction in step (1). Node features are commonly defined as EEG time-series signal [8; 27; 28; 29], or a statistical summary of the signal in the time domain [30; 31], the frequency domain [5; 32], or the differential entropy [33; 34; 35; 36; 27; 32; 33]. Based on network neuroscience literature, many approaches define the brain graph using FC measures [5; 8; 27; 29; 30; 31; 37; 38]. The graph structure can also be based on the distance between EEG electrodes [31; 33; 34]. However, such an approach largely ignores brain connectivity information. Alternatively, the brain graph can be automatically learned by the model, either as a learnable mask shared across samples [39; 27; 32] or by pairwise node feature distance minimisation regularised by an additional graph loss function [35; 36; 40]. While such approaches are flexible and should converge to an optimal graph structure with respect to a given learning task, the learned brain graph might not be representative of the underlying brain connectivity, thus limiting the interpretability of such learned brain graphs. In this work, we propose an adaptive graph learning mechanism based on node feature enhancement via CNN and subsequent graph construction. This is achieved by using a standard FC measure (Pearson's correlation) and sparsified via k-nearest neighbour (KNN) edge selection. 
Thus, it combines the strength of the FC-based and automated graph learning methods. The design of GNN encoders in step (2) for EEG applications has been mainly limited to simple architectures, such as the Chebyshev graph convolution (ChebConv) [28; 29; 31; 32; 33; 34; 35; 38], and simple graph convolution (GCN) [5; 39; 40; 32; 33; 41; 27]. However, we hypothesise that such node embedding updating mechanisms are not optimal for EEG tasks. These graph convolutions update node embeddings by summing the initial embedding and the aggregated messages from the neighbouring nodes. Such updating implies that information from different scales contributes equally to the final node embeddings, hence graph embeddings as well. While brain disruptions caused by AD occur across multiple spatial scales, their predictive power is likely different. Therefore, a gating mechanism is crucial for filtering and weighting the information collected across different scales. We propose to adopt the gated graph convolution [42] to address this issue. Finally, we implement the aggregation of node embeddings in step (3) by adopting the adaptive structure-aware pooling (ASAP) node pooling mechanism [43] to first learn the most important clusters of nodes, which are in turn concatenated to form the graph embedding. This is in contrast to the previous approaches that do not use any node pooling and form graph embeddings via simple element-wise readout layers [5; 27; 30; 41; 44; 40] or concatenating all nodes of the graph [38; 8]. Other node pooling approaches were tested for EEG applications [44; 45]. In contrast to ASAP pooling, these approaches pool the graph by selecting a specified number of nodes without considering their local context within the graph. Therefore, important information might be lost due to such node pooling. In this paper, we propose a novel GNN model for explainable AD classification, which can adaptively enhance node features and dynamically construct brain graph structures as shown in Fig. 1. The learned brain graphs can then be used for the interpretation of predictions. Moreover, a clustering-based node pooling mechanism is adopted to coarsen the brain graph, thus localising the brain regions that contribute to the predictions. Finally, we carry out extensive ablation and parameter sensitivity experiments to elucidate the importance of the individual blocks within the proposed model architecture. ## II Data EEG recordings were collected from 20 AD patients and 20 healthy control participants (HC) younger than 70 years. A detailed description of the experimental design and confirmation of the diagnosis is provided in [46]. All the AD participants were recruited from the Sheffield Teaching Hospital memory clinic. AD participants were diagnosed between one month and two years before data collection. All of them were in the mild to moderate stage of the disease at the time of recording, with an average Mini Mental State Examination (MMSE) score of 20.1 (sd = 4). High-resolution structural magnetic resonance imaging (MRI) scans of all patients were acquired to eliminate alternative causes of dementia. Age and gender-matched HC participants with normal neuropsychological tests and structural MRI scans were recruited. EEG data was acquired using an XLTEK 128-channel headbox, Ag/AgCl electrodes with a sampling frequency of 2 kHz using a modified 10-10 overlapping a 10-20 international electrode placement system with a referential montage with a linked earlobe reference. 
The recordings lasted 30 minutes, during which the participants were instructed to rest and not think about anything specific. Within each recording were two-minute-long epochs during which the participants had their eyes closed, alternating with an equal duration of eyes-open epochs. All the recordings were reviewed by an experienced neurophysiologist on the XLTEK review station with time-locked video recordings (Optima Medical LTD). For each participant, three 12-second-long artefact-free epochs were isolated. Finally, the following 23 bipolar channels were created: F8-F4, F7-F3, F4-C4, F3-C3, F4-FZ, FZ-CZ, F3-FZ, T4-C4, T3-C3, C4-CZ, C3-CZ, CZ-PZ, C4-P4, C3-P3, T4-T6, T3-T5, P4-PZ, P3-PZ, T6-O2, T5-O1, P4-O2, P3-O1 and O1-O2 [46]. As a neurophysiologist confirmed the EEG signal to be artefact-free, we did not perform any further cleaning of the signals. The signals are filtered using a band-pass Butterworth filter to the range of 0.5 Hz to 45 Hz and down-sampled to 200 Hz. Finally, 1-second-long windows with 50% overlap are created to increase the sample size.

## III Methods

The proposed adaptive gated graph convolutional network (AGGCN) model consists of three blocks, namely, a graph learning module, a GNN encoder and a classifier. The graph learning module receives a node feature matrix as input, enhances it using a 1D-CNN and learns the brain graph structure. The GNN encoder then uses the output of the graph learning module as input, i.e. a featured, weighted, undirected graph. The encoder generates a graph embedding used by the classifier to output the predicted probabilities.

Figure 1: Architecture of the proposed adaptive gated graph convolutional network. Node features are defined as power spectral density from 1 to 45 Hz. The node features are then used as input to the graph learning module (green box), where they are enhanced by a 1D convolutional neural network. The brain graph structure is then constructed as a correlation graph and made sparse by a k-nearest-neighbour edge selection. The enhanced node features and the learned graph structure are then passed to an encoder (purple box) consisting of a gated graph convolutional layer (repeated for R iterations) and the ASAP node pooling mechanism. The node pooling coarsens the graph. The node features of the coarsened graph are then flattened and passed to a multilayer-perceptron classifier, which outputs the predictions.

### Node feature and graph learning

The node features are defined as power spectral density computed from 1-second-long EEG signals with 1 Hz increments from 1 to 45 Hz. Hence, the input is a node feature matrix \(X\in\mathbb{R}^{N\times D_{in}},D_{in}=45\). The input is then passed to a convolutional neural network (CNN) with batch normalisation, \(L_{CNN}\) 1D convolutional layers and a maximum pooling with kernel size 2 and step size 2. The output is flattened and fed to a fully connected layer with hidden size \(h_{CNN}\) and batch normalisation. This neural network outputs a matrix of enhanced node features \(X^{\prime}\in\mathbb{R}^{N\times D_{h_{CNN}}}\). A graph structure is then inferred from the enhanced node features by computing the absolute value of Pearson's correlation for each pair of nodes. Thus, a unique graph structure is learned for each input sample and is defined by an adjacency matrix \(A\in\mathbb{R}^{N\times N}\) with \(N=23\) being the number of EEG channels. In order to produce sparse graphs, the k-nearest-neighbours algorithm is utilised.
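As a rough illustration of this construction (our own sketch, not code from the paper; the function name `knn_correlation_graph` and the final symmetrisation convention are ours), the correlation weighting and the per-node selection of the \(k\) strongest edges could be written as:

```
# Illustrative sketch: build the learned adjacency matrix from enhanced node
# features x of shape [N, D] (N = 23 EEG channels) via |Pearson correlation|
# and keep only the k strongest edges per node.
import torch

def knn_correlation_graph(x, k):
    xc = x - x.mean(dim=1, keepdim=True)               # centre each node's features
    xc = xc / (xc.norm(dim=1, keepdim=True) + 1e-8)    # normalise rows
    corr = (xc @ xc.t()).abs()                         # |Pearson correlation| in [0, 1]
    corr.fill_diagonal_(0.0)                           # no self-loops
    topk = corr.topk(k, dim=1).indices                 # k strongest neighbours per node
    mask = torch.zeros_like(corr).scatter_(1, topk, 1.0)
    adj = corr * mask
    return torch.maximum(adj, adj.t())                 # one way to symmetrise the kNN graph
```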
This means that the \(k\) strongest edges are preserved for each node. This proposed graph learning module has multiple hyperparameters that control its architecture. Namely, these are the number of convolutional layers \(L_{CNN}\), the kernel size (which is equal to the step size), the number of filters, the hidden size \(h_{CNN}\), the dropout rate \(drop_{CNN}\) and the \(k_{KNN}\) parameter that controls the graph sparsity. ### Graph neural network encoder and classifier A graph convolution extends the classical convolution from the Euclidean domain to the graph domain. The input graph is given by \(G=(N,A,X^{\prime})\) where \(N\) is the set of nodes, \(A\) is the learned graph, and \(X^{\prime}\) is the enhanced node feature matrix. A simple graph convolution is defined by the message-passing mechanism wherein the node embedding of node \(i\) is learned by aggregating information from its 1-hop neighbourhood, i.e. nodes connected with an edge, as follows: \[x_{i}^{l+1}=x_{i}^{l}+\Theta\sum_{j\in N(i)}e_{ij}x_{j}^{l}, \tag{1}\] where \(x_{i}^{l}\) are the node features of node \(i\) at the \(l^{\text{th}}\) layer, \(x_{i}^{0}\) is the \(i^{\text{th}}\) row of the input node feature matrix \(F^{\prime}\), and \(\Theta\) is a learnable linear transformation. \(N(i)\) and \(e_{ij}\) are the neighbourhood of node \(i\) and the edge weight connecting nodes \(i\) and \(j\) given by the adjacency matrix \(A\), respectively. Stacking \(L\) graph convolutional layers then means aggregating information iteratively from 1-hop to \(L\)-hop neighbourhoods, thus gradually going from local to global information about the graph. Note that the aggregated message is added to the initial node embedding \(x_{i}^{l}\). Thus, the entire information collected from each L-hop neighbourhood is always fully integrated into the node embedding. However, information might be distributed unequally across spatial scales in brain graphs. The gated graph convolution (GGCN) [42] addresses this problem by introducing a mechanism to decide what information should be retained at each scale selectively: \[m_{i}^{(r+1)} =\sum_{j\in N(i)}e_{ji}\cdot\Theta^{r+1}\cdot x_{j}^{(r)}, \tag{2}\] \[x_{i}^{(r+1)} =\text{GRU}(m_{i}^{(r+1)},x_{i}^{(r)}), \tag{3}\] where \(m_{i}\) are the aggregated messages, \(\sum\) is the aggregation function, \(\Theta^{r}\) is a learnable matrix for iteration \(r\), which maps the node features from shape \([1,D_{h_{CNN}}]\) to \([1,D_{h_{CNN}}]\), and \(GRU\) is the gated recurrent unit [47]. Briefly, a GRU is a recurrent neural network layer with an update, reset, and new gates that allow the network to recursively update or forget information about the input. The node embeddings are learned recursively up to \(R\) iterations with a shared \(GRU\) gate, which is equivalent to stacking \(R\) GCN layers. The node embeddings are then passed through an activation function and a batch normalisation layer. Finally, the node embeddings are passed to the node pooling module. The hyperparameters of the proposed encoder are the number of iterations \(R\), the hidden size \(h_{GNN}\), the activation function, the aggregation function and the dropout rate \(drop_{GNN}\) applied after the encoder. #### ii.2.1 Node pooling After learning the node embeddings, the model learns a coarsened graph using the ASAP pooling mechanism [43]. This pooling first learns \(N\) clusters, each centred at one node, also named ego-graphs. 
The membership of a node \(j\) in the ego-cluster centred at node \(i\) is given by the \(S_{ij}\) matrix. Note that this is a soft-cluster assignment matrix; thus, each node can belong to multiple clusters with varying membership strengths. The clusters are learned as follows: \[S_{ij} =a_{ij}, \tag{4}\] \[a_{ij} =\text{softmax}\left(\theta^{\text{T}}\sigma\left(\Theta x_{i}^{m }\|x_{j}\right)\right),\] (5) \[x_{i}^{m} =\max_{j\in N(i)}x_{j}, \tag{6}\] where \(a_{ij}\) is the attention score and the membership strength, \(\theta\) and \(\Theta\) are learnable vector and matrix, respectively. \(\sigma\) is the LeakyReLU activation function, and \(x_{i}^{m}\) is the master query representing the initial cluster embedding. The attention scores are also subject to a dropout probability \(drop_{pool}\). The final cluster embedding is then calculated as an attention-weighted sum, which is additionally weighted by the cluster score \(\phi_{i}\): \[x_{i}^{c}=\phi_{i}\sum_{j\in N(i)}a_{ij}x_{j}, \tag{7}\] where the cluster score \(\phi_{i}\) is computed by the local extremum graph convolution [43]: \[\phi_{i}=\Theta_{1}\cdot x_{i}+\sum_{j\in N(i)}e_{ji}\cdot\left(\Theta_{2}x_{i }-\Theta_{3}x_{j}\right), \tag{8}\] which is designed to measure the relative importance of each cluster. The cluster embedding \(x_{i}^{c}\) is then used to select the top \(k\) scoring clusters, which will be included in the coarsened graph: \[\bar{i} =Top_{k}(X^{c}),k\in[1,2,...N],\ \ \ \ \ \bar{S} =S(:,\bar{i}) \tag{9}\] \[A^{p} =\bar{S}^{\mathrm{T}}\cdot A\cdot\bar{S}, X^{\mathrm{p}} =X^{c}(:,\bar{i}) \tag{10}\] where \(Top_{k}\) is a function that returns the indices of clusters \(\bar{i}\). \(\bar{S}\) and \(X^{\mathrm{p}}\) is the pruned soft-cluster assignment matrix and the pruned cluster embedding matrix, respectively, and \(A^{p}\) is the adjacency matrix of the coarsened graph. The graph pooling module has the following hyperparameters: the size of the pooled graph \(k_{pool}\), the dropout rate \(drop_{pool}\) and the negative slope of the LeakyReLU activation. #### iii.2.2 Multilayer perceptron classifier The cluster embedding matrix \(X^{p}\) of the coarsened graph returned by the node pooling module is flattened and fed to a multilayer perceptron (MLP) classifier. Specifically, a \(L_{MLP}\)-layer MLP with hidden size \(h_{MLP}\) is utilised with a block of batch normalisation, activation function, and dropout layers utilised between the fully connected layers. The final layer outputs a two-dimensional vector of log probabilities for each class. The classifier has the following hyperparameters: the number of layers \(L_{MLP}\), hidden size \(h_{MLP}\), activation function and dropout rate \(drop_{MLP}\). ### Model implementation and evaluation The proposed AGGCN model was implemented using PyTorch 1.10 [48], and PyTorch Geometric 2.0.2 [49] and trained on a laptop with Intel i7 CPU, 16 GB RAM and an NVIDIA RTX 2070 GPU. The model is trained by minimising the binary cross-entropy loss. The model performance is evaluated using repeated (100 times) 10-fold stratified cross-validation and trained on the dataset collected during the eyes-closed condition. Since there are multiple samples from the same participant, keeping all samples within the same fold is crucial to prevent information leakage. A stochastic gradient descent (SGD) optimiser and an exponential learning rate scheduler are used to train the model with a batch size of 90 for 100 epochs. 
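A minimal sketch of this training configuration (our own illustration; the learning rate, decay factor, and criterion are placeholders rather than values reported in the paper):

```
# Illustrative training loop: SGD with an exponential learning-rate scheduler,
# run for 100 epochs on mini-batches (the paper uses batch size 90), minimising
# a binary cross-entropy criterion supplied by the caller.
import torch

def train(model, train_loader, criterion, epochs=100, lr=1e-2, gamma=0.95):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```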
Additionally, zero-mean Gaussian noise with standard deviation \(\sigma\) is added to the input during training with probability \(p_{noise}\) to improve the generalisability of the model. Eventually, the best model was identified using the average cross-validated accuracy measured on the test folds. The selected model was then tested on the dataset obtained during the eyes-open condition as well as on the combined dataset from both conditions. Note that the hyperparameters of the proposed model are optimised using Bayesian optimisation. Ten warm-up random iterations were used to initialise the optimisation, followed by 100 optimisation iterations. Moreover, we carry out parameter-sensitivity experiments to verify the influence of a few key hyperparameters of the proposed model architecture. Specifically, these are the number of iterations of the GGCN encoder, the size of the pooled graphs, the sparsity of the learned graph and the choice of aggregation function of the GGCN encoder. The hyperparameters of the model are reported in our supplementary materials. ## IV Results and discussion In this section, we demonstrate the experimental results of our AGGCN. As illustrated in Table 1, our AGGCN has shown robustness across all the conditions, indicating its potential for real-world applications. Note that the highest accuracy was reached during the EC condition. This is likely because with eyes closed, the ocular artefacts are minimised; thus, the underlying dynamics are easier to detect. However, the performance remains high even in EO and EC+EO conditions suggesting that the proposed model can detect underlying patterns in all the conditions. In addition, the hyperparameter values of the optimised model are reported in Table S1 in Supplementary Materials. ### Comparison with the baselines The proposed model was compared to three baseline models that were proposed in the literature across the three conditions. The first baseline is the best-performing model from our previous work [5]. It is a GNN with two spatial graph convolutional layers, maximum readout and brain graph defined using the amplitude-envelope-correlation (AEC-GNN). The second baseline model is the spatio-temporal GNN (STGCN) that uses temporal convolutions and ChebConv layers and defines the brain graphs using wavelet coherence [8]. Finally, we use a naive baseline model where the input node feature matrix is flattened and classified using an MLP [5] without using graph-domain information. Table 2 shows the AUC values of various methods across different conditions. Note that all four methods were evaluated under the same setting (e.g. the same 1-second EEG window). We can observe that our proposed AGGCN outperforms the baselines significantly. Moreover, STGCN was evaluated using non-stratified cross-validation in their original paper. It is well expected that its performance drops significantly when evaluated using stratified cross-validation in our experiments. ### Model ablation study We perform ablation experiments to determine the contribution of each module of the proposed model. The following seven ablated variants of the proposed model were tested in our experiments. * **A**: no node pooling; * **B**: graph learning replaced with a fully connected graph; * **C**: GGCN replaced with a \(R^{\text{th}}\)-order ChebConv; * **D**: variants A and B combined; * **E**: variants A and C combined; * **F**: variants B and C combined; * **G**: variants A, B and C combined. The ablation results in Fig. 
2 reveal that each of the proposed modules contributes significantly to the high performance of the proposed architecture. For variant A, we can observe that the contribution of the node pooling module is significant, albeit relatively small. However, this module reduces the number of parameters of the model and helps to produce explainable predictions (Fig. 4 and Fig. 5 in the next subsection). Without the node pooling, the final MLP classifier would have \(N\times h_{GNN}\times h_{MLP}\) parameters (\(N=23\)), but node pooling reduces it to \(k_{pool}\times h_{GNN}\times h_{MLP}\) (\(k_{pool}=5\)). For variant B, it is not surprising that its performance decreases significantly, as the graph learning module is replaced with a fully connected graph and variant B cannot leverage any graph-domain information. Next, we demonstrate that the GGCN encoder improves performance significantly compared to a ChebConv encoder according to variant C. A ChebConv layer is similar to a GGCN in its iterative nature, i.e. ChebConv iteratively updates node embeddings by approximating the eigendecomposition of graph Laplacian. However, ChebConv does not have any gating mechanism, which means that information across scales contributes to the final embedding equally. Since all of the major modules of the proposed are shown to contribute to the final performance significantly, it is unsurprising that the rest of the ablated models with more than one of these modules perform significantly worse as well (Variants D-G in Fig. 2). The parameter sensitivity experiments also confirm the optimal values of crucial hyperparameters of the proposed model (Supplementary Materials, Figs S1-S4). It is worth noting that the proposed architecture allows training relatively deep models (using up to ten GGCN iterations) with only a minor performance decrease (Fig. S1). ### Explainability of AGGCN The proposed model generates plausible and consistent explanations for its predictions. Specifically, the graph learning module learns a clear difference between the AD and HC cases, as shown in Fig. 3. The learned brain graphs show that AD cases have increased connectivity globally, while HC graphs seem more sparse with few strongly connected clusters. A well-defined cluster is present in both groups within the parietal, temporal and occipital regions. The node pooling mechanism then highlights the importance of this cluster, as most of the nodes within this cluster are included in the coarsened graph in Fig. 4. The coarsened graph also captures the pattern of increased connectivity of AD cases compared to HC cases. Note that the nodes of the pooled graphs are, in fact, cluster \begin{table} \begin{tabular}{c c c c} \hline Model & EC & EO & EC+EO \\ \hline AEC-GNN [5] & 87.15 \(\pm\) 2.45 & 81.26 \(\pm\) 2.58 & 83.16 \(\pm\) 1.84 \\ MLP [5] & 89.09 \(\pm\) 0.69 & 86.43 \(\pm\) 1.03 & 89.21 \(\pm\) 0.94 \\ STGCN [8] & 69.50 \(\pm\) 3.74 & 63.82 \(\pm\) 2.53 & 67.16 \(\pm\) 2.68 \\ **Proposed** & **94.93 \(\pm\) 0.93** & **93.75 \(\pm\) 1.27** & **94.05 \(\pm\) 1.03** \\ \hline \end{tabular} \end{table} Table 2: The AUC values of the baseline models and the proposed method across conditions. The best-performing model is highlighted in bold. 
\begin{table} \begin{tabular}{c c c c c} \hline Condition & Accuracy & AUC & Sensitivity & Specificity \\ \hline EC & 90.83 \(\pm\) 1.18 & 94.93 \(\pm\) 0.93 & 93.37 \(\pm\) 1.86 & 88.24 \(\pm\) 1.92 \\ EO & 88.77 \(\pm\) 1.67 & 93.75 \(\pm\) 1.27 & 90.41 \(\pm\) 3.21 & 87.06 \(\pm\) 2.88 \\ EC+EO & 89.61 \(\pm\) 1.54 & 94.05 \(\pm\) 1.03 & 89.65 \(\pm\) 2.79 & 89.57 \(\pm\) 2.22 \\ \hline \end{tabular} \end{table} Table 1: Performance of the proposed AGGCN in eyes closed (EC), eyes open (EO) and combined (EC+EO) conditions. Figure 2: Accuracy of model variants. The asterisks report the p-value of a nonparametric Mann-Whitney U test measuring the difference between AGGCN and the ablated variants. representations, i.e. attention weighted sum of node embeddings as shown in Fig. 5. These clusters are primarily located in the parietal and temporal channels and a few central and frontal channels. Surprisingly, the cluster assignments are quite similar in AD and HC in Fig. 5 and are similar to the learned graph structure shown in Fig. 3. Finally, the role of the gating mechanism is further elucidated by analysing the amount of information gathered at each scale, i.e. iteration of GGCN (Fig. 6). We measure this by computing the average Euclidean distance between the initial and updated node embedding at each iteration, i.e. \(x_{i}^{(r)}\) and \(m_{i}^{(r+1)}\) in Eq. 3. For instance, a small distance means a small amount of information was gathered from that scale, i.e. iteration. Local information contributes highly to the node embeddings of the HC cases, and then the degree of contributions linearly decreases with increasing graph scale. The opposite pattern is observed for AD cases, where the later iterations influence the node embeddings. This highlights the distributed nature of the neural disruptions caused by AD as the later iterations gather more global and distributed information. ### Limitations and future work Although our approach achieves competitive performance, we identify a few drawbacks. First, the relatively small size of our dataset imposes a limit on fitting complex models. We address this issue by segmenting the EEG signals into short windows. The short window length means that the model might not be able to represent information from low-frequency components of the signal. Next, we do not explore alternative node feature representations beyond PSD in this study. PSD is merely a linear frequency-domain representation of the signal. Figure 4: Average adjacency matrix of pooled graphs of AD and HC cases. Figure 5: Average attention scores of preserved clusters in the pooled graphs of AD and HC cases. Figure 3: Average adjacency matrix of learned graphs of AD and HC cases. Figure 6: The average distance between initial node embedding and updated node embeddings shows the amount of information retained in each iteration of GGCN, i.e. going from local to global information. The asterisks denote the p-value of statistical tests comparing the average distance between AD and HC cases. Including time-domain and nonlinear information in the node features might improve the expressiveness of the model. Similarly, the proposed graph learning mechanism is limited to linear patterns of FC, because (1) it is inferred from the node features and (2) it is expressed as Pearson's correlation coefficient. Future work should explore other forms of FC that might be integrated into the graph learning mechanism; and study ways to include more complex frequency-dependent coupling information. 
## V Conclusion This work proposes a novel graph learning model that performs highly in the AD diagnosis task. Additionally, we show that the model produces robust and clinically relevant explanations for its predictions via the novel graph structure learning module and the node pooling mechanism. Finally, we highlight the importance of utilising the gating mechanism within a message-passing encoder. This allows the model to accurately represent the multiscale distributed network disruptions displayed in the AD cases. ## Acknowledgement The EEG data was funded by a grant from Alzheimer's Research UK (ARUK-PPG20114B-25). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. The work was supported by A*STAR, AI, Analytics and Informatics (AI3) Horizontal Technology Programme Office (HTPO) seed grant C211118015.
2308.06070
The next case of Andrásfai's conjecture
Let $\mathrm{ex}(n,s)$ denote the maximum number of edges in a triangle-free graph on $n$ vertices which contains no independent sets larger than $s$. The behaviour of $\mathrm{ex}(n,s)$ was first studied by Andr\'asfai, who conjectured that for $s>n/3$ this function is determined by appropriately chosen blow-ups of so called Andr\'asfai graphs. Moreover, he proved $\mathrm{ex}(n, s)=n^2-4ns+5s^2$ for $s/n\in [2/5, 1/2]$ and in earlier work we obtained $\mathrm{ex}(n, s)=3n^2-15ns+20s^2$ for $s/n\in [3/8, 2/5]$. Here we make the next step in the quest to settle Andr\'asfai's conjecture by proving $\mathrm{ex}(n, s)=6n^2-32ns+44s^2$ for $s/n\in [4/11, 3/8]$.
Tomasz Łuczak, Joanna Polcyn, Christian Reiher
2023-08-11T11:11:20Z
http://arxiv.org/abs/2308.06070v1
# The next case of Andrásfai's conjecture

###### Abstract.

Let \(\mathrm{ex}(n,s)\) denote the maximum number of edges in a triangle-free graph on \(n\) vertices which contains no independent sets larger than \(s\). The behaviour of \(\mathrm{ex}(n,s)\) was first studied by Andrásfai, who conjectured that for \(s>n/3\) this function is determined by appropriately chosen blow-ups of so-called Andrásfai graphs. Moreover, he proved \(\mathrm{ex}(n,s)=n^{2}-4ns+5s^{2}\) for \(s/n\in[2/5,1/2]\) and in earlier work we obtained \(\mathrm{ex}(n,s)=3n^{2}-15ns+20s^{2}\) for \(s/n\in[3/8,2/5]\). Here we make the next step in the quest to settle Andrásfai's conjecture by proving \(\mathrm{ex}(n,s)=6n^{2}-32ns+44s^{2}\) for \(s/n\in[4/11,3/8]\).

Key words and phrases: Ramsey–Turán theory, extremal graph theory, triangle-free, independent sets. 2010 Mathematics Subject Classification: Primary: 05C35, Secondary: 05C69. The first author was supported in part by National Science Centre, Poland, grant 2022/47/B/ST1/01517.

## 1. Introduction

Let \(\Gamma_{k}\) denote the Andrásfai graph, a \(k\)-regular graph on \(3k-1\) vertices. In particular, if \(s=kn/(3k-1)\) for some \(k\geq 1\), then \(\operatorname{ex}(n,s)\) is determined by the blow-up of an Andrásfai graph. Andrásfai conjectured that this holds, much more generally, whenever \(s>n/3\) (we refer to [9] for more information on this conjecture). Whenever \(kn/(3k-1)\leq s<(k-1)n/(3k-4)\) we define the "canonical" blow-up \(G(n;k,s)\) of \(\Gamma_{k}\) by replacing the vertices \(1\), \(k\), \(2k\) by sets of \((k-1)n-(3k-4)s\) vertices each, and the remaining \(3k-4\) vertices of \(\Gamma_{k}\) by sets of \(3s-n\) vertices each. (The case \(k=4\) is drawn in Figure 1.2.) Elementary calculations show that \(G(n;k,s)\) has \(n\) vertices, independence number \(s\), and \[g_{k}(n,s)=\tfrac{1}{2}k(k-1)n^{2}-k(3k-4)ns+\tfrac{1}{2}(3k-4)(3k-1)s^{2} \tag{1.2}\] edges (see [9, Fact 1.5]). It can be shown (cf. [9, Fact 2.6]) that \[\operatorname{ex}(n,s)\leq g_{k}(n,s)\quad\text{ whenever }s\notin\left(\tfrac{k}{3k-1}n,\tfrac{k-1}{3k-4}n\right). \tag{1.3}\] Furthermore, if \(s\in\left(\tfrac{k}{3k-1}n,\tfrac{k-1}{3k-4}n\right)\), then among all blow-ups of \(\Gamma_{k}\) with \(n\) vertices and independence number \(s\) the canonical blow-up \(G(n;k,s)\) has the maximum number of edges (cf. [8, Lemma 3.3]). Thus, Andrásfai's conjecture admits the following formulation.

**Conjecture 1.1**.: _For all integers \(k\), \(s\), and \(n\) such that \(kn/(3k-1)\leq s\leq(k-1)n/(3k-4)\) we have_ \[\operatorname{ex}(n,s)=g_{k}(n,s)\,.\]

Andrásfai himself [1] proved this for \(k=2\), in [9] we added the case \(k=3\), and in [8] we showed that for every \(k\) there is a small constant \(\gamma_{k}>0\) such that the conjecture holds whenever \(kn/(3k-1)\leq s<(k/(3k-1)+\gamma_{k})n\). Here we make a further step towards the solution of Andrásfai's conjecture by addressing the case \(k=4\).

Figure 1.2. The canonical blow-up \(G(n;4,s)\). Each ‘small’ set \(V_{1}\), \(V_{4}\), \(V_{8}\) contains \(3n-8s\) vertices, the remaining ‘large’ sets consist of \(3s-n\) vertices.
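For later use, note what (1.2) gives in the case \(k=4\): \[g_{4}(n,s)=\tfrac{1}{2}\cdot 4\cdot 3\,n^{2}-4\cdot(3\cdot 4-4)\,ns+\tfrac{1}{2}\cdot(3\cdot 4-4)(3\cdot 4-1)\,s^{2}=6n^{2}-32ns+44s^{2}\,,\] which is exactly the quadratic form appearing in Theorem 1.2 below.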
**Theorem 1.2**.: _If \(4n/11\leqslant s\leqslant 3n/8\), then \(\operatorname{ex}(n,s)=g_{4}(n,s)=6n^{2}-32ns+44s^{2}\)._ We would like to point out that, in general, \(G(n;k,s)\) is not the only graph in the family \(\mathfrak{E}(n,s)\). For instance, in Figure 1.2 one can move some vertices between the two sets \(V_{4}\) and \(V_{5}\). Provided that the cardinalities of the resulting sets \(V_{4}^{\prime}\) and \(V_{5}^{\prime}\) remain in the interval \([3n-8s,3s-n]\) the graph keeps having the independence number \(s\) and the number of edges remains constant as well. For more information on the families \(\mathfrak{E}(n,s)\) we refer to [8]. We use standard graph theoretical notation. For a graph \(G\) we denote by \(V(G)\) and \(E(G)\) its vertex and edge set, respectively, and we write \(\nu(G)=|V(G)|\), \(e(G)=|E(G)|\). If \(A,B\subseteq V(G)\) are disjoint, we write \(e_{G}(A,B)\) for the number of edges in \(G\) from \(A\) to \(B\) and \(e_{G}(A)\) refers to the number of edges induced by \(A\). By \(\operatorname{N}_{G}(B)\) we mean the neighbourhood of a set \(B\subseteq V(G)\). For every \(v\in V(G)\) we abbreviate \(\operatorname{N}_{G}(\{v\})\) to \(\operatorname{N}_{G}(v)\) and we put \(\deg_{G}(v)=|\operatorname{N}_{G}(v)|\). The size of a largest independent set in \(G\) is denoted by \(\alpha(G)\). Moreover, we often omit subscripts unless they are necessary to avoid confusion. If \(A\) and \(B\) are two disjoint sets, then \(K(A,B)\) denotes the edge set of the complete bipartite graph with vertex partition \(A\mathbin{\mathaccent 0{\cdot}\cup}B\), i.e., \[K(A,B)=\left\{ab\colon a\in A\text{ and }b\in B\right\}.\] Given a set \(X\) we write \(\not{\wp}(X)\) for its power set and \(X^{(2)}\) for the collection of all two-element subsets of \(X\). ## 2. Moulds In this section we introduce the rough idea behind the proof of Theorem 1.2. Let us start with a few definitions. **Definition 2.1**.: The **fortress**\(\mathcal{F}_{G}\) of an extremal graph \(G\in\mathfrak{E}(n,s)\) is the graph with \[V(\mathcal{F}_{G})=\{X\subseteq V(G)\colon|X|=s\text{ and }X\text{ is independent}\}\] and \[E(\mathcal{F}_{G})=\{XY\in V(\mathcal{F}_{G})^{(2)}\colon X\cap Y=\varnothing \}\,.\] If \(G\) is clear from the context, we will often write \(\mathcal{F}\) instead of \(\mathcal{F}_{G}\). For instance, if \(s/n\in(4/11,3/8)\), then the fortress of the canonical graph \(G(n;4,s)\) (see Figure 1.2) has 10 vertices, namely the neighbourhoods of the vertex classes \(V_{i}\) with \(i\neq 8\). More precisely, the neighbourhoods of the large vertex classes form a copy of \(\Gamma_{3}\) in \(\mathcal{F}\) and \(\operatorname{N}(V_{1})\), \(\operatorname{N}(V_{4})\) are twins of \(\operatorname{N}(V_{11})\), \(\operatorname{N}(V_{5})\), respectively. The fortresses of other extremal graphs obtained by moving some vertices from \(V_{5}\) to \(V_{4}\) have only nine vertices (because \(\operatorname{N}(V_{1})\) disappears), but these fortresses still contain copies of \(\Gamma_{3}\). **Definition 2.2**.: Let \(G\in\mathfrak{E}(n,s)\) and \(H\) be two graphs. An **imprint** of \(H\) in \(G\) is an embedding of \(H\) into \(\mathcal{F}_{G}\), i.e., an injective map \[\varphi\colon V(H)\longrightarrow V(\mathcal{F}_{G})\] such that 1. \(\forall x,y\in V(H)\)\([xy\in E(H)\)\(\iff\)\(\varphi(x)\varphi(y)\in E(\mathcal{F}_{G})]\). 
The canonical blow-ups \(G(n;4,s)\) have much more structure than just an imprint of \(\Gamma_{3}\) -- in addition to that, the eight large vertex classes form a blow-up of \(\Gamma_{3}\). This interplay of an imprint with a blow-up is captured by the notion of a mould. **Definition 2.3**.: Given two graphs \(G\in\mathfrak{E}(n,s)\) and \(H\) an \(H\)**-mould** for \(G\) is a pair \((\varphi,\psi)\) of maps from \(V(H)\) to \(\mathcal{P}(V(G))\) such that \(\varphi\) is an imprint and for all \(x\in V(H)\), 1. \(\psi(x)\) is an independent set of size \(3s-n\); 2. \(K(\varphi(x),\psi(x))\subseteq E(G)\), i.e., for every vertex \(z\in\psi(x)\) we have \(\mathrm{N}_{G}(z)=\varphi(x)\). If some such \(H\)-mould exists for \(G\), we say that \(G\) contains an \(H\)-mould. When discussing moulds, we always assume tacitly that \(s>n/3\), which causes the sets \(\psi(x)\) to be nonempty. Here are some immediate consequences of the definition of a mould. **Fact 2.4**.: _Let \((\varphi,\psi)\) be an \(H\)-mould for some \(G\in\mathfrak{E}(n,s)\). If \(x,y\in V(H)\) are distinct, then \(\psi(x)\cap\psi(y)=\varnothing\)._ Proof.: Let us suppose that \(v\in\psi(x)\cap\psi(y)\). Now _(M3)_ tells us \(\varphi(x)=\mathrm{N}(v)=\varphi(y)\), which contradicts \(x\neq y\) (recall that \(\varphi\) is required to be injective). **Fact 2.5**.: _Let \((\varphi,\psi)\) be an \(H\)-mould for some \(G\in\mathfrak{E}(n,s)\). If \(X\in V(\mathcal{F}_{G})\) denotes an independent set of size \(s\) and \(y\in V(H)\), then the following statements are equivalent._ 1. \(\psi(y)\subseteq X\)_,_ 2. \(\psi(y)\cap X\neq\varnothing\)_,_ 3. \(\varphi(y)\cap X=\varnothing\)_._ Proof.: Clearly, (_i_ ) implies (_ii_ ). Next, if \(v\in\psi(y)\cap X\), then \(\mathrm{N}(v)=\varphi(y)\) needs to be disjoint to the independent set \(X\) and so (_ii_ ) implies (_iii_ ). Finally, assume that \(\varphi(y)\cap X=\varnothing\). Since \(\mathrm{N}(\psi(y))=\varphi(y)\), the set \(\psi(y)\cup X\) is independent. In view of \(|X|=s=\alpha(G)\) this shows \(\psi(y)\subseteq X\). **Fact 2.6**.: _If \((\varphi,\psi)\) is an \(H\)-mould for some \(G\in\mathfrak{E}(n,s)\), then the subgraph of \(G\) induced by \(\bigcup_{v\in V(H)}\psi(v)\) is the blow-up of \(H\) obtained by replacing every vertex \(v\in V(H)\) by the vertex class \(\psi(v)\) of size \(3s-n\)._ Proof.: Simplifying the notation we suppose \(V(H)=\{1,\ldots,\ell\}\) for some natural number \(\ell\) and we set \(A_{i}=\varphi(i)\), \(B_{i}=\psi(i)\) for every \(i\in[\ell]\). By Fact 2.4 the sets \(B_{1},\ldots,B_{\ell}\) are mutually disjoint. Thus it is enough to show the following statements for every pair \(ij\in[\ell]^{(2)}\). 1. If \(ij\in E(H)\), then \(K(B_{i},B_{j})\subseteq E(G)\). 2. If there exists an edge \(b_{i}b_{j}\in E(G)\) with \(b_{i}\in B_{i}\) and \(b_{j}\in B_{j}\), then \(ij\in E(H)\). Suppose first that \(ij\in E(H)\), which means \(A_{i}\cap A_{j}=\varnothing\). By Fact 2.5 applied to \(B_{i},A_{i},A_{j}\) here in place of \(\psi(y),\varphi(y),X\) there this yields \(B_{i}\subseteq A_{j}\), whence \[K(B_{i},B_{j})\subseteq K(A_{j},B_{j})\subseteq E(G)\,.\] Having thus proved (_a_ ) we proceed with (_b_ ). Since \(G\) is triangle-free, \(b_{i}b_{j}\in E(G)\) entails \(\operatorname{N}(b_{i})\cap\operatorname{N}(b_{j})=\varnothing\). So \(A_{i}\) and \(A_{j}\) are disjoint and, therefore, \(ij\in E(H)\). Recall that the canonical blow-ups \(G(n;4,s)\) contain \(\Gamma_{3}\)-moulds. 
The starting point of our proof of Theorem 1.2 is the realisation that, in fact, such a mould is all one needs for counting edges. **Lemma 2.7**.: _If \(4n/11<s<3n/8\) and some graph \(G\in\mathfrak{E}(n,s)\) contains a \(\Gamma_{3}\)-mould, then \(\operatorname{ex}(n,s)\leqslant g_{4}(n,s)\), i.e., the pair \((n,s)\) satisfies Theorem 1.2._ Proof.: Let \((\varphi,\psi)\) be a \(\Gamma_{3}\)-mould for \(G\in\mathfrak{E}(n,s)\), and set \(B_{i}=\psi(i)\) for \(i=1,2,\ldots,8\). Our aim is to show that \(G\) has at most as many edges as the canonical blow-up \(H=G(n;4,s)\). The calculations given below become almost obvious if one looks at Figure 1.2. Note that the eight sets \(B_{i}\) are pairwise disjoint, consist of \(3s-n\) vertices each, and by Fact 2.6 their union \(W=\bigcup_{i\in[8]}B_{i}\) induces a balanced blow-up of \(\Gamma_{3}\) in \(G\). We shall compare \(W\) with the union \(L\) of the eight large vertex classes of \(H\) and its complement \(\bar{W}=V(G)\smallsetminus W\) with the union of the three small sets, i.e., with \(\bar{L}=V(H)\smallsetminus L\). Due to \(|W|=|L|=8(3s-n)\) we have \(|\bar{W}|=|\bar{L}|=3t\), where \(t=3n-8s\) denotes the common size of the small sets of \(H\). It is plain that \(e_{G}(W)=12(3s-n)^{2}=e_{H}(L)\). Next, every vertex \(w\in W\) has degree \(s\) and \(3(3s-n)\) neighbours of \(w\) are themselves in \(W\). Thus \(w\) sends \(s-3(3s-n)=t\) edges to \(\bar{W}\), which shows that the number \(e_{G}(W,\bar{W})\) of edges in \(E(G)\) joining \(W\) to \(\bar{W}\) is \[e_{G}(W,\bar{W})=8(3s-n)t=e_{H}(L,\bar{L})\,. \tag{2.1}\] Consequently, it remains to prove \[e_{G}(\bar{W})\leqslant 2t^{2}=e_{H}(\bar{L})\,. \tag{2.2}\] To this end we consider the maximum degree \(\Delta\) of \(G[\bar{W}]\). Since \(G\) is triangle-free, a well-known strengthening of Mantel's theorem yields \(e_{G}(\bar{W})\leqslant\Delta(3t-\Delta)=2t^{2}-(\Delta-2t)(\Delta-t)\) So if \(\Delta\geq 2t\) we are done immediately and henceforth we can suppose \[\Delta\leq 2t\,. \tag{2.3}\] Now we revisit (2.1) from the perspective of \(\bar{W}\). For every vertex \(\bar{w}\in\bar{W}\) the set \(\mathrm{N}(\bar{w})\cap W\) is a (possibly empty) union of several of the sets \(B_{i}\). In fact, it can be the union of at most \(\alpha(\Gamma_{3})=3\) such sets. Let us call \(\bar{w}\)**heavy** if \(\mathrm{N}(\bar{w})\cap W\) consists of exactly three sets \(B_{i}\), so that \(|\mathrm{N}(\bar{w})\cap W|=3(3s-n)\). Clearly, if \(\bar{w}\in\bar{W}\) fails to be heavy, then \(|\mathrm{N}(\bar{w})\cap W|\leq 2(3s-n)\). Concerning the number \(a\) of heavy vertices we thus obtain \[e_{G}(W,\bar{W})=\sum_{\bar{w}\in\bar{W}}|\mathrm{N}(\bar{w})\cap W|\leq(3s-n )[3a+2(3t-a)]=(3s-n)(6t+a)\,,\] which together with (2.1) shows \(8t\leq 6t+a\), i.e., \(a\geq 2t\). As each heavy vertex has at most \(s-3(3s-n)=t\) neighbours in \(\bar{W}\), this yields \[2e_{G}(\bar{W})=\sum_{\bar{w}\in\bar{W}}|\mathrm{N}(\bar{w})\cap\bar{W}| \stackrel{{\eqref{eq:2.3}}}{{\leq}}at+(3t-a)\cdot 2t=6t^{2}-at\leq 4t^{2}\,,\] which proves (2.2). Observe that not all graphs \(G\in\mathfrak{E}(n,s)\) have \(\Gamma_{3}\) as their moulds. Indeed, if starting from \(G(n;4,s)\) we move some vertices between \(V_{5}\) and \(V_{4}\) such that both new sets have fewer than \(3s-n\) vertices, then the resulting graph only contains an imprint of \(\Gamma_{3}\), but not a \(\Gamma_{3}\)-mould. 
This state of affairs motivates us to establish that if there is an imprint of \(\Gamma_{k}\) in some graph belonging to \(\mathfrak{E}(n,s)\), then there also exists a graph in \(\mathfrak{E}(n,s)\) containing a \(\Gamma_{k}\)-mould (see Lemma 2.10 below). The proof hinges on a symmetrisation procedure the basic idea of which can be traced back to the work of Zykov [12] on Turan's theorem. Here we follow the notation introduced in [9]. Given a graph \(G\) and two disjoint sets \(A,B\subseteq V(G)\), we say that a graph \(G^{\prime}\) on the same vertex set as \(G\) arises from \(G\) by the **generalised Zykov symmetrisation \(\mathbf{Sym}(A,B)\)** if it is obtained by deleting all edges incident with \(B\) and afterwards adding all edges from \(A\) to \(B\). Explicitly, this means \[V(G^{\prime})=V(G)\quad\text{ and }\quad E(G^{\prime})=\big{(}E(G)\smallsetminus \{e\in E(G)\colon e\cap B\neq\varnothing\}\big{)}\cup K(A,B)\,.\] We shall write \(G^{\prime}=\mathbf{Sym}(G\,|\,A,B)\) in this situation. Let us now state a straightforward consequence of [9, Fact 2.4] and [9, Lemma 2.5]. **Lemma 2.8**.: _Given two integers \(n\geq 0\) and \(s\in[n/3,n/2]\), let \(G\in\mathfrak{E}(n,s)\), and let \(A_{1},A_{2}\subseteq V(G)\) be any two disjoint independent sets of size \(s\). If \(M\) is a matching from \(V(G)\smallsetminus(A_{1}\cup A_{2})\) to \(A_{2}\) whose size is as large as possible and \(B^{\prime}\subseteq A_{2}\smallsetminus V(M)\), then \(G^{\prime}=\mathbf{Sym}(G\,|\,A_{1},B^{\prime})\) is again in \(\mathfrak{E}(n,s)\). _ In practice there might be several possible choices for the maximum matching \(M\) and thus one can try to exercise some control over the location of the set \(B^{\prime}\). There will be one step in a later argument where we actually need this freedom, but in all other cases only the cardinality of \(B^{\prime}\) will matter. Due to \(|V(G)\smallsetminus(A_{1}\cup A_{2})|=n-2s\) we will automatically have \(|A_{2}\smallsetminus V(M)|\geq s-(n-2s)=3s-n\) and thus we can always achieve \(|B^{\prime}|=3s-n\). It seems worthwhile to state this separately. **Corollary 2.9**.: _If \(s\in[n/3,n/2]\), \(G\in\mathfrak{E}(n,s)\) and \(A_{1},A_{2}\subseteq V(G)\) are two disjoint independent sets of size \(s\), then there exists a set \(B\subseteq A_{2}\) of size \(|B|=3s-n\) such that \(\mathbf{Sym}(G|A_{1},B)\in\mathfrak{E}(n,s)\). _ To mention just one example of this procedure, consider a graph \(G\in\mathfrak{E}(n,s)\) obtained from \(G(n;4,s)\in\mathfrak{E}(n,s)\) by moving some vertices from \(V_{5}\) to \(V_{4}\). For \(A_{1}=V_{9}\cup V_{10}\cup V_{11}\cup V_{1}\) and \(A_{2}=V_{4}\cup V_{5}\cup V_{6}\cup V_{7}\) there has to exist a set \(B\subseteq A_{2}\) of size \(3s-n\) such that \(G^{\prime}=\mathbf{Sym}(G|A_{1},B)\) is again an extremal graph. Since \(\mathrm{N}(V_{9})\cup B\) and \(\mathrm{N}(V_{1})\cup B\) are independent in \(G^{\prime}\), we need to have \(V_{5}\subseteq B\subseteq V_{4}\cup V_{5}\) and, consequently, \(G^{\prime}\) is isomorphic to the canonical blow-up \(G(n;4,s)\). Thus we can use symmetrisation in order to 'canonise' graphs from \(\mathfrak{E}(n,s)\). We apply this technique to show the following result. **Lemma 2.10**.: _Let integers \(n\geq 0\), \(s\in(n/3,n/2]\), and \(k\geq 1\) be given. 
If some graph in \(\mathfrak{E}(n,s)\) contains an imprint of \(\Gamma_{k}\), then \(\Gamma_{k}\) is a mould for some \(G\in\mathfrak{E}(n,s)\)._ Proof.: Let us recall that \(V(\Gamma_{k})=\mathds{Z}/(3k-1)\mathds{Z}\) and \[E(\Gamma_{k})=\left\{ij\in V(\Gamma_{k})^{(2)}\colon i-j\in\{k,k+1,\dots,2k-1 \}\right\}.\] We call a number \(r\leq 3k-1\)**nice** if there is a graph \(G\in\mathfrak{E}(n,s)\) containing independent sets \(A_{1},\dots,A_{3k-1}\) of size \(s\) and independent sets \(B_{1},\dots,B_{r}\) of size \(3s-n\) such that 1. \(ij\in E(\Gamma_{k})\Longleftrightarrow A_{i}\cap A_{j}=\varnothing\); 2. \(B_{\ell}\subseteq A_{\ell+k}\cap A_{\ell+k+1}\cap\dots\cap A_{\ell+2k-1}\) for all \(\ell\in[r]\); 3. \(K(A_{\ell},B_{\ell})\subseteq E(G)\) for all \(\ell\in[r]\). For \(r=0\) the conditions (_ii_) and (_iii_) are void and (_i_) states that \(G\) contains an imprint of \(\Gamma_{k}\). So, by our assumption, \(0\) is nice. Let \(r_{\star}\) be the largest nice number. Our goal is to show that \(r_{\star}=3k-1\), since then the maps \(\varphi(i)=A_{i}\) and \(\psi(i)=B_{i}\) will certify that \(\Gamma_{k}\) is a mould for \(G\), and we are done. Thus, let us assume that \(r_{\star}<3k-1\). As the sets \(A_{r_{\star}+1}\) and \(A_{r_{\star}+k+1}\) are disjoint, Corollary 2.9 delivers a set \(B_{r_{\star}+1}\subseteq A_{r_{\star}+k+1}\) of size \(3s-n\) such that \(\mathbf{Sym}(A_{r_{\star}+1},B_{r_{\star}+1})\) produces a graph \(G^{\circ}\) in \(\mathfrak{E}(n,s)\). Since \(A_{r_{\star}+1}\) is disjoint to \(A_{r_{\star}+k+1},\dots,A_{r_{\star}+2k}\), Fact 2.5 applied to \(G^{\circ}\) yields 1. \(B_{r_{\star}+1}\subseteq A_{r_{\star}+k+1}\cap\dots\cap A_{r_{\star}+2k}\). Each of the vertices \(r_{\star}+2k+1,\ldots,r_{\star}+k\) of \(\Gamma_{k}\) is adjacent to one of \(r_{\star}+k+1,\ldots,r_{\star}+2k\) and thus (_i_ ) and (_a_ ) yield * \(B_{r_{\star}+1}\) is disjoint to \(A_{r_{\star}+2k+1},\ldots,A_{r_{\star}+k}\). Altogether, every \(A_{i}\) is disjoint to either \(A_{r_{\star}+1}\) or \(B_{r_{\star}+1}\), whence * \(A_{1},\ldots,A_{3k-1}\) are independent in \(G^{\circ}\). Next, for every \(\ell\in[\![r_{\star}]\!]\) there is some \(j\in\mathrm{N}_{\Gamma_{k}}(\ell)\smallsetminus\mathrm{N}_{\Gamma_{k}}(r_{\star }+1)\). Since (_ii_ ) and (_b_ ) yield \(B_{\ell}\subseteq A_{j}\) as well as \(A_{j}\cap B_{r_{\star}+1}=\varnothing\), this proves that * \(B_{r_{\star}+1}\) is disjoint to \(B_{1},\ldots,B_{r_{\star}}\). We contend that * \(K(A_{\ell},B_{\ell})\subseteq E(G^{\circ})\) for every \(\ell\in[\![r_{\star}]\!]\). If \(\{\ell,r_{\star}+1\}\notin E(\Gamma_{k})\) this is because (_b_ ) and (_d_ ) entail \((A_{\ell}\cup B_{\ell})\cap B_{r_{\star}+1}=\varnothing\), which means that the edges provided by (_iii_ ) are not affected by the symmetrisation at all. On the other hand, if \(\{\ell,r_{\star}+1\}\in E(\Gamma_{k})\), then (_ii_ ) and (_a_ ) imply \(B_{\ell}\subseteq A_{r_{\star}+1}\) and \(B_{r_{\star}+1}\subseteq A_{\ell}\). So the symmetrisation first removes the edges in \(K(B_{\ell},B_{r_{\star}+1})\) and then they are immediately put back, so that altogether (_iii_ ) remains valid. Thereby (_e_ ) is proved. Clearly \(K(A_{r_{\star}+1},B_{r_{\star}+1})\subseteq E(G^{\circ})\) and together with (_a_ ), (_c_ ), (_e_ ) this shows that \(G^{\circ}\) exemplifies that \(r_{\star}+1\) is nice, contrary to the maximality of \(r_{\star}\). So the hypothesis of Lemma 2.7 can be weakened considerably. 
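Operationally, \(\mathbf{Sym}(G\,|\,A,B)\) is a very simple graph rewrite, and the following stand-alone sketch (ours, for illustration only) spells it out. The output is triangle-free whenever \(G\) is triangle-free and \(A\) is independent; choosing \(B\) as in Lemma 2.8 or Corollary 2.9 is what keeps the symmetrised graph inside \(\mathfrak{E}(n,s)\), and the toy call below only illustrates the edge bookkeeping and makes no such claim.

```python
def sym(edges, A, B):
    """Generalised Zykov symmetrisation Sym(G | A, B): drop every edge meeting B,
    then join each vertex of A to each vertex of B (A and B assumed disjoint)."""
    A, B = set(A), set(B)
    assert not (A & B)
    kept = [e for e in edges if not (set(e) & B)]
    return kept + [(a, b) for a in A for b in B]

# Toy usage on the 5-cycle 0-1-2-3-4-0 with A = {0, 2} (independent) and B = {4}:
# the edges 3-4 and 4-0 are removed, the edges 0-4 and 2-4 are added.
print(sorted(sym([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], {0, 2}, {4})))
```

With this operation in hand, the weakening promised above can be recorded as follows.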
**Corollary 2.11**.: _If \(4n/11<s<3n/8\) and some graph \(G\in\mathfrak{E}(n,s)\) contains a \(\Gamma_{3}\)-imprint, then \(\mathrm{ex}(n,s)\leq g_{4}(n,s)\). _ ## SS3. Basic fortress tricks In this section we take a closer look at the fortresses of extremal graphs. The observation that a triangle in a fortress is tantamount to three mutually disjoint independent sets of size \(s\) has the following consequence. **Fact 3.1**.: _Whenever \(s>n/3\) and \(G\in\mathfrak{E}(n,s)\) the fortress of \(G\) is triangle-free. _ For Corollary 2.11 to be useful we need to argue that, at least under some inductive assumptions on \(n\) and \(s\), one can build a \(\Gamma_{3}\)-imprint in some graph \(G\in\mathfrak{E}(n,s)\). The very first step in this direction was undertaken in [9, Lemma 2.8], where we proved such a statement for \(\Gamma_{1}=K_{2}\) rather than \(\Gamma_{3}\). **Lemma 3.2**.: _If \(n\) and \(s\) are two integers with \(n\geq 2s\geq 0\), then every graph \(G\in\mathfrak{E}(n,s)\) contains two disjoint independent sets of size \(s\). _ If, starting from an edge, one attempts to obtain larger subgraphs of a fortress \(\mathcal{F}\), it would be helpful to know some construction principles telling us that certain configurations in \(\mathcal{F}\) are extendible in a somewhat controlled manner. One of the most powerful of these principles we are aware of (see Corollary 3.5 below) can be shown to apply to a given pair \((n,s)\) if Theorem 1.2 itself is assumed to be valid for \((n+3,s+1)\). This rules out a straightforward induction on \(n\) or \(s\), but we can still argue by induction on \(11s-4n\), which is what we shall actually do. **Definition 3.3**.: Set \(\delta(n,s)=11s-4n\) for all nonnegative integers \(n\) and \(s\). The pair \((n,s)\) is said to be a **minimal counterexample** if \(\operatorname{ex}(n,s)>g_{4}(n,s)\) and \[\operatorname{ex}(n^{\prime},s^{\prime})\leq g_{4}(n^{\prime},s^{\prime}) \quad\text{ for all }(n^{\prime},s^{\prime})\text{ with }\delta(n^{\prime},s^{\prime})<\delta(n,s)\,.\] Let us recall at this juncture that if the estimate \(\operatorname{ex}(n,s)\leq g_{4}(n,s)\) fails for any pair of integers \(n\geq s\geq 0\), then (1.3) yields \[\frac{4n}{11}<s<\frac{3n}{8}\,, \tag{3.1}\] whence \(\delta(n,s)>0\). This means that if minimal counterexamples do not exist, then \(\operatorname{ex}(n,s)\leq g_{4}(n,s)\) holds unconditionally and together with the existence of the canonical blow-ups \(G(n;4,s)\) mentioned in the introduction this completes the proof of Theorem 1.2. Thus it suffices to show in the remainder of this article that there are no minimal counterexamples. When studying such a minimal counterexample \((n,s)\) we will write \(\delta\) instead of \(\delta(n,s)\) and the bounds (3.1) will often be used without further referencing. **Lemma 3.4**.: _Let \((n,s)\) be a minimal counterexample, \(G\in\mathfrak{E}(n,s)\), and \(XY\in E(\mathcal{F})\). If \(C\) denotes a further independent set in \(G\) with_ \[|C|>4n-10s\,,\] _then there exists some \(Z\in V(\mathcal{F})\) such that \(C\cap Z=\varnothing\) and either \(XZ\in E(\mathcal{F})\) or \(YZ\in E(\mathcal{F})\)._ Proof.: Construct a graph \(G^{+}\) by adding three new vertices \(x,y,c\) to \(G\) as well as all edges from \(x\) to \(X\), from \(y\) to \(Y\), from \(c\) to \(C\), and finally the edge \(xy\) (see Figure 3.1). Clearly \(K_{3}\nsubseteq G^{+}\) and \(\nu(G^{+})=n+3\). Figure 3.1. The graph \(G^{+}\). 
Due to \(\delta(n+3,s+1)=\delta-1\) the minimality of \((n,s)\) yields \[\operatorname{ex}(n+3,s+1) \leqslant g_{4}(n+3,s+1)=g_{4}(n,s)+2s+(4n-10s)+2\] \[<\operatorname{ex}(n,s)+|X|+|Y|+|C|+1=e(G^{+})\,.\] Hence the graph \(G^{+}\) contains an independent set \(Z^{+}\) of size \(s+2\). Clearly \(Z^{+}\) can contain at most \(s\) old vertices and, due to the edge \(xy\), at most two new vertices. So without loss of generality we can assume \(Z^{+}=Z\cup\{x,c\}\) for some \(Z\in V(\mathcal{F})\). Now the independence of \(Z^{+}\) in \(G^{+}\) yields \(Z\cap X=\varnothing\) and \(Z\cap C=\varnothing\). In most applications of this lemma we take \(C\) to be an independent set of size \(s\). We thus arrive at the following structural property of fortresses. **Corollary 3.5**.: _If \((n,s)\) is a minimal counterexample, then the fortress of every extremal graph \(G\in\mathfrak{E}(n,s)\) has the following property: For all \(X,Y,Z\in V(\mathcal{F})\) such that \(XY\in E(\mathcal{F})\) there exists some \(T\in V(\mathcal{F})\) with \(ZT\in E(\mathcal{F})\) and either \(XT\in E(\mathcal{F})\) or \(YT\in E(\mathcal{F})\) (see Figure 3.2). _ We shall also encounter an application of Lemma 3.4 later where \(C\) is simply the neighbourhood of a vertex whose degree is sufficiently large. At that occasion, it will be useful to know that only a small number of vertices is not suitable for this purpose. **Lemma 3.6**.: _If \(4n/11<s<3n/8\) and \(G\in\mathfrak{E}(n,s)\), then all but less than \(3s-n\) vertices \(x\in V(G)\) satisfy \(\deg_{G}(x)>4n-10s\)._ Proof.: Let \(q\) denote the number of small-degree vertices under consideration. By summing up the vertex degrees we obtain \[ns-(3n-8s)(11s-4n) =2g_{4}(n,s)\leqslant 2e(G)\] \[\leqslant(n-q)s+q(4n-10s)\] \[=ns-q(11s-4n)\,,\] whence \[q\leqslant 3n-8s<(3n-8s)+(11s-4n)=3s-n\,.\qed\] Figure 3.2. Vertices \(X\), \(Y\), \(Z\), and \(T\) in the fortress \(\mathcal{F}\) of \(G\). Utilising Lemma 3.2 and Corollary 3.5 it is not hard to see that for minimal counterexamples \((n,s)\) the fortresses of extremal graphs have no isolated vertices. A somewhat different argument leads to a marginally stronger result, the following. **Lemma 3.7**.: _If \((n,s)\) is a minimal counterexample, \(G\in\mathfrak{E}(n,s)\), and \(Q\subseteq V(G)\) is an independent set of size at least \(s-1\), then there exists an independent set \(X\) of size \(s\) with \(Q\cap X=\varnothing\)._ Proof.: We start by showing that \[\operatorname{ex}(n+1,s)<\operatorname{ex}(n,s)+(s-1)\,. \tag{3.2}\] In the special case \(\delta\leq 2\) the trivial bound (1.1) entails indeed \[2\mathrm{ex}(n+1,s) \leq(n+1)s=2g_{4}(n,s)+(11s-4n)(3n-8s)+s\] \[<2\mathrm{ex}(n,s)+\delta(3n-8s)+s\leq 2\mathrm{ex}(n,s)+2(3n-8s)+s\] \[=2\mathrm{ex}(n,s)+(4n-11s)+2(n-3s)+2s<2\mathrm{ex}(n,s)+2s-2\,.\] Otherwise \(\delta\geq 3\) and \(\delta(n+1,s)=\delta-4<\delta\) combined with the minimality of \((n,s)\) yields \[\operatorname{ex}(n+1,s) \leq g_{4}(n+1,s)=g_{4}(n,s)+12n-32s+6\] \[=g_{4}(n,s)+s+6-3\delta<\operatorname{ex}(n,s)+s-1\,.\] Thereby (3.2) is proved. Now we construct a triangle-free graph \(G^{+}\) by adding to \(G\) a new vertex \(q\) and all edges from \(q\) to \(Q\). Owing to (3.2) we have \(e(G^{+})>\operatorname{ex}(n+1,s)\) and, therefore, \(G^{+}\) contains an independent set size \(s+1\). Such a set needs to be of the form \(X\cup\{q\}\), where \(X\) is as required. ## SS4. 
Imprints of the pentagon The main result of this section asserts that for minimal counterexamples \((n,s)\) the fortresses of extremal graphs \(G\in\mathfrak{E}(n,s)\) cannot contain pentagons. The first idea one might have towards proving this is that such a pentagon could lead, due to some general properties of fortresses, to a \(\Gamma_{3}\)-imprint, which would contradict Corollary 2.11. After a quick look at a picture of \(\Gamma_{3}\) and a pentagon \(\Gamma_{2}\) contained in it we notice that one of the three extra vertices of \(\Gamma_{3}\) has precisely one neighbour in \(V(\Gamma_{2})\). This raises the question whether it is always true that given a pentagon in a fortress \(\mathcal{F}\), there exists a vertex of \(\mathcal{F}\) with exactly one neighbour in the pentagon. Here is a positive answer for moulds. **Lemma 4.1**.: _If \((n,s)\) is a minimal counterexample, \(G\in\mathfrak{E}(n,s)\), and \((\varphi,\psi)\) denotes a \(\Gamma_{2}\)-mould for \(G\), then there is a vertex \(X\in V(\mathcal{F})\) adjacent to exactly one of the vertices \(\varphi(v)\) with \(v\in V(\Gamma_{2})\)._ Proof.: Put \(\varphi(i)=A_{i}\) and \(\psi(i)=B_{i}\) for every \(i\in V(\Gamma_{2})=\mathds{Z}/5\mathds{Z}\). Recall that \(A_{i}\in V(\mathcal{F})\), while \(B_{i}\) is an independent set of size \(3s-n\) such that \(K(A_{i},B_{i})\subseteq E(G)\). Moreover, a pair \(ij\in[5]^{(2)}\) satisfies \(A_{i}A_{j}\in E(\mathcal{F})\) if and only if \(|i-j|\in\{2,3\}\). Pick arbitrary vertices \(b_{i}\in B_{i}\) and let \(G^{-}\) be the graph obtained from \(G\) by deleting these five vertices \(b_{i}\). Because of \(\delta(n-5,s-2)=\delta-2\) and the minimality of \((n,s)\) we have \[\operatorname{ex}(n-5,s-2) \leq g_{4}(n-5,s-2)=g_{4}(n,s)+4n-16s+6\] \[<\operatorname{ex}(n,s)-\delta-5s+6\leq\operatorname{ex}(n,s)-(5 s-5)\] \[=e(G)-(5s-5)=e(G^{-})\,,\] which together with \(\nu(G^{-})=n-5\) shows \(\alpha(G^{-})\geq s-1\). Let \(Q\) be an independent set in \(G^{-}\) of size \(|Q|=\alpha(G^{-})\in\{s-1,s\}\) and put \(I=\{i\in\mathds{Z}/5\mathds{Z}\colon B_{i}\cap Q\neq\varnothing\}\). As \(Q^{+}=Q\cup\{b_{i}\colon i\in I\}\) is independent in \(G\), we have \(|Q|+|I|=|Q^{+}|\leq s\), whence \(|I|\leq 1\). Suppose first that \(|I|=1\), which guarantees \(|Q^{+}|=s\). Moreover, if \(i\) denotes the unique element of \(I\), then \(Q^{+}\) is adjacent in \(\mathcal{F}\) to \(A_{i}\) but to none of the sets \(A_{j}\) with \(j\neq i\), i.e., \(Q^{+}\) has the desired property. It remains to consider the case \(I=\varnothing\). By Lemma 3.7 there exists an independent set \(X\) in \(G\) of size \(s\) with \(Q\cap X=\varnothing\). Each of the sets \(B_{i}\) is either contained in \(X\) or disjoint to \(X\) (by Fact 2.5) and, in fact, at most \(\alpha(\Gamma_{2})=2\) of them can be subsets of \(X\). Thus \(Q\), \(X\), and three of the sets \(B_{i}\) are mutually disjoint subsets of \(V(G)\), wherefore \[n\geq|Q|+|X|+3(3s-n)\geq(s-1)+s+(9s-3n)=n+\delta-1\geq n\,.\] Now equality needs to hold throughout, which means that \[\alpha(G^{-})=|Q|=s-1\quad\text{ and }\quad 11s-4n=\delta=1\,.\] In particular, \(s\) is odd. Next we look at the set \[C=\{v\in V(G)\colon|\mathrm{N}_{G}(v)\cap\{b_{1},\dots,b_{5}\}|\leq 1\}\,.\] Since every vertex \(v\in V(G)\smallsetminus C\) belongs to exactly two of the sets \(A_{1},\dots,A_{5}\), we have \(|V(G)\smallsetminus C|\leq[5s/2]=(5s-1)/2=(2n-3s)\) and, hence, \(|C|\geq n-(2n-3s)=3s-n\). Thus Lemma 3.6 yields some \(c\in C\) such that \(\deg_{G}(c)\geq 4n-10s+1=s\). 
Now \(Z=\mathrm{N}_{G}(c)\) is an independent set of size \(s\). Due to \(\alpha(G^{-})=s-1\) it needs to contain at least one the vertices \(b_{i}\) and combined with the definition of \(C\) this shows that there is a unique vertex in \(Z\cap\{b_{1},\dots,b_{5}\}\). If \(b_{i}\) denotes that vertex, then \(\mathrm{N}_{\mathcal{F}}(Z)\cap\{A_{1},A_{2},A_{3},A_{4},A_{5}\}=\{A_{i}\}\) and, consequently, \(Z\) is as desired. Before going any further we state two more facts on moulds for graphs in \(\mathfrak{E}(n,s)\), both of which rely on space limitations caused by \(s>\frac{4}{11}n\). **Fact 4.2**.: _Let \(G\in\mathfrak{E}(n,s)\) be an extremal graph, where \(s>\frac{4}{11}n\). Suppose further that \((\varphi,\psi)\) is an \(H\)-mould in \(G\) for some graph \(H\). If \(XY\) denotes an arbitrary edge of \(\mathcal{F}\) and \(i,j,k\in V(H)\) are distinct, then some edge of \(\mathcal{F}\) connects a vertex from \(\{X,Y\}\) with a vertex from \(\{\varphi(i),\varphi(j),\varphi(k)\}\)._ Proof.: In view of \[|X|+|Y|+|\psi(i)|+|\psi(j)|+|\psi(k)|=2s+3(3s-n)=11s-3n>n\] the five sets \(X\), \(Y\), \(\psi(i)\), \(\psi(j)\), \(\psi(k)\) cannot be mutually disjoint. But \(XY\in E(\mathcal{F})\) means that the first two sets are disjoint and Fact 2.4 informs us that the last three sets are disjoint. So without loss of generality \(X\cap\psi(i)\neq\varnothing\), which due to Fact 2.5 implies \(X\varphi(i)\in E(\mathcal{F})\). **Fact 4.3**.: _If \(s>\frac{4}{11}n\) and \((\varphi,\psi)\) denotes a \(K_{1,3}\)-mould in some graph \(G\in\mathfrak{E}(n,s)\), then \(\{\varphi(v)\colon v\in V(K_{1,3})\}\) dominates \(V(\mathcal{F})\)._ Proof.: Let \(\{x\}\) and \(\{u,v,w\}\) be the vertex classes of \(K_{1,3}\). Consider an arbitrary fortress vertex \(Z\in V(\mathcal{F})\) that is non-adjacent to \(\varphi(x)\). We need to prove that at least one of \(\varphi(u)\), \(\varphi(v)\), or \(\varphi(w)\) is a neighbour of \(Z\). Fact 2.5 tells us \(Z\cap\psi(x)=\varnothing\) and together with the independence of \((Z\smallsetminus\varphi(x))\cup\psi(x)\) this yields \(|Z\cap\varphi(x)|\geqslant|\psi(x)|\geqslant 3s-n\). Now \(Z\cap\varphi(x)\), \(\psi(u)\), \(\psi(v)\), and \(\psi(w)\) are four subsets of \(\varphi(x)\) whose cardinalities sum up to at least \(4(3s-n)>s\). Hence, these sets cannot be mutually disjoint and Fact 2.4 allows us to suppose, without loss of generality, that \(Z\cap\psi(u)\neq\varnothing\). But now \(Z\varphi(u)\in E(\mathcal{F})\) is immediate from Fact 2.5. The final result of this section improves upon Corollary 2.11. **Lemma 4.4**.: _If \((n,s)\) is a minimal counterexample, then no graph in \(\mathfrak{E}(n,s)\) contains a \(\Gamma_{2}\)-imprint._ Proof.: Throughout the argument we shall encounter four graphs \(G_{0},G_{1},G_{2},G_{3}\in\mathfrak{E}(n,s)\) and without further notice we shall always denote the fortress of \(G_{j}\) by \(\mathcal{F}_{j}\). Roughly speaking, our strategy is to show that starting with a \(\Gamma_{2}\)-mould one can obtain a \(\Gamma_{3}\)-imprint by means of two symmetrisation steps. **Stage A: The graph \(G_{0}\).** Assuming that our claim fails Lemma 2.10 yields a graph \(G_{0}\in\mathfrak{E}(n,s)\) containing a \(\Gamma_{2}\)-mould \((\varphi,\psi)\). Set \(A_{i}=\varphi(i)\) and \(B_{i}=\psi(i)\) for every \(i\in V(\Gamma_{2})\). By Lemma 4.1 some vertex \(A_{6}\in V(\mathcal{F}_{0})\) is adjacent to exactly one of the sets \(A_{i}\), say to \(A_{3}\). Now Figure 1a shows an induced subgraph of \(\mathcal{F}_{0}\). 
Due to Fact 2.5 we have \(B_{3}\subseteq A_{6}\), while \(B_{1}\), \(B_{2}\), \(B_{4}\), \(B_{5}\) are disjoint to \(A_{6}\). By Corollary 3.5 applied to the edge \(A_{2}A_{4}\) and the vertex \(A_{6}\) there exists some \(A_{7}\in V(\mathcal{F}_{0})\) such that, without loss of generality, \(A_{2}A_{7}\) and \(A_{6}A_{7}\) are edges of \(\mathcal{F}_{0}\) (see Figure 4.1b). Next we apply Fact 4.2 to the edge \(A_{6}A_{7}\) and the vertices \(A_{1}\), \(A_{4}\), \(A_{5}\) of \(\mathcal{F}_{0}\). Since \(\mathcal{F}_{0}\) is triangle-free, the only possible outcome is \(A_{1}A_{7}\in E(\mathcal{F}_{0})\) and Figure 4.1c shows an induced subgraph of \(\mathcal{F}_{0}\). **Stage B: The graph \(G_{1}\).** Corollary 2.9 leads to the existence of a set \(B_{7}\subseteq A_{6}\) of size \(|B_{7}|=3s-n\) such that \(G_{1}=\mathbf{Sym}(G_{0}|A_{7},B_{7})\) is again in \(\mathfrak{E}(n,s)\). It turns out that everything we know about \(\mathcal{F}_{0}\) remains valid for \(\mathcal{F}_{1}\). In particular, \(A_{6}\) is still independent in \(G_{1}\). Moreover, we contend that \[K(A_{i},B_{i})\subseteq E(G_{1})\quad\text{ for every }i\in[7]\smallsetminus\{6\}. \tag{4.1}\] This is completely clear for \(i=7\). The edges \(A_{1}A_{7}\), \(A_{2}A_{7}\) show \(B_{7}\subseteq A_{1}\cap A_{2}\) as well as \((B_{1}\cup B_{2})\subseteq A_{7}\), and thus (4.1) holds for \(i=1,2\) as well. In order to take care of the three remaining cases, it suffices to show \(B_{7}\cap(A_{i}\cup B_{i})=\varnothing\) for \(i=3,4,5\). Since \(B_{7}\) is contained in \(A_{1}\), it is indeed disjoint to \(A_{3}\), \(A_{4}\), and \(B_{5}\) and, similarly, \(B_{7}\subseteq A_{2}\) yields \(B_{7}\cap A_{5}=B_{7}\cap B_{3}=\varnothing\). Finally, \(B_{7}\subseteq A_{6}\) implies \(B_{7}\cap B_{4}=\varnothing\). **Stage C: The graphs \(G_{2}\) and \(G_{3}\).** Let us now consider a matching \(M\) between \(A_{7}\) and \(V(G_{1})\smallsetminus(A_{6}\cup A_{7})\) such that * \(|E(M)|\) is maximum; * \(|E(M)|\) is maximum; * \(|E(M)|\) is maximum; * \(|E(M)|\) is maximum; Figure 4.1. Extending a \(\Gamma_{2}\)-mould in \(\mathcal{F}_{0}\). Large red dots indicate vertices \(A_{i}\) of \(\mathcal{F}_{0}\) for which there is a set \(B_{i}\) such that \(|B_{i}|=3s-n\) and \(K(A_{i},B_{i})\subseteq E(G)\). * and, subject to this, \(|V(M)\cap B_{1}|\) is maximum. Using \(B_{1}\subseteq A_{7}\), \(B_{4}\subseteq V(G_{1})\smallsetminus(A_{6}\cup A_{7})\), and \(K(B_{1},B_{4})\subseteq E(G_{1})\) one can easily see that \(M\) covers \(B_{1}\). By Lemma 2.8, if \(B_{6}\subseteq A_{7}\smallsetminus V(M)\) denotes any set of size \(3s-n\), then \(G_{2}=\mathbf{Sym}(G_{1}|A_{6},B_{6})\) is again in \(\mathfrak{E}(n,s)\). Arguing as in the previous stage, one can show that with the possible exception of \(A_{4}\) everything we know about \(\mathcal{F}_{1}\) survives in \(\mathcal{F}_{2}\). More precisely, we shall show \[K(A_{i},B_{i})\subseteq E(G_{2})\quad\text{ for every }i\in[7]\smallsetminus\{4\}. \tag{4.2}\] This time, the case \(i=6\) is obvious and the edges \(A_{3}A_{6}\), \(A_{6}A_{7}\) take care of the cases \(i=3,7\). The loss of control over \(A_{4}\) is a severe problem. We address this situation by setting \(\widetilde{B}_{6}=B_{6}\smallsetminus A_{4}\) and working mainly with the graph \(G_{3}=\mathbf{Sym}(G_{1}|A_{6},\widetilde{B}_{6})\), which by Lemma 2.8 belongs to \(\mathfrak{E}(n,s)\), too. 
Due to \(\widetilde{B}_{6}\cap(A_{4}\cup B_{4})=\varnothing\) the set \(A_{4}\) is still independent in \(G_{3}\) and the usual arguments show \[K(A_{i},B_{i})\subseteq E(G_{3})\quad\text{ for every }i\in[7]\smallsetminus\{6\}. \tag{4.3}\] So, roughly speaking, \(G_{3}\) inherits all useful properties of \(G_{1}\). In fact, at this moment it is not straightforward to say what the advantage of \(G_{3}\) over \(G_{1}\) is, or whether these graphs are actually distinct from one another. This will only become apparent towards the very end of the proof. **Stage D: The eighth vertex.** Now \(A_{1}A_{7}A_{2}A_{5}A_{3}\) is a pentagon in \(\mathcal{F}_{3}\) and together with the sets \(B_{1}\), \(B_{7}\), \(B_{2}\), \(B_{5}\), \(B_{3}\) it forms a \(\Gamma_{2}\)-mould for \(G_{3}\) (see Figure 4.4). Figure 4.3. A subgraph of \(\mathcal{F}_{2}\). Thus Lemma 4.1 yields a vertex \(A_{8}\in V(\mathcal{F}_{3})\) adjacent to exactly one of \(A_{1}\), \(A_{2}\), \(A_{3}\), \(A_{7}\), or \(A_{5}\). If this unique neighbour of \(A_{8}\) is \(A_{1}\), then Fact 4.3 applied to the claw in Figure 4.5a yields \(A_{4}A_{8}\in E(\mathcal{F}_{3})\) and thus \(A_{1}A_{4}A_{8}\) is a triangle in \(\mathcal{F}_{3}\), which is absurd. Working with Figure 4.5b a similar contradiction can be obtained if \(A_{2}A_{8}\in E(\mathcal{F}_{3})\). Suppose next that \(A_{3}A_{8}\in E(\mathcal{F}_{3})\) or \(A_{7}A_{8}\in E(\mathcal{F}_{3})\). Due to \(B_{6}\subseteq A_{3}\cap A_{7}\) we then have \(B_{6}\cap A_{8}=\varnothing\) and, therefore, \(A_{8}\) remains independent in \(G_{2}\). So we can apply Fact 4.3 in \(\mathcal{F}_{2}\) to \(A_{8}\) and the claws drawn in Figures 4.5c and 4.5d, thus reaching the same contradiction as before. Summarising this discussion, \(A_{8}\) needs to be adjacent to \(A_{5}\) and none of \(A_{1}\), \(A_{2}\), \(A_{3}\), \(A_{7}\) (see Figure 4.6a). **Stage E: The last two edges.** Returning to \(\mathcal{F}_{3}\) we apply Fact 4.2 to the edge \(A_{5}A_{8}\) and to the vertices \(A_{1}\), \(A_{4}\), \(A_{7}\), thus learning \(A_{4}A_{8}\in E(\mathcal{F}_{3})\) (see Figure 4.6b). Assume for the sake of contradiction that \(B_{6}\cap A_{8}=\varnothing\). This means that \(A_{8}\) stays independent in \(G_{2}\) and by Fact 4.2 applied in \(\mathcal{F}_{2}\) to the edge \(A_{5}A_{8}\) and the vertices \(A_{1}\), \(A_{6}\), \(A_{7}\) we obtain \(A_{6}A_{8}\in E(\mathcal{F}_{2})\), which in turn yields the contradiction \(B_{6}\subseteq A_{8}\). We have thereby shown \(B_{6}\cap A_{8}\neq\varnothing\) and together with \(A_{4}\cap A_{8}=\varnothing\) we reach \(\widetilde{B}_{6}\cap A_{8}\neq\varnothing\). Thus the definition of \(G_{3}\) reveals \(A_{6}A_{8}\in E(\mathcal{F}_{3})\) (see Figure 4.6c). Altogether our eight vertices \(A_{i}\) form an imprint of \(\Gamma_{3}\) in \(\mathcal{F}_{3}\) with natural ordering \(A_{1}A_{2}A_{8}A_{3}A_{7}A_{4}A_{5}A_{6}\), contrary to Corollary 2.11. ## 5. Bipartite fortresses The next result supersedes the previous section. **Lemma 5.1**.: _If \((n,s)\) is a minimal counterexample, then all graphs in \(\mathfrak{E}(n,s)\) have bipartite fortresses._ Proof.: Assume contrariwise that the fortress \(\mathcal{F}\) of some \(G\in\mathfrak{E}(n,s)\) contains an odd cycle. Let \(t\) be minimal such that \(\mathcal{F}\) contains a cycle \(\mathcal{C}=A_{1}A_{2}\ldots A_{2t+1}\) of length \(2t+1\). Due to Fact 3.1 and Lemma 4.4 we have \(t\geq 3\). 
Now Corollary 3.5 applied to the edge \(A_{1}A_{2}\) and the vertex \(A_{5}\) yields some \(T\in V(\mathcal{F})\) such that \(A_{5}T\in E(\mathcal{F})\) and either \(A_{1}T\in E(\mathcal{F})\) or \(A_{2}T\in E(\mathcal{F})\). But this creates a closed walk in \(\mathcal{F}\), which passes through \(T\) and has length \(2t-1\) or \(5\) (see Figure 5.1). In both cases we reach a contradiction to the minimality of \(t\). The reason why being bipartite is a useful property of fortresses is that it leads to the existence of an edge with a remarkable property. **Lemma 5.2**.: _For every minimal counterexample \((n,s)\) and every graph \(G\in\mathfrak{E}(n,s)\) there exists an edge \(XY\in E(\mathcal{F})\) such that every \(Z\in V(\mathcal{F})\) satisfies \(|X\cap Z|<3s-n\) or \(|Y\cap Z|<3s-n\)._ Proof.: The previous lemma informs us that \(\mathcal{F}\) is bipartite, say with vertex classes \(\mathcal{X}\) and \(\mathcal{Y}\). Since \(\mathcal{F}\) has at least one edge (by Lemma 3.2), these classes cannot be empty. The minimality of \((n,s)\) is used as follows. Figure 4.6. Construction of a \(\Gamma_{3}\)-imprint in \(\mathcal{F}_{3}\). **Claim 5.3**.: _There do not exist an integer \(r\geqslant 2\), fortress vertices \(X_{1},X_{2},\ldots,X_{r}\in\mathcal{X}\), \(Y_{1},Y_{2},\ldots,Y_{r}\in\mathcal{Y}\), and independent sets \(C_{1},C_{2},\ldots,C_{r}\subseteq V(G)\) such that for every \(i\in\mathds{Z}/r\mathds{Z}\) we have_ 1. \(X_{i}Y_{i}\in E(\mathcal{F})\)_;_ 2. \(|C_{i}|>4n-10s\)_;_ 3. \(Y_{i}\cap C_{i}=X_{i+1}\cap C_{i}=\varnothing\)_._ Proof.: Assume first that such a configuration exists for \(r=2\). We construct a graph \(G^{+}\) by adding six new vertices \(x_{1}\), \(y_{1}\), \(c_{1}\), \(x_{2}\), \(y_{2}\), \(c_{2}\) to \(G\) as well as all edges from \(x_{i}\) to \(X_{i}\), from \(y_{i}\) to \(Y_{i}\), from \(c_{i}\) to \(C_{i}\) (where \(i=1,2\)), and finally the hexagon \(x_{1}y_{1}c_{1}x_{2}y_{2}c_{2}\) (see Figure 5.2). Due to (_i_) and (_iii_) this graph is triangle-free. Moreover, \(\delta(n+6,s+2)=\delta-2\) and the minimality of \((n,s)\) reveal \[\operatorname{ex}(n+6,s+2) \leqslant g_{4}(n+6,s+2)=g_{4}(n,s)+4s+2(4n-10s)+8\] \[<\operatorname{ex}(n,s)+|X_{1}|+|Y_{1}|+|C_{1}|+|X_{2}|+|Y_{2}|+| C_{2}|+6=e(G^{+})\,,\] for which reason \(G^{+}\) contains an independent set \(Z^{+}\) of size \(s+3\). Clearly, \(Z^{+}\) contains at most \(s\) old vertices and at most three new ones. Thus there is some \(Z\in V(\mathcal{F})\) such that \(Z^{+}\) is either \(Z\cup\{x_{1},y_{2},c_{1}\}\) or \(Z\cup\{x_{2},y_{1},c_{2}\}\). In both cases \(Z\) has a neighbour in \(\mathcal{X}\) and a neighbour in \(\mathcal{Y}\), which contradicts the fact that \(\mathcal{X}\cup\mathcal{Y}\) bipartises \(\mathcal{F}\). This proves the case \(r=2\) of our assertion. Now we keep assuming that our claim fails and consider a counterexample with \(r\) minimum. As we have just seen, \(r\) is at least \(3\). By Lemma 3.4 applied to \(X_{1}Y_{1}\) and \(C_{2}\) there exists a set \(Z\in V(\mathcal{F})\) such that \(C_{2}\cap Z=\varnothing\) and either \(X_{1}Z\) or \(Y_{1}Z\) is an edge of \(\mathcal{F}\). Suppose first that \(X_{1}Z\in E(\mathcal{F})\), which yields \(Z\in\mathcal{Y}\). Now the sets \(X_{1},X_{3},\ldots,X_{r}\in\mathcal{X}\), \(Z,Y_{3},\ldots,Y_{r}\in\mathcal{Y}\), and \(C_{2},\ldots,C_{r}\subseteq V(G)\) contradict the minimality of \(r\). This proves \(Y_{1}Z\in E(\mathcal{F})\), whence \(Z\in\mathcal{X}\). 
But now the sets \(Z,X_{2},\in\mathcal{X}\), \(Y_{1},Y_{2}\in\mathcal{Y}\), and \(C_{1},C_{2}\subseteq V(G)\) yield the same contradiction. **Claim 5.4**.: _There do not exist \(r\geqslant 1\), \(X_{1},\ldots,X_{r}\in\mathcal{X}\), and \(Y_{1},\ldots,Y_{r}\in\mathcal{Y}\) with_ [MISSING_PAGE_POST] 14. \(X_{i _._ * \(|X_{i}\cap Y_{i+1}|\geqslant 3s-n\)__ _for all \(i\in\mathds{Z}/r\mathds{Z}\)._ Proof.: Assume contrariwise that such a situation exists. Since \(X_{1}\) is adjacent to \(Y_{1}\) but not to \(Y_{2}\), we have \(r\geqslant 2\). For every \(i\in\mathds{Z}/r\mathds{Z}\) Lemma 3.6 and (_ii_) yield a vertex \(c_{i}\in X_{i}\cap Y_{i+1}\) such that the cardinality of \(C_{i}=\mathrm{N}(c_{i})\) exceeds \(4n-10s\). As the sets \(C_{i}\) are disjoint to \(X_{i}\) and \(Y_{i+1}\), they lead to a contradiction to Claim 5.3. Let us now consider the sets \[\mathcal{X}^{\prime} =\left\{X\in\mathcal{X}\colon|X\cap Y|<3s-n\text{ for every }Y\in\mathcal{Y}\right\}\] \[\text{and}\quad\mathcal{Y}^{\prime} =\left\{Y\in\mathcal{Y}\colon|X\cap Y|<3s-n\text{ for every }X\in\mathcal{X}\right\}.\] **Claim 5.5**.: _There exists a vertex in \(\mathcal{Y}\) all of whose neighbours are in \(\mathcal{X}^{\prime}\)._ Proof.: Recall that \(\mathcal{F}\) has no isolated vertices. Thus the failure of our claim would imply that every \(Y\in\mathcal{Y}\) has a neighbour in \(\mathcal{X}\smallsetminus\mathcal{X}^{\prime}\). Starting with an arbitrary vertex \(Y_{1}\in\mathcal{Y}\) this allows us to construct recursively an infinite sequence \(Y_{1},X_{1},Y_{2},X_{2},\dots\) such that for every \(i\in\mathds{N}\) we have * \(Y_{i}\in\mathcal{Y}\), \(X_{i}\in\mathcal{X}\smallsetminus\mathcal{X}^{\prime}\); * \(X_{i}Y_{i}\in E(\mathcal{F})\); * \(|X_{i}\cap Y_{i+1}|\geqslant 3s-n\). Since \(\mathcal{Y}\) is finite, there need to exist indices \(p<q\) such that \(Y_{p}=Y_{q}\). But now the cyclic sequences \(X_{p},\dots,X_{q-1}\in\mathcal{X}\) and \(Y_{p},\dots,Y_{q-1}\in\mathcal{Y}\) of length \(r=q-p\) contradict the previous claim. As \(\mathcal{F}\) has no isolated vertices, Claim 5.5 yields, in particular, \(\mathcal{X}^{\prime}\neq\varnothing\) and by symmetry \(\mathcal{Y}^{\prime}\) cannot be empty either. Due to the definitions of \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\), the proof of our lemma can be completed by showing that there is an edge \(XY\) with \(X\in\mathcal{X}^{\prime}\) and \(Y\in\mathcal{Y}^{\prime}\). To this end we pick a vertex \(Y_{\star}\in\mathcal{Y}\) such that \(\mathrm{N}_{\mathcal{F}}(Y_{\star})\subseteq\mathcal{X}^{\prime}\), a vertex \(Y\in\mathcal{Y}^{\prime}\), as well as an arbitrary neighbour \(X_{\star}\) of \(Y\) (see Figure 5.3). By Corollary 3.5 applied to the edge \(X_{\star}Y\) and the vertex \(Y_{\star}\) there is a neighbour \(X\) of \(Y_{\star}\) adjacent to either \(X_{\star}\) or \(Y\). Our choice of \(Y_{\star}\) guarantees \(X\in\mathcal{X}^{\prime}\) and the independence of \(\mathcal{X}\) yields \(XX_{\star}\notin E(\mathcal{F})\). Thus \(XY\) is the desired edge. Now all that still separates us from the main result are one computation and two symmetrisations. **Fact 5.6**.: _If \((n,s)\) is a minimal counterexample, then_ \[\mathrm{ex}(n-2,s-1)<\mathrm{ex}(n,s)-(2s-1)\,. 
\tag{5.1}\] Proof.: For \(\delta=1\) the trivial bound (1.1) implies \[2\mathrm{ex}(n-2,s-1) \leqslant(n-2)(s-1)=2g_{4}(n,s)+(11s-4n)(3n-8s)-n-2s+2\] \[<2\mathrm{ex}(n,s)+2(n-3s)-2(2s-1)<2\mathrm{ex}(n,s)-2(2s-1)\,.\] Otherwise \(\delta\geqslant 2\), and \(\delta(n-2,s-1)=\delta-3\) combined with the minimality of \((n,s)\) yields \[\mathrm{ex}(n-2,s-1) \leqslant g_{4}(n-2,s-1)=g_{4}(n,s)+8n-24s+4\] \[<\mathrm{ex}(n,s)-2\delta-2s+4<\mathrm{ex}(n,s)-(2s-1)\,.\qed\] Proof of Theorem 1.2.: If the result failed, there would exist a minimal counterexample \((n,s)\). Let \(G\in\mathfrak{E}(n,s)\) denote an arbitrary extremal graph. According to Lemma 5.2 there exists an edge \(A_{1}A_{2}\in E(\mathcal{F})\) such that every \(Z\in V(\mathcal{F})\) satisfies \(|A_{1}\cap Z|<3s-n\) or \(|A_{2}\cap Z|<3s-n\). Now two successive applications of Corollary 2.9 lead to sets \(B_{1}\subseteq A_{2}\) and \(B_{2}\subseteq A_{1}\) of size \(3s-n\) such that the graph \(G_{\star}\) obtained from \(G\) by symmetrising first \(\mathbf{Sym}(A_{1},B_{1})\) and then \(\mathbf{Sym}(A_{2},B_{2})\) still belongs to \(\mathfrak{E}(n,s)\). Pick arbitrary vertices \(b_{i}\in B_{i}\) and set \(G^{-}=G_{\star}-\{b_{1},b_{2}\}\). Due to \[e(G^{-})=e(G)-(2s-1)=\mathrm{ex}(n,s)-(2s-1)\stackrel{{\eqref{eq: 1}}}{{>}}\mathrm{ex}(n-2,s-1)\] there is an independent set \(Z\) of size \(s\) in \(G^{-}\). But now for \(i=1,2\) the set \(Z\cup\{b_{i}\}\) cannot be independent in \(G_{\star}\), which proves \(Z\cap A_{i}\neq\varnothing\) and thus \(Z\cap B_{i}=\varnothing\). Consequently, \(Z\) was already independent in \(G\). Moreover, the sets \((Z\smallsetminus A_{i})\cup B_{i}\) are independent in \(G_{\star}\), whence \(|Z\cap A_{i}|\geqslant|B_{i}|=3s-n\). Altogether \(Z\) contradicts our choice of the edge \(A_{1}A_{2}\in E(\mathcal{F})\).
2307.08800
regulAS: A Bioinformatics Tool for the Integrative Analysis of Alternative Splicing Regulome using RNA-Seq data
The regulAS software package is a bioinformatics tool designed to support computational biology researchers in investigating regulatory mechanisms of splicing alterations through integrative analysis of large-scale RNA-Seq data from cancer and healthy human donors, characterized by TCGA and GTEx projects. This technical report provides a comprehensive overview of regulAS, focusing on its core functionality, basic modules, experiment configuration, further extensibility and customisation. The core functionality of regulAS enables the automation of computational experiments, efficient results storage and processing, and streamlined workflow management. Integrated basic modules extend regulAS with features such as RNA-Seq data retrieval from the public multi-omics UCSC Xena data repository, predictive modeling and feature ranking capabilities using the scikit-learn package, and flexible reporting generation for analysing gene expression profiles and relevant modulations of alternative splicing aberrations across tissues and cancer types. Experiment configuration is handled through YAML files with the Hydra and OmegaConf libraries, offering a user-friendly approach. Additionally, regulAS allows for the development and integration of custom modules to handle specialized tasks. In conclusion, regulAS provides an automated solution for alternative splicing and cancer biology studies, enhancing efficiency, reproducibility, and customization of experimental design, while the extensibility of the pipeline enables researchers to further tailor the software package to their specific needs. Source code is available under the MIT license at https://github.com/slipnitskaya/regulAS.
Sofya Lipnitskaya
2023-07-17T19:33:49Z
http://arxiv.org/abs/2307.08800v1
# regulAS: A Bioinformatics Tool for the Integrative Analysis of Alternative Splicing Regulome using RNA-Seq data ###### Abstract The regulAS software package is a bioinformatics tool designed to support computational biology researchers in investigating regulatory mechanisms of splicing alterations through integrative analysis of large-scale RNA-Seq data from cancer and healthy human donors, characterized by TCGA and GTEx projects. This technical report provides a comprehensive overview of regulAS, focusing on its core functionality, basic modules, experiment configuration, further extensibility and customisation. The core functionality of regulAS enables the automation of computational experiments, efficient results storage and processing, and streamlined workflow management. Integrated basic modules extend regulAS with features such as RNA-Seq data retrieval from the public multi-omics UCSC Xena data repository, predictive modeling and feature ranking capabilities using the scikit-learn package, and flexible reporting generation for analysing gene expression profiles and relevant modulations of alternative splicing aberrations across tissues and cancer types. Experiment configuration is handled through YAML files with the Hydra and OmegaConf libraries, offering a user-friendly approach. Additionally, regulAS allows for the development and integration of custom modules to handle specialized tasks. In conclusion, regulAS provides an automated solution for alternative splicing and cancer biology studies, enhancing efficiency, reproducibility, and customization of experimental design, while the extensibility of the pipeline enables researchers to further tailor the software package to their specific needs. Source code is available under the MIT license at [https://github.com/slipnitskaya/regulAS](https://github.com/slipnitskaya/regulAS). ## I Introduction Alternative splicing (AS) relates to a molecular mechanism that allows the generation of multiple mRNAs from a single gene to produce functionally distinct isoforms. This process is largely regulated by RNA-binding proteins (RBPs) that control the recruitment of the splicing machinery defining which exons are included in the resulting transcripts. Regulation of pre-mRNA splicing by RBPs is crucial for generating biological diversity in mammalian genomes, and this process is especially complicated in pathological conditions, such as cancer [1]. The regulAS package allows easy and reliable exploration of the landscape of alternative splicing events and their candidate modulators across human tumor and healthy tissues through integrative transcriptomics analysis of large-scale RNA sequencing (RNA-Seq) datasets from diverse omics data sources, and by utilizing a machine learning (ML) approach. The purpose of this technical report is to provide a comprehensive overview and documentation of the regulAS software package for investigating alternative splicing regulation, utilizing external omics and associated phenotype datasets generated by The Cancer Genome Atlas (TCGA1) and Genotype-Tissue Expression (GTEx2) projects. Aimed at supporting researchers, regulAS offers a robust set of tools and functionalities to automate computational experiments, store, and process obtained results, and facilitate efficient workflows for alternative splicing research and cancer studies. 
Footnote 1: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga) Footnote 2: [https://www.gtexportal.org/home](https://www.gtexportal.org/home) This report serves as a guide for both new and experienced users, offering detailed insights into the design principles, core features, and extensibility of regulAS. By providing a high-level understanding of the software package, readers will gain the necessary knowledge to effectively utilize and harness the full potential of regulAS in their research endeavors. The scope of this report encompasses a thorough exploration of the core functionality of regulAS, including its ability to automate computational experiments and handle result storage and processing. Additionally, the report delves into the basic modules integrated into regulAS, such as data retrieval from external data sources, feature ranking capabilities, and the generation of tabular or visual summary output reports. Furthermore, this report highlights the support for flexible experiment configuration offered by regulAS through the use of YAML files. Lastly, the report addresses the extensibility aspect of regulAS, empowering end-users to expand its functionality beyond the provided modules. By enabling the development of custom modules in Python, regulAS offers researchers the freedom to integrate their specialized algorithms, models, or data processing techniques seamlessly. ### _Problem Definition_ In the field of computational biology and bioinformatics, researchers often face numerous challenges when integrating and analysing large-scale multi-omics datasets based on various public data sources [2]. These challenges include the need for efficient experimental design and automation, effective management and processing of results, seamless integration with external databases, and the ability to generate informative reports for downstream analyses and interpretation purposes to support the findings and experimental results. Traditionally, researchers have had to rely on manual execution of experiments, leading to time-consuming and error-prone processes. Additionally, the management and processing of the vast amount of data generated from these experiments pose significant difficulties, often requiring extensive manual effort and specialized software tools. Furthermore, integrating with external multi-omics data sources and retrieving relevant datasets for analysing gene expression profiles and alternative splicing patterns across different conditions (e.g., tumor and/or healthy), cancer types (e.g., primary tumor and/or tumor-adjacent tissues) appear to be challenging and time-consuming, also commonly necessitating specialized knowledge and technical skills. Other challenges are associated with the raw data processing and sample aggregation, both of which are essential for effective subsequent analysis--such as feature importance assessment or predictive survival analysis--and interpretation of the results (e.g., for studying gene expression abnormalities across tissues and physiological conditions). Additionally, comprehensive summary reports and visualizations that present research findings in a clear and concise manner are necessary for effective communication and collaboration. Thus, researchers need standardized computational solutions for integrative omics analysis to streamline data exploration and computational workflows towards data-informed decision-making about downstream experiments, hence contributing to a better understanding of biological systems. 
### _Design Principles_ The design of regulAS follows a few key concepts. First, a low barrier to entry was prioritized, which implied following the "low-code" paradigm. Specifically, the base modules allow performing computational experiments by describing the desired workflow in the form of YAML configuration files, without writing code in Python. Second, the consistency among external and internal interfaces, and intermediate representations was addressed, which required considerable efforts in designing the architecture but paid off in an easy extensibility of regulAS to user-defined data loaders, ML-models, metric evaluators, and report generators. Third, as the use of the scikit-learn[3] package has become a _de facto_ standard for performing computational experiments in a Python-based environment, regulAS focuses on compatibility with the scikit-learn model API. As a part of the experimental workflow, regulAS utilizes a supervised ML-based approach for predictive modelling and feature ranking tasks to identify relevant candidate RBPs of AS changes based on RNA-Seq data of gene expression profiles of prospective RNA splicing modulators and matching junction reads data to reflect exon-skipping events [4] for genes of interest. Fourth, the base workflow steps such as data acquisition and preparation, model training and evaluation, and generation of reports were isolated--as depicted in Figure 1--to preserve modularity (and, therefore, extensibility) and support reuse of modules in the experiments. The isolation of base steps is accomplished by keeping the intermediate results in a persistent storage, namely a relational database built upon the SQLite engine [5]. Finally, because bioinformatics and computational biology are rapidly evolving fields, the specific methods and approaches may become outdated quite fast. To address this, the architecture of regulAS was designed to be _modular_ and _extensible_, to allow practitioners to leverage existing libraries and to experiment with their own components. For that, the implementation of regulAS strived for a clean, readable, and consistent code. Fig. 1: Workflow of the regulAS bioinformatics package for integrative transcriptomic analysis of alternative splicing events based on machine learning approach and RNA-Seq data from TCGA and GTEx data sources. ## II Package Organization As the main priority in the design was ease of use, regulAS follows a flat package structure and relies on the NumPy [6] and Pandas [7] types and data structures. The extensibility of regulAS through custom modules provides researchers with unparalleled flexibility in adapting the software package to their unique research needs. By leveraging the Python programming language, researchers can harness the rich ecosystem of libraries and tools to develop custom modules that seamlessly integrate with regulAS's core functionality. Custom modules can be easily integrated into regulAS, utilizing its modular architecture and well-defined extension points. This allows researchers to leverage the existing functionalities of regulAS while incorporating their own custom logic and algorithms. The integration process ensures a smooth and cohesive workflow, enabling researchers to exploit the full potential of regulAS while building upon its foundations. ### _Core Functionality_ regulAS provides researchers with the ability to automate computational experiments, reducing the manual effort and time required for experiment execution.
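As a concrete illustration of the scikit-learn compatibility and the user-defined extension points emphasised above, the following is a minimal sketch of a custom model exposing the standard fit/predict interface together with a `feature_importances_` attribute, the kind of object a regulAS-style feature-ranking pipeline could train. The class name, the choice of ridge regression, and the use of absolute coefficients as importance scores are illustrative assumptions of ours, not part of the actual regulAS API.

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.linear_model import Ridge


class AbsCoefRanker(BaseEstimator, RegressorMixin):
    """Hypothetical scikit-learn-compatible model: regresses an exon-inclusion
    signal on RBP expression profiles and ranks features by |coefficient|."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, X, y):
        self.model_ = Ridge(alpha=self.alpha).fit(X, y)
        self.feature_importances_ = np.abs(self.model_.coef_)
        return self

    def predict(self, X):
        return self.model_.predict(X)


# Toy usage with random data standing in for expression/junction matrices.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
ranker = AbsCoefRanker(alpha=0.5).fit(X, y)
print(ranker.feature_importances_)
```

Any estimator following this convention can be trained, evaluated, and ranked by generic pipeline code, which is what makes the scikit-learn API a convenient extension point.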
By defining experiment configurations and parameters, researchers can easily set up and execute complex computational experiments with ease. This automation feature allows for increased productivity, reproducibility, and scalability in research workflows. regulAS offers a flexible and user-friendly approach to experiment configuration through the use of YAML (YAML Ain't Markup Language) files. Experiment configuration is facilitated by the Hydra library [8], developed by Facebook Research, and the underlying OmegaConf library, which provides a powerful configuration system. Hydra, a popular open-source Python library released by Facebook Research, allows researchers to work with YAML files, override default configurations, and access experiment configuration. Hydra simplifies the process of working with YAML files by providing a hierarchical and modular configuration system. It allows researchers to easily define and organize their experiment parameters, enabling the configuration of various components and options. Underlying Hydra, OmegaConf provides the core configuration capabilities. It enables researchers to read and merge YAML files, override default configurations, and access experiment parameters programmatically. OmegaConf also supports interpolation, which allows referencing and reusing values across the configuration, enhancing flexibility and reusability. As shown in Figure 1, after successfully loading the experiment configuration file, regulAS loads the data, performs the training/testing split, and stores the experimental setup in its database. Next, based on the configuration file and data splits, regulAS extracts individual tasks and submits them asynchronously to a pool of pre-allocated Python multiprocessing workers. Then, each worker returns a task identifier, a success flag and the corresponding output or None if the run failed. For a successful run, regulAS stores the obtained predictions and feature importance scores (if available) into the database. ### _Data Loading_ regulAS is bundled with base data loading modules that encompass retrieval from both local (Python pickle) and remote (UCSC Xena3 data hubs) data sources [9]. The latter data loader acquires the required information from the remote databases, performs filtering and transformation if needed, and stores the ready-to-process pickle-serialized data locally. Respectively, the local data loader allows deserializing the downloaded data and feeding them into the downstream processing pipeline. Footnote 3: [https://xena.ucsc.edu/](https://xena.ucsc.edu/) Researchers can also develop custom modules to handle specialized data loading tasks within regulAS. These modules can be designed to integrate with specific data sources, formats, or APIs, allowing researchers to seamlessly import and preprocess their unique datasets. By extending regulAS with custom data loading modules, researchers gain the flexibility to incorporate their proprietary or specialized data sources into their computational experiments, ensuring compatibility and efficiency. ### _Persistence_ regulAS offers efficient storage and processing of obtained results, ensuring that researchers can effectively manage and analyze their data. One of the key components of the result storage in regulAS is the utilization of the SQLite database engine. This lightweight and fast database engine enables reliable storage and retrieval of experiment results, providing researchers with a robust and scalable solution for data management.
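To give a flavour of the configuration mechanism described above, the sketch below uses OmegaConf directly to compose and override a small experiment description. The field names (`experiment`, `data`, `pipeline`, `output_db`) and the model path are hypothetical and only mimic the kind of YAML a Hydra/OmegaConf-driven tool consumes; they are not the actual regulAS configuration schema.

```python
from omegaconf import OmegaConf

# Hypothetical experiment description; in practice the defaults would live in a
# YAML file loaded by Hydra, e.g. cfg = OmegaConf.load("experiment.yaml").
defaults = OmegaConf.create({
    "experiment": "demo_exon_skipping",
    "data": {
        "source": "local_pickle",
        "path": "data/expression_and_junctions.pkl",
        "test_size": 0.2,
    },
    "pipeline": {
        "model": "sklearn.ensemble.RandomForestRegressor",
        "params": {"n_estimators": 500, "random_state": 1},
    },
    "output_db": "results/${experiment}.sqlite",  # OmegaConf value interpolation
})

# Command-line style overrides are merged on top of the defaults.
overrides = OmegaConf.from_dotlist(["pipeline.params.n_estimators=1000", "data.test_size=0.1"])
cfg = OmegaConf.merge(defaults, overrides)

print(OmegaConf.to_yaml(cfg))
print(cfg.output_db)  # interpolation resolves to 'results/demo_exon_skipping.sqlite'
```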
With the built-in capabilities for querying and accessing stored results, researchers can efficiently explore and extract valuable insights from their data. The visual representation of the database schema is depicted in Figure 2. Table Data represents the root entity of the database. An Experiment depends on Data and, in turn, encapsulates several machine learning Pipelines. Each Pipeline is associated with a number of data transformation steps (TransformationSequence), and every step is defined by the Transformation itself, HyperParameters used, and their corresponding values (HyperParameterValue). Finally, a successfully finished Pipeline yields a list of model Predictions equipped with the corresponding _true_ values, and--in case the underlying ML-model (Transformation) supports it--numerical scores for FeatureRanking. ### _Report Generation_ The reporting module encompasses both intermediate and final report generation. The former type of report focuses on the assessment of performance of an ML-model and scoring of the feature importance, while the latter is responsible for producing documents for end-users and includes tabular (CSV) and visual (bar graph and scatter plot) reports. By design, the report configuration organizes the report steps in a directed acyclic graph, those leaf nodes are expected to produce (although not required) tabular or visual documents. In turn, each leaf node depends on some preceding nodes, which are to produce intermediate reports and not required to store any documents. When loading the report configuration file, regulAS checks for missing and circular dependencies. In case of any, the report generation interrupts, and a corresponding error appears in the standard output. If there is no cycle in dependencies, and all of them are satisfied, regulAS organizes the reporting steps into a linear structure that is processed sequentially, starting from the independent reports. The built-in report generation module enables regulAS to produce model evaluation and feature ranking in a tabular form (implemented on top of pandas.DataFrameFrame). For the visual reports, bar graphs are available for the performance evaluation and feature ranking purposes, while scatter plots allow visualizing correlations. Additionally, custom modules can be developed to expand regulAS's report generation capabilities. Researchers can design and implement modules that generate reports in specific formats, layouts, or styles, aligning with their specific reporting needs. These custom report generation modules can produce reports that include additional visualizations, statistical summaries, or customized annotations, providing researchers with comprehensive and tailored reports for their analyses and presentations. ## III Conclusion This document provides a brief yet comprehensive summary of regulAS, a software package that addresses the needs of researchers in computational biology and bioinformatics. By automating computational experiments, managing results, and providing flexible configuration options, regulAS simplifies research workflows and enhances productivity. The integration of basic modules and the ability to extend regulAS with custom modules further expand its capabilities, ensuring that researchers can leverage the software package to its full potential for their unique research objectives. The project is under active development, and the future work includes performance improvements and extended functionality of the experiment management. 
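The dependency handling described above amounts to a missing-dependency check, a cycle check, and a topological ordering of the report steps. A minimal stand-alone sketch of that logic is shown below; it illustrates the behaviour in plain Python and is not the actual regulAS implementation, and the report names used are made up.

```python
def order_reports(dependencies):
    """Return a processing order for report steps, or raise on missing/circular dependencies.

    `dependencies` maps each report name to the names of reports it depends on.
    """
    missing = {d for deps in dependencies.values() for d in deps} - dependencies.keys()
    if missing:
        raise ValueError(f"missing dependencies: {sorted(missing)}")

    remaining = {name: set(deps) for name, deps in dependencies.items()}
    order = []
    while remaining:
        ready = [name for name, deps in remaining.items() if not deps]
        if not ready:  # every remaining step waits on another one: a circular dependency
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        for name in ready:
            order.append(name)
            del remaining[name]
        for deps in remaining.values():
            deps.difference_update(ready)
    return order

print(order_reports({"ranking_csv": [], "performance_csv": [],
                     "summary_figure": ["ranking_csv", "performance_csv"]}))
# ['ranking_csv', 'performance_csv', 'summary_figure']
```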
### _Citing regulAS_ When using regulAS in academic work, authors are encouraged to reference this work via Zenodo4 or by following the instructions on the GitHub repository5. Fig. 2: Schema of the regulAS database for storing experimental setups and results.
2301.05072
Chemical kinetics and stochastic differential equations
We propose a general stochastic formalism for describing the evolution of chemical reactions involving a finite number of molecules. This approach is consistent with the statistical analysis based on the Chemical Master Equation, and provides the formal setting for the existing algorithmic approaches (Gillespie algorithm). Some practical advantages of this formulation are addressed, and several examples are discussed pointing out the connection with quantum transitions (radiative interactions).
Chiara Pezzotti, Massimiliano Giona
2023-01-12T15:17:46Z
http://arxiv.org/abs/2301.05072v1
# Chemical kinetics and stochastic differential equations ###### Abstract We propose a general stochastic formalism for describing the evolution of chemical reactions involving a finite number of molecules. This approach is consistent with the statistical analysis based on the Chemical Master Equation, and provides the formal setting for the existing algorithmic approaches (Gillespie algorithm). Some practical advantages of this formulation are addressed, and several examples are discussed pointing out the connection with quantum transitions (radiative interactions). All the chemical physical processes involve, in an atomistic perspective, a stochastic description of the events, be them reactive or associated with a change of phase (for instance adsorption) [1]. Nonetheless, in the overwhelming majority of the cases of practical and laboratory interest, the number of molecules involved is so large to justify a mean field approach, essentially based on the Boltzmannian hypothesis of molecular chaos (the "stosszahlansatz") [2]. The mean field formulation represents the backbone of the classical theory of chemical reaction kinetics [3; 4]. It is well known that, in all the cases where the number of molecule is small (and this occurs in subcellular biochemical reactions, in nanoscale systems, or in the growth kinetics of microorganisms [5; 6; 7]), the effects of fluctuations become significant, motivating a stochastic description of chemical kinetic processes, involving the number of molecules present in the system, thus explicitly accounting for due to their finite number [8; 9; 10; 11]. The statistical theory of chemical kinetics in these conditions is grounded on the Chemical Master Equation (CME) [12; 13], expressing the evolution equation for the probabilities \(p(\mathbf{N},t)\) of all the possible number-configurations \(\mathbf{N}(t)=(N_{1}(t),\ldots,N_{s}(t))\), where \(N_{h}(t)\) is the number of molecules of the \(h\)-th reacting species at time \(t\), \(h=1,\ldots,s\). However, apart from a handful of simple cases, for which the CME can be solved analytically [14], numerical methods should be applied to it in order to compute mean values and higher-order moments. But also this choice reveals itself to be unfeasible in most of the situations of practical and theoretical interests, due to the extremely large number of configurations involved, making the multi-index matrix \(p(\mathbf{N},t)\) so huge to exceed reasonable computational facilities. In order to solve this problem, Gillespie proposed an algorithmic solution to the numerical simulation of stochastic reacting systems, based on the Markovian nature of the reactive events [15; 16]. The original Gillespie algorithm has been extended and improved over time, providing a variety of slightly different computational alternatives. A common denominator of the first family of the Gillespie algorithms (namely those based on the direct method, the first reaction method or their derivates [17; 18; 19]) is to associate to every time step the occurrence of just one reaction. This formulation comes directly from the assumption that, if the time step is small enough, the probability that more than one reaction will occur is negligible. While correct, this choice brings to significant computational costs for complex reaction schemes. This problem has been highlighted several times, from the Gillespie group itself, as _stiffness_ in stochastic chemical reacting systems [20]. 
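For concreteness, a minimal sketch of the classical direct-method SSA for a generic reaction network is given below; the propensity functions and stoichiometry are illustrative placeholders, and the scheme fires exactly one reaction per iteration, which is precisely the source of the stiffness issue mentioned above.

```python
import numpy as np

def gillespie_direct(x0, stoichiometry, propensities, t_max, rng=None):
    """Direct-method SSA sketch: one reaction fired per iteration.

    x0            : initial molecule numbers (1D array)
    stoichiometry : (n_reactions, n_species) state-change vectors
    propensities  : callable x -> array of reaction propensities a_j(x)
    """
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:                              # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)           # waiting time to the next event
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x += stoichiometry[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative example: the first-order reaction A <-> B discussed below.
k1, k_1 = 2.0, 1.0
S = np.array([[-1, +1],    # A -> B
              [+1, -1]])   # B -> A
prop = lambda x: np.array([k1 * x[0], k_1 * x[1]])
t, n = gillespie_direct([100, 0], S, prop, t_max=5.0)
```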
A brilliant way to overcome this limit originates the famous tau-leaping method, which, unfortunately, requires to check that the propensity functions remain _almost constant_ at each iteration and can be applied just if this condition is verified [21; 22]. The algorithmic solution associated with the formalism here introduced combines the accuracy of the first SSA with the computational advantages of the \(\tau\)-leaping method. There is, moreover, a missing link between the CME theory and the Gillespie algorithm, consisting in the straight mathematical formulation of the stochastic differential equations associated with a chemical reacting system, the statistical description of which would correspond to the CME. To clarify this issue, consider the conceptually analogous problem of particle diffusion over the real line, the statistical description of which is expressed by the parabolic equation \(\partial p(x,t)/\partial t=D\,\partial^{2}p(x,t)/\partial x^{2}\), for the probability density \(p(x,t)\) of finding a particle at position \(x\) at time \(t\). Setting \(x_{n}=x(n\,\Delta t)\), an algorithm describing this process can be simply expressed by the discrete evolution equation \(x_{n+1}=x_{n}+\sqrt{2\,D\,\Delta t}\,r_{n+1}\), where \(r_{h}\), \(h=1,2,\dots\) represent independent random variables sampled from a normal distribution (with zero mean, and unit variance) [23]. This represents an efficient algorithmic solution of the problem, whenever the time resolution \(\Delta t\) is small enough. Nevertheless, the mere algorithmic approach cannot be considered physically satisfactory, in a comprehensive formulation of transport theory embedded in a continuous space-time (in which both position \(x\) and time \(t\) are real valued). In point of fact, only with the mathematical formulation due to K. Ito of stochastic differential equations driven by the increments \(dw(t)\) of a Wiener process (Langevin equations) [24], namely \(dx(t)=\sqrt{2\,D}\,dw(t)\) the theory of diffusive motion has found a proper mathematical physical setting. A similar situation applies to the case of stochastic models of chemical reaction kinetics, and the present Letter is aimed at filling this gap. The basic idea is that any reactive process corresponds to a system of elementary events (the single reaction) possessing a Markovian transitional structure, and, consequently, amenable to a description by means of the increments of counting processes (Poisson processes, in the Markovian case). This topic has been also pointed out in [25] in terms of Poisson measures, although the latter formulation is much less simple and physically intuitive than the approach proposed in the present Letter. To begin with, consider the simple case of a first-order chemical reaction \(A\stackrel{{ k_{1}}}{{\underset{k_{-1}}{\rightleftharpoons}}}B\) (for instance, an isomerization). This model is perfectly analogous to the radiative transition of a molecule possessing two energy states, due to emission and adsorption of an energy quantum (figure 1). Let \(N_{A}(0)+N_{B}(0)=N_{g}\) the total number of molecules at time \(t=0\). The state of the system is characterized by the state functions \(\sigma_{h}(t)\), \(h=1,\ldots,N_{g}\) for each molecule, attaining values \(\{0,1\}\), and such that \(\sigma_{h}(t)=0\) if the energy state at time \(t\) is \(E_{0}\) (or equivalently if the molecule finds itself in the state \(A\)), and \(\sigma_{h}(t)=1\) in the opposite case (energy state \(E_{1}\), or isomeric state \(B\)). 
Let \(\{\chi_{h}^{(1)}(t,k_{1}),\chi_{h}^{(2)}(t,k_{-1})\}_{h=1}^{N_{g}}\) be two systems of independent Poisson processes, characterized by the transition rates \(k_{1}\), and \(k_{-1}\), respectively. The evolution of \(\sigma_{h}(t)\) can be expressed via the stochastic differential equation \[\frac{d\sigma_{h}(t)}{dt}=(1-\sigma_{h}(t))\ \frac{d\chi_{h}^{(1)}(t,k_{1})}{dt}- \sigma_{h}(t)\,\frac{d\chi_{h}^{(2)}(t,k_{-1})}{dt} \tag{1}\] \(h=1,\ldots,N_{g}\), where \(d\chi(t,\lambda)/dt\) is the distributional derivative of the Poisson process \(\chi(t,\lambda)\), corresponding to a sequence of unit impulsive functions at the transition instants \(t_{i}^{*}\), \(i=1,2,\ldots\), \(0<t_{i}^{*}<t_{i+1}^{*}\), where for \(\varepsilon>0\), \(\lim_{\varepsilon\to 0}\int_{t_{i}^{*}-\varepsilon}^{t_{i}^{*}+ \varepsilon}d\chi(t,\lambda)=1\). Summing over \(h=1,\ldots N_{g}\), and observing that \(N_{A}(t)=\sum_{h=1}^{N_{g}}{(1-\sigma_{h}(t))}\), \(N_{B}(t)=\sum_{h=1}^{N_{g}}{\sigma_{h}(t)}\), we have \[\frac{dN_{B}(t)}{dt}=\sum_{h=1}^{N_{A}(t)}\frac{d\chi_{h}^{(1)}(t,k_{1})}{dt}- \sum_{h=1}^{N_{B}(t)}\frac{d\chi_{h}^{(2)}(t,k_{-1})}{dt} \tag{2}\] Figure 1: Schematic representation of the analogy between a two-level quantum system and a first-order chemical kinetics, such as an isomerization. and \(dN_{A}(t)/dt=-dN_{B}(t)/dt\), representing the evolution equation for \(N_{A}(t)\) and \(N_{B}(t)\), attaining integer values. The stochastic evolution of the number of molecules \(N_{A}(t)\), \(N_{B}(t)\) is thus expressed as a differential equation with respect to the continuous physical time \(t\in\mathbb{R}^{+}\), over the increments of a Poisson process. Inepreted in a mean-field way, if \(c_{\rm tot}\) is the overall concentration of the reactants at time \(t=0\), then the concentrations \(c_{\alpha}(t)\) at time \(t\) can be recovered from eq. (2) as \[c_{\alpha}(t)=c_{\rm tot}\,\frac{N_{\alpha}(t)}{N_{g}}\,,\qquad\alpha=A,B \tag{3}\] representing the calibration relation connecting the stochastic description in terms of number of molecules \(N_{\alpha}(t)\) and the concentrations \(c_{\alpha}(t)\), \(\alpha=A,\,B\) entering the mean-field description. The analytical formulation of a stochastic differential equation for chemical kinetics, expressed in terms of the number of molecules of the chemical species involved, rather than an algorithm defined for discretized times, permits to develop a variety of different numerical strategies, that naturally perform a modified tau-leaping procedure, as the occurrence of several distinct reactive events in any elementary time step \(\Delta t\) is intrinsically accounted for. This can be easily seen by considering the simple reaction defined by the evolution equation (2). In terms of increments, eq. (2) can be written as \(dN_{B}(t)=\sum_{h=1}^{N_{A}(t)}d\chi^{(1)}(t,k_{1})-\sum_{h=1}^{N_{B}(t)}d\chi ^{(2)}(t,k_{-1})\). If \(\Delta t\) is the chosen time step, it follows from this formulation, a simple numerical approximation for eq. (2), namely, \[\Delta N_{B}(t)=N_{B}(t+\Delta t)-N_{B}(t)=\sum_{h=1}^{N_{A}(t)}\xi_{h}^{(1)}( k_{1}\,\Delta t)-\sum_{h=1}^{N_{B}(t)}\xi_{h}^{(2)}(k_{-1}\,\Delta t) \tag{4}\] where \(\xi^{(1)}(k_{1}\,\Delta t)\), \(\xi_{h}^{(2)}(k_{-1}\,\Delta t)\)\(h=1,2,\dots\), are two families of independent binary random variables, where \[\xi_{h}^{(\alpha)}(p)=\left\{\begin{array}{ll}1&\quad\mbox{with probability $p$}\\ 0&\quad\mbox{otherwise}\end{array}\right. \tag{5}\] \(\alpha=1,2\), \(h=1,2,\dots\). 
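A minimal numerical sketch of the scheme in eqs. (4)-(5) for the isomerization is given below; the rate values, granularity number and number of steps are illustrative, and each binomial draw simply aggregates the independent Bernoulli variables \(\xi_{h}\). The choice of the time step is discussed next.

```python
import numpy as np

# Sketch of eq. (4): Bernoulli increments for the isomerization A <-> B.
# Rates, granularity and horizon below are illustrative, not taken from the Letter.
rng = np.random.default_rng(0)
k1, k_1 = 2.0, 1.0
Ng = 1000                       # total number of molecules
dt = 0.1 / max(k1, k_1)         # small time step (see the condition below)
n_steps = 500

NB = 0                          # all molecules start in state A
traj = np.empty(n_steps + 1, dtype=int)
traj[0] = NB
for n in range(n_steps):
    NA = Ng - NB
    # xi^(1), xi^(2): independent Bernoulli variables of eq. (5), summed as binomials
    forward = rng.binomial(NA, k1 * dt)    # A -> B transitions in [t, t + dt)
    backward = rng.binomial(NB, k_1 * dt)  # B -> A transitions in [t, t + dt)
    NB += forward - backward
    traj[n + 1] = NB
# traj / Ng approximates the concentration ratio c_B(t) / c_tot of eq. (3)
```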
The time step \(\Delta t\), can be chosen in eq. (4) from the condition \[K\,\Delta t<1\,,\qquad K=\max\{k_{1},k_{-1}\} \tag{6}\] In practice, we choose \(\Delta t=0.1/K\). As can be observed, the choice of \(\Delta t\) is limited by the intrinsic rates of the process. The advantage of deriving different algorithmic schemes for solving numerically the stochastic kinetic equations becomes more evident in dealing with bimolecular reactions (addressed below). Due to the intrinsic limitations of this communication, a thorouh discussion of this issue is postponed to a future more extensive article [26]. The same approach can be extended to include amongst the elementary events not only the reactive steps, but also feeding conditions, thus representing the evolution of chemically reacting systems with a finite number of molecules in a perfectly stirred open reactor. This is the case of the tank-loading problem, in which a tracer is injected in an open vessel assumed perfectly mixed, for which, in the absence of chemical reactions, the mean field equation for the concentration of the tracer reads \[\frac{dc(t)}{dt}=D\ (c_{0}-c(t)) \tag{7}\] where \(c_{0}\) is the inlet concentration and \(D\) the dilution rate (reciprocal of the mean retention time), and \(c(0)=0\). Fixing \(N_{g}\) so that \(c(t)=c_{0}\,N(t)/N_{g}\), the corresponding stochastic differential equation for the integer \(N(t)\) involves, also in this case, two families of counting processes, one for the loading at constant concentration \(c_{0}\), and the other for tracer discharge in the outlet stream, characterized by the same transition rate \(D\), \[\frac{dN(t)}{dt}=\sum_{h=1}^{N_{g}}\frac{d\chi_{h}^{(1)}(t,D)}{dt}-\sum_{k=1}^ {N(t)}\frac{d\chi_{h}^{(2)}(t,D)}{dt} \tag{8}\] starting from \(N(0)=0\). Figure 2 depicts several realizations of the tank-loading process, obtained by discretizing eq. (8) with a time step \(\Delta t=10^{-3}\). Despite the simplicity of the process, this example permits to highlight the role of \(N_{g}\), that can be referred to as the _granularity number_, and the way stochastic models of chemical reactions can be fruitfully applied. Indeed, there is a two-fold use of the stochastic formulation of chemical kinetic schemes. The first refers to a chemical reacting system involving a small number of molecules, and in this case \(N_{g}\) represents the effective number of molecules present in the system. The other is to use stochastic algorithms for simulating reacting systems in an alternative (and sometimes more efficient way) with respect to the solution of the corresponding mean-field equations. In the latter case, the granularity number \(N_{g}\) represents essentially a computational parameter, tuning the intensity of the fluctuations. Two choices are then possible: (i) it can be chosen large enough, in order to obtain from a single realization of the process an accurate approximation of the mean-field behavior, or (ii) it can be chosen small enough in order, to deal with extremely fast simulations of a single realization of the process, that could be averaged over a statistically significant number of realizations in due time. These two choices are depicted in figure 2 (panel c), choosing \(N_{g}=10^{3}\), and in figure 3 panel (a) obtained for \(N_{g}=30\). Of course, the latter approach is valid as long as the low-granularity (low values of \(N_{g}\)) does not influence the qualitative properties of the kinetics. 
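A single realization of the tank-loading process (8), like those shown in Figure 2, can be sketched with the same Bernoulli-increment discretization as eq. (4); \(D\), \(c_{0}\), \(N_{g}\) and \(\Delta t\) mirror the values quoted above, while the random seed and the time horizon are arbitrary choices for illustration.

```python
import numpy as np

# Sketch of a single realization of the tank-loading process, eq. (8).
rng = np.random.default_rng(1)
D, c0, Ng, dt, t_max = 1.0, 1.0, 1000, 1e-3, 8.0

n_steps = int(t_max / dt)
N = 0                                   # N(0) = 0
c = np.empty(n_steps + 1)
c[0] = 0.0
for n in range(n_steps):
    loading = rng.binomial(Ng, D * dt)   # inlet events, one channel per molecule
    discharge = rng.binomial(N, D * dt)  # outlet events for the N tracer molecules
    N += loading - discharge
    c[n + 1] = c0 * N / Ng               # calibration relation, eq. (3)
# c(t) fluctuates around the mean-field solution c0 * (1 - exp(-D * t)) of eq. (7)
```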
The second (computational) use of stochastic simulations of chemical kinetics requires a further discussion. At first sight, it may appear that any stochastic simulation would be computationally less efficient than the solution of the corresponding mean-field equations. This is certainly true for classical chemical reaction schemes in a perfectly mixed system, for which the mean-field model reduces to a system of ordinary differential equations for the concentrations of the reactants. Figure 2: \(c(t)=N(t)/N_{g}\) vs \(t\) from a single realization of the tank-loading process eq. (8) with \(D=1\), \(c_{0}=1\) a.u.. Panel (a): \(N_{g}=30\), panel (b) \(N_{g}=100\), panel (c) \(N_{g}=1000\). The solid horizontal lines represent the steady-state value \(c^{*}=1\). But there are kinetic problems, e.g., associated with the growth of microorganisms and eukaryotic cell lines in bioreactors (these growth phenomena are indeed amenable to a description in terms of equivalent chemical reactions), the mean-field model of which is expressed in the form of higher-dimensional nonlinear integro-differential equations. For this class of problems, the use of stochastic simulations is the most efficient, if not the only, way to achieve a quantitative description of the process, in those cases where the number \(n_{p}\) of internal parameters describing the physiological state of a eukaryotic cell becomes large enough, \(n_{p}\geq 3\). This issue is addressed in detail in [27]. This case is altogether similar to some transport problems, such as Taylor-Aris dispersion for high Peclet numbers or the analysis of microfluidic separation processes (DLD devices), for which the stochastic simulation of particle motion is far more efficient than the solution of the corresponding mean-field model expressed in the form of advection-diffusion equations [28; 29]. To complete the analysis of the tank-loading problem, the associated CME reads \[\frac{dp(n,t)}{dt}=D\,N_{g}\,\left[p(n-1,t)\,\eta_{n-1}-p(n,t)\right]+D\left[(n+1)\,p(n+1,t)-n\,p(n,t)\right] \tag{9}\] where \(\eta_{h}=1\) for \(h\geq 0\) and \(\eta_{h}=0\) otherwise. It follows that \(\langle c\rangle(t)=c_{0}\sum_{n=1}^{\infty}n\,p(n,t)/N_{g}\) satisfies identically the mean-field equation (due to the linearity of the problem), while the variance \(\sigma_{c}(t)\), with \(\sigma_{c}^{2}(t)=c_{0}^{2}\sum_{n=1}^{\infty}n^{2}\,p(n,t)/N_{g}^{2}-\left(c_{0}\sum_{n=1}^{\infty}n\,p(n,t)/N_{g}\right)^{2}\), satisfies the equation \[\frac{d\sigma_{c}^{2}}{dt}=-2\,D\,\sigma_{c}^{2}+D\,\left(\frac{1}{N_{g}}+\frac{\langle c\rangle}{N_{g}}\right) \tag{10}\] Figure 3 panel (b) compares the results of stochastic simulations against the solutions of eq. (10) for two values of \(N_{g}\). The above approach can be extended to any system of nonlinear reaction schemes involving unimolecular and bimolecular reactions, and in the presence of slow/fast kinetics. The structure of the reaction mechanism can be arbitrarily complicated without adding any further complexity (other than purely notational) in the formulation of the stochastic evolution expressed in terms of the number of molecules. The only practical issue is that the number of different families of stochastic processes grows with the number of elementary reactive processes considered.
For instance, in the case of the subtrate-inhibited Michaelin-Menten kinetics \[\begin{array}{c}E+S\,\mathop{\stackrel{{ k_{1}}}{{k_{-1}}}}\limits_{k_{-1}}ES\\ ES\,\mathop{\stackrel{{ k_{2}}}{{\rightarrow}}}\limits_{k}E+P\\ ES+S\,\mathop{\stackrel{{ k_{3}}}{{k_{-3}}}}\limits_{k_{-3}}ESS \end{array} \tag{11}\] there are five reactive processes (five channels in the language of the Gillespie algorithm) and consequently five families of counting processes \(\{\chi_{i_{h}}^{(h)}(t,\cdot)\}\), \(h=1,\ldots,5\), should be introduced, so that the formulation of the discrete stochastic dynamics reads \[\frac{dN_{S}(t)}{dt} =-\sum_{i=1}^{N_{S}(t)}\frac{d\chi_{i}^{(1)}(t,\widetilde{k}_{1}\,N _{E}(t))}{dt}+\sum_{j=1}^{N_{ES}(t)}\frac{d\chi_{j}^{(2)}(t,k_{-1})}{dt}\] \[\frac{dN_{E}(t)}{dt} =-\sum_{i=1}^{N_{S}(t)}\frac{d\chi_{i}^{(1)}(t,\widetilde{k}_{1} \,N_{E}(t))}{dt}+\sum_{j=1}^{N_{ES}(t)}\frac{d\chi_{j}^{(2)}(t,k_{-1})}{dt}+ \sum_{h=1}^{N_{ES}(t)}\frac{d\chi_{h}^{(3)}(t,k_{2})}{dt}\] \[\frac{dN_{ES}(t)}{dt} =\sum_{i=1}^{N_{S}(t)}\frac{d\chi_{i}^{(1)}(t,\widetilde{k}_{1} \,N_{E}(t))}{dt}-\sum_{j=1}^{N_{ES}(t)}\frac{d\chi_{j}^{(2)}(t,k_{-1})}{dt}- \sum_{h=1}^{N_{ES}(t)}\frac{d\chi_{h}^{(3)}(t,k_{2})}{dt}-\sum_{k=1}^{N_{S}(t) }\frac{d\chi_{k}^{(4)}(t,\widetilde{k}_{3}\,N_{ES}(t))}{dt}\] \[+\sum_{l=1}^{N_{ESS}(t)}\frac{d\chi_{l}^{(5)}(t,k_{-3})}{dt} \tag{12}\] \[\frac{dN_{ESS}(t)}{dt} =\sum_{k=1}^{N_{S}(t)}\frac{d\chi_{k}^{(4)}(t,\widetilde{k}_{3} \,N_{ES}(t))}{dt}-\sum_{l=1}^{N_{ESS}(t)}\frac{d\chi_{l}^{(5)}(t,k_{-3})}{dt}\] \[\frac{dN_{P}(t)}{dt} =\sum_{h=1}^{N_{ES}(t)}\frac{d\chi_{h}^{(3)}(t,k_{2})}{dt}\] equipped with the initial conditions \(c_{S}(0)=c_{S,0}\), \(c_{E}(0)=c_{E,0}\), \(c_{ES}(0)=c_{ESS}(0)=c_{P}(0)=0\). Observe that for the bimolecular steps we have used a number-dependent rate coefficient. This is just one possibility, out of other fully equivalent alternatives, of defining bimolecular reacting processes, and out of tem a numerical algorithm for solving them. This issue, and its computational implications will be addressed elsewhere [26]. The granularity number \(N_{g}\) can be fixed, so that \[N_{S}(0)=\left[c_{S,0}\,N_{g}\right],\qquad N_{E,0}=\left[c_{E,0}\,N_{g}\right] \tag{13}\] where \([\xi]\) indicates the integer part of \(\xi\), thus defining the relation betwen \(N_{\alpha}(t)\) and \(c_{\alpha}(t)\), namely \(c_{\alpha}(t)=N_{\alpha}(t)/N_{g}\), \(\alpha=S\), \(E\), \(ES\), \(ESS\), \(P\). This implies also that the effective rate parameters entering the discrete stochastic evolution equation (12), and associated with the two bimolecular reactive steps, are given by \(\widetilde{k}_{1}=k_{1}/N_{g}\), and \(\widetilde{k}_{3}=k_{3}/N_{g}\). Consider the case \(k_{-1}=k_{2}=k_{3}=k_{-3}=1\), \(c_{S,0}=4\), \(c_{E,0}=0.1\). In this case the quasi steady-state approximation of the \(c_{ES}\)-\(c_{S}\) curve (representing the slow manifold of the kinetics takes the expression \[c_{ES}=\frac{c_{E,0}\,c_{S}}{K_{M}+c_{S}+\beta\,c_{S}^{2}}\,,\qquad K_{M}= \frac{k_{-1}+k_{2}}{k_{1}}\,,\quad\beta=\frac{k_{-3}}{k_{3}} \tag{14}\] Figure 4 depicts the \(c_{ES}\)-\(c_{S}\) graph obtained from a single realization of the stochastic process eq. (11) at several values of \(k_{1}\) so as to modify the Michaelis-Menten constant \(K_{M}\) for a value \(N_{g}=10^{6}\) of the granularity number. 
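A single realization like the ones plotted in Figure 4 can be sketched as follows; the rate constants repeat the values quoted above, whereas the (much smaller) granularity number, the time step and the seed are illustrative choices only, and the five binomial draws are an independent-increment approximation valid for small \(\Delta t\).

```python
import numpy as np

# Sketch of a single realization of the substrate-inhibited scheme (11),
# using binomial (aggregated Bernoulli) increments for the five channels.
rng = np.random.default_rng(2)
k1, k_1, k2, k3, k_3 = 20.0, 1.0, 1.0, 1.0, 1.0
cS0, cE0, Ng = 4.0, 0.1, 10_000
kt1, kt3 = k1 / Ng, k3 / Ng          # number-based bimolecular rates, eq. (13)
dt, n_steps = 1e-4, 200_000          # illustrative discretization

NS, NE = int(cS0 * Ng), int(cE0 * Ng)
NES = NESS = NP = 0
cs, ces = [], []
for _ in range(n_steps):
    r1 = rng.binomial(NS, min(1.0, kt1 * NE * dt))    # E + S  -> ES
    r2 = rng.binomial(NES, k_1 * dt)                  # ES     -> E + S
    r3 = rng.binomial(NES, k2 * dt)                   # ES     -> E + P
    r4 = rng.binomial(NS, min(1.0, kt3 * NES * dt))   # ES + S -> ESS
    r5 = rng.binomial(NESS, k_3 * dt)                 # ESS    -> ES + S
    NS += -r1 + r2 - r4 + r5
    NE += -r1 + r2 + r3
    NES += r1 - r2 - r3 - r4 + r5
    NESS += r4 - r5
    NP += r3
    cs.append(NS / Ng)
    ces.append(NES / Ng)
# (cs, ces) traces the c_ES - c_S trajectory collapsing onto the manifold (14)
```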
Apart from the initial transient giving rise to an overshoot in the values of \(c_{ES}\) near \(c_{S}\simeq c_{S,0}\), the dynamics rapidly collapses towards the slow manifold, and the stochastic simulations at high \(N_{g}\)-values provide a reliable description of the mean-field behavior starting from a single stochastic realization. Figure 4: \(c_{ES}\) vs \(c_{S}\) plot of the substrate-inhibited enzymatic kinetics discussed in the main text. Symbols (in color) are the results of stochastic simulations of a single realization of the process eq. (11), (black) solid lines the graph of the quasi steady-state approximation. The arrow indicates increasing values of \(K_{M}\), i.e. decreasing values of \(k_{1}=20\), \(6\), \(2\). To conclude, we want to point out some advantages and extensions of the present approach: * it shows a direct analogy between chemical reaction kinetics, radiative processes and the stochastic formulation of open quantum systems, thus paving the way for a unified treatment of the interplay between these phenomena, which is particularly important in the field of photochemistry and in the foundation of statistical physics [30; 31]; * it can be easily extended to semi-Markov transitions. This is indeed the case of the growth kinetics of eukaryotic microorganisms, the physiological state of which can be parametrized with respect to internal (hidden) parameters such as the age, the cytoplasmatic content, etc.; * it can be easily extended to include transport phenomena. In point of fact, the occurrence of Markovian or semi-Markovian transitions in modeling chemical kinetics is analogous to the transitions occurring in the direction of motion (Poisson-Kac processes, Levy flights, Extended Poisson-Kac processes) or in the velocity (linearized Boltzmann schemes) [32; 33; 34]. * it is closely related to the formulation of stochastic differential equations for the thermalization of athermal systems [35], in which the classical mesoscopic description of thermal fluctuations, using the increments of a Wiener process, is replaced by a dynamic model involving the increments of a counting process. Due to the limitations of a Letter, all these issues will be addressed in forthcoming works. But apart from these extensions and improvements, the proposed formulation indicates that the stochastic theory of chemical reactions can be built upon a simple and consistent mathematical formalism describing the elementary reactive events as Markovian or semi-Markovian counting processes [36], which perfectly fits with the description of molecular non-reactive events (molecular collisions), providing a unifying stochastic formalism of elementary (classical and quantum) molecular events.
2308.09343
Surprise machines: revealing Harvard Art Museums' image collection
Surprise Machines is a project of experimental museology that sets out to visualize the entire image collection of the Harvard Art Museums, intending to open up unexpected vistas on more than 200,000 objects usually inaccessible to visitors. Part of the exhibition Curatorial A(i)gents organized by metaLAB (at) Harvard, the project explores the limits of artificial intelligence to display a large set of images and create surprise among visitors. To achieve such a feeling of surprise, a choreographic interface was designed to connect the audience's movement with several unique views of the collection.
Dario Rodighiero, Lins Derry, Douglas Duhaime, Jordan Kruguer, Maximilian C. Mueller, Christopher Pietsch, Jeffrey T. Schnapp, Jeff Steward
2023-08-18T07:05:30Z
http://arxiv.org/abs/2308.09343v1
# Surprise machines ###### Abstract Surprise Machines is a project of experimental museology that sets out to visualize the entire image collection of the Harvard Art Museums, with a view to opening up unexpected vistas on more than 200,000 objects usually inaccessible to visitors. The project is part of the exhibition organized by metaLAB (at) Harvard entitled Curatorial A(i)gents and explores the limits of artificial intelligence to display a large set of images and create surprise among visitors. To achieve this feeling of surprise, a choreographic interface was designed to connect the audience's movement with several unique views of the collection. Artificial intelligence, choreographic interface, digital archives, experimental museology, network ## 1 Introduction Although "the humanities so far has focused on literary texts, historical text records, and spatial data," as stated by Lev Manovich in _Cultural Analytics_(Manovich, 2020, p. 10), the recent advancements in artificial intelligence are driving more attention to other media. For example, disciplines such as digital humanities now embrace more diverse types of corpora (Champion, 2016). Yet this shift of attention is also visible in museums, which recently took a step forward by establishing the field of experimental museology (Kenderdine et al., 2021). This article illustrates the visualization of an extensive image collection through digital means. Following a growing interest in the digital mapping of images--proved by the various scientific articles published on the subject (Bludau et al., 2021; Crockett, 2019; Seguin, 2018), Ph.D. theses (Krautli, 2016; Vane, 2019), software (American Museum of Natural History, 2020/2022; Diagne et al., 2018; Pietsch, 2018/2022), and presentations (Benedetti, 2022; Klinke, 2021)--this text describes an interdisciplinary experiment at the intersection of information design, experimental museology, and cultural analytics. Surprise Machines is a data visualization that maps more than 200,000 digital images of the Harvard Art Museums (HAM) and a digital installation for museum visitors to understand the collection's vastness. Part of a temporary exhibition organized by metaLAB (at) Harvard and entitled Curatorial A(i)gents, Surprise Machines is enriched by a choreographic interface that allows visitors to interact with the visualization through a camera capturing body gestures. The project is unique for its interdisciplinarity, looking at the prestigious collection of Harvard University through cutting-edge techniques of AI.
2303.13069
Human Guided Ground-truth Generation for Realistic Image Super-resolution
How to generate the ground-truth (GT) image is a critical issue for training realistic image super-resolution (Real-ISR) models. Existing methods mostly take a set of high-resolution (HR) images as GTs and apply various degradations to simulate their low-resolution (LR) counterparts. Though great progress has been achieved, such an LR-HR pair generation scheme has several limitations. First, the perceptual quality of HR images may not be high enough, limiting the quality of Real-ISR outputs. Second, existing schemes do not consider much human perception in GT generation, and the trained models tend to produce over-smoothed results or unpleasant artifacts. With the above considerations, we propose a human guided GT generation scheme. We first elaborately train multiple image enhancement models to improve the perceptual quality of HR images, and enable one LR image having multiple HR counterparts. Human subjects are then involved to annotate the high quality regions among the enhanced HR images as GTs, and label the regions with unpleasant artifacts as negative samples. A human guided GT image dataset with both positive and negative samples is then constructed, and a loss function is proposed to train the Real-ISR models. Experiments show that the Real-ISR models trained on our dataset can produce perceptually more realistic results with less artifacts. Dataset and codes can be found at https://github.com/ChrisDud0257/HGGT
Du Chen, Jie Liang, Xindong Zhang, Ming Liu, Hui Zeng, Lei Zhang
2023-03-23T06:53:14Z
http://arxiv.org/abs/2303.13069v1
# Human Guided Ground-truth Generation for Realistic Image Super-resolution ###### Abstract How to generate the ground-truth (GT) image is a critical issue for training realistic image super-resolution (Real-ISR) models. Existing methods mostly take a set of high-resolution (HR) images as GTs and apply various degradations to simulate their low-resolution (LR) counterparts. Though great progress has been achieved, such an LR-HR pair generation scheme has several limitations. First, the perceptual quality of HR images may not be high enough, limiting the quality of Real-ISR outputs. Second, existing schemes do not consider much human perception in GT generation, and the trained models tend to produce over-smoothed results or unpleasant artifacts. With the above considerations, we propose a human guided GT generation scheme. We first elaborately train multiple image enhancement models to improve the perceptual quality of HR images, and enable one LR image having multiple HR counterparts. Human subjects are then involved to annotate the high quality regions among the enhanced HR images as GTs, and label the regions with unpleasant artifacts as negative samples. A human guided GT image dataset with both positive and negative samples is then constructed, and a loss function is proposed to train the Real-ISR models. Experiments show that the Real-ISR models trained on our dataset can produce perceptually more realistic results with less artifacts. Dataset and codes can be found at [https://github.com/ChrisDud0257/HGGT](https://github.com/ChrisDud0257/HGGT). ## 1 Introduction Owing to the rapid development of deep learning techniques [14, 18, 19, 22, 44], the recent years have witnessed the great progress in image super-resolution (ISR) [2, 8, 9, 10, 12, 26, 27, 28, 29, 31, 32, 33, 35, 45, 46, 48, 51, 52, 54, 56, 44], which aims at generating a high-resolution (HR) version of the low-resolution (LR) input. Most of the ISR models (_e.g._, CNN [37, 38] or transformer [5, 29] based ones) are trained on a large amount of LR-HR image pairs, while the generation of LR-HR image pairs is critical to the real-world performance of ISR models. Most of the existing ISR methods take the HR images (or after some sharpening operations [46]) as ground-truths (GTs), and use them to synthesize the LR images to build the LR-HR training pairs. In the early stage, bicubic downsampling is commonly used to synthesize the LR images from their HR counterparts [8, 9, 23, 33, 42, 56]. However, the ISR models trained on such HR-LR pairs can hardly generalize to real-world images whose degradation process is much more complex. Therefore, some researchers proposed to collect HR-LR image pairs by using long-short camera focal lengths [3, 4]. While such a degradation process is more reasonable than bicubic downsampling, it only covers a small subspace of possible image degradations. Recently, researchers [12, 30, 32, 34, 46, 50, 51, 59] have proposed Figure 1: From left to right and top to bottom: one original HR image (Ori) in the DIV2K [1] dataset, two of its enhanced positive versions (Pos-1 and Pos-2) and one negative version (Neg). The positive versions generally have clearer details and better perceptual quality, while the negative version has some unpleasant visual artifacts. 
**Please zoom in for better observation.** to shuffle or combine different degradation factors, such as Gaussian/Poisson noise, (an-)isotropic blur kernel, downsampling/upsampling, JPEG compression and so on, to synthesize LR-HR image pairs, largely improving the generalization capability of ISR models to real-world images. Though great progress has been achieved, existing LR-HR training pair generation schemes have several limitations. First, the original HR images are used as the GTs to supervise the ISR model training. However, the perceptual quality of HR images may not be high enough (Fig. 1 shows an example), limiting the performance of the trained ISR models. Second, existing schemes do not consider much human perception in GT generation, and the trained ISR models tend to produce over-smoothed results. When the adversarial losses [27, 40, 48] are used to improve the ISR details, many unpleasant artifacts can be introduced. In order to tackle the aforementioned challenges, we propose a human guided GT data generation strategy to train perceptually more realistic ISR (Real-ISR) models. First, we elaborately train multiple image enhancement models to improve the perceptual quality of HR images. Meanwhile, one LR image can have multiple enhanced HR counterparts instead of only one. Second, to discriminate the visual quality between the original and enhanced images, human subjects are introduced to annotate the regions in enhanced HR images as "Positive", "Similar" or "Negative" samples, which represent better, similar or worse perceptual quality compared with the original HR image. Consequently, a human guided multiple-GT image dataset is constructed, which has both positive and negative samples. With the help of human annotation information in our dataset, positive and negative LR-GT training pairs can be generated (examples of the positive and negative GTs can be seen in Fig. 1), and a new loss function is proposed to train the Real-ISR models. Extensive experiments are conducted to validate the effectiveness and advantages of the proposed GT image generation strategy. With the same backbone, the Real-ISR models trained on our dataset can produce more perceptually realistic details with less artifacts than models trained on the current datasets. ## 2 Related Work According to how the LR-HR image pairs are created, the existing ISR methods can be categorized into three major groups: simple degradation based, long-short focal length based, and complex degradation based methods. **Simple Degradation based Training Pairs.** Starting from SRCNN [8, 9], most of the deep learning based ISR methods synthesize the LR images from their HR counterparts by bicubic downsampling or direct downsampling after Gaussian smoothing. By using such a simple degradation model to generate a large amount of training data, researchers focus more on the ISR network module design, such as residual [23]/dense [58] connection, channel-attention [6, 17, 56], multiple receptive field [16, 28] or self-attention [5, 29, 54]. The fidelity based measures, such as PSNR and SSIM [49], are used to evaluate and compare the performance of different ISR methods. Later on, many works [31, 35, 39, 40, 41, 47, 48] have been developed to adopt the Generative Adversarial Network (GAN) [11] techniques to train Real-ISR models so as to produce photo-realistic textures and details. 
**Long-short Focal Length based Training Pairs.** Instead of synthesizing LR-HR pairs using simple degradation operators, researchers have also tried to use long-short camera focal length to collect real-world LR-HR pairs. The representative works include CameraSR [4] and RealSR [3]. The former builds a dataset using DSLR and mobile phone cameras to model degradation between the image resolution and field-of-view. The latter utilizes different focal lengths of the DSLR camera to shot the same scene at different resolutions, and employs an image registration method to crop and align the LR-HR image pairs. Nonetheless, ISR models trained on those datasets might fail when applied to images from different resources (, different degradation, different focal length and cameras). **Complex Degradation based Training Pairs.** The image degradation in real-world scenarios can be too complex to model using a simple operator. To enable the Real-ISR models having higher generalization capability, BSRGAN [51] and Real-ESRGAN [46] have been proposed to synthesize LR-HR training pairs with more complex image degradations. They employ a set of degradation factors, such as different types of noise, blur kernels, scaling factors, JPEG compression,, to enlarge the degradation space. BSRGAN [51] shuffles and combines different degradations, while Real-ESRGAN [46] employs a two-stage synthesis progress. In DASR [32], Liang. partitioned the complex degradation space into different levels, and proposed a degradation adaptive method for Real-ISR. **Other Training Pairs.** Beside the above three groups of ISR methods, MCinCGAN [57] and Pseudo-SR [36] utilize unpaired training images to do unsupervised learning. They utilize one or more discriminators to tell the HR GT from the unpaired SR output. AdaTarget [21] employs a transformation CNN block to generate a training-friendly GT from the original GT during the training progress. Nevertheless, the quality of the generated training-friendly GT might not have a good perception quality. ## 3 Human Guided Ground-truth Generation ### Overview As discussed in Section 2, almost all existing methods [8, 9, 15, 25, 34, 55, 37] directly take the HR images as the GT to construct the training pairs. Unfortunately, the perceptual quality of many HR images may not be good enough to serve as GTs, limiting the performance trained Real-ISR models. Therefore, we propose to enhance the quality of HR images so that they can better serve as GTs. In particular, human guidance can be introduced in the GT generation process so that perceptually more realistic Real-ISR models can be trained. As illustrated in Fig. 2, the proposed human guided GT generation method has three steps. First, we elaborately train multiple image enhancement models to improve the perceptual quality of HR images. Second, those patches which have enough textural and structural details and have certain differences between the enhanced version and the original version are extracted. Third, human subjects are introduced to discriminate the visual quality between the enhanced patches and the original patch, and label them as "Positive" (_i.e_., better quality), "Similar" (_i.e_., similar quality) or "Negative" (_i.e_., worse quality) samples. In the following subsections, we describe these three steps in detail. ### Design of the Enhancement Models In order to generate visually more pleasing GTs from the original HR image, we train multiple image enhancement models and apply them to the HR image. 
To this end, the commonly used DF2K-OST dataset (including 800 high quality images from DIV2K [1], 2650 high quality images from Flickr2K [43] and 10,324 images from OST [47]) is employed. The original images are denoted by \(I^{H}\), and the low quality ones, denoted by \(I^{L}\), are degraded from \(I^{H}\) by using the following degradation model [46, 51]: \[I^{L}=[(I^{H}\otimes K)_{R}+V]_{J}, \tag{1}\] where \(K\) means isotropic/an-isotropic blur kernel, \(R\) means resize operation, \(V\) is Gaussian/Poisson noise and \(J\) denotes JPEG compression. With (\(I^{L}\), \(I^{H}\)) as training pairs, we can train enhancement models. Note that before inputting the low-quality image \(I^{L}\) into the model, we resize it to the size of \(I^{H}\) since here we are training enhancement models, where the input and output have the same size. Considering that the quality of HR image to be further enhanced is generally not bad, we deliberately control the degradation settings in Eq. (1) within weak to middle levels. Otherwise, the learned models can over-enhance the HR images and generate many artifacts. Since the major issues of real world images are noise corruption and blurring, we employ two degradation settings, one focusing on processing slightly high noise and the other focusing on dealing with slightly strong blur. The detailed degradation settings can be found in the **supplementary file**. We select one CNN-based network RCAN [56] and one transformer-based network ELAN [54] as the backbones of our enhancer. RCAN [56] adopts deep residual learning together with channel-attention [18], while ELAN [54] employs a multi-scale self-attention [44] block to extract long-range independence. We remove the up-sampling layer in those models since the input and output share the same size in our case. We choose both CNN and transformer as backbones because though transformers have stronger capability in restoring large scale structures and repetitive patterns, CNN can better characterize some small scale and local image details. With the two different degradation settings and two different backbones, we train four image enhancement models with \(L_{1}\), perceptual and adversarial losses. The UNet discriminator [46] is used in adversarial training. Figure 2: Illustration of our human guided ground-truth (GT) generation process. We first train four image enhancement models to enhance the original high-resolution (HR) image, and then extract the patches which have rich textural and structural details while having certain differences between the original and enhanced versions. Finally, human subjects are involved to annotate the extract patches as “Positive”, “Similar” and “Negative” samples. ### Patch Selection and Annotation We apply the trained four enhancement models to 1,600 HR images collected from three representative resources: 1) 800 images from the DIV2K [1] dataset; 2) 400 images from Internet which could be used for free, such as Pixabay ([https://pixabay.com](https://pixabay.com)) and Unsplash ([https://unsplash.com](https://unsplash.com)); 3) 400 images shot by us using mobile phones. Note that though those HR images have high resolution (2K\(\sim\)4K), they could contain certain noise, blurred details or other real-world degradations, as we shown in Fig. 1. It is expected that their perceptual quality can be improved by our enhancement models so that they can better serve as GTs in Real-ISR model training. 
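For illustration only, the degradation model of Eq. (1) used above to synthesize the enhancer training inputs could be sketched with standard OpenCV/NumPy operations as below; the kernel width, noise level and JPEG quality are placeholder values, not the settings reported in the supplementary file.

```python
import cv2
import numpy as np

def degrade(hr, sigma_blur=1.2, scale=0.5, noise_sigma=5.0, jpeg_q=75, rng=None):
    """Sketch of Eq. (1): blur -> resize -> additive noise -> JPEG compression.

    hr : HxWx3 uint8 image. All parameter values are illustrative defaults.
    """
    rng = np.random.default_rng() if rng is None else rng
    # K: (here isotropic) Gaussian blur kernel
    lr = cv2.GaussianBlur(hr, ksize=(0, 0), sigmaX=sigma_blur)
    # R: resize operation
    h, w = hr.shape[:2]
    lr = cv2.resize(lr, (int(w * scale), int(h * scale)),
                    interpolation=cv2.INTER_AREA)
    # V: additive Gaussian noise (a Poisson variant could be used instead)
    lr = lr.astype(np.float32) + rng.normal(0.0, noise_sigma, lr.shape)
    lr = np.clip(lr, 0, 255).astype(np.uint8)
    # J: JPEG compression round-trip
    ok, buf = cv2.imencode(".jpg", lr, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_q])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```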
After applying the four enhancement models to the 1,600 HR images, we obtain 6,400 enhanced images. However, it is inappropriate to directly take them as GTs. On one hand, many regions in these images are smooth and less informative. On the other hand, there is no guarantee that the enhancement models can always produce perceptually better outputs in all regions. Therefore, we extract patches from those enhanced images and invite human volunteers to label them. In specific, we randomly crop \(512*512\) patches from each image with the overlapping area less than \(1/2\) of patch area. We then filter out the patches that have large smooth background regions according to the quantity of details and textures, which is measured by the standard deviation (std) of the patch in image domain and the std of high-frequency components in a Laplacian pyramid. At last, we remove the patches on which the difference between the original version and the enhanced version is small (_i.e._, no much enhancement). The patch selection process avoids the cost of annotating flat patches, and can speed up the training process since flat patches have small gradients. Finally, we select 20,193 groups of patches of \(512*512\) size, each group having one original HR patch and 4 enhanced patches. We then invite 60 volunteers with different background to annotate the quality of enhanced patches by comparing them with the original HR patch. A software program, whose interface is shown in the **supplementary file**, is developed for this purpose. The original patch is positioned at the left side of the screen, while the four enhanced versions are located on the right side in random order. Those patches whose perceptual quality is better than the original one are labeled as "Positive", and the patches with worse perceptual quality are labeled as "Negative". In case the quality is tied, the enhanced patch will be labeled as "Similar". Before annotating, all volunteers are briefly trained to ensure that they will focus on the image perceptual quality (_e.g._, sharpness, noise, details, artifacts, _etc._) but not on the image content. ### Statistics of the Annotated Dataset We invite 60 volunteers to annotate the 20,193 patch groups, each consisting of an original HR patch and 4 enhanced patches. Each group is annotated by 3 different volunteers, and each volunteer is assigned with about 1,010 groups to annotate. In total, we obtain 20,193 groups of annotations, and \(20,193*4\times 3=242,316\) annotated patches. The average annotation time for one group is 22.79s. **Distribution of the patch annotations.** Tab. 1 shows the distribution of "Positive", "Similar" and "Negative" labels for each enhancement model, as well as the overall distribution. We see that there are overall 176,042 "Positive" (72.65%), 50,904 "Similar" (21.00%) and 15,370 "Negative" (6.35%) patches. Such statistics imply that our enhancement models improve the visual quality of HR patches in most cases, but there are indeed some bad cases. **Distribution of the final patch labels.** For each of the \(20,193*4=80,772\) enhanced patches, we have three annotations from three different volunteers. We take the majority as the final label of the patch, _i.e._, if one patch has two or three same annotations, it will be labeled by that annotation. In case the three annotations are different from each other (_i.e._, one "Positive", one "Similar" and one "Negative"), the final label is marked as "Similar". Tab. 
2 shows the distribution of the final labels of the enhanced patches. We can see that finally there are 63,583 "Positive" (78.72%), 14,675 "Similar" (18.17%) and 2,514 "Negative" (3.11%) patches. Most of the final labels are "Positive" ones, and only a small portion (3.11%) are "Negative" ones. The maximum divergence of "Positive" labels is 3,329 (5.24%) between Model 2 and Model 3. The examples of "Positive", "Similar" and "Negative" patches can be found in the **supplementary file**. **Distribution of the number of final "Positive" patches per group.** For each group of patches, there can be \(0\sim 4\) final "Positive" samples. Tab. 3 shows the distribution of the number of final "Positive" patches per group. One can see that among the 20,193 groups, 11,413 (56.52%) groups have 4 "Positive" patches, 3,901 (19.32%) have 3 "Positive" patches, 2,616 (12.95%) have 2 "Positive" patches, 996 (4.93%) have 1 "Positive" patch, and 1,267 (6.28%) have none. We will use those "Positive" patches as "Positive" GTs, and those "Negative" patches as "Negative" GTs \begin{table} \begin{tabular}{|c||c c c c||c|} \hline \multirow{2}{*}{Label} & \multicolumn{4}{c||}{Enhance Model} & \multirow{2}{*}{Total} \\ & 1 & 2 & 3 & 4 \\ \hline \hline Positive & 42362 & 39031 & 47251 & 47398 & 176042 \\ Similar & 14623 & 17615 & 10259 & 8407 & 50904 \\ Negative & 3594 & 3933 & 3069 & 4774 & 15370 \\ \hline \hline Total & 60579 & 60579 & 60579 & 60579 & 242316 \\ \hline \end{tabular} \end{table} Table 1: The distribution of annotations in our dataset. There are 20,193 groups of patches, while each group consists of an original HR patch and 4 enhanced patches. Each enhanced patch is annotated by 3 different volunteers, resulting in a total of \(20,193*4\times 3=242,316\) annotations. to train Real-ISR models. The patches with "Similar" labels are not employed in our training progress. ## 4 Training Strategies As described in Sec. 3, for an original HR patch, denoted by \(I^{H}\), we may have several (less than 4) positive GTs, denoted by \(I^{Pos}\), and several negative GTs, denoted by \(I^{Neg}\). To construct the positive or negative LR-GT pairs for Real-ISR model training, we apply the degradation model in Eq. 1 to \(I^{H}\) and obtain the corresponding LR image, denoted by \(I^{L}\). (The setting of degradation parameters will be discussed in Sec. 5.1). In total, there are 63,583 positive LR-GT pairs \((I^{L},I^{Pos})\) and 2,514 negative LR-GT pairs \((I^{L},I^{Neg})\). Note that in our dataset, one LR image may correspond to multiple positive GTs or negative GTs. **Training with positive pairs only.** By removing those groups that do not have any positive GT from the 20,193 training groups, we have 18,926 groups with \(1\sim 4\) GTs, and 63,583 positive LR-GT pairs to train Real-ISR models. As in previous works [46, 51], we employ the \(L_{1}\) loss, perceptual loss \(L_{p}\) and GAN loss \(L_{GAN}\) to train the model. Since one LR image \(I^{L}\) may have multiple positive GTs, each time we randomly choose one positive GT to calculate the \(L_{1}\), \(L_{p}\) and \(L_{GAN}\) losses of the corresponding LR image \(I^{L}\), and update the discriminator and generator networks. The overall training loss is as follows: \[L_{Total}=\alpha L_{1}+\beta L_{p}+\gamma L_{adv}, \tag{2}\] where \(\alpha\), \(\beta\) and \(\gamma\) are balance parameters. 
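As a rough illustration, the positive-pair training step of Eq. (2) could be sketched as follows, assuming PyTorch-style generator and loss modules; the criterion callables stand in for the perceptual and adversarial losses of [46], and the function itself is a hypothetical sketch rather than the released training code.

```python
import random
import torch

def generator_step(net_g, criterion_percep, criterion_gan, optimizer_g,
                   lr_img, positive_gts, alpha=1.0, beta=1.0, gamma=0.1):
    """One generator update with a randomly chosen positive GT (Eq. 2).

    lr_img        : LR input batch, shape (B, 3, h, w)
    positive_gts  : list of positive GT batches aligned with the LR inputs
    criterion_*   : placeholder callables for the perceptual and adversarial terms
    """
    gt = random.choice(positive_gts)                     # one positive GT per iteration
    sr = net_g(lr_img)
    loss = (alpha * torch.nn.functional.l1_loss(sr, gt)  # L_1 term
            + beta * criterion_percep(sr, gt)            # L_p term
            + gamma * criterion_gan(sr))                 # L_adv term on the SR output
    optimizer_g.zero_grad()
    loss.backward()
    optimizer_g.step()
    return loss.item()
```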
**Training with both positive and negative pairs.** By filtering out those groups that only contain "Similar" GTs, we obtain 19,272 groups that have at least one "Positive" or "Negative" GT, totally 63,583 positive LR-GT pairs and 2,514 negative LR-GT pairs. When training with the positive GTs, we adopt the same strategy as described above. For each negative LR-GT pair, we introduce a negative loss, denoted by \(L_{neg}\), to update the model. It is observed that most of the negative GTs have over-sharpened details, strong noise or false details (example images are provided in the **supplementary file**). Inspired by LDL [31], we build a map \(\mathbf{M}^{Neg}\) to indicate the local residual variation of a negative GT, which is defined as \(\mathbf{M}^{Neg}_{i,j}=var(\mathbf{R}^{Neg}_{i,j}(3,3))^{a}\), where \(\mathbf{R}^{Neg}=|I^{Neg}-I^{H}|\) is the residual between the original HR image and the negative GT, \(\mathbf{R}^{Neg}_{i,j}(3,3)\) is a local \(3\times 3\) window of \(\mathbf{R}^{Neg}\) centered at \((i,j)\), \(var\) denotes the variance operation and \(a\) is the scaling factor (we set \(a\) to \(\frac{3}{4}\) in our experiments). Similarly, we can build a residual variation map \(\mathbf{M}^{Pos}_{i,j}=var(\mathbf{R}^{Pos}_{i,j}(3,3))^{a}\) for the positive GT, where \(\mathbf{R}^{Pos}=|I^{Pos}-I^{H}|\). At location \((i,j)\), if the negative residual variation is higher than the positive one, we identify this pixel at \(I^{Neg}\) as a truly negative pixel, which should be used to update the model. Therefore, we first define an indication map \(\mathbf{M}^{Ind}_{i,j}\): \[\mathbf{M}^{Ind}_{i,j}=\left\{\begin{array}{cc}0,&\mathbf{M}^{Neg}_{i,j}<=\mathbf{M}^{ Pos}_{i,j}\\ \mathbf{M}^{Neg}_{i,j},&\mathbf{M}^{Neg}_{i,j}>\mathbf{M}^{Pos}_{i,j}\end{array}\right. \tag{3}\] and then define the negative loss \(L_{neg}\) as follows: \[L_{neg}=||\mathbf{M}^{Ind}\odot(I^{Neg}-I^{SR})||_{1}, \tag{4}\] where \(\odot\) means dot product. Finally, the overall training loss is defined as: \[L_{Total}=\alpha L_{1}+\beta L_{p}+\gamma L_{adv}-\delta L_{neg}, \tag{5}\] where \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) are balance parameters. ## 5 Experimental Results ### Experiment Setup To validate the effectiveness of our human guided GT (HGGT) dataset and the role of negative GTs, we perform two sets of experiments. First, in Sec. 5.2, we train several representative Real-ISR models, such as Real-ESRGAN [46], BSRGAN [51], AdaTarget [21] and LDL [31] on the DF2K-OST dataset and our HGGT dataset, and compare their performance. Second, in Sec. 5.3, we train two commonly used Real-ISR backbones (RRDB [46, 48] and SwinIR [29]) on our dataset by using only the postive GTs and using both the positive and negative GTs. **Implementation details.** Before training a model on our dataset, we first pre-train it on the DF2K-OST dataset by using the pixel-wise \(\ell_{1}\) loss to get a stable initialization. Since the original degradation settings in Real-ESRGAN [46] and \begin{table} \begin{tabular}{|c||c c c c||c|} \hline Final & \multicolumn{5}{c||}{Enhance Model} & Total \\ Label & 1 & 2 & 3 & 4 & Total \\ \hline \hline Positive & 15250 & 13907 & 17236 & 17190 & 63583 \\ Similar & 4379 & 5635 & 2517 & 2144 & 14675 \\ Negative & 564 & 651 & 440 & 859 & 2514 \\ \hline \hline Total & 20193 & 20193 & 20193 & 20193 & 80772 \\ \hline \end{tabular} \end{table} Table 2: The distribution of final patch labels in our dataset. 
There are \(20,193\times 4=80,772\) enhanced patches, each having three annotations. We take the majority annotation label as the final label of each patch. \begin{table} \begin{tabular}{|c||c c c c c|} \hline “Positive” Count & 0 & 1 & 2 & 3 & 4 & Total \\ \hline Groups count & 1267 & 996 & 2616 & 3901 & 11413 & 20193 \\ \hline \end{tabular} \end{table} Table 3: The distribution of the number (\(0\sim 4\)) of final “Positive” patches per group in our dataset. BSRGAN [51] is too strong to use in practical ISR applications, we adopt a single-stage degradation process, including blur, noise, down-sampling and JPEG compression with moderate intensities. Detailed settings and visual examples are provided in the **supplementary file**. For the two backbones, RRDB and SwinIR, we utilize the UNet discriminator [46] for adversarial training, resulting in a RRDB-GAN model and a SwinIR-GAN model. We conduct Real-ISR experiments with scaling factor 4 in this paper. We randomly crop training patches of size \(256*256\) from the GT images, and resize the corresponding regions in the LR images to \(64*64\). The batch size is set to 12 for RRDB backbone and 8 for SwinIR backbone to save GPU memory. We train our model on one NVIDIA RTX 3090 GPU for 300K iterations using the Adam [24] optimizer. The initial learning rate is set to \(1e-4\), and we halve it after 200K iterations for RRDB backbone, and 200K, 250K, 275K and 287.5K iterations for SwinIR backbone. The balance parameters \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) in Eq. 5 are set to 1, 1, 0.1 and 300, respectively. \(\delta\) is set much larger than others because the number of negative GTs is much smaller than positive ones. **Testing set.** To evaluate the performance of Real-ISR models trained on our dataset quantitatively, we construct a test set using the same steps as in the construction of our training set. In specific, \(100\) patch groups with at least 2 'Positive' GTs are constructed. The input LR patches are generated by using the same degradation process as in the training process. The LR patches together with their GTs are used to quantitatively evaluate the Real-ISR models. We denote this dataset as _Test-100_. **Evaluation protocol.** For the quantitative evaluation on _Test-100_, we adopt the commonly used PSNR, SSIM [49] LPIPS [53] and DISTS [7] as quality metrics. Since in _Test-100_ one LR image has at least 2 positive GTs, we average the PSNR/SSIM/LPIPS/DISTS scores respectively over the multiple positive GTs as the final scores. For the qualitative evaluation, we invite 12 volunteers to perform subjective assessment, and report the count of preferred Real-ISR models as the user study results. ### DF2K-OST Dataset vs. Our HGGT Dataset We first evaluate the effectiveness of the proposed dataset by training representative Real-ISR models respectively on the DF2K-OST dataset and the positive GTs of our HGGT dataset. Four state-of-the-art Real-ISR models are employed, including Real-ESRGAN [46], BSRGAN [51], AdaTarget [21] and LDL [31]. For Real-ESRGAN and BSRGAN, we adjust the degradation parameters so that the quality of synthesized training LR images is comparable to the LR images in our test set. For AdaTarget and LDL, we use the single-stage degradation as explained in Sec. 5.1, and employ the loss functions in the original papers. All models are firstly pre-trained on DF2K-OST with \(\ell_{1}\) loss. The UNet discriminator [46] is used for adversarial training in our experiments. 
Quantitative comparisons are reported in Table 4 and visual comparisons are shown in Figure 3. As shown in Table 4, training on our HGGT dataset leads to much better LPIPS/DISTS scores than training on the DF2K-OST dataset. Specifically, the LPIPS/DISTS scores are significantly improved by \(10.14\)%/\(12.40\)%, \(16.30\)%/\(15.27\)%, \(17.45\)%/\(18.91\)% and \(19.23\)%/\(21.99\)%, respectively, for Real-ESRGAN, BSRGAN, LDL and AdaTarget-GAN. This indicates a clear advantage in perceptual quality brought by our dataset. Some visual examples are shown in Figure 3. One can see that the models trained on our positive GTs produce perceptually more pleasing results than the models trained on DF2K-OST. The images reconstructed with our dataset have sharper textures and richer details. This is because the original GTs in the DF2K-OST dataset have mixed visual qualities, where a large number of local patches are smooth. In comparison, in our HGGT dataset, the perceptual quality of positive GTs is much enhanced, and the smooth or artifactual patches are manually removed. These improvements to the training data bring clear advantages to the trained Real-ISR models. More visual results are provided in the **supplementary file**. As a common problem of GAN-based models, the superior perceptual quality sacrifices the pixel-wise fidelity that is measured by PSNR and SSIM. This trade-off, which is mainly caused by the ill-posed nature of image restoration tasks, has been discussed in previous studies [51]. It is well known that the pixel-wise metrics do not correlate well with visual quality [41, 27, 40]. In addition, in our proposed HGGT dataset, the perceptual quality of the GTs is improved by using GAN-based enhancement models, so that the pixel-wise correlations may not be well preserved in the data. However, human observers generally prefer the enhanced images in our annotation process, and the perceptually more pleasing results demonstrate the significance of our proposed HGGT dataset in improving the upper bound of Real-ISR tasks. The main goal of the proposed HGGT dataset is to improve the perceptual quality of Real-ISR outputs by introducing human perception into the training pair generation. We perform a user study to validate the effectiveness of our strategy by inviting 12 volunteers to evaluate the Real-ISR results on the _Test-100_ dataset. For each of the four Real-ISR methods, _i.e._, Real-ESRGAN, BSRGAN, AdaTarget-GAN and LDL, the two models trained on the DF2K-OST dataset and on the positive GTs of our HGGT dataset are compared. Each time, the Real-ISR results of the two models on the same LR input are shown to the volunteers in random order, and the volunteers are asked to choose the perceptually better one based on their evaluation. The statistics of the user study are shown in Fig. 5. It should be noted that the volunteers invited in this user study did not participate in the annotation process of our dataset. As shown in Fig. 5, the majority of participants (more than 80% for all tests) prefer the models trained on our HGGT dataset. This validates the effectiveness of the proposed approach and dataset, which can be applied in a plug-and-play manner to most existing Real-ISR methods and improve their performance by a large margin. For the images where the models trained on DF2K-OST are selected, we observe that they mostly contain flat and smooth regions, and the results of the two models are actually very close.
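The relative LPIPS/DISTS improvements quoted above follow directly from the Table 4 entries; a small check in Python, with the values copied from Table 4:

```python
# (LPIPS, DISTS) per method and per training set, from Table 4
models = {
    "Real-ESRGAN":   {"DF2K-OST": (0.2593, 0.1806), "Positive GT": (0.2330, 0.1582)},
    "BSRGAN":        {"DF2K-OST": (0.2865, 0.1880), "Positive GT": (0.2398, 0.1593)},
    "LDL":           {"DF2K-OST": (0.2304, 0.1676), "Positive GT": (0.1902, 0.1359)},
    "AdaTarget-GAN": {"DF2K-OST": (0.2335, 0.1687), "Positive GT": (0.1886, 0.1316)},
}

for name, scores in models.items():
    (lp0, d0), (lp1, d1) = scores["DF2K-OST"], scores["Positive GT"]
    # relative reduction of each (lower-is-better) metric, in percent
    print(f"{name}: LPIPS -{100 * (lp0 - lp1) / lp0:.2f}%, DISTS -{100 * (d0 - d1) / d0:.2f}%")
```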
### The Effectiveness of Negative GTs We then evaluate the effectiveness of the negative GTs in our HGGT dataset. We first train the baseline model on the original HR images that are used to build our dataset. Then, we train the models on positive GTs only by using Eq. (2), as illustrated in Section 4. Finally, we train the models on both positive and negative GTs by using Eq. (5). The CNN-based RRDB and transformer-based SwinIR backbones are used to train Real-ISR models. Due to the limit of space, quantitative comparisons of the trained models are reported in the **supplementary file**. Visual comparisons are shown in Figure 4, which provides more intuitive evidence of the effectiveness of the annotated negative GTs. As shown in the second column, the models trained on original HR images yield blurry details and irregular patterns, especially in areas with dense textures. This is mainly caused by the low and mixed quality of the original HR images. In contrast, training on our positive GTs can produce much sharper textures and richer details, whereas there remain some false details and visual artifacts (see the windows of the building). Further, training on both positive and negative GTs leads to a more balanced visual performance. Some over-enhanced local pixels can be suppressed, while the textures remain sharp and regular. This is owing to the effective annotation of negative samples, which brings useful human perception guidance into the data for model training. More visual results can be found in the **supplementary file**.
Figure 3: Visual comparison of state-of-the-art models trained on the DF2K-OST and our proposed HGGT datasets. The 1st and 3rd rows show the results of models trained on DF2K-OST, while the 2nd and 4th rows show the results of models trained on our positive GTs. The left column shows the original GT and the positive GT in our dataset. **Please zoom in for better observation**.
## 6 Conclusion In this paper, we elaborately designed a human guided ground-truth (GT) generation method for realistic image super-resolution (Real-ISR). We first trained four image enhancement models to improve the perceptual quality of original high resolution images, and then extracted structural and textural patches from the enhanced images. Finally, human subjects were invited to annotate the perceptual quality of extracted patches as positive and negative GTs, resulting in the human guided ground-truth (HGGT) dataset. The sharper textures and richer details in the positive GTs could largely improve the performance of trained Real-ISR models, while the negative GTs could provide further guidance for the model to avoid generating visual artifacts. Extensive experiments validated the effectiveness of the proposed HGGT dataset and the training strategies. **Acknowledgement.** We thank Dr. Lida LI for providing support in GPU server configuration and the many people participating in data collection and annotation.
\begin{table} \begin{tabular}{|c||c||c|} \hline \multirow{2}{*}{Method} & Train & \multirow{2}{*}{PSNR/SSIM/LPIPS/DISTS} \\ & Dataset & \\ \hline \hline \multirow{2}{*}{Real-ESRGAN} & DF2K-OST & 21.9797/0.6173/0.2593/0.1806 \\ & Positive GT & 21.5379/0.6078/0.2330/0.1582 \\ \hline \multirow{2}{*}{BSRGAN} & DF2K-OST & 21.7083/0.6092/0.2865/0.1880 \\ & Positive GT & 20.9037/0.5898/0.2398/0.1593 \\ \hline \hline \multirow{2}{*}{LDL} & DF2K-OST & 22.4724/0.6394/0.2304/0.1676 \\ & Positive GT & 22.0190/0.6325/0.1902/0.1359 \\ \hline \hline \multirow{2}{*}{AdaTarget-GAN} & DF2K-OST & 22.3944/0.6360/0.2335/0.1687 \\ & Positive GT & 21.9216/0.6301/0.1886/0.1316 \\ \hline \end{tabular} \end{table} Table 4: The quantitative results of different Real-ISR models trained on the DF2K-OST and our HGGT datasets, evaluated on _Test-100_. Figure 4: Visualizations of RRDB-GAN and SwinIR-GAN models trained on the original HR (Ori HR) patches, positive GTs (Pos GT) only, and both positive and negative GTs (Pos+Neg GT) in our HGGT dataset. The top and bottom rows show the results of RRDB-GAN and SwinIR-GAN, respectively. From left to right are the results of bicubic interpolation and of the models trained on the Ori HR, Pos GT and Pos+Neg GT, respectively. **Please zoom in for better observation**. Figure 5: User study results on the Real-ISR models trained on the DF2K-OST dataset (the blue bar) and the positive GTs in our HGGT dataset (the red bar).
2307.11389
Two-stage, low noise quantum frequency conversion of single photons from silicon-vacancy centers in diamond to the telecom C-band
The silicon-vacancy center in diamond holds great promise as a qubit for quantum communication networks. However, since the optical transitions are located within the visible red spectral region, quantum frequency conversion to low-loss telecommunication wavelengths becomes a necessity for its use in long-range, fiber-linked networks. This work presents a highly efficient, low-noise quantum frequency conversion device for photons emitted by a silicon-vacancy (SiV) center in diamond to the telecom C-band. By using a two-stage difference-frequency mixing scheme SPDC noise is circumvented and Raman noise is minimized, resulting in a very low noise rate of $10.4 \pm 0.7$ photons per second as well as an overall device efficiency of $35.6\, \%$. By converting single photons from SiV centers we demonstrate the preservation of photon statistics upon conversion.
Marlon Schäfer, Benjamin Kambs, Dennis Herrmann, Tobias Bauer, Christoph Becher
2023-07-21T07:00:18Z
http://arxiv.org/abs/2307.11389v1
Two-stage, low noise quantum frequency conversion of single photons from silicon-vacancy centers in diamond to the telecom C-band ###### Abstract The silicon-vacancy center in diamond holds great promise as a qubit for quantum communication networks. However, since the optical transitions are located within the visible red spectral region, quantum frequency conversion to low-loss telecommunication wavelengths becomes a necessity for its use in long-range, fiber-linked networks. This work presents a highly efficient, low-noise quantum frequency conversion device for photons emitted by a silicon-vacancy (SiV) center in diamond to the telecom C-band. By using a two-stage difference-frequency mixing scheme SPDC noise is circumvented and Raman noise is minimized, resulting in a very low noise rate of \(10.4\pm 0.7\) photons per second as well as an overall device efficiency of \(35.6\,\%\). By converting single photons from SiV centers we demonstrate the preservation of photon statistics upon conversion. ## 1 Introduction The vast majority of systems suitable as a quantum emitter or memory for quantum communications feature optical transitions in the visible and near infrared spectral region, experiencing strong absorption losses in optical fibers. For this reason, quantum frequency conversion (QFC) into low-loss telecom bands in combination with advanced concepts of quantum communication such as quantum repeaters [1, 2] is the key enabling technology for long-range fiber-based quantum networks. Using this technology, important primitives of quantum network elements were realized recently, e.g. a telecom-wavelength quantum repeater node with trapped ions [3], entanglement of remote rubidium atom quantum memories via telecom photons [4, 5] and two-photon interference from independent NV centers in diamond [6], representing an advanced hardware platform for quantum networks [7]. Among the various hardware platforms for quantum communication the silicon-vacancy (SiV) center in diamond stands out due to a number of favorable properties [8]. In particular, the long spin coherence time [9], Fourier-limited linewidths [10], and excellent coupling to nanophotonic resonators with high cooperativity [11, 12] enabled the demonstration of essential elements of quantum repeaters [13, 14, 15]. However, quantum frequency conversion of SiV photons into the telecom C-Band is particularly demanding. Direct conversion schemes using a 1409 nm pump beam to reach the target wavelength of 1550 nm suffer from strong pump-induced noise caused by Raman scattering and spontaneous parametric down-conversion (SPDC), prohibitive of reaching the single photon conversion regime [16]. Similar constraints hold for the direct conversion of photons from NV centers in diamond, typically pursued employing pump light at 1064 nm [17, 18, 19]. We here present efficient and low-noise QFC of single SiV photons following a two-stage conversion scheme. Periodically poled lithium niobate (PPLN) waveguides are used to first transduce the SiV photons to an intermediate wavelength, followed by a subsequent second conversion to the target telecom wavelength. For the two-stage difference frequency generation, in contrast to direct conversion, the chosen pump wavelength at 2812.6 nm is far above the target wavelength. Thereby, we circumvent SPDC noise and minimize Raman noise. This two-stage QFC technique was proposed by Pelc et al. [20] and first implemented by Esfandyarpour et al. 
[21] in a QFC device down-converting 650 nm photons to 1590 nm in two cascaded waveguides integrated on a chip. There, an overall device efficiency of 36 % was achieved; however, the converted signal was not coupled into a fiber, but measured in free space. A further implementation of two-stage QFC demonstrated conversion of photons from a trapped Ba\({}^{+}\) ion at 493 nm to the telecom C-band, albeit with efficiencies of a few percent [22], and, recently, to the telecom O-band with low noise and 11 % overall efficiency [23]. ## 2 Two-stage QFC device The quantum frequency conversion set-up transduces photons from a wavelength of 737.1 nm via 998.9 nm to 1549.0 nm in a two-stage, cascaded difference frequency generation process using two separate nonlinear crystals and the same pump wavelength of 2812.6 nm for both conversion stages. A Cr\({}^{2+}\):ZnSe laser (_IPG Photonics_) is used to generate a high-intensity, single-mode, single-frequency pump beam. The laser is tunable in a range of 2808 nm to 2820 nm; however, due to absorption bands in air [24], not all pump wavelengths are equally suitable. The experimental set-up is schematically depicted in Figure 1.
Figure 1: Schematic representation of the two-stage quantum frequency conversion set-up. Single photons resonant to SiV centers (737 nm, red) are mixed with a strong pump beam (2813 nm, blue) and down-converted in two separated periodically poled lithium niobate (PPLN) waveguides. In the first waveguide, conversion to an intermediate wavelength (999 nm, green) takes place, which is then subsequently transduced to the target telecom wavelength (1549 nm, purple). By using 90\({}^{\circ}\) off-axis parabolic mirrors (OAPM), signal and pump wavelengths are simultaneously coupled to the PPLN waveguide without chromatic aberration. Waveplates (WP) are used to manipulate the polarization of the pump light in the two waveguides and thus effectively control the pump power contributing to the DFG process in both waveguides independently. Broadband filtering with a bandpass and narrowband filtering with a Volume Bragg Grating (VBG) of 25 GHz bandwidth clears the converted signal of pump-induced noise photons before coupling it into the output fiber.
It consists of two separate PPLN waveguides (_NTT Electronics_), where the single photons and the high-intensity pump beam are coupled in simultaneously. The waveguides are temperature controlled via a Peltier element. The individual beams of different wavelengths are combined and split by means of dichroic mirrors (_Layertec_). In order to be able to set the pump power for both stages independently, we exploit the fact that only the fraction in s-polarization is relevant for the type-0 conversion process employed here. The ideal pump power for each conversion step is thus adjusted by rotating the linear polarization in front of the waveguides. In contrast to Esfandyarpour et al. [21], we do not use two waveguides integrated on the same chip, but two spatially separate waveguide chips instead. Thanks to this approach, the temperatures of the waveguides can be set independently, and thus the phase-matching temperatures for both DFG processes do not have to match. Moreover, we can control the pump power contributing to the conversion in both stages independently by manipulating its polarization. We use \(90^{\circ}\) off-axis parabolic mirrors (OAPM) to simultaneously couple the beams with different wavelengths into the waveguides, thereby avoiding chromatic aberration.
For the same reason, parabolic mirrors have already been used before in a set-up for polarization-preserving frequency conversion by Krutyanskiy et al. [25]. After the first conversion stage, the transmitted pump light and the single photons are separated again with dichroic mirrors to allow for spectral filtering of the intermediate wavelength and polarization manipulation of the pump. As lithium niobate shows birefringence, the pump beam may be elliptically polarized after passing the first conversion crystal. For this reason, a quarter-wave plate is needed in addition to a half-wave plate in order to obtain an s-polarized pump beam for the second conversion stage. Noise photons induced in the first conversion stage are removed using a bandpass filter (_Semrock_; bandwidth: \(234\,\mathrm{nm}\)). Subsequently, the two wavelengths are overlapped again and coupled into the second waveguide, where conversion to \(1549\,\mathrm{nm}\) takes place. A final dichroic mirror is used to separate the photons converted to the telecom C-band from the pump light. Before coupling into a single-mode fiber, a bandpass filter (_Thorlabs_) with a bandwidth of \(12\,\mathrm{nm}\) as well as a volume Bragg grating (_Optigrate_, \(25\,\mathrm{GHz}\) FWHM) cleans the converted signal of noise photons. ## 3 Performance of the QFC device The conversion efficiency of the device was measured using a Ti:Sa laser tuned in resonance with the zero-phonon line of SiV centers at \(737.12\,\mathrm{nm}\). Here, we achieve an overall external device efficiency of \(35.6\,\%\pm 0.1\,\%\), which is the conversion efficiency including all filtering and coupling losses, including coupling from and to single-mode fibers. The influence of the different contributions to the external efficiency is detailed in the following. Internal efficiencies are \(96.4\,\%\pm 0.1\,\%\) and \(75.8\,\%\pm 0.1\,\%\) for the first and second conversion stages, respectively, resulting in an overall internal efficiency of \(73.1\,\%\pm 0.1\,\%\). Internal efficiencies were determined by measuring signal depletion as a function of pump power for both stages separately, which is shown in Figure 2. As can be seen in Figure 2b, the internal efficiency of the second conversion stage from 999 nm to the telecom wavelength is limited by the amount of available pump power. The strong loss of pump power in the setup is largely caused by the different absorption bands present at wavelengths around \(2.8\,\mu\)m. To begin with, strong water absorption bands exist in the spectral region of the pump wavelength, leading to humidity-dependent absorption in air [24]. This being said, the wavelength of 2812.6 nm was chosen to minimize absorption in air, resulting in a loss of about 10 % up to the second waveguide. A second loss channel is given by absorption losses in the fused silica substrate of the dichroic mirrors. For fused silica it is well known that hydroxyl (OH) groups lead to a strong absorption band in the spectral region around 2750 nm [26, 27, 28]. In the fused silica used here, this absorption band was significantly weakened due to a reduced OH content. Still, for each dichroic mirror an absorption of about 10 % is measured, which is in agreement with the specifications by the vendor. Finally, the PPLN waveguides themselves are found to be a significant loss channel.
For LiNbO\({}_{3}\) doped with 5 mol % MgO and s-polarized light, absorption coefficients between 0.08 cm\({}^{-1}\)[29] and 0.088 cm\({}^{-1}\)[30] for the absorption maximum at 2826 nm are found in the literature. For the pump beam we measured a transmission of 63 % and 68 % for the first and second waveguide, respectively. However, we cannot specify the fraction of these losses due to absorption, since we cannot distinguish between absorption and coupling losses. Further contributions to the external device efficiency, i.e., losses at spectral filters and mirrors as well as the coupling efficiency to the output single-mode fiber, were all measured separately. The results are listed in Table 1. This way, we obtain a calculated external efficiency of \(34.5\%\pm 0.8\%\). The slight deviation from the external efficiency measured directly can be explained by fluctuations in the internal efficiency due to fluctuations of the power and of the spectral and spatial mode profiles of the pump laser. \begin{table} \begin{tabular}{l c} \hline Contribution to external efficiency & \% \\ \hline Transmission waveguide stage 1 & \(82.2\pm 0.3\) \\ Signal depletion stage 1 & \(96.4\pm 0.1\) \\ Bandpass 1000/234 & \(96.5\pm 0.3\) \\ Transmission waveguide stage 2 & \(84.3\pm 0.2\) \\ Signal depletion stage 2 & \(75.8\pm 0.1\) \\ Bandpass 1550/12 & \(98.0\pm 0.9\) \\ Volume Bragg grating & \(97.6\pm 0.5\) \\ Coupling efficiency in telecom fiber & \(83.0\pm 0.4\) \\ Reflection/Transmission other optical components & \(88.6\pm 1.7\) \\ \hline \(\Sigma\) & \(34.5\pm 0.8\) \\ \hline \end{tabular} \end{table} Table 1: The individual contributions to the measured external device efficiency of \(35.6\,\%\pm 0.1\,\%\). The transmission of the waveguides also includes the coupling efficiency. The measurement uncertainty results from fluctuations in pump power and signal power during measurement. Conversion noise is measured by setting the pump power to the optimal operating point and blocking the signal input while detecting the photon rate in the output fiber with superconducting nanowire single-photon detectors (SNSPD, _Single Quantum_). Integrating over 15 minutes yielded \(17.4\pm 0.2\) cps, of which \(12.1\pm 0.2\) cps can be assigned to dark counts of the detectors. Subtracting the dark counts and correcting for the detection efficiency of the SNSPD (72 %) and transmission losses to the detector (89 %), we get a noise photon rate induced by the pump during conversion of only \(10.4\pm 0.7\) photons per second or, correspondingly, 0.4 photons per second per gigahertz filter bandwidth. These few remaining noise photons can most probably be attributed to residual Raman scattering. Although the spectral gap between pump (2812.6 nm) and target (1550 nm) wavelengths is large (2900 cm\({}^{-1}\)), our assumption is supported by the measurement of a non-Lorentzian pedestal at high frequency shifts in the Raman spectrum of LiNbO\({}_{3}\) by Pelc et al. [31], showing the presence of Raman-scattered photons at frequency shifts of 1600 cm\({}^{-1}\) and above. By comparison, for the two-stage QFC set-up of Esfandyarpour et al. [21] the noise was estimated as 1.5 photons/s/GHz from measured Raman spectra (spectral gap: 1740 cm\({}^{-1}\)). The results are in good agreement, as a lower Raman noise is expected for the converter presented here due to the larger spectral gap between pump and target wavelength.
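The calculated external efficiency of 34.5 % quoted above is simply the product of the individual contributions in Table 1; a minimal check in Python, with row labels abbreviated from the table:

```python
contributions = {  # central values from Table 1, as fractions
    "transmission waveguide stage 1": 0.822,
    "signal depletion stage 1":       0.964,
    "bandpass 1000/234":              0.965,
    "transmission waveguide stage 2": 0.843,
    "signal depletion stage 2":       0.758,
    "bandpass 1550/12":               0.980,
    "volume Bragg grating":           0.976,
    "coupling into telecom fiber":    0.830,
    "other optical components":       0.886,
}

external_efficiency = 1.0
for value in contributions.values():
    external_efficiency *= value

# ~0.344, i.e. the quoted 34.5 % within rounding of the tabulated values
print(f"calculated external efficiency: {external_efficiency:.3f}")
```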
Figure 2: Signal depletion of the (a) first and (b) second conversion stage, the latter being limited by the available power of the pump beam. The data was fitted (red line) according to \(\eta_{\rm int}(P_{\rm pump})=\eta_{\rm max}\sin^{2}(\sqrt{\kappa_{\rm norm}P_{\rm pump}}L)\), where \(L=4\) cm is the length of the PPLN waveguide and the fit parameter \(\eta_{\rm max}\) denotes the maximum internal efficiency. ## 4 Single photon conversion To demonstrate the low-noise performance of the two-stage QFC, we here prove the preservation of single photon statistics by measuring the second-order autocorrelation function \(g^{(2)}(\tau)\) before and after conversion. It has to be noted that the experiment was performed with an earlier version of the quantum frequency converter, which at that time achieved 29 % external efficiency and a higher noise photon rate of about 500 cps. The latter is due to the fact that the Volume Bragg Grating had not yet been integrated in the device and only a bandpass with 12 nm bandwidth was used as a filter in front of the output fiber. Despite the higher noise count rate of the converter, preservation of the single photon character is unambiguously shown. Single photons were created from a sample consisting of diamond nanopillars containing SiV centers. The sample was cooled down in a helium flow cryostat to about 10 K. Under non-resonant excitation at 532 nm, we obtained a photon rate of about \(5.5\cdot 10^{5}\) cps collected into a single-mode fiber. The emission spectrum shown in Figure 3a reveals the characteristic four-line fine structure of the SiV center. Photon correlation measurements are performed using Hanbury-Brown Twiss (HBT) setups with avalanche photodiodes (APD, _Excelitas Technologies_) for photons at 737 nm and SNSPDs (_Single Quantum_) for telecom photons. For the unconverted single photons, a nonlinear least squares fit using a Levenberg-Marquardt algorithm yields a dip of the \(g^{(2)}(\tau)\) function to \(g^{(2)}(0)=0.48\pm 0.03\), see Figure 3c. The non-vanishing value of the \(g^{(2)}\)-function for \(\tau=0\) can be fully accounted for by the jitter of the detectors and the background fluorescence of the sample. Using the signal-to-background ratio (SBR) as a free fit parameter and setting the jitter to the 550 ps specified for the APDs, the fit result indicates an SBR of 7.5 dB with the 95 % confidence interval ranging from 6.5 dB to 8.8 dB. This is in agreement with the value of 8.5 dB calculated from the measured spectrum in Figure 3a by fitting the background fluorescence from the sample with a super-Gaussian and a sum of Lorentzian peaks. Due to the QFC device acceptance bandwidth of 77 GHz, only one fine structure line of the SiV spectrum can be converted. Since most of the intensity of the four fine structure lines is emitted into the C-transition, the acceptance bandwidth of the converter was centered around the C-transition at 737.12 nm by tuning the temperatures of the nonlinear crystals. As expected, the spectrum of the converted photons in Figure 3b shows a single peak at 1549 nm. After conversion to the telecom wavelength, the measurement of the \(g^{(2)}(\tau)\)-function (cf. Figure 3d) reveals a single photon purity of \(g^{(2)}(0)=0.24\pm 0.08\), which is better than for the unconverted photons. There are two reasons for this: On the one hand, the SNSPD's jitter of 310 ps (FWHM) is significantly lower than that of the APDs that were used for the unconverted photons.
Second, and despite the additional noise photons induced by the pump light, we in total get a higher signal-to-background ratio due to the spectral filtering of the SiV background by the 77 GHz acceptance bandwidth of the QFC device. Within the 77 GHz acceptance bandwidth, the background fluorescence is negligible compared to the signal.
Figure 3: (a) Spectrum of the SiV center used for conversion in semilogarithmic representation. A bandpass filter with a central wavelength of 735 nm and a linewidth of 11 nm is used as the detection filter. In orange, the background fluorescence from the sample is fitted with a super-Gaussian and a sum of Lorentzian peaks. The signal-to-background ratio (SBR) is obtained by dividing the background-subtracted signal by the background. (b) Spectrum of the single SiV photons after conversion. Due to the acceptance bandwidth of 77(4) GHz, only the C-transition of the SiV center is converted. The linewidth of the converted spectrum is limited by the spectrometer's resolution of about 20 GHz. (c) \(g^{(2)}\)-measurement of the unconverted photons. (d) \(g^{(2)}\)-measurement of the converted photons. The signal-to-background ratio is extracted by a fitting routine that uses the jitter of the detection unit as a fixed parameter.
The signal-to-background ratio of the converted photons is thus determined by the pump-induced noise during conversion. In this experiment, of the 20 kcps single photon rate detected by the SNSPDs, 500 cps was due to conversion-induced noise, hence yielding a signal-to-noise ratio of 15 dB for the converted photons. This estimation is consistent with the fit result of the \(g^{(2)}\)-function, yielding an SBR of 11.5 dB with the 95 % confidence interval ranging from 8.4 dB to 18.4 dB. Note that the \(g^{(2)}\)-measurement of the converted photons looks much noisier than that of the unconverted photons, since only about 20 kcps of the 550 kcps emitted by the SiV center are left after conversion. This can be explained as follows: First, the acceptance bandwidth of the converter implies that only the C-transition of the fine structure can be converted, containing only about 40 % of the total emission of the SiV center. Second, the conversion of the SiV photons took place in a different laboratory than the excitation of the emitter and photon collection. Due to losses at optical fiber patches, the transmission of the interconnecting fiber is about 61 %. Another loss factor is the 29 % device efficiency of the converter. Finally, the detection efficiency of the SNSPDs used in this experiment is 35 %, which is lower than the 60 % efficiency of the APDs. Combining all these factors yields for the converted photon rate: \(550\,\mathrm{kcps}\cdot 0.4\cdot 0.61\cdot 0.29\cdot\frac{0.35}{0.6}=23\,\mathrm{kcps}\), which corresponds very well to the count rate during the \(g^{(2)}\)-measurement.
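The count-rate budget for the converted single photons quoted above can be reproduced in a few lines; all factors are taken from the text:

```python
emitted_rate_kcps    = 550          # photons/s collected from the SiV center, in kcps
c_transition_share   = 0.40         # fraction of the emission within the 77 GHz acceptance band
fiber_transmission   = 0.61         # interconnecting fiber between the two laboratories
converter_efficiency = 0.29         # external efficiency of the earlier converter version
detector_ratio       = 0.35 / 0.60  # SNSPD (35 %) relative to APD (60 %) detection efficiency

detected_kcps = (emitted_rate_kcps * c_transition_share * fiber_transmission
                 * converter_efficiency * detector_ratio)
print(f"expected telecom-band count rate: {detected_kcps:.0f} kcps")  # ~23 kcps
```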
## 5 Conclusion and outlook In conclusion, we have shown a very low-noise yet highly efficient quantum frequency conversion device for converting photons resonant to silicon-vacancy centers in diamond to the telecom C-band. We used a two-stage conversion scheme, where photons are first converted to an intermediate wavelength, followed by a subsequent conversion to the target wavelength. In this way, we succeeded in achieving a very low conversion-induced noise rate of \(10.4\pm 0.7\) photons per second or, taking into account the 25 GHz filter bandwidth, 0.4 photons per second per gigahertz filter bandwidth. This small noise level should be compared to the direct, one-stage conversion of attenuated laser pulses at 738 nm to the telecom C-band, which resulted in a noise rate of about 400,000 photons per second in a 95 GHz filter bandwidth [16]. Even though it was possible to reduce the noise rate by additional temporal filtering, a minimum noise level of 2000 photons per second or, equivalently, 21 photons per second per gigahertz was achieved. This additionally required a reduction of the pump power and thus of the internal efficiency to about 60 %. The low noise level demonstrated here also compares favorably to other QFC schemes at shorter wavelengths [17, 21, 22, 18, 23, 19]. The overall device efficiency of 35.6 % is mainly limited by the pump power being absorbed in the optical components of the device, since the pump wavelength is in the range of the absorption bands of air, fused silica and lithium niobate. As a result, the remaining pump power in the second conversion stage is not sufficient to achieve maximum conversion efficiency, leading to an internal efficiency of only 76 % in the second stage. By increasing the pump power in the second conversion stage, either by using a more powerful pump laser or components with lower absorption, an internal efficiency of the second stage of 90 % or more is expected. Assuming further that it is possible to achieve the same coupling efficiencies as have been achieved in [32] for 780 nm photons, i.e., 87.6 % for fiber coupling and 90.0 % for waveguide coupling, an improvement of the overall device efficiency to about 50 % should be feasible for a two-stage SiV converter. Finally, we successfully converted single photons emitted by a silicon-vacancy center in diamond and demonstrated the preservation of the single photon statistics upon conversion. Despite the conversion-induced noise photons, the single photon purity improved to a value of \(g^{(2)}(0)=0.24\pm 0.08\) after conversion, which can be explained by the spectral filtering of the background fluorescence within the conversion process. The single photon conversion results presented here for an early version of the converter are potentially further improved by the better noise performance of the final device. Looking ahead, the two-stage conversion design presented here can also be used to convert single photons from other single quantum emitters or qubit systems that suffer from similarly high noise rates after direct, one-stage quantum frequency conversion, e.g., the nitrogen-vacancy [17] and the tin-vacancy center [33] in diamond, emitting at 637 nm and 619 nm, respectively. ## Acknowledgment We thank Johannes Gorlitz for help with the single photon conversion experiments and David Lindler for helpful discussions. We acknowledge support by the German Federal Ministry of Education and Research (Bundesministerium fur Bildung und Forschung, BMBF) through projects HiFi (13N15926) and QR.X (16KISQ001K).
2303.15418
Toward Robust Corrections for Stellar Contamination in JWST Exoplanet Transmission Spectra
Transmission spectroscopy is still the preferred characterization technique for exoplanet atmospheres, although it presents unique challenges that translate into characterization bottlenecks when robust mitigation strategies are missing. Stellar contamination is one such challenge that can overpower the planetary signal by up to an order of magnitude, and thus not accounting for it can lead to significant biases in the derived atmospheric properties. Yet this accounting may not be straightforward, as important discrepancies exist between state-of-the-art stellar models and measured spectra and between models themselves. Here we explore the extent to which stellar models can be used to reliably correct for stellar contamination and yield a planet's uncontaminated transmission spectrum. We find that discrepancies between stellar models can significantly contribute to the noise budget of JWST transmission spectra of planets around stars with heterogeneous photospheres, the true number of unique photospheric spectral components and their properties can only be accurately retrieved when the stellar models have sufficient fidelity, and under such optimistic circumstances the contribution of stellar contamination to the noise budget of a transmission spectrum is considerably below that of the photon noise for the standard transit observation setup. Therefore, we advocate for further development of model spectra of stars and their active regions in a data-driven manner, empirical approaches for deriving spectra of photospheric components using the observatories with which the atmospheric explorations are carried out, and analysis techniques accounting for multimodal posterior distributions for photospheric parameters of interest, which will be increasingly revealed by precise JWST measurements.
Benjamin V. Rackham, Julien de Wit
2023-03-27T17:40:53Z
http://arxiv.org/abs/2303.15418v2
# Towards robust corrections for stellar contamination in JWST exoplanet transmission spectra ###### Abstract Transmission spectroscopy is still the preferred characterization technique for exoplanet atmospheres, although it presents unique challenges which translate into characterization bottlenecks when robust mitigation strategies are missing. Stellar contamination is one of such challenges that can overpower the planetary signal by up to an order of magnitude, and thus not accounting for stellar contamination can lead to significant biases in the derived atmospheric properties. Yet, accounting for stellar contamination may not be straightforward, as important discrepancies exist between state-of-the-art stellar models and measured spectra and between models themselves. Here we explore the extent to which stellar models can be used to reliably correct for stellar contamination and yield a planet's uncontaminated transmission spectrum. We find that (1) discrepancies between stellar models can dominate the noise budget of _JWST_ transmission spectra of planets around stars with heterogeneous photospheres; (2) the true number of unique photospheric spectral components and their properties can only be accurately retrieved when the stellar models have a sufficient fidelity; and (3) under such optimistic circumstances the contribution of stellar contamination to the noise budget of a transmission spectrum is considerably below that of the photon noise for the standard transit observation setup. Therefore, we suggest (1) increased efforts towards development of model spectra of stars and their active regions in a data-driven manner; and (2) the development of empirical approaches for deriving spectra of photospheric components using the observatories with which the atmospheric explorations are carried out. Transmission spectroscopy (2133); Stellar atmospheres (1584); Planet hosting stars (1242); Exoplanet atmospheres (487); Fundamental parameters of stars (555); Starspots (1572) + Footnote †: 51 Pegasi b Fellow 0000-0002-3870-788X]Benjamin V. Rackham 0000-0002-4880-7880]Julien de Wit ## 1 Introduction Transmission spectroscopy, the multiwavelength study of the shadows cast by transiting exoplanets (e.g., Seager and Sasselov, 2000; Brown, 2001), provides a powerful tool for constraining the physical structure and chemical composition of exoplanet atmospheres, as recently demonstrated by the _JWST_ Early Release Science observations of WASP-39b (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023; Ahrer et al., 2023; Alderson et al., 2023; Feinstein et al., 2023; Rustamkulov et al., 2023). However, the transmission spectrum only contains information related to the wavelength-dependent opacity of a planet's atmosphere alone when the stellar disk is limb darkened but otherwise featureless. For stars with notable coverage of photospheric heterogeneities like spots and faculae, the difference in the hemisphere-averaged emission spectrum of the star and the transit-chord-averaged one imprints features in the transmission spectrum (e.g., Sing et al., 2011; McCullough et al., 2014), a phenomenon dubbed the transit light source (TLS) effect (Rackham et al., 2018, 2019). Active FGK stars and nearly all M dwarfs are expected to produce detectable TLS, or "stellar contamination," signals in precise transmission spectra (Rackham et al., 2018, 2019). 
Given this context, recent work has sought to constrain and mitigate for the heterogeneity of the stellar disk at the time of transit by leveraging the stellar spectrum collected during the out-of-transit baseline. Specifically, the temperatures and filling factors of the different photospheric components are constrained to later correct for their contributions to the joint in-transit spectrum (e.g., Zhang et al., 2018; Wakeford et al., 2019; Garcia et al., 2022). Early on, these mitigation studies revealed two bottlenecks. First, fitting host-star spectra with the precisions afforded by space-based platforms is a challenge for current models, especially for late-M dwarfs such as TRAPPIST-1 (Gillon et al., 2016, 2017). Zhang et al. (2018) showed that the uncertainties on the HST/WFC3/G141 spectra of TRAPPIST-1 need to be inflated by factors of \(\sim\)23 to produce adequate fits with respect to stellar models. The subsequent studies of Wakeford et al. (2019) and (Garcia et al., 2022) yielded consistent challenges, which are expected to worsen in the _JWST_ era following a substantial increase in precision (see recent review from Rackham et al., 2022). Second, with the current data quality and model fidelity, an ensemble of models may fit an out-of-transit spectrum equally well, leading to a range of corrected atmospheric spectra with a scatter many times larger than the photon noise (see, e.g., Fig. 6 from Wakeford et al., 2019 and Fig. 7 from Garcia et al., 2022). As stellar contamination can overpower the planetary signal by up to an order of magnitude (Rackham et al., 2018), not accounting for it when performing atmospheric retrieval can lead to important biases in inferred planetary atmospheric parameters (Iyer and Line, 2020). A zeroth-order mitigation strategy to account for the imperfections of stellar models and avoid biases in the corrected planetary spectra is thus to inflate the uncertainties of the stellar spectra (e.g., Zhang et al., 2018), thereby decreasing the precision of planetary inferences. However, the optimal study of exoplanet atmospheres with current facilities demands refined mitigation approaches that can harness the precision of these observations to reduce biases and uncertainties as much as possible. Here we explore the limits of using baseline out-of-transit observations to infer the photospheric properties of cool stars and mitigate for stellar contamination in transmission spectra, with a particular consideration for the fidelity of current stellar models. Our analysis complements that recently conducted for opacity models by Niraula et al. (2022, hereafter N22). Finally, we evaluate how the contribution of the stellar contamination to the noise budget scales with the ratio out- vs in-transit of observations. We first investigate the utility of out-of-transit _JWST_ spectra for identifying a complex photosphere with multiple spectral components and whether inferences are limited by the data quality at hand or the fidelity of stellar spectral models. If the later, this means that new stellar models (theoretical or empirical) may help us move towards a photon-noise-dominated regime. Then, setting aside model fidelity, we assess whether this approach permits inferences of photospheric parameters that are accurate and precise enough to reduce biases in transmission spectra. Finally, we evaluate the contribution of stellar contamination to the total noise and how this scales with the ratio of the out-of-transit to in-transit observations. 
Note that we focus in this paper on configurations in which heterogeneities are present but not occulted by the transiting exoplanet. Mitigation strategies for occulted active regions can be found in other studies (e.g., Fu et al., 2022). This paper is organized as follows. Section 2 presents our approach for generating the synthetic datasets for analysis. Section 3 details our retrieval approach for inferring constraints from simulated out-of-transit stellar spectra. Section 4 shares our results, and Section 5 summarizes our findings while placing them in the larger context of JWST observations. ## 2 Data synthesis In order to explore the ability of current stellar models to support the reliable correction of stellar contamination in _JWST_ exoplanet transmission spectra, we follow a sensitivity analysis similar to that introduced in N22 for opacity models. We explore two systems and five levels of heterogeneity in our sensitivity analysis, which we describe in the following section. ### Properties of the Synthetic Systems We adopt synthetic systems similar to those introduced in N22 as representative examples of planets that would be high-priority targets for _JWST_. These correspond to an Earth-sized planet around an M-dwarf star and a Jupiter-sized planet around a K-dwarf star. The warm Jupiter has a mass of \(1\,M_{\rm jup}\), radius of \(1\,R_{\rm jup}\), a reference temperature of \(500\,\)K, and a transit duration of \(5.80\,\)hr. The super Earth has a mass of \(1\,M_{\oplus}\), radius of \(1\,R_{\oplus}\), a reference temperature of \(300\,\)K, and a transit duration of \(1.00\,\)hr. The details of the atmospheric model of each planet, given in Table 2 of N22, are not important for this analysis, as we are interested instead in the impact of the host stars. The K dwarf has an effective temperature of \(T_{\rm eff}=5270\,\)K, a stellar mass of \(M_{s}=0.88\,M_{\odot}\), and a stellar radius of \(R_{s}=0.813\,R_{\odot}\), parameters which correspond to a K0 dwarf (Pecaut and Mamajek, 2013). For the M dwarf, \(T_{\rm eff}=2810\,\)K, \(M_{s}=0.102\,M_{\odot}\), and \(R_{s}=0.137\,R_{\odot}\), corresponding to an M6 dwarf (Pecaut and Mamajek, 2013). In both cases, we consider solar metallicity stars ([Fe/H] = 0.0). We also adopt a brightness of \(J=11\) for both host stars, giving distances of \(191\,\)pc and \(20.5\,\)pc for the K0 and M6 stars, respectively. For each host star, we consider five photospheric heterogeneity scenarios, detailed in Table 1. The first case, case 1, is a quiescent star, for which the quiescent-photosphere spectrum is the only spectral component present on the stellar disk. The next two cases, case 2l and case 2h, are for a star with two spectral components, those of the quiescent photosphere and a spot. The spot coverage is 1% in the low-activity case (case 2l) and 5% in the high-activity case (case 2h). The last two cases, case 3l and case 3h, are for a star with three spectral components, those of the quiescent photosphere, spots, and faculae. The coverages of the spots are the same as the previous low-activity and high-activity cases, and the coverages of the faculae are 10% and 30% for case 3l and case 3h, respectively. For each star, we adopt the effective temperature as the temperature of the quiescent photosphere. For the K dwarf, we set the spot and facula temperatures to 3830 K and 5380 K, respectively, following Rackham et al. (2019).
For the M dwarf, we set the spot temperature to 86% of the photospheric temperature (2420 K) and the facula temperature to 2910 K, following Afram and Berdyugina (2015) and Rackham et al. (2018), respectively. We generate the model truth of the (out-of-transit) stellar spectrum as the linear combination of the constituent spectra weighted by their filling factors. We do not assume any specific position on the stellar disk for the spots and faculae besides that they are present outside of the transit chord and thus undetectable via crossing events (e.g., Fu et al., 2022). As a result, we take the component spectra to be representative of spots at all positions and neglect the impact of limb darkening. ### Stellar Spectral Models We perform what we call "direct" and "cross" retrievals to explore the impact of imperfections in the stellar spectral models on our inferences. In both cases, the synthetic data are generated using the PHOENIX stellar spectral model grid1(Husser et al., 2013). Relevant to our purposes, the PHOENIX grid spans effective temperatures of \(T_{\rm eff}\in[2300,7000]\) K in 100 K steps and surface gravities of \(\log g\in[0.0,6.0]\) in steps of 0.5. For all spectral models, we linearly interpolate between grid points in terms of \(T_{\rm eff}\), [Fe/H], and \(\log g\) using the speclib package2. Footnote 1: [http://phoenix.astro.physik.uni-goettingen.de/](http://phoenix.astro.physik.uni-goettingen.de/) Footnote 2: [https://github.com/brackham/speclib](https://github.com/brackham/speclib) For the cross retrievals, we use other model grids to retrieve on the data. This allows us to examine potential limitations introduced by the models under the assumption that the differences between state-of-the-art models provide a proxy of the differences between the models and reality. At the sampling of our simulated datasets (see Section 2.4), these differences average \(\sim\)10 ppt for K0 stars and earlier types and \(\sim\)200 ppt for M6 stars--with local differences above 100% (Figure 1). Considering that planetary signals within reach of _JWST_'s precision can be of the order of a few hundred parts per million (e.g., Rustankulov et al., 2023), it is crucial to explore how uncertainties stemming from model fidelity will challenge our retrievals that incorporate TLS signals. Due to the different temperature regimes of the state-of-the-art model grids, we used different models for the K0 and M6 cross retrievals. For the K0 case, we used the MPS-ATLAS model grid (Witzke et al., 2021; Kostogryz et al., 2023). This grid spans effective temperatures of \(T_{\rm eff}\in[3500,9000]\) K in 100 K steps and surface gravities of \(\log g\in\{3.0,3.5,4.0,4.2,4.3,4.4,4.5,4.6,4.7,5.0\}\). For the M6 case, we used the SPHINX model grid (Iyer et al., 2023). This grid spans effective temperatures of \(T_{\rm eff}\in[2000,4000]\) K in 100 K steps and surface gravities of \(\log g\in[4.0,5.5]\) in steps of 0.25. As with the direct retrievals, we fixed the metallicity ([Fe/H] or [M/H]) of all spectra to 0. For the SPHINX model grid, we also fixed C/O = 0.5. All three model grids--PHOENIX, MPS-ATLAS, and SPHINX--are calculated at higher spectral resolutions than provided by NIRSpec/PRISM (\(R\)\(\sim\)100), the instrument for our simulated observations (see Section 2.4), so we downsampled the spectra to match the wavelengths and resolution of the data. 
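As described in Section 2.1, the synthetic out-of-transit spectrum is a filling-factor-weighted sum of the component spectra. A minimal sketch follows, using placeholder arrays in place of interpolated PHOENIX spectra; the K0 case 3h covering fractions and component temperatures are taken from the text, while the flat spectra are purely illustrative:

```python
import numpy as np

def composite_spectrum(component_spectra, filling_factors):
    """Disk-averaged emergent spectrum: sum_i f_i * S_i, with the f_i summing to one."""
    f = np.asarray(filling_factors, dtype=float)
    assert np.isclose(f.sum(), 1.0), "filling factors must sum to one"
    return np.tensordot(f, np.asarray(component_spectra), axes=1)

# K0 case 3h: 65% quiescent photosphere (5270 K), 5% spots (3830 K), 30% faculae (5380 K)
wavelengths = np.linspace(0.6, 5.3, 500)   # microns, the NIRSpec/PRISM range
phot = np.ones_like(wavelengths)           # stand-ins for model spectra at each temperature
spot = 0.4 * np.ones_like(wavelengths)
facula = 1.1 * np.ones_like(wavelengths)
disk_spectrum = composite_spectrum([phot, spot, facula], [0.65, 0.05, 0.30])
```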
While the wavelength range of the PHOENIX and MPS-ATLAS models spans the 0.6-5.3 \(\mu\)m range of NIRSpec/PRISM, we note that the SPHINX spectra have a long-wavelength limit of 3 \(\mu\)m. We discuss the impact of this on our analysis in Section 3.4. \begin{table} \begin{tabular}{c l r r} \hline \hline Case & Description & \(f_{\rm spot}\,(\%)\) & \(f_{\rm fac}\,(\%)\) \\ \hline 1 & no activity & 0 & 0 \\ 2l & spots, low activity & 1 & 0 \\ 2h & spots, high activity & 5 & 0 \\ 3l & spots and faculae, low activity & 1 & 10 \\ 3h & spots and faculae, high activity & 5 & 30 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the five heterogeneity cases. ### Simulated precisions We simulated the precision of _JWST_ observations of our synthetic targets using PandExo (Batalha et al., 2017). We focus on observations with the Near Infrared Spectrograph (NIRSpec) with the low-resolution (\(R\)\(\sim\)100) PRISM disperser, following the approach of the observations of WASP-39b (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023; Rustamkulov et al., 2023) through the _JWST_ Transiting Exoplanet Community Early Release Science Program (Bean et al., 2018). At higher resolutions, such as those available with NIRSpec's gratings and NIRISS SOSS, we expect that both the impact of stellar contamination and the precision of the associated inferences will be increased owing to a higher information content. NIRSpec/PRISM thus provides a good case to explore our interest in the limitations of out-of-transit inferences for a common _JWST_ transit observation mode. We used NIRSpec in Bright Object Time Series mode with the \(1\farcs 6\times 1\farcs 6\) fixed-slit aperture (s1600a1) and PRISM disperser. This setup provides spectra spanning 0.6-5.3 \(\mu\)m at a spectral resolving power of \(R\)\(\sim\)100. We also used the SUB512 subarray, five groups per integration, and the NRSRAPID read mode. We set the total observing time to three times the transit duration. We assumed a constant noise floor of 10 ppm, consistent with the 3\(\sigma\) upper limit of 14 ppm measured in lab time series (Rustamkulov et al., 2022) and the lack of significant systematic errors noted in the observations of WASP-39b (Rustamkulov et al., 2023). We note that our choice of \(J\)=11 apparent magnitudes for the host stars places them among the best targets that can be observed with NIRSpec/PRISM, making our simulated datasets among the best single-visit datasets possible with this observing mode. Combining the data from the full out-of-transit baseline, our simulated spectra have a typical per-pixel signal-to-noise ratio (SNR) of 16 000 (\(\sim\)62 ppm error).
Figure 1: State-of-the-art model spectra of quiescent K0 and M6 dwarfs at the wavelengths and resolution of NIRSpec/PRISM. The left column shows spectra of a K0 dwarf drawn from the PHOENIX and MPS-ATLAS grids. The top panel shows the spectra, and the bottom panel shows the flux difference between them, normalized to the flux of the PHOENIX model. The right column shows the same for the PHOENIX and SPHINX spectra of the M6 dwarf used as an example in this work. Note that PHOENIX and MPS-ATLAS spectra span the full wavelength range of NIRSpec/PRISM, though SPHINX spectra have a long-wavelength limit of 3 \(\mu\)m. See also Iyer et al. (2023, Fig. 1) for a comparison of M-dwarf spectra across many stellar models.
## 3 Retrievals We retrieved on each of the 10 simulated out-of-transit stellar spectra (2 host stars \(\times\) 5 activity levels) using four models, accounting for one, two, three, and four spectral components, respectively. We refer to these as the "1-comp," "2-comp," "3-comp", and "4-comp" models hereafter. The rationale behind testing this range of model complexity is that it encompasses all of the true complexity of our input models and more, thereby allowing us to assess when biases emerge from our inability to robustly constrain the true complexity of the observed photosphere. The following section provides further details on the models and the retrieval procedure. ### Model Definition We model the flux at wavelength \(\lambda\) received from each host star \(F_{\lambda}\) as \[F_{\lambda}=\sum_{i=1}^{N}f_{i}S_{i,\lambda}\left(\frac{R_{s}}{D_{s}}\right)^{ 2}, \tag{1}\] in which \(f_{i}\) and \(S_{i,\lambda}\) are the filling factors and emergent spectra of the \(i\)th spectral component present on the stellar disk, \(N\) is the number of spectral components, \(R_{s}\) is the stellar radius, and \(D_{s}\) is the stellar distance. The units of our model and simulated data are \(\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}\). The goal of the retrieval procedure is to identify the values that maximize the likelihood \(\mathcal{L}\) of the model (\(F_{\rm model}\)) when compared to the data (\(F_{\rm data}\)). For the natural logarithm3 of the likelihood function, we adopt Footnote 3: We use log to refer to the natural logarithm throughout this work. \[\log\mathcal{L}=-\frac{1}{2}\sum\left(\frac{(F_{\rm data,\lambda}-F_{\rm model,\lambda})^{2}}{\sigma_{\lambda}^{2}}+\log\left(2\pi\sigma_{\lambda}^{2} \right)\right). \tag{2}\] Following Foreman-Mackey et al. (2013, 2019)4, we model the \(\sigma_{\lambda}\) as the quadrature sum of photon noise \(\sigma_{\rm phot,\lambda}\), given by the simulations in Section 2.4, and an additional noise term \(\sigma_{\rm jitter,\lambda}\), which encapsulates any additional noise present in the data. We parameterize the additional noise as a fractional underestimation of the variance following Footnote 4: See [https://emcee.readthedocs.io/en/stable/tutorials/line/](https://emcee.readthedocs.io/en/stable/tutorials/line/), for an example implementation. \[\sigma_{\rm jitter,\lambda}=f_{\rm var}F_{\rm model,\lambda}, \tag{3}\] which means that the amplitude of \(\sigma_{\rm jitter,\lambda}\) scales with the model flux. While we did not inject systematic noise into the simulations, this approach adds a level of realism to our retrievals, effectively inflating the data uncertainty to account for any shortcomings of our models in describing the data. ### Priors Table 2 summarizes the free parameters for our four models and their priors. We parameterize the spectral components by their temperatures (\(T_{1}\), \(T_{2}\)\(T_{3}\), and \(T_{4}\)), and we place wide, uniform priors on all temperatures. To prevent degenerate solutions and automatically ensure the number order of the components corresponds to their prevalence on the stellar disk, we fit for their filling factors using a set of ratio parameters (\(r_{1}\), \(r_{2}\), and \(r_{3}\)) with specific priors. In brief, the ratio parameters describe the ratio of the stellar disk filled by the component of interest relative to less prevalent spectral components. Thus, a model with \(N\) spectral components will include \(N-1\) ratio parameters. 
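Equations (1)-(3) above translate directly into a few lines of code; a sketch, in which the unit-conversion constants and array handling are our own and `component_spectra` stands for the interpolated model spectra in the same units as the data:

```python
import numpy as np

R_SUN_CM = 6.957e10   # solar radius in cm
PC_CM = 3.0857e18     # parsec in cm

def model_flux(component_spectra, filling_factors, r_star_rsun, d_star_pc):
    """Eq. (1): F_lambda = sum_i f_i * S_i,lambda * (R_s / D_s)^2."""
    dilution = (r_star_rsun * R_SUN_CM / (d_star_pc * PC_CM)) ** 2
    return np.tensordot(filling_factors, component_spectra, axes=1) * dilution

def log_likelihood(f_data, sigma_phot, f_model, log_f_var):
    """Eqs. (2)-(3): Gaussian log-likelihood with sigma_jitter = f_var * F_model."""
    sigma2 = sigma_phot ** 2 + (np.exp(log_f_var) * f_model) ** 2
    return -0.5 * np.sum((f_data - f_model) ** 2 / sigma2 + np.log(2.0 * np.pi * sigma2))
```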
For example, in the 1-comp model we do not need to fit for any filling factors (\(f_{1}=1\), by definition), and so we do not fit for any ratio parameters. By contrast, in the 4-comp model we need to fit for four filling factors (\(f_{1}\), \(f_{2}\), \(f_{3}\), and \(f_{4}\)), and so we do that using three ratio parameters (\(r_{1}\), \(r_{2}\), and \(r_{3}\)). \begin{table} \begin{tabular}{c l l l l l l} \hline \hline \multicolumn{1}{c}{ Model} & \multicolumn{1}{c}{\(T\,({\rm K})\)} & \multicolumn{1}{c}{\(r_{1}\)} & \multicolumn{1}{c}{\(r_{2}\)} & \multicolumn{1}{c}{\(r_{3}\)} & \multicolumn{1}{c}{\(R_{s}\,(R_{\odot})\)} & \multicolumn{1}{c}{\(\log f_{\rm var}\)} \\ \hline 1-comp & \(\mathcal{U}(2300,5500)\) &... &... &... & \(\mathcal{U}(0.08,1.00)\) & \(\mathcal{U}(-50,0)\) \\ 2-comp & \(\mathcal{U}(2300,5500)\) & \(\mathcal{U}(1/2,1)\) &... &... & \(\mathcal{U}(0.08,1.00)\) & \(\mathcal{U}(-50,0)\) \\ 3-comp & \(\mathcal{U}(2300,5500)\) & \(\mathcal{U}(1/3,1)\) & \(\mathcal{U}(1/2,1)\) &... & \(\mathcal{U}(0.08,1.00)\) & \(\mathcal{U}(-50,0)\) \\ 4-comp & \(\mathcal{U}(2300,5500)\) & \(\mathcal{U}(1/4,1)\) & \(\mathcal{U}(1/3,1)\) & \(\mathcal{U}(1/2,1)\) & \(\mathcal{U}(0.08,1.00)\) & \(\mathcal{U}(-50,0)\) \\ \hline \end{tabular} Note. –\(\mathcal{U}(a,b)\) designates a uniform prior over the range \((a,b)\). \end{table} Table 2: Free parameters and their priors for the four retrieval models. Mathematically, we define each \(n\)th ratio parameter as \[r_{n}=\frac{f_{n}}{\sum_{i=n}^{N}f_{i}}, \tag{4}\] in which \(N\) is again the total number of components. We place a uniform prior \(\mathcal{U}_{n}(a,b)\) on each \(n\)th ratio parameter defined by \(a=1/(N+1-n)\) and \(b=1\). Importantly, the definitions of \(r_{n}\) and \(\mathcal{U}_{n}\) depend on \(N\) and this differ between models. For example, as shown in Table 2, in the 3-comp model \(r_{2}=f_{2}/(f_{2}+f_{3})\) and its prior is \(\mathcal{U}_{2}(1/2,1)\), whereas in the 4-comp model \(r_{2}=f_{2}/(f_{2}+f_{3}+f_{4})\) and its prior is \(\mathcal{U}_{2}(1/3,1)\). In any case, for the models in which they are defined, the filling factors \(f_{1}\) to \(f_{4}\) can be calculated as \[f_{1}=r_{1}, \tag{5a}\] \[f_{2}=(1-r_{1})r_{2},\] (5b) \[f_{3}=(1-r_{1})(1-r_{2})r_{3},\] (5c) and \[f_{4}=(1-r_{1})(1-r_{2})(1-r_{3})r_{4}, \tag{5d}\] by setting \(r_{i}=1\) for \(i\geq N\). The stellar radius and distance are fully degenerate parameters in our model (Equation 1). Rather than assuming errors to use for normal priors on these parameters, we fix \(D_{s}\) to the adopted distance and fit for \(R_{s}\) with a uniform prior. We also fix the stellar metallicity [Fe/H] to 0 and the surface gravity \(\log g\) to the value given by the mass and radius of host star provided in Section 2.1. The final free parameter in each model is the fractional underestimation of the variance \(f_{\rm var}\), which accounts for additional noise in the data (Equation 3). To ensure it is always positive and to allow the sampling to explore a large dynamic range, we actually fit for \(\log f_{\rm var}\) with a uniform prior of \((-50,0)\). We note that the median value of \(\log(\sigma_{\rm phot,\lambda}/F_{\rm data,\lambda})\) is \(-10\), and so the parameter space included in this prior spans scenarios in which systematic errors are many orders of magnitude below or above the photon noise. In total, there are three, five, seven, and nine free parameters for the 1-comp, 2-comp, 3-comp, and 4-comp models, respectively. 
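The mapping from ratio parameters to filling factors in Eqs. (5a)-(5d) generalizes to any number of components; a short sketch, with arbitrary example values:

```python
def filling_factors_from_ratios(ratios):
    """Eqs. (5a)-(5d): N-1 ratio parameters -> N filling factors that sum to one.

    The prior on the n-th ratio is uniform on (1/(N+1-n), 1), which keeps the
    components ordered by their prevalence on the stellar disk."""
    factors, remaining = [], 1.0
    for r in ratios:
        factors.append(remaining * r)
        remaining *= 1.0 - r
    factors.append(remaining)   # last component: r_N = 1 by construction
    return factors

# 3-component example: r1 = 0.85, r2 = 0.75 -> f = [0.85, 0.1125, 0.0375]
print(filling_factors_from_ratios([0.85, 0.75]))
```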
### Model Inference We derive the posterior probability distributions of the model parameters with the nested sampling Monte Carlo algorithm MLFriends (Buchner, 2014, 2017) using the UltraNest5 Python package (Buchner, 2021). We use slice sampling to efficiently explore the parameter space, defining the number of steps as 10 times the number of parameters and setting the maximum number of improvement loops to 3 to limit computational runtimes without appreciably affecting the posterior inferences. At each sampling step, we use the speclib6 Python package to generate the component spectra included in the model. We use the SpectralGrid object within speclib to do this efficiently, loading a spectral grid into memory once with the fixed metallicity and surface gravity values and linearly interpolating between temperature grid points to produce the sample spectra. We note that linear interpolation is likely not the best approach in the high signal-to-noise regime in which we are operating here. We discuss this complication further in Section 5. Footnote 5: [https://johannesbuchner.github.io/UltraNest/](https://johannesbuchner.github.io/UltraNest/) Footnote 6: [https://github.com/brackham/speclib](https://github.com/brackham/speclib) ### Model Selection Our studied parameter space covers 10 simulated datasets (2 host stars \(\times\) 5 activity levels). We retrieve on each using two spectral grids, the PHOENIX grid or another (MPS-ATLAS or SPHINX). For each dataset-grid pair, we would like to test our four model complexities (1-comp to 4-comp) and determine which model best describes the data. We do this with UltraNest by computing the Bayesian evidence (\(\log\mathcal{Z}\)) of each model, which we use as the basis for model selection. We define the best model as the simplest model that produces a significantly better fit than other models. We adopt a Bayes factor of \(\Delta\log\mathcal{Z}=5.0\) as the threshold for significance, corresponding to an odds ratio of \(\sim\)\(150:1\)(Trotta, 2008) or a \(3.6\sigma\) result (Benneke & Seager, 2013). In other words, we selected a more complex model over a simpler one only when it provides a marginal increase in the \(\log\mathcal{Z}\) of 5.0 or more. The SPHINX spectra have a long-wavelength end of \(3\,\mu\)m. Thus, to fairly compare evidences, we perform retrievals of the M6 spectra with the PHOENIX and SPHINX grids using datasets truncated at \(3\,\mu\)m. These are in addition to direct retrievals of the full datasets using the PHOENIX grid. In total, the nested-sampling retrievals in this analysis cover 2 host stars, 5 activity levels, 2 spectral model grids, and 4 model complexities. In analyzing the results of the retrievals, we focus on three topics: whether we can infer the correct level of complexity, whether we can retrieve the correct input parameters and thus reduce biases, and the impact of accounting for the heterogeneity on the uncertainty budget. We present each of these topics in turn in the following section. ### Inferring the Correct Level of Complexity We start by reviewing the results for the direct-retrieval cases, which assume model fidelity. As detailed in Section3.4, we analyzed 10 simulated spectra, trying to fit to each four models with varying levels of complexity. We find that the correct level of complexity was inferred in nine out of 10 cases (Figure2). 
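The model-comparison rule of Section 3.4 can be condensed into a few lines. The sketch below is one plausible operationalization (ours, for illustration only): it walks from the simplest to the most complex model and only "upgrades" when the gain in log-evidence clears the threshold. The dictionary of evidences stands in for the values reported by the nested-sampling runs, and the example numbers are invented.

```python
def select_model(log_z, threshold=5.0):
    """Pick the simplest model unless a more complex one improves the Bayesian
    evidence by at least `threshold` (Delta logZ = 5 ~ 150:1 odds, ~3.6 sigma).

    log_z: dict mapping the number of spectral components to its log-evidence.
    """
    complexities = sorted(log_z)
    best = complexities[0]
    for n in complexities[1:]:
        if log_z[n] - log_z[best] >= threshold:
            best = n
    return best

# Example: select_model({1: -1200.0, 2: -310.0, 3: -302.0, 4: -301.5}) returns 3;
# the 2- and 3-comp models each clear the threshold over the previously selected
# model, but the additional gain of the 4-comp model (0.5) does not.
```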
In other words, in nearly all cases the model with the appropriate number of components provided a large enough increase in \(\log\mathcal{Z}\) to warrant its use and more complex models were not warranted. As a result, in nearly all cases, retrieving the right level of complexity enabled unbiased inferences on the heterogeneity properties and the optimal correction of the stellar contamination. In the full set of direct retrievals, the median standard deviations of the inferred temperatures and filling factors were 2 K and 0.6%, respectively. The exception to this pattern was the M6 case 3l dataset. In this case, the 4-comp model gave a fit that improved the Bayesian evidence by \(\Delta\log\mathcal{Z}=57.5\) with respect to the (appropriate) 3-comp model. This indicates a preference for the more complicated model that is much greater than our \(3.6\sigma\) significance threshold or even the more rigorous threshold of \(\Delta\log\mathcal{Z}=11\), corresponding to an odds ratio of \(43000:1\) or a \(5\sigma\) result (Benneke & Seager, 2013). Inspecting the results of the 3-comp and 4-comp retrievals shows that the inclusion of the fourth component gives the algorithm flexibility to compensate for the particular noise instance, allowing for a lower posterior constraint on \(\log f_{\rm var}\) and thus a higher Bayesian evidence. We discuss the practical impact of this mislabeling of the M6 case 3l dataset in terms of the ultimate correction applied to the transmission spectrum in Section 4.2. We now turn to the results of the cross-retrieval cases. Here the results are similar for all cases of a given host star (Figure 3). In each K0 case, the MPS-ATLAS retrievals indicate a preference for the 2-comp model when compared to other MPS-ATLAS models. Similarly, in each M6 case, the SPHINX retrievals find the 1-comp model to be best. As a result, the K0 and M6 retrievals infer the correct level of complexity in 2/5 and 1/5 cases, respectively. Figure 2: Bayesian evidences for model fits with different levels of complexity in the case of the direct retrievals. Here the same model grid was used to simulate and retrieve on the data. The top row gives results for the K0 star. The bottom row gives results for the M6 star. From left to right, the columns give results for the 1, 2l, 2h, 3l, and 3h cases, respectively. In each panel, the marginal Bayesian evidences for the 1-, 2-, 3-, and 4-component models are shown. The marginal Bayesian evidence is defined relative to the selected model, with more positive values indicating a higher preference. Triangles point to evidences that fall below the limit of the y-axis, indicating that the model is a relatively poor fit to the data (\(>\)5\(\sigma\) preference against). The shaded regions highlight the appropriate complexity for a dataset; they are green if the inferred complexity is correct and red if not. In all but one case, the direct retrievals (i.e., with model fidelity) identify the correct complexity. For the MPS-ATLAS cross retrievals, the median standard deviations of the inferred temperatures and filling factors were 39 K and 0.8%, respectively. For the SPHINX cross retrievals, the median standard deviation of the inferred temperatures was 21 K and the selected models had no filling factors to fit. Nonetheless, whether the cross retrievals happened to identify the right level of complexity or not, they universally provide poor fits to the data in this high signal-to-noise regime.
When compared to the results of the PHOENIX retrievals for the same case (and wavelength range, for the PHOENIX-SPHINX comparison), all cross-retrieval models are strongly disfavored at \(\gg\)5\(\sigma\). Typical values of \(\Delta\log\mathcal{Z}\) are \(\sim\)10\({}^{4}\) in favor of the PHOENIX models. In terms of reduced chi-square values, PHOENIX model fits have \(\chi_{r}^{2}\sim 1\) before accounting for the inflated uncertainties, while the corresponding values for the cross-retrievals are \(\chi_{r}^{2}\sim 10^{4}\) for the MPS-ATLAS models and \(\chi_{r}^{2}\sim 10^{6}\) for the SPHINX models. As an example, we highlight the K0 case 1 cross-retrieval with the MPS-ATLAS grid (Figure 4). Like our other simulated spectra, this spectrum has a typical per-pixel SNR of 16 000 (\(\sim\)62 ppm error). At this precision, the differences between the PHOENIX spectra used to simulate the data and the MPS-ATLAS spectra used in the retrieval are readily apparent. The bottom panel of Figure 4 shows that the residuals for all cross-retrieval models are many orders of magnitude higher than those of the correctly inferred direct-retrieval model. This example underscores that the fidelity of the model grid is crucially important for arriving at the appropriate inferences. We caution that this does not mean one should simply select the model grid that provides the best fits when inferring photospheric properties from out-of-transit spectra. Instead, the results of this exercise raise concerns about model-based inferences of photospheric heterogeneity in general, assuming that the differences between modern model spectra provide a proxy for the differences between models and actual spectra of photospheric components. We return to this point in the discussion. Figure 3: Bayesian evidences for model fits with different levels of complexity in the case of the cross retrievals. The figure elements are the same as those of Figure 2. The top row gives results for the K0 star, simulated with PHOENIX and retrieved with MPS-ATLAS. The bottom row gives results for the M6 star, simulated with PHOENIX and retrieved with SPHINX. In most cases, the cross retrievals (i.e., no model fidelity) fail to identify the correct complexity. ### Inferring Corrections and Reducing Biases We now focus on the direct retrievals only. As noted in Section 4.1, these identified the correct level of complexity in all cases. We are now interested in whether this translates to a reduction in bias on the transmission spectrum. We calculate the impact of the photospheric heterogeneity on the transmission spectrum as \[\epsilon_{\lambda}=\frac{S_{1,\lambda}}{\sum_{i=1}^{N}f_{i}S_{i,\lambda}} \tag{6}\] in which \(S_{i}\) is the spectrum of the \(i\)th spectral component and \(f_{i}\) is its filling factor. This expression is equivalent to those presented by Rackham et al. (2018, 2019) but terms are rearranged to clearly convey their origin. The numerator corresponds to the mean stellar spectrum illuminating the exoplanet atmosphere during the transit, whereas the denominator corresponds to the full-disk stellar spectrum observed outside of the transit. The observed transmission spectrum is then \[D_{\mathrm{obs},\lambda}=\epsilon_{\lambda}D_{\mathrm{true},\lambda}, \tag{7}\] in which \(D_{\mathrm{true},\lambda}\) is the true planetary transmission spectrum, i.e., the square of the wavelength-dependent planet-to-star radius ratio \((R_{p,\lambda}/R_{s})^{2}\). When \(N=1\), \(\epsilon_{\lambda}=1\) and there is no contamination.
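A minimal sketch of how Equations 6 and 7 translate into a correction is given below, assuming the planet transits the dominant component (index 0) and that posterior samples of the stellar parameters are in hand. The function names, the number of Monte Carlo draws, and the Gaussian treatment of the observed depths are illustrative choices rather than a prescription.

```python
import numpy as np

def contamination_factor(component_spectra, filling_factors):
    """Contamination factor of Equation 6, epsilon_lambda = S_1 / sum_i f_i S_i,
    assuming the transited component is the first (dominant) one."""
    spectra = np.atleast_2d(component_spectra)
    f = np.asarray(filling_factors)[:, None]
    return spectra[0] / (f * spectra).sum(axis=0)

def correct_transit_depth(d_obs, d_obs_err, eps_samples, n_draws=10000, rng=None):
    """Invert Equation 7, D_obs = eps * D_true, propagating both the observed
    depth uncertainties and the posterior spread of eps with Monte Carlo draws.

    eps_samples: (n_post, n_wav) array of epsilon_lambda evaluated at posterior
                 samples of the stellar parameters.
    Returns the median corrected depth and its standard deviation per pixel.
    """
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, eps_samples.shape[0], size=n_draws)
    d_draws = rng.normal(d_obs, d_obs_err, size=(n_draws, np.size(d_obs)))
    d_cor = d_draws / eps_samples[idx]
    return np.median(d_cor, axis=0), np.std(d_cor, axis=0)
```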
The implicit assumption with Equation 6 is that the planet transits the dominant spectra component, whereas the other spectral components are present elsewhere on the stellar disk. This owes to our focus in this study on indirectly constraining heterogeneities whose presence cannot be inferred directly through occultations by the transiting exoplanet (e.g., Fu et al., 2022). In the case where multiple spectral components are present in the transit chord, the numerator of Equation 6 can be replaced with another summation using the filling factors of the components within the transit chord. We calculated the posterior samples of \(\epsilon_{\lambda}\) using the parameter values at each step in the sampling. We then calculated the inferred values of \(D_{\mathrm{true},\lambda}\) using Equation 7 and propagating the measurement uncertainties of \(D_{\mathrm{obs},\lambda}\) and \(\epsilon_{\lambda}\) through the equation using a Monte Carlo approach. To distinguish our inferences from actual true values of the transmission spectra, which we know in this exercise, we refer to our inferences as \(D_{\mathrm{cor},\lambda}\) hereafter. Figure 5 shows the observed and corrected transmission spectra from the direct retrievals using the model complexities identified in Section 4.1. To assess the change in bias, we calculated the root-mean-square (rms) residual between the data and the model truth for both \(D_{\mathrm{obs},\lambda}\) and \(D_{\mathrm{cor},\lambda}\). We find that the corrections reduce the bias in the transmission spectra in 7 of 8 spectra (no corrections are possible for another two spectra from case 1, for which stellar contamination is not an issue and \(\epsilon_{\lambda}=1\) by definition). In these seven cases, the correction reduced the root-mean-squared (rms) residual between the data and the true planetary transmission spectrum by 177 ppm on average, with the smallest reduction being 3 ppm (M6 case 21) and the largest being 696 ppm (M6 case 3h). The case for which the correction procedure actually increased the bias in the transmission spectrum was the K0 31 case. As a reminder, the input parameters for the K0 simulations were \(T_{\mathrm{phot}}=5270\,\mathrm{K}\), \(T_{\mathrm{spot}}=3830\,\mathrm{K}\), and \(T_{\mathrm{fac}}=5380\,\mathrm{K}\) with \(f_{\mathrm{spot}}\) and \(f_{\mathrm{fac}}\) of 1% and 10% in case 31. While the model comparison identified the correct level of complexity here, the algorithm identified another combination of three component spectra that satisfactorily describes the integrated spectrum, leading to an improper correction. This provides an interesting counterpoint to the example of the M6 case 31 spectrum, for which the 4-comp model was preferred (Section 4.1). In this case, the incorrectly identified 4-comp still lead to an rms bias reduction of 132 ppm, whereas the 3-comp model, which had the appropriate complexity but the wrong inferred parameters, would have led to an rms bias reduction of only 96 ppm, if selected. Relying on "perfect" spectral models but producing incorrect inferences, these results both underscore the limitations of this approach in general in this highly complicated parameter space, a point we return to in the discussion. ### Impact on the Uncertainty Budget Figure 4: Best fits to the K0 case 1 spectrum for MPS-ATLAS models with one to four components. The top panel shows the simulated NIRSpec/PRISM spectrum in black, along with the four fits. 
Uncertainties for the data and posterior models are smaller than the line widths. Likewise, differences between the data, simulated with the PHOENIX grid, and the models, interpolated from the MPS-ATLAS grid, are too small to be apparent at this scale. The bottom panel shows the residuals for the fits, normalized to the data. In this view, residuals on the order of 10 ppt (1%) are evident. For comparison, the gray line shows the residuals for a direct retrieval with the best-fit one-component PHOENIX model. The relative scales of the residuals for the cross and direct retrievals gives a sense of the impact of uncertainties due to stellar models versus photon noise. The final result that we consider here is the impact on the uncertainty budget. We are interested in the impact of applying the derived corrections from the selected models for the stellar photosphere and propagating their uncertainties on the final uncertainties of the transmission spectra. Figure 6 illustrates the most salient point in this context, which is that the ultimate uncertainty contribution of model-based corrections for stellar contamination depends strongly on how well the models are able to describe the true spectra behind the data (i.e., the "model fidelity"). Focusing on the M6 case 2h dataset, this figure shows that in the case of the direct retrieval the relative uncertainty on \(\epsilon\) is vanishingly small compared to that on \(D\), owing to the high degree of fidelity between the synthetic stellar spectrum and the PHOENIX-based retrieval, leading to a correction that imparts no notable additional uncertainty on the final transmission spectrum. In this case and all other direct-retrieval cases, the median per-point uncertainty of the transmission spectrum increased by no more than 1 ppm. On the other hand, in the cross-retrieval case the relative uncertainty on \(\epsilon\) is two orders of magnitude larger, stemming from the need to inflate uncertainties to produce adequate fits, and thus the stellar-contamination correction dominates the final uncertainty of the transmission spectrum. We also explored how these results depend on the duration of the out-of-transit baseline, repeating our entire retrieval analysis with simulated datasets that had two and five times longer out-of-transit baselines (four and ten times the transit duration, respectively). We find that the uncertainties on \(\epsilon\) decreases with increasing baseline, as expected, but that the final uncertainties on the corrected transmission spectra remain high in the case of the cross retrievals, owing to the need to compensate for model differences with inflated uncertainties. Nonetheless, we note that in "real life" applications, the lack of model fidelity and other effects (such as stellar variability) may make the need for longer baselines more pressing. ## 5 Conclusions & Future Work Figure 5: Change in bias of transmission spectra after applying corrections implied by the direct retrievals. The top and bottom panels show results for the K0 and M6 star, respectively. From left to right, the columns correspond to cases 1, 21, 2h, 31, and 3h, respectively. In each panel the black lines give the true transmission spectra of the planet (warm Jupiter in the top row, super Earth in the bottom), and the blue and orange points show the observed, contaminated transmission spectrum and the corrected transmission spectrum, respectively. 
The rms of the residuals between the observed and corrected spectra are shown in blue and orange, respectively. Smaller rms values indicate spectra that are less biased with respect to the true transmission spectrum. We investigated the use of out-of-transit stellar spectra to enhance _JWST_'s scientific return while reducing biases in exoplanet transmission spectra, with a focus on the impact of stellar model fidelity. Our analysis produced two primary findings. 1. The fidelity of stellar models is crucially important for identifying the right complexity of a photosphere and deriving appropriate corrections for transmission spectra. The differences between existing model grids dominate by orders of magnitude the total noise budget. This translates into needing to inflate photon-noise errorbars by orders of magnitude (Figure 6), which prevents efforts from harnessing the full potential of _JWST_ for transits of stars with heterogeneous photospheres. We note that even when accounting for this inflation, significant biases on the derived properties of the stellar photosphere are possible, leading to improper corrections. This finding is similar to earlier findings of de Wit et al. (2012) and N22, which have shown that an apparently good fit can hide a compensation for a model's lack of fidelity via biases in the model parameters. 2. If the model fidelity is on par with the precision of the spectra, it is possible to reliably infer the correct model parameters (including the true number of components). This means that with sufficient model fidelity, one can expect to correct for stellar contamination to the maximum extent possible, given the information content of the data, and there is no model-driven bottleneck. In this context, we show that the uncertainty associated Figure 6: Median uncertainties of the transmission spectrum (\(D\)) and the stellar contamination signal (\(\epsilon\)) for the K0 case 2h dataset. The left (right) column shows result for the direct (cross) retrieval of the dataset with the 2-comp model, which was the preferred model complexity in both cases. In the top row, the black points show the median fractional uncertainty on the transit depth as a function of the out-of-transit baseline, and the green points show the same for the stellar contamination signal. The fraction uncertainty of the stellar contamination signal is consistently smaller and decreases with increasing out-of-transit baseline, as expected. The bottom row shows the true (black), observed (blue), and corrected (orange) transmission spectra resulting from these retrievals. The results of the direct retrieval show that when the model fidelity is sufficient, the contribution of the stellar-contamination correction to the noise budget of the planetary spectrum is negligible. By contrast, the results of the cross retrieval show that while the uncertainty on the stellar contamination signal appears to be on the same order of magnitude as the transit depth uncertainties, the uncertainty inflation necessary to provide an adequate fit of the stellar spectrum actually leads the poorly constrained stellar contamination signal to dominate the final uncertainties of the transmission spectrum. with the correction of the stellar contamination is marginal compared to the photon noise on the transmission spectrum, thereby allowing photon-limited science (i.e., harnessing the full potential of _JWST_). 
These findings should motivate both further theoretical developments in modeling the spectra of stars and their heterogeneities, as well as new observation strategies to derive empirical constraints on stellar photospheric heterogeneity from highly precise _JWST_ spectra. In fact, we suggest that observational strategies could be developed to acquire these empirical constraints with the observatories with which the planetary atmospheres will be explored to ensure a "fidelity" on par with the data driving the atmospheric characterization. We provide in the following a few additional considerations for future works. ### Spectral Model Grids and Interpolation Schemes Our posterior inferences on component temperatures are roughly 2 K, while the spacing of the temperature grids is 100 K for all three model grids used. This means that the sampling of the model grid is insufficient for the high-SNR data at hand. In addition, we use a linear interpolation scheme for simplicity, which could also contribute to reducing the fidelity of the models over the coarse grid available (see, e.g., Czekala et al., 2015). In order to support the reliable correction of stellar contamination in _JWST_ exoplanet transmission spectra, we suggest it would be useful to generate model grids with spacings in each of their dimensions that are two orders of magnitude smaller than those currently available. We also recommend that future work explore the impact in this context of linear interpolation versus more complex approaches, such as bicubic interpolation or spectral emulation via principal component analysis (e.g., Czekala et al., 2015). ### Heterogeneities Are Not Your Average Photosphere Out-of-transit spectra are currently fitted using a combination of stellar spectra weighted by different filling factors. This approach thus assumes that spot and fac Figure 7: Flux changes due to temperature variations for the range of stellar models we consider. The left column shows PHOENIX models relevant to our K0 case, and the right shows PHOENIX models relevant to our M6 case. The top panels show the spectra in absolute flux units, while the bottom panels show each set normalized to the middle-temperature model in each set. The wavelengths and resolution of the spectra are relevant to NIRSpec/PRISM. Larger flux differences are evident for the set of models relevant to the M6 case, which lead to more successful inferences from the retrievals. ulae spectra can be approximated by stellar spectra derived from 1D radiative-convective models. Although this assumption may be passable for spots (Rackham et al., 2022), it has been shown to be a poor assumption for facula spectra, which contain magnetically induced features that are not captured well by the simplified 1D models (Witzke et al., 2022). The increased differentiation of a component's features, while more problematic because the components are more challenging to approximate with current models, will also make their contribution easier to disentangle in observations. While a new generation of calculations for the spectra of heterogeneities are underway (e.g., with MPS-ATLAS; Witzke et al., 2021), the prospect of supporting the benchmarking of said models with empirical constraints within reach with _JWST_ is tantalizing. ### When Worse Can Also Mean Better As with facula spectra, for which the challenge of fitting them is also a unexpected benefit, the challenge of constraining photospheric heterogeneity in cooler stars may be lessened by that same heterogeneity. 
In Section 4.2 we found that the correction derived from the direct retrieval of the K0 case 31 spectrum actually increased the bias in the transmission spectrum, owing to incorrect inferences derived from the out-of-transit stellar spectrum. Ultimately, these incorrect inferences deriving from high-SNR spectra simulated and fitted with the same spectral grid highlight the challenge of deriving constraints in this temperature regime. Figure 7 shows the sensitivity of stellar spectra to temperature variations over ranges covering the components of our K0 and M6 cases. It highlights that the sensitivity of the spectra relevant to a K0 and its photospheric components is smaller than for the M6 case (Figure 8), which relates to the expectation of a lower level of stellar contamination. Yet, the strength of temperature-sensitive features in the spectra actually support the detection and characterization of these heterogeneities, and thus the correction of stellar contamination. Thus, we note that the lack of significant differentiation in the K0 models can also lead to biased inferences when a particular noise realization permits another nearby, nearly blackbody model to fit the data, though working at higher resolving power with other _JWST_ observational modes (e.g., NIRSpec/G395H, NIRISS SOSS) will likely help here. ### Empirical Heterogeneity Constraints We have worked under the assumption that different stellar model grids are equally good. Another possibility is that advances, including among other things updated opacities for a wide range of sources (e.g., Tennyson et al., 2016, 2020; Gordon et al., 2017, 2022), have allowed more recent grids to more closely resemble reality. In this case, the differences between model grids then reflect the growth in our understanding rather than a remaining understanding gap to cross. To assess this possibility, we recommend that these techniques be applied to real _JWST_ data, starting with an inactive star and advancing to more active stars to understand the limits of model-based inferences with real data--keeping in mind that a "good fit" does not automatically imply "model fidelity." We also recommend the exploration of empirical approaches for deriving the unique spectral components of a photosphere and their filling factors, enabling corrections that are independent of spectral models. ## Acknowledgements We thank Prajwal Niraula for providing the transmission spectra from N22. We also thank Aisha Iyer for providing the SPHINX spectral grid (Iyer et al., 2023) and Sasha Shapiro and Nadia Kostogryz for pointing us to MPS-ATLAS model library (Witzke et al., 2021). B.V.R. thanks the Heising-Simons Foundation for support. This material is based upon work supported by the National Aeronautics and Space Administration under Agreement No. 80NSSC21K0593 for the program Figure 8: The sensitivity of PHOENIX stellar models, sampled at the wavelength range and resolution NIRSpec/PRISM, to changes in stellar temperature. The relative change in stellar flux is shown as a function of the model effective temperature, normalized to the value for a model of 2500 K. The sensitivity to temperature decreases with increasing temperature, with models around 6000 K displaying roughly 20% of the sensitivity of the 2500 K model. "Alien Earths". The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. 
The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing (HPC, database, consultation) resources that have contributed to the research results reported within this paper/report.
2309.00728
Quantum-Geometric Origin of Out-of-plane Stacking Ferroelectricity
Stacking ferroelectricity (SFE) has been discovered in a wide range of van der Waals materials and holds promise for applications, including photovoltaics and high-density memory devices. We show that the microscopic origin of out-of-plane stacking ferroelectric polarization can be generally understood as a consequence of nontrivial Berry phase borne out of an effective Su-Schrieffer-Heeger model description with broken sublattice symmetry, thus elucidating the quantum-geometric origin of polarization in the extremely non-periodic bilayer limit. Our theory applies to known stacking ferroelectrics such as bilayer transition-metal dichalcogenides in 3R and T$_{\rm d}$ phases, as well as general AB-stacked honeycomb bilayers with staggered sublattice potential. Our explanatory and self-consistent framework based on the quantum-geometric perspective establishes quantitative understanding of out-of-plane SFE materials beyond symmetry principles.
Benjamin T. Zhou, Vedangi Pathak, Marcel Franz
2023-09-01T20:35:22Z
http://arxiv.org/abs/2309.00728v2
# Quantum-Geometric Origin of Stacking Ferroelectricity ###### Abstract Stacking ferroelectricity has been discovered in a wide range of van der Waals materials and holds promise for applications, including photovoltaics and high-density memory devices. We show that the microscopic origin of stacking ferroelectric polarization can be generally understood as a consequence of nontrivial Berry phase borne out of an effective Su-Schrieffer-Heeger model description with broken sublattice symmetry, thus uniting novel two-dimensional ferroelectricity with the modern theory of polarization. Our theory applies to known stacking ferroelectrics such as bilayer transition-metal dichalcogenides in 3R and T\({}_{d}\) phases, as well as general AB-stacked bilayers with honeycomb lattice and staggered sublattice potential. In addition to establishing a unifying microscopic framework for stacking ferroelectrics the quantum-geometric perspective provides key guiding principles for the design of new van der Waals materials with robust ferroelectric polarization. _Introduction.--_ Two-dimensional (2D) ferroelectrics can serve as building blocks of high-density non-volatile memories [1; 2], but they remain rare among materials found in nature. Recent developments in synthesis of layered van der Waals materials have seen a revival of activity in 2D ferroelectricity and, as a result, switchable polarity has been reported in a wide range of materials in the bilayer limit [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Intriguingly, the constituent monolayers in these stacking ferroelectrics (SFEs) are generally non-polar and spontaneous polarization arises from unusual stacking orders with suitable symmetry breaking conditions allowing the emergence of electric polarity [3; 16]. Despite elegant symmetry arguments and extensive _ab initio_ studies, a fundamental conceptual question regarding the origin of SFE remains unaddressed: according to the modern theory of polarization, the electric polarization stems from the nontrivial quantum geometry encoded in the Bloch wave functions - the well-established Berry phase formalism for conventional bulk ferroelectrics [17; 18; 19; 20; 21]. However, SFEs in the bilayer limit appear as an outsider to this formalism: the polarization \(\mathbf{P}\) in SFE is often found in the direction perpendicular to the two-dimensional plane, along which translation symmetry is broken due to the finite thickness and the nonzero electrostatic potential difference caused precisely by \(\mathbf{P}\). The ill-defined Bloch momentum along this direction thus poses a challenge for interpreting the SFE polarization in terms of the Berry phase. In this Letter, we argue that the origin of SFE is indeed rooted in the nontrivial Berry phase generated by its asymmetric stacking order, which is exemplified through a mapping from the effective SFE Hamiltonian to the two-cell limit of the celebrated Su-Schrieffer-Heeger (SSH) chain [22], characterized in the presence of staggered sublattice potentials by a _polar_ Berry phase. Our self-consistent microscopic model further reveals that the quantum-geometric property remains intact even in the bilayer limit where the Bloch momentum along the perpendicular \(z\)-direction becomes ill-defined. We apply our theory to various known SFE materials and demonstrate quantitative agreement with the existing DFT and experimental results. _SFE as two-cell limit of sublattice-broken SSH chain.--_ We start by a brief review of the polarization physics in an SSH chain. 
The SSH model describes a dimerized polyacetylene chain with A/B sublattice sites and alternating bonds (Fig. 1a). The momentum-space Hamiltonian in the Bloch basis \(\ket{k,A},\ket{k,B}\) of the dimerized chain is characterized by a four-component \(\mathbf{d}\)-vector \[H_{\text{SSH}}(k)=\sum_{\alpha=0,x,y,z}d_{\alpha}(k)\sigma_{\alpha}, \tag{1}\] where \(d_{0}(k)=(\epsilon_{A}+\epsilon_{B})/2\), \(d_{x}(k)=(t+\delta t)+(t-\delta t)\cos(ka)\), \(d_{y}(k)=(t-\delta t)\sin(ka)\), \(d_{z}(k)=(\epsilon_{A}-\epsilon_{B})/2\) with \(\epsilon_{A}\), \(\epsilon_{B}\) as the on-site energies on sublattice A, B, and the Pauli matrices \(\sigma_{\alpha}\) act on the sublattice space. According to the modern theory of polarization [17; 18; 20; 21]\(P\) of the 1D chain is written as \[P=\frac{e}{2\pi}\oint_{k\in\text{BZ}}\bra{u_{-}(k)}-i\partial_{k}|u_{-}(k) \rangle\,dk, \tag{2}\] where \(|u_{-}(k)\rangle\) is the eigenstate of the filled lower band of the two-level Hamiltonian (1) and the loop integral Figure 1: (a) Schematic of a 1D SSH atomic chain with intra-cell hopping \(t+\delta t\) and inter-cell hopping \(t-\delta t\) between A, B sublattice sites. A polar structure forms under a staggered sublattice potential \(\epsilon_{A}\neq\epsilon_{B}\). (b) Winding of \(\mathbf{d}\)-vector defined in Eq. (1) as momentum \(k\) is varied adiabatically across the 1D Brillouin zone. Berry phase is given by \(1/2\) of the solid angle \(\Omega\) subtended by \(\mathbf{d}\). is precisely the Berry phase \(\gamma\) acquired by \(|u_{-}(k)\rangle\) as \(k\) evolves adiabatically across the 1D Brillouin zone (BZ). Here \(\gamma\) is equal to \(1/2\) of the solid angle \(\Omega\) subtended by \(\mathbf{d}\) on the Bloch sphere (Fig. 1b). In the usual setting \(H_{\text{SSH}}\) has a sublattice symmetry \(\epsilon_{A}=\epsilon_{B}\) which implies _global_ inversion symmetry: \(\mathcal{I}H_{\text{SSH}}(-k)\mathcal{I}^{-1}=H_{\text{SSH}}(k)\), with the inversion operator \(\mathcal{I}=\sigma_{x}\). The \(\mathcal{I}\)-symmetry enforces \(\gamma=0,\pi\) (modulo \(2\pi n\), \(n\in\mathbb{Z}\) due to the ambiguity of Berry phase in 1D), which characterize _non-polar_ phases [18; 21]. A simple way to polarize the SSH chain is to introduce a staggered sublattice potential \(\Delta_{AB}\equiv(\epsilon_{A}-\epsilon_{B})/2\neq 0\) such that \(\mathcal{I}\) is broken and \(d_{\alpha}\neq 0\) for all \(\alpha=x,y,z\), which is also known as the Rice-Mele model [23]. \(\Omega\) now takes on a general value \(\Omega\in(0,2\pi)\) and \(\gamma=\Omega/2\in(0,\pi)\) indicates non-vanishing polarization. The relevant parameter choice for SFE corresponds to the limit where intra-cell bonding vanishes, \(t=-\delta t\). The Berry phase then assumes a simple form \[\gamma=\pi\left[1-\frac{\Delta_{AB}}{\sqrt{4t^{2}+\Delta_{AB}^{2}}}\right]\ ( \text{mod }2n\pi,n\in\mathbb{Z}). \tag{3}\] Note that the polarity of the SSH chain is robust when \(\gamma\) is kept away from the non-polar values \(0\) and \(\pi\), which is attained for \(\Delta_{AB}\simeq t\) according to Eq. (3). This result can be understood pictorially as follows: the non-polar \(\gamma=0\) phase corresponds to \(\Delta_{AB}\gg t\) where the \(\mathbf{d}\)-vector is pinned at the poles of the Bloch sphere, while the non-polar \(\gamma=\pi\) phase is attained for \(\Delta_{AB}\ll t\) where \(\mathbf{d}\) stays on the equator. 
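The closed form in Eq. (3) is straightforward to verify numerically. The sketch below, with illustrative parameter values, builds the two-band Bloch Hamiltonian in the \(t=-\delta t\) limit (where \(\mathbf{d}=(2t\cos k,\,2t\sin k,\,\Delta_{AB})\) and the constant \(d_{0}\) merely shifts both bands), evaluates the discretized Wilson-loop Berry phase of the filled band, and compares it with Eq. (3); the numerical phase is only defined modulo \(2\pi\), consistent with the ambiguity noted above.

```python
import numpy as np

def lower_band_state(k, t, delta_ab):
    """Filled-band eigenvector of H(k) = d . sigma with
    d = (2t cos k, 2t sin k, delta_ab); lattice constant a = 1."""
    dx, dy, dz = 2 * t * np.cos(k), 2 * t * np.sin(k), delta_ab
    h = np.array([[dz, dx - 1j * dy],
                  [dx + 1j * dy, -dz]])
    _, vecs = np.linalg.eigh(h)   # eigenvalues ascending -> column 0 is the lower band
    return vecs[:, 0]

def berry_phase(t, delta_ab, nk=2001):
    """Discretized (Wilson-loop) Berry phase of the filled band over the 1D BZ."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    states = [lower_band_state(k, t, delta_ab) for k in ks]
    states.append(states[0])      # close the loop so the product is gauge invariant
    product = 1.0 + 0.0j
    for u1, u2 in zip(states[:-1], states[1:]):
        product *= np.vdot(u1, u2)
    return (-np.angle(product)) % (2.0 * np.pi)

def berry_phase_eq3(t, delta_ab):
    return np.pi * (1.0 - delta_ab / np.sqrt(4.0 * t**2 + delta_ab**2))

# e.g. berry_phase(0.5, 1.0) and berry_phase_eq3(0.5, 1.0) both give ~0.29*pi,
# while delta_ab >> t drives the result toward 0 and delta_ab << t toward pi.
```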
With \(\Delta_{AB}\simeq t\), the \(\mathbf{d}\)-vector lies midway between the poles and the equator, and the system is firmly embedded in the polar phase. Next, we discuss how the polarization in SFE materials can be interpreted as a consequence of the Berry phase in Eq. (3). (i) _Type-I SFEs: AB-stacked honeycomb bilayers._--A prototypical class of SFE materials is the AB-stacked bilayers with honeycomb lattice symmetry, such as hexagonal boron nitride (hBN), gallium nitride (GaN), and silicon carbide (SiC) [3; 7; 8]. The crystalline structure of each constituent monolayer has the non-polar \(D_{3h}\)-point group, which resembles graphene with intrinsically broken AB sublattice symmetry. The AB stacking breaks the horizontal mirror symmetry \(\sigma_{h}\), which further reduces the symmetry to the polar \(C_{3v}\) point group compatible with a nonzero \(P_{z}\). To elucidate the connection between Eq. (3) and \(P_{z}\), we note that the low-energy effective bilayer Hamiltonian in the Bloch basis of \(p_{z}\)-orbitals on A1, B1, A2, B2 sites takes the general form \[H_{AB,\xi}(\mathbf{p})=\begin{pmatrix}\epsilon_{A1}&vp_{\xi,-}&v_{1}p_{\xi,-}&v_{ 2}p_{\xi,+}\\ vp_{\xi,+}&\epsilon_{B1}&g&v_{1}p_{\xi,-}\\ v_{1}p_{\xi,+}&g&\epsilon_{A2}&vp_{\xi,-}\\ v_{2}p_{\xi,-}&v_{1}p_{\xi,+}&vp_{\xi,+}&\epsilon_{B2}\end{pmatrix}, \tag{4}\] where \(p_{\xi,\pm}=\xi p_{x}\pm ip_{y}\) is measured from the two inequivalent \(K\)-points indexed by \(\xi=\pm\). \(\epsilon_{ml}\) is the on-site energy on sublattice \(m=\text{A}\), \(\text{B}\) and layer \(l=1,2\) with \(\epsilon_{A,l}\neq\epsilon_{B,l}\) due to different atoms on AB sublattices, \(g\) is the direct inter-layer hopping between B1 and A2 sites (Fig. 2a). By identifying the AB sublattice in each layer as the AB sublattice in each cell of the SSH chain, Eq. (4) near the \(K\)-points with \(\mathbf{p}\simeq\mathbf{0}\) becomes simply the two-cell limit of the SSH chain (Fig. 1), where the intra-cell hopping \(t+\delta t\equiv vp_{\xi,-}\simeq 0\), and the inter-cell hopping between adjacent AB sites \(t-\delta t=2t\equiv g\). By extending the bilayer Hamiltonian to the 3D limit with the number of layers \(N_{z}\rightarrow\infty\), \(P_{z}\) is precisely the integral over \(k_{z}\) in Eq. (2) and \(g\) enters Eq. (3) as \(2t\). Since \(P_{z}\) is an intensive quantity that does not scale with \(N_{z}\), \(\gamma\) obtained in the large \(N_{z}\) limit necessarily implies a nonzero \(P_{z}\) with the same origin in the bilayer (\(N_{z}=2\)). (ii) _Type-II SFEs: Rhombohedral (3R) bilayer TMDs._--The 3R-structure of bilayer TMDs is formed by two monolayers in the usual 2H-phase but assembled with the rhombohedral stacking order [24]. It has a similar crystalline structure as AB-stacked honeycomb bilayers, while the relevant degrees of freedom in 3R-bilayer TMDs are different from the \(p_{z}\)-orbitals in AB-stacked honeycomb systems: the basis states at \(K\)-points are formed by the conduction band states \(|c,\pm K,l\rangle\) and valence band states \(|v,\pm K,l\rangle\) originating from transition-metal \(d\)-orbitals with different angular momenta \(m_{z}\)[25; 26; 27] (Fig. 2b). The AB stacking order causes a relative shift in \(m_{z}\) at \(\pm K\) between states from different layers such that \(m_{z}\) for states in different layers have an extra difference of \(\pm 1\)[12; 11; 28]. 
As such, the \(\mathcal{C}_{3z}\) symmetry enforces the inter-layer coupling at \(\pm K\) to be _asymmetric_ as tunneling is allowed only between \(|c,\pm K,1\rangle\) and \(|v,\pm K,2\rangle\) with the same \(\mathcal{C}_{3z}\) eigenvalues (Fig. 2b). By identifying the conduction (valence) band states as the A (B) sublattice in the SSH chain, the effective Hamiltonian near \(\pm K\) has a form similar to Eq. (4), where \(\Delta_{AB}\equiv(E_{c}-E_{v})/2\) is half of the direct semiconducting gap - a detailed derivation is presented in the Supplemental Material (SM) [29]. The system is thus characterized by a polar Berry phase of the form Eq. (3) following similar analysis in subsection (i). (iii) _Type-III SFEs: Bilayer \(\mathrm{T}_{\mathrm{d}}\)-structure TMDs._--A bilayer \(\mathrm{T}_{\mathrm{d}}\)-structure TMD is formed by two centrosymmetric topological 1T'-monolayers stacked with a relative \(\mathcal{C}_{2z}\) rotation [30; 31]. Due to its extremely low \(C_{s}\) point group symmetry with only one vertical mirror plane \(\sigma_{v}\), the \(\mathrm{T}_{\mathrm{d}}\)-bilayer exhibits out-of-plane polarity and was among the first sliding ferroelectrics discovered [5]. The low-energy physics in both 1T'-monolayer and \(\mathrm{T}_{\mathrm{d}}\)-bilayer involves states near the \(Q\) and \(Q^{\prime}\) points of the BZ where the inter-orbital spin-orbit coupling (SOC) opens up nontrivial band gaps at the topological band crossing points [31; 32; 33; 34; 35]. Using a symmetry-adapted \(\mathbf{k}\cdot\mathbf{p}\) model for \(\mathrm{T}_{\mathrm{d}}\)-bilayer [34; 29; 35], we find that the bilayer near \(Q,Q^{\prime}\) can be described by asymmetrically coupled massive Dirac fermions similar to Eq. (4), with spin valley-dependent Dirac masses \(m_{\xi\sigma}=\xi\sigma m\) generated by the inter-orbital SOC (details in SM [29]). The problem then can be mapped to the two-cell limit of a valley-dependent _spinful_ SSH chain, which in the \(N_{z}\rightarrow\infty\) limit is characterized by a \(\mathbf{d}\)-vector in Eq. (1) with components \(d_{0,\xi\sigma}=\xi\sigma V\), \(d_{x}(k)=2g_{0,-}\cos(ka),d_{y}(k)=-2g_{1,-}\sin(ka)\), \(d_{z,\xi\sigma}(k)=\xi\sigma m+2g_{1,+}\cos(ka)\). Here, the spin-valley-dependent potential \(\xi\sigma V\) arises from the combination of SOC and broken \(\mathcal{I}\)-symmetry in T\({}_{\rm d}\) bilayer and lifts the spin degeneracy in each band as shown schematically in Fig. 2c. The \(g_{0,\pm},g_{1,-}\) terms are even under \(\mathcal{I}\) while the \(g_{1,+}\) term is odd [29]. For illustration we consider the simple limit \(\left|g_{0,-}\right|=\left|g_{1,-}\right|\equiv g\) which enables analytic calculation of the spin-dependent Berry phase and gives, for \(m\gg g_{1,+}\) under realistic settings, \[\gamma_{\xi\sigma}\simeq-\xi\sigma\frac{m^{2}g^{2}\pi}{(m^{2}-4g_{1,+}^{2})^{ 3/2}(m-2\xi\sigma g_{1,+})}. \tag{5}\] Note that \(\gamma_{\xi,+}\simeq-\gamma_{\xi,-}\) at each valley \(\xi\) due to the spin-dependent sign in \(m_{\xi\sigma}\), while for general band filling the imbalance between spin subbands caused by \(V\) (Fig. 2c) implies a partial cancellation between different spin sectors and the net Berry phase remains finite. _Self-consistent real-space formalism._-- Although the Berry phase origin for the three types of SFE materials is revealed through Eqs. 
(3) and (5) in the \(N_{z}\rightarrow\infty\) limit for special points in the 2D BZ, a predictive theory of \(P_{z}\) requires (i) a more careful treatment of the surface bound charge effects in the bilayer limit, and (ii) inclusion of momenta \(\mathbf{k}\) in the entire 2D BZ. We illustrate the surface charge effects by taking AB-stacked honeycomb bilayers as an example, assuming \(\epsilon_{A}>\epsilon_{B}\). In this case, the filled single-electron levels near \(K\)-points consist of a layer-polarized \(\left|\psi_{v,2}\right\rangle\equiv\left|2,B\right\rangle\) state with energy \(\epsilon_{B}\), and another layer-hybridized state \(\left|\psi_{v,1}\right\rangle\equiv w_{1,B}\left|1,B\right\rangle+w_{2,A}\left| 2,A\right\rangle\) with energy \((\epsilon_{A}+\epsilon_{B})/2-\sqrt{g^{2}+\Delta_{AB}^{2}}<\epsilon_{B}\), where \(w_{1,B}=\Delta_{AB}^{2}/(g^{2}+\Delta_{AB}^{2})>w_{2,A}=g^{2}/(g^{2}+\Delta_{ AB}^{2})\) given \(\Delta_{AB}>g\) in realistic settings. Because of the finite \(w_{2,A}\) generated by \(g\), the Wannier center of \(\left|\psi_{v1}\right\rangle\) is displaced from layer 1 and shifted toward layer 2, leading to a local charge imbalance between the layers, _i.e._, the surface bound charges (Fig. 2d). This establishes a potential difference across the two layers with \(\Phi=\Phi_{1}-\Phi_{2}\neq 0\) which can be approximated as \(\Phi=-eP_{z}d/\epsilon\) (\(d\): inter-layer distance, \(\epsilon\): dielectric constant). The complication brought about by \(\Phi\) is two-fold: first, \(\Phi\) breaks the translation symmetry across the two layers, which prevents a direct computation of the Berry phase Eq. (2). This issue can be resolved by the equivalent real-space Wannier function method [17]. Second, \(\Phi\) enters the microscopic Hamiltonian as an inter-layer potential difference which introduces further corrections to \(w_{1,B},w_{2,A}\), causing an implicit dependence of the Wannier centers on \(\Phi\). Physically, the Wannier cen Figure 2: (a) AB-stacked honeycomb bilayer with direct coupling \(g\) between 1B and 2B sites. (b) Asymmetric coupling in 3R-bilayer TMD at \(+K\) occurs only between the conduction band in layer 1 and the valence band in layer 2 with the same effective angular momentum \(m_{z}=0\). (c) Schematic band diagram at \(Q\)- and \(Q^{\prime}\)-valleys for bilayer T\({}_{\rm d}\) TMD. A spin-valley-dependent band splitting \(2V\) leads to imbalance between spin subbands from \(Q\) and \(Q^{\prime}\) valleys. (d) Feedback loop for the self-consistency condition established in Eq. (6). (e-g) \(\mathbf{k}\)-resolved polarization for (e) bilayer SiC, (f) 3R-bilayer MoS\({}_{2}\), (g) T\({}_{\rm d}\)-bilayer WTe\({}_{2}\). Color scales in (e,g) represent polarization in units of polarization quantum \(e/A_{\rm Cell}\), with \(A_{\rm Cell}\) the area of the unit cell. \(|\Psi_{1}|^{2}\) in (f) denotes weight of eigenstates in layer 1. (h) \(P_{z}\) of bilayer WTe\({}_{2}\) as a function of inversion-symmetric \(g_{1,-}\). ter shift corresponding to the nonzero Berry phase induces a charge imbalance between the two layers, which in turn holds back the Wannier center shifting process, thus forming a feedback loop shown in Fig. 2d. To model \(P_{z}\) within a microscopic framework that takes into account both surface charge effects and all the \(\mathbf{k}\)-points in the BZ, we construct realistic tight-binding models for the three types of bilayer SFE materials discussed above. 
This allows us to capture simultaneously the effective physics at \(K\)- and \(Q\)-points and incorporate \(\Phi\) induced by surface bound charges (see SM [29]). The feedback loop in Fig. 2d indicates that \(P_{z}\) must be determined self-consistently via the equation \[P_{z}=-\frac{e}{\mathcal{V}}\sum_{n,\mathbf{k}\in\text{BZ}}\left\langle W_{n\mathbf{k} }(\Phi)|z|W_{n\mathbf{k}}(\Phi)\right\rangle, \tag{6}\] where \(\mathcal{V}\) is the total volume, \(|W_{n\mathbf{k}}(\Phi)\rangle\) denotes the Wannier function of a filled band \(n\) at momentum \(\mathbf{k}\) for a given \(\Phi\) (details in SM [29]). For concreteness, we consider bilayer SiC, 3R bilayer MoS\({}_{2}\) and T\({}_{\text{d}}\)-bilayer WTe\({}_{2}\) as specific examples for the three types of SFE and present the results in Table 1. Excellent agreement is found between the results obtained by self-consitently solving Eq. (6) and literature values. While the exact value of \(P_{z}\) is obtained via the real-space approach, the quantum geometry associated with the \(\mathbf{d}\)-vector continues to play a key role in the bilayer limit and under \(\Phi\neq 0\). This can be seen by the \(\mathbf{k}\)-resolved polarization with self-consistent \(\Phi\) as shown in Fig. 2e-g - contributions to \(P_{z}\) are predominantly concentrated near the \(K\)-points in AB-stacked honeycomb bilayers and 3R-bilayer TMDs, and near the \(Q\)-points in T\({}_{\text{d}}\)-TMDs. This confirms the origin of \(P_{z}\) in bilayer SFE as rooted in the nontrivial Berry phase in Eq. (3)-(5). By contrast the \(\mathbf{d}\)-vector away from \(K\) or \(Q\) is generally pinned close to a particular axis on the Bloch sphere, thus contributing little to the solid angle \(\Omega\) (see SM [29]). _Comparison among SFE materials._-- Following the criteria we established via Eq. (3), the electric polarity is robust when \(\Delta_{AB}\sim t\). In the context of SFEs parameter \(\Delta_{AB}\) is usually given by the intrinsic band gap and \(t\) is the asymmetric inter-layer coupling strength. In the specific case of SiC we consider for AB honeycomb bilayers of type (I), the intrinsic band gap is of order 2 eVs, and \(t\) originates from the strong \(\sigma\)-bond between \(p_{z}\)-orbitals of order 0.5 eV [29; 36]. Thus, SiC is within the \(\Delta_{AB}\sim t\) regime and we expect similar physics in other AB-stacked bilayers with strong \(\sigma\)-bonds. For 3R-bilayer TMDs of type (II), the inter-layer hopping is relatively weaker \(g\sim 0.1\) eV [11; 29; 12] which places the robustness of these polar materials in the intermediate range. The polarity of T\({}_{d}\)-bilayer TMD is the weakest among the three types due to the partial cancellation between \(\gamma_{\xi,+},\gamma_{\xi,-}\), and the fact that the emergence of \(P_{z}\) relies on nonzero \(V\) due to broken inversion. On the other hand, \(\mathcal{I}\)-symmetry breaking is a necessary but insufficient condition for electric polarity, and the role of geometry is essential: if \(\gamma_{\xi\sigma}\) in Eq. (5) is zero, the system should exhibit no polarity even if \(\mathcal{I}\) is broken. We demonstrate this by artificially tuning the strength of the \(\mathcal{I}\)-preserving \(g_{1,-}\)-term, which can modify the Berry phase according to Eq. (5) but keeps the symmetry of the system unchanged. As shown clearly in Fig. 2h, \(P_{z}\) is negligible for \(g_{1,-}=0\) where \(\mathcal{I}\) is already broken, while increases monotonically as a function of \(g_{1,-}\). 
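The feedback loop of Fig. 2d can be illustrated with a deliberately minimal two-level toy: the filled state transfers some weight to layer 2, the transferred weight raises the layer-2 potential energy, and the loop is iterated to a fixed point. This is not the full Wannier-function calculation of Eq. (6); the parameter values and the linear "feedback" constant below are illustrative stand-ins for the electrostatics \(\Phi=-eP_{z}d/\epsilon\).

```python
import numpy as np

def layer2_weight(delta, g, u):
    """Layer-2 weight of the filled (lower) eigenstate of the two-level block
    H = [[0, g], [g, delta + u]] coupling |1,B> and |2,A>; u is the extra
    electrostatic energy of layer 2 generated by the transferred charge."""
    h = np.array([[0.0, g], [g, delta + u]])
    _, vecs = np.linalg.eigh(h)
    return abs(vecs[1, 0]) ** 2      # lower eigenstate, amplitude on layer 2

def self_consistent_transfer(delta, g, feedback, tol=1e-10, max_iter=200):
    """Iterate the feedback loop: charge moved to layer 2 raises its potential
    (u = feedback * w2), which in turn suppresses further transfer."""
    u = 0.0
    for _ in range(max_iter):
        w2 = layer2_weight(delta, g, u)
        u_new = feedback * w2
        if abs(u_new - u) < tol:
            return w2, u_new
        u = 0.5 * (u + u_new)        # simple mixing for stability
    return w2, u

# With delta = 2.0, g = 0.5 and feedback = 5.0 (all in eV), the layer-2 weight
# drops from roughly 5.3% with the feedback switched off to about 4.4% at the
# self-consistent fixed point.
```

Rerunning such a loop with a larger inter-layer coupling transfers more weight to layer 2 and, at this cartoon level, mirrors the monotonic growth of \(P_{z}\) with the \(\mathcal{I}\)-preserving coupling \(g_{1,-}\) shown in Fig. 2h.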
This exemplifies the essential role of Berry phase for understanding SFE. _Conclusions._-- Our considerations uncover the quantum-geometric origin of SFE polarization by establishing its close relation to the geometric property of the \(\mathbf{d}\)-vector characterizing the classic SSH chain. This geometric approach not only unifies the bilayer SFE with the modern theory of polarization, but also establishes a general criterion, _i.e._, intrinsic band gap magnitude comparable to the asymmetric inter-layer coupling, as the key condition for the appearance of robust polarization in bilayer SFE materials. We conclude by noting that the geometric origin of SFE can be probed experimentally through the bulk photovoltaic effect, in which the shift current is known to scale with the inter-band polarization difference [37; 38; 39]. According to our findings of dominant contribution to the Berry phase from the vicinity of the \(K\) and \(Q\) points (Fig. 2), the shift current response is expected to be significant only for optical transitions at \(K\) and \(Q\) where the band edges possess different layer-polarizations (_e.g._, Fig. 2f), which then allows incident photons to pump electrons between the layers. \begin{table} \begin{tabular}{c c c c} Type of SFE & AB-stacked honeycomb bilayer & 3R-bilayer TMD & T\({}_{\text{d}}\)-bilayer TMD \\ \hline Examples & BN, GaN, SiC & MoS\({}_{2}\), MoSe\({}_{2}\), WS\({}_{2}\) & WTe\({}_{2}\), MoTe\({}_{2}\) \\ \hline \(\Delta_{AB}\) in SSH chain & Staggered sublattice potential & Semiconducting gap & Inter-orbital SOC \\ \hline Asymmetric coupling & Direct \(\sigma\)-bond (1B - 2A) & Tunneling between \(|v,1\rangle\), \(|c,2\rangle\) & Inter-orbital tunneling \\ \hline Robustness of polarity & Strong & Intermediate & Weak \\ \hline \(P_{z}\) (obtained from Eq. (6)) & 1.77 \(\mu\)C/cm\({}^{2}\) (SiC) & 0.6 \(\mu\)C/cm\({}^{2}\) (MoS\({}_{2}\)) & 0.03 \(\mu\)C/cm\({}^{2}\) (WTe\({}_{2}\)) \\ \hline \(P_{z}\) (reported in literature) & 1.76 \(\mu\)C/cm\({}^{2}\) (SiC) & 0.6 \(\mu\)C/cm\({}^{2}\) (MoS\({}_{2}\)) & 0.02-0.06 \(\mu\)C/cm\({}^{2}\) (WTe\({}_{2}\)) \\ \end{tabular} \end{table} Table 1: Comparison among different types of SFE materials. Specific examples of type I-III SFEs are represented by bilayer SiC, bilayer 3R-MoS\({}_{2}\) and bilayer T\({}_{\text{d}}\)-WTe\({}_{2}\), respectively. Reported values of \(P_{z}\) (last row) are taken from previous DFT studies [3; 4] and experiments [5; 6; 11]. See Supplemental Material [29] for details of model parameters. _Acknowledgement._ -- The authors thank Dongyang Yang, Jing Liang and Ziliang Ye for helpful discussions and fruitful collaborations which inspired the current work. This work was supported by NSERC, CIFAR and the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program.
2308.00401
VideoPro: A Visual Analytics Approach for Interactive Video Programming
Constructing supervised machine learning models for real-world video analysis requires substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high-dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.
Jianben He, Xingbo Wang, Kam Kwai Wong, Xijie Huang, Changjian Chen, Zixin Chen, Fengjie Wang, Min Zhu, Huamin Qu
2023-08-01T09:28:48Z
http://arxiv.org/abs/2308.00401v1
# VideoPro: A Visual Analytics Approach for

###### Abstract

Constructing supervised machine learning models for real-world video analysis requires substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high-dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose _VideoPro_, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of _VideoPro_ facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.

Interactive machine learning, data programming, video exploration and analysis

* _J. He, X. Wang, KK Wong, X. Huang, Z. Chen, H. Qu are with the Hong Kong University of Science and Technology, Hong Kong, China. X. Wang is the corresponding author. E-mail: (jhebt, wxungege, kkwongar, zhuanghs, zchenst, [email protected]). C. Chen is with Tsinghua University, Beijing, China. Email: [email protected]. F. Wang and Min Zhu are with Sichuan University, Chengdu, China. Email: [email protected] and [email protected]._

Fig. 1: The _VideoPro_ interface consists of three major views. The _Template View_ (**A**) offers descriptive statistics and rich interactions to facilitate multi-faceted exploration and comprehension of labeling templates. The _Labeling View_ (**B**) provides a summary of the nuanced event compositions within the selected template to allow effective template validation and refinement. It also displays retrieved matching videos for efficient examination and at-scale programming. The _Info View_ (**C**) presents comprehensive information regarding data embedding distribution in latent space and the model iteration process.

## 1 Introduction

The growing prevalence of video recordings has opened up opportunities for video analysis in numerous applications. For instance, sports analysts analyze athletic maneuvers from recorded competitions to enhance strategic decision-making [64, 73], while scientists study videotaped experiments to identify behavioral patterns and gather evidence to support their hypotheses [35, 58]. Recently, deep learning models have shown remarkable potential in automatically detecting domain-specific events in videos, significantly improving analysis efficiency over manual video review [20, 63]. However, building such models necessitates abundant labeled data, and the labeling process can be quite time-consuming and challenging, especially for complex video content that needs specialized domain knowledge and expertise [60].

Data programming [80, 38] has emerged as a promising paradigm for reducing manual labeling efforts. By defining labeling functions based on their domain knowledge, users can assign weak-supervision labels to raw data for model training [53]. For example, for text labeling tasks, if a cluster of sentences contains similar harmful words, users can define a labeling function to assign a "toxic" label to the cluster [52]. For images, users can define a set of rules that assemble image segments (_e.g._, head, body) to formulate new visual objects (_e.g._, person) [26].

Nevertheless, compared with text and image data, programming video data is particularly challenging. First, it is demanding to decompose video data into meaningful semantic units for building labeling functions. Videos contain segments of events that involve complex interactions among multiple objects over time. In particular, the temporal context information can largely influence the semantic meaning of the video content. For example, two cooking videos with the same food ingredients but different cooking steps can result in dishes with distinct textures and flavors. Therefore, labeling functions need to model the variations and nuances in temporal relationships among multiple events. However, manually constructing such functions is challenging, given the wide range of events and their complex temporal dependencies. Second, evaluating, refining, and applying labeling functions for high-quality label generation and efficient model training is non-trivial. Multiple factors, including data coverage, model performance, and the semantic meanings of labeling functions, need to be considered before applying them to large unlabeled video datasets. Furthermore, during the iterative programming process, users need to continuously monitor model performance under the impact of different labeling functions and make corresponding refinements leveraging their domain knowledge. Developing an effective tool to facilitate and expedite the programming process with minimal user effort is also challenging.

To solve the above challenges, we introduce _VideoPro_, a visual analytics approach that enables flexible and scalable video data programming. Our target users are Machine Learning (ML) practitioners dealing with video datasets that have insufficient labeled examples. They seek to supplement high-quality data samples to enhance model performance on their target tasks. In this paper, we mainly focus on the video classification task. We also discuss how _VideoPro_ can be extended to support other tasks in Sec. 7. Drawing inspiration from the event segmentation theory [33] in cognitive science, we leverage Computer Vision (CV) techniques to decompose intricate video sequences into a series of human-comprehensible and semantically meaningful events. To address the first challenge, we propose a two-stage template mining algorithm to exploit diverse event sequential patterns as templates for labeling functions. Regarding the second challenge, the _VideoPro_ interface provides carefully designed visualizations and rich interactions, allowing users to efficiently explore, validate, and refine labeling templates based on their domain knowledge.
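To make the idea of event-based labeling functions concrete, the following minimal sketch shows how a weak label could be voted when a video's event sequence contains a given pattern; the event names, pattern, and label are hypothetical placeholders rather than the functions actually used in _VideoPro_.

```python
from typing import List, Optional


def contains_pattern(sequence: List[str], pattern: List[str]) -> bool:
    """Return True if `pattern` occurs as an ordered (not necessarily contiguous) subsequence."""
    it = iter(sequence)
    return all(event in it for event in pattern)


def make_labeling_function(pattern: List[str], label: str):
    """Build a weak labeling function that votes `label` when the pattern matches, else abstains."""
    def labeling_function(event_sequence: List[str]) -> Optional[str]:
        return label if contains_pattern(event_sequence, pattern) else None
    return labeling_function


# Hypothetical event vocabulary and class name; the actual events and labels are task-specific.
lf_disengaged = make_labeling_function(pattern=["look away", "look down"], label="Disengaged")
print(lf_disengaged(["smile", "look away", "look center", "look down"]))  # -> "Disengaged"
print(lf_disengaged(["smile", "look center"]))                            # -> None (abstain)
```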
Users can then apply the labeling functions to video data at scale and make prompt adjustments during the iterative programming process. Our contributions are summarized as follows: * We propose a novel approach that leverages advanced algorithms to exploit diverse event sequential patterns from videos to guide video data programming. * We develop a visual analytic system that provides carefully designed visualizations and rich interactions to facilitate efficient and scalable video programming. * We conduct two case studies and expert interviews to validate the efficiency and effectiveness of the system. ## 2 Related works ### _Interactive Data Labeling_ A surge of research has been proposed to minimize the effort and accelerate the labeling process for supervised ML. These works can be categorized into model-centered and user-centered approaches [22]. Model-centered approaches, exemplified by _Active Learning_ (AL), employ various selection strategies to prioritize the labeling of the most "informative" data samples, thus reducing the burden by focusing on smaller subsets of candidate instances [6]. However, AL limits users to labeling lengthy sequences of recommended instances solely determined by the selection algorithms, causing the final model performance to be heavily influenced by the selection strategies [5]. Visual interactive labeling is a user-centered approach that takes advantage of users' domain expertise and visual perception to guide the selection and labeling process. Various visualization techniques (_e.g._, self-organizing maps [47], dimension reduction techniques [12, 32, 42, 13], and thumbnail visualization [34, 54]) have been employed to cluster and sort similar items for efficient labeling [77]. Recent works have incorporated more model suggestions with visualizations to further enhance labeling efficiency [11, 82, 75]. For example, VINA [59] and AILA [16] enable efficient text document labeling by visually emphasizing important text segments recommended by ML algorithms. These mixed-initiative workflows allow users to understand and steer the models by eliciting human knowledge during the interactive labeling process [31, 29]. Notably, PEAX [36] employs the iterative labeling strategy to train classifiers for searching similar patterns in multivariate time series. Despite the advancements, these approaches still face scalability challenges due to the need for manual verification of data instances one by one. We aim to address this limitation by developing a scalable solution that enables at-scale labeling and programming of video data, facilitating efficient knowledge transfer from a small set of labeled videos to a large set of unlabeled videos. Houque _et al._[26] proposed the visual concept programming for image data, which is the most relevant work to ours. The method decomposes images into human-understandable visual concepts leveraging a pre-trained vision-language model. Users can program these visual concepts to inject their knowledge at scale. However, the system primarily focuses on static spatial relationships between detected objects in images, and cannot easily generalize the resulting heuristics to temporal relationships among multiple events in videos. Furthermore, it relies solely on users to explore and define labeling functions and lacks prompt feedback on the impact of programming on the model performance. To achieve a streamlined and flexible video programming workflow, we first conceptualize videos as event sequences. 
Then we propose a two-stage template mining algorithm to automatically generate labeling templates to be explored, examined, and applied, such that users can inject their knowledge via video programming in a scalable and interpretable manner. Additionally, we offer an interim model evaluation to guide labeling focuses. ### _Visual Event Exploration in Videos_ Depending on the varying processing and target intervals, video visual analytics aims to determine the statuses in frames, detect events from scenes, and generate models for videos [28]. Recent advances in CV techniques have empowered researchers to analyze videos at the frame level (_e.g._, object detection and recognition) and study the detected objects' behaviors and interactions over extended intervals [2]. These behaviors and interactions are often broadly defined as "events" to describe the spatial and temporal dynamics within videos [55]. Many Visual Analytics (VA) systems have been developed to analyze events in videos. Li _et al._[37] derived anomalous events from online exam videos to support efficient proctoring. Similarly, Tang _et al._[61] detected fraudulent events in live-streaming videos with reference to streaming moderation policies. While anomaly detection seeks to identify one anomalous event or instance as evidence, many analytical tasks require a comprehensive and multimodal context for decision-making. As Wang _et al._[66] and Liang _et al._[40] summarized, data in different modalities can dominate, complement, or conflict with each other. These properties have been applied to VA systems that analyze emotion [79, 44], speech [68, 69], and body language [78, 72] in videos. These systems used multimodal and heterogeneous data sources to infer the actual states of events. However, they mainly focused on one event at a time with little consideration for their temporal order, which is crucial for contextual reasoning and gaining higher-level insights. Parry _et al._[50] identified three characteristics of events in videos, _i.e._, _hierarchy_, _importance_, and _state transition_. They have inspired later research to analyze videos through the lens of event sequence understanding. EventAnchor [19] is developed based on the observation that badminot tactics are formulated by individual strokes, which can be detected by CV algorithms. From this observation, a three-level hierarchy (_i.e._, object, event, and context) is proposed and further generalized to sports videos as the object-event-tactic framework [15] to inform the design space of augmented sports videos. As for state transition and importance, Anchorage [71] performed event sequence analysis on customer service videos to study how different states in services affect event satisfaction ratings. However, these works still analyze one video at a time and have low scalability. We aim at the data labeling scenarios, which extrapolate the event knowledge obtained from individual videos to a collection of videos. Over the past decade, the architecture and challenges of video labeling tools have evolved from labeling visual features [29, 18] to labeling accurate event contexts [2]. Given the complexity of temporal information, these event contexts require additional information to assist careful human labeling for reliable knowledge injection. For example, users need consistency checks when coding recorded system usage videos [7] and temporal awareness when analyzing color usage in movies [24]. 
Similar to these video labeling tools, our approach extracts sufficient CV-based features and supports an iterative labeling process. Furthermore, we explore the use of data programming on videos, emphasizing the events and their temporal relations to form more prominent labels. We propose using event sequences to distinguish and retrieve batches of videos with specific sequential patterns of interest.

## 3 Requirement Analysis

Our goal is to develop a visual analytics system that enables efficient user knowledge integration and facilitates high-quality data label generation at scale through interactive video data programming. The initial motivation for this research originated from our collaboration with two companies, aiming to develop high-performance models for real-world applications, including the analysis of customer and student behaviors in service and educational videos. Considering the diverse and complex nature of events to analyze in these domain-specific videos, domain experts need to manually label the video dataset before model training. However, the video labeling process was time-consuming, taking several weeks even for a small-scale dataset of approximately one thousand videos, due to limited expert availability and the substantial workload involved. Therefore, finding an efficient and scalable way to transfer domain knowledge from a small labeled video dataset to a large unlabeled one for high-quality data sample supplementation has been a persistent demand.

We worked closely with five ML experts (**E1**-**E5**, five males; three researchers and two MLOps engineers) to understand the general needs and to derive design requirements. **E1**, **E2**, and **E3** are three researchers with multiple research publications in the areas of CV and interactive ML. **E4** and **E5** are two MLOps engineers from our collaborating company who have an average of five years of experience in developing and deploying ML models. Specifically, **E1** and **E3** are co-authors of this paper. All experts have rich experience training and utilizing ML models for video analysis. They highlighted that despite the availability of many public video datasets, building resilient models tailored to domain-specific tasks still necessitates significant amounts of real-world labeled data. Given the shortage and acquisition difficulty of such labeled data, the experts expressed a desire for a tool that supports scalable knowledge transfer and efficient video programming. The derived four design requirements are summarized as follows:

1. **R1: Decompose videos with meaningful temporal event sequences.** All experts acknowledged the challenging and time-consuming nature of comprehending video datasets due to their large volume and rich temporal and semantic information. They emphasized the importance of presenting videos in a way that humans can readily understand and explore. In particular, the experts mentioned that video contains much redundant and unimportant information; they often rely on key events to digest the entire video content, which also echoes prior research [33, 50] on video understanding. **E1** commented that "condensing lengthy video content into a succinct event sequence enables quick grasp of the video's essence at a glance, without the need to review the entire footage."
2. **R2: Summarize event temporal relationships with templates from multiple facets.** Given the large set of events in the video dataset, all the experts concurred that it is crucial to summarize event temporal relationships in videos with several compact templates and to identify meaningful ones that can serve as labeling functions for video programming. Specifically, a template is a sequence of events shared by several videos, which can potentially help describe the semantics of the labels and define labeling functions for video programming. In addition, the experts expressed interest in exploring the templates from multiple facets, such as data coverage and model performance, to identify meaningful ones. For example, **E1** prioritized templates that yield poor model performance, while **E2** focused on templates that encompass a larger number of unlabeled instances. **E4** showed interest in templates containing instances from a single class, indicating that "such templates may well capture class-specific characteristics."
3. **R3: Support efficient and scalable template-guided video data programming.** The experts expected the system to support interactive validation and refinement of templates to achieve efficient and scalable video programming. They pointed out that comprehending the semantic implications of templates and verifying their correctness is crucial to ensuring high-quality labeling outcomes. Moreover, the system should allow experts to refine or manually compose templates based on their domain knowledge and new insights that emerge during the exploration process. Additionally, the system should automatically retrieve the most relevant videos for programming. This will allow users to apply selected and refined templates to program a large number of videos efficiently, as **E5** commented, "it would save much effort if we could apply the knowledge to a batch of videos simultaneously."
4. **R4: Reveal the effect of programming on model performance.** The experts also expressed a desire to monitor model performance changes throughout the programming process. They suggested that the system should provide visualizations depicting the iterative programming process to improve controllability and transparency. They can thus gain insights into the effectiveness of selected templates and data samples, as well as make corresponding adjustments in the later programming stages. For instance, **E3** said that when observing an unbalanced dataset distribution, he would consider adding more data samples from the minority classes to balance it. Based on the visualized programming process, the experts can also make informed decisions about when to retrain the model and when to stop programming.

## 4 System & Methods

In this section, we first provide an overview of the system framework and workflow. Then we illustrate the methods for video data processing, event extraction, and labeling template mining.

### System Framework

Figure 2 demonstrates the overarching system framework. The input video dataset consists of a small number of videos with ground-truth labels and a substantial amount of unlabeled videos. The _Event extraction_ module (Fig. 2A) first abstracts the input videos as temporal sequences composed of various events (_e.g._, wave hands) that humans can readily understand. Subsequently, in the _Template mining_ module (Fig.
2B), a two-stage template mining algorithm is employed to extract diverse sequential patterns among events (_i.e._, the order of event occurrence) from the collections of output video event sequences from the _Event extraction_ module. In the first stage, the sequential pattern mining algorithm (Fig. 2B-1) extracts sequential patterns, which serve as potential labeling templates for programming. In the second stage, the MinDL algorithm (Fig. 2B-2) further distinguishes and clusters the nuanced sequence variations within a template for further examination and modification. In the _VideoPro_ interface, Users begin by conducting a comprehensive exploration of the generated templates in the _Template View_ from multiple perspectives, including model accuracy and data coverage (Fig. 2C-1). Following the selection of a template of interest, users can then efficiently validate and refine the template (Fig. 2C-2), and subsequently apply the validated and refined template to label videos at scale in the _Labeling View_ (Fig. 2C-3). The labeled instances are then forwarded to the model for retraining. Users can inspect and evaluate the impact of each programming iteration on the model performance in the _Info View_ (Fig. 2C-4) and correspondingly adjust their programming strategy in the subsequent iterative programming process. ### _Data and Event Extraction_ Given input raw videos, state-of-the-art CV algorithms are leveraged to extract pre-defined events, which vary based on domain-specific requirements and expert needs. For instance, in application scenarios focusing on human behaviors, events of interest may include body movements (_e.g._, jump and move right). These movements can be captured through analyzing position and angle changes of body parts based on heuristics and object detection models. Each extracted event is represented as a tuple \((eventType,t\_start,t\_end)\), where \(eventType\) denotes the event type, and \(t\_start\) and \(t\_end\) are the timestamps of the start and end of the event. ### _Template Mining_ Event sequential patterns, including the order and frequency of event occurrence, are crucial for comprehending and comparing video event sequences during programming. Considering the diversity and complexity of event sequential patterns, we adopted a two-stage template mining algorithm (Fig. 2B) to efficiently extract event sequential patterns and characterize the labeling templates. The two-stage template mining algorithm allows for scalable and generalizable analysis of large-scale datasets of varying lengths and diverse event sequential patterns. The first frequent sequential pattern mining algorithm [67] provides a comprehensive dataset overview and avoids generating unwieldy templates that can be challenging for experts to interpret and define. It also allows users to add self-defined constraints on template compositions flexibly to accommodate their needs [81, 83]. The MinDL algorithm [70, 14] in the later stage further summarizes and distinguishes nuanced sequence differences within a template to facilitate detailed validation and refinement. After event extraction, each video can be construed as an event sequence, denoted as an ordered event list \(S=[e_{1},e_{2},...e_{m}]\) where \(e_{i}\) belongs to the event set \(E\). The video dataset as a whole can then be expressed as \(\mathcal{S}=[S_{1},S_{2},...S_{n}]\), where \(n\) signifies the total number of video instances. 
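As a minimal illustration of this representation, the sketch below defines the \((eventType,t\_start,t\_end)\) tuple and orders a video's detected events into its event sequence; the detector output and event names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    event_type: str   # domain-specific event vocabulary, e.g., "smile", "look away"
    t_start: float    # start timestamp in seconds
    t_end: float      # end timestamp in seconds


def to_event_sequence(events: List[Event]) -> List[str]:
    """Order extracted events by start time to obtain the video's event sequence S."""
    return [e.event_type for e in sorted(events, key=lambda e: e.t_start)]


# Hypothetical detector output for one video:
video_events = [Event("look away", 4.2, 6.0), Event("smile", 1.0, 2.5), Event("look down", 7.1, 9.8)]
S = to_event_sequence(video_events)   # ['smile', 'look away', 'look down']
# The dataset is then the list of all per-video sequences: S_all = [S_1, S_2, ..., S_n]
```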
A sequential pattern \(P=[e_{1},e_{2},...e_{|P|}]\) is a subsequence of some \(S\in\mathcal{S}\) if there exist an ordered \(|P|\)-tuple \(m=(m_{1},m_{2},...,m_{|P|})\) such that \(\mathcal{S}[m_{i}]=e_{i}\) for each \(e_{i}\in P\). For example, the sequential pattern \(P=[A,D]\) is a subsequence of \(S=[A,B,D,C,D]\) with two ordered 2-tuples (1,3) and (1,5). A sequential pattern is considered frequent if its occurrence exceeds a manually defined threshold. We first employed the seq2pat algorithm [67] to extract frequent sequential patterns from the video dataset \(\mathcal{S}\), which were then used as labeling templates \(T=[T_{1},T_{2},T_{3}...]\). This algorithm was chosen over other sequential pattern mining techniques due to its scalability and efficiency. It utilizes a multi-valued decision diagram structure [27] to compactly encode video sequences, enabling efficient computation for large volumes of sequences (_e.g._, thousands) in our scenario. Moreover, the algorithm is highly adaptable, allowing for flexible addition and revision of various constraints, such as sequential pattern length and continuity, based on user needs and task requirements. We then implemented the MinDL algorithm [70, 14] to further analyze sequence nuances within a template. This algorithm applies the minimum description length principle [23] to partition video sequence collections within the selected template into clusters and summarizes each cluster with the most "representative" sequential pattern, denoted as sub-template. Events belonging to the selected template are denoted as core events. Events within the sub-template that are not part of the selected template are called focus events, while events outside the sub-template are referred to as context events (Fig. 2B-2). Every individual sequence in the cluster can be restored by editing the sub-template, including adding, deleting, or replacing events. The total description length equals the sum of the sequential pattern length and edit length, and the optimal clustering results are obtained by minimizing the total description length \(L(\mathcal{C})\): \[L(\mathcal{C})=\sum_{(P,G)\in\mathcal{C}}\operatorname{len}(P)+\alpha\sum_{(P,G)\in\mathcal{C}}\sum_{S\in G}\left\|edits(s,P)\right\|+\lambda\left\| \mathcal{C}\right\| \tag{1}\] Here, \(\mathcal{C}\) denotes the collection of video sequences in a template. \(s\) represents the individual video event sequence. The divided sequence clusters are denoted as \(\mathcal{C}=\{(P_{1},G_{1})\,,(P_{2},G_{2})\,,\ldots,(P_{n},G_{n})\}\) where \(P_{i}\) and \(G_{i}\) are the representative sequential pattern and sequence collection of the \(i^{th}\) cluster. The parameters \(\alpha\) and \(\lambda\) respectively control the information loss importance and the number of clusters. Based on our experiment results, we found that setting \(\alpha\) as 0.8 and \(\lambda\) as 0 can yield a satisfactory summary for our dataset. We adopted a similar Locality Sensitive Hashing (LSH) strategy [70, 14] to speed up the computation. We also modified the original algorithm to adapt to our problem. Specifically, the computed representative sequential patterns of all clusters must include the original template for effective understanding and comparison. The MinDL algorithm excels in partitioning sequences into meaningful clusters based on temporal similarity and identifying representative sequential patterns to provide an informative summary. 
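For concreteness, a simplified reading of the two quantities defined above — the subsequence test used in pattern mining and the description-length objective of Eq. (1) — is sketched below; the edit cost is approximated with event-level edit distance, and the toy cluster is illustrative rather than the authors' implementation.

```python
from typing import List, Tuple


def is_subsequence(pattern: List[str], seq: List[str]) -> bool:
    """True if `pattern` occurs in order (not necessarily contiguously) within `seq`."""
    it = iter(seq)
    return all(e in it for e in pattern)


def edit_distance(a: List[str], b: List[str]) -> int:
    """Levenshtein distance between two event sequences (insert/delete/replace events)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]


def description_length(clusters: List[Tuple[List[str], List[List[str]]]],
                       alpha: float = 0.8, lam: float = 0.0) -> float:
    """L(C) = sum(len(P)) + alpha * sum(edit costs) + lambda * |C|, following Eq. (1)."""
    pattern_cost = sum(len(P) for P, _ in clusters)
    edit_cost = sum(edit_distance(s, P) for P, group in clusters for s in group)
    return pattern_cost + alpha * edit_cost + lam * len(clusters)


# Example: one cluster summarized by sub-template ['A', 'F'] with two member sequences.
clusters = [(["A", "F"], [["A", "A", "F"], ["A", "F", "C"]])]
print(is_subsequence(["A", "F"], ["A", "B", "F"]))   # True
print(description_length(clusters))                  # 2 + 0.8 * (1 + 1) = 3.6
```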
This is particularly useful for users to compare and understand different video sequence clusters for further labeling template validation and refinement in our scenarios. ## 5 User Interface The _VideoPro_ interface consists of three coordinated views (Fig. 1) to support flexible and smooth programming experience. In this section, we introduce the visual design of each view and the interactions connecting them in detail. The _VideoPro_ adopted a unified color and event encoding scheme that is displayed at the top of the system interface. In consideration of scalability and generalizability, we use alphabets instead of icons or colors to encode individual events. Fig. 2: The system framework contains three main modules. (A) The _Event Extraction_ module converts input videos from the dataset into event sequences. (B) The _Template Mining_ module distills the event sequential patterns as templates to guide programming. (C) The _VideoPro_ interface supports template exploration, validation and refinement, at-scale labeling, and model evaluation for the iterative programming process. ### Template View The _Template View_ (Fig. 1A) summarizes the frequent and influential labeling templates in an organized table. It facilitates multi-faceted template exploration and comprehension (**R1, R2**). The first column in the _Template View_ records the template name, which indicates the summarized event sequential patterns. The second column uses a stacked bar chart to encode the class distribution of labeled video instances included in the corresponding template. The length of the bar chart encodes the video instance number, while the color encodes the class type. Hovering over the bars of different colors shows each class's exact number of labeled video instances, providing a clear understanding of the class distribution within the template. The bar charts will be updated after each labeling round. Newly labeled instances are visually distinguished from previously labeled ones using the corresponding class color and a check texture. The third and fourth columns respectively display the overall prediction accuracy of labeled video instances and the number of unlabeled instances within the template, which will also be updated after each labeling round. A control panel on the top of the template table offers multiple interaction options, where users can choose to aggregate templates in different ways, including by prefix, by degree (_i.e._, template length), and by set (_i.e._, event collections in template). By default, templates are aggregated by prefix. Users can expand templates for further exploration by clicking the "+" symbol. Users can customize the _Template View_ based on their specific needs by setting frequency and degree threshold to filter templates. They can also sort the templates by multiple predefined metrics, including overall prediction accuracy, unlabeled video instance number, and label purity in ascending or descending order. In addition, users can manually input and search for templates based on their domain knowledge in the search box above the table. ### Labeling View Upon selecting a template in the _Template View_, users can validate and refine the selected template, as well as examine the videos that match the template for scalable labeling in the _Labeling View_ (**R1, R3**). The upper part of the view (Fig. 1B) consists of three parts from left to right: the summary figures, the cluster heatmaps, and the connected Sankey diagrams. The summary figures (Fig. 1B-1 and Fig. 
3A), inspired by the periphery plots [48], provide an overview of the temporal event distributions within the corresponding clusters. The middle stacked line charts depict the aggregated temporal distribution of the sub-template events across the entire video clusters, while the histograms on either side illustrate the frequency of context events occurring before and after the sub-template events. This design enables users to compare the event temporal distribution of sub-templates and observe the differences in contextual events between and within clusters. The middle cluster heatmaps show the temporal distribution of the labeled videos belonging to the clusters. Each row represents an individual video sequence, and each grid represents a fixed time interval (Fig. 1B-2). For example, if one video is 10 seconds long and there are 10 grids, then each grid represents 1 second time interval. To facilitate cross-video temporal comparisons, the time duration of all video sequences is normalized so that they contain the same number of grids. Videos belonging to the same cluster are vertically stacked together, with larger clusters having larger heights. The color of each grid indicates the types of events occurring during the corresponding time interval, including core events from the selected template, the focus events in the sub-template, and other context events. Users can hover over the grid to inspect the specific event. Furthermore, a Sankey diagram-based design (Fig. 1B-(3-4)) is adopted to visualize the label distribution across different clusters. The colored bar at the end of each video sequence indicates its label class. Therefore, the height of the colored bars at the end of each cluster (Fig. 1B-3) reflects the number of video instances belonging to the corresponding class in the cluster. The rightmost colored rectangles (Fig. 1B-4) represent corresponding classes and are linked with their contained video instances (_i.e._, the colored bars) through flows of different widths. The width of the flows equals the bar height, thereby encoding the total number of video instances for each class. Hovering over a rectangle will highlight all associated flows. Additionally, users can click on each rectangle to stack videos of the same class together for efficient comparison. Additionally, users can select a group of videos by clicking on the corresponding colored bar. Then the original video keyframe sequences of the selected group will be displayed below. **Alternative design** Another candidate design based on the icicle plot (Fig. 3C) was considered to visualize the sub-templates. By accumulating all event sequences in the sub-template as one, the bars' height encodes the event occurrence in order. These events are aligned by the selected template, with the before-and-after events on the sides. However, the design does not scale when the sequences are long (_i.e._, too many layers on both sides). It could also mislead users that the left and right sub-sequences belong to some actual sequences if they share the same horizontal positions. For example, the CABC in the middle of Fig. 3C might not exist in the cluster. To fix these critical flaws, we adopt the current design that shows the overview and details separately. The lower part of the _Labeling View_ presents the original video content to facilitate quick examination and at-scale labeling (Fig. 1B-(5-6)). 
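As a small illustration of the time normalization behind the cluster heatmaps described above, the sketch below bins a video's events into a fixed number of equal-duration grids; the grid count and event codes are illustrative assumptions.

```python
from typing import List, Optional, Tuple


def bin_events(events: List[Tuple[str, float, float]], duration: float,
               n_grids: int = 10) -> List[Optional[str]]:
    """Map (event_type, t_start, t_end) tuples onto n_grids equal time intervals."""
    grids: List[Optional[str]] = [None] * n_grids
    for event_type, t_start, t_end in events:
        lo = int(t_start / duration * n_grids)
        hi = int(min(t_end, duration - 1e-9) / duration * n_grids)
        for g in range(lo, hi + 1):
            grids[g] = event_type   # later events overwrite earlier ones in the same grid
    return grids


# A 10-second video rendered as 10 one-second grids:
row = bin_events([("A", 1.0, 2.4), ("F", 6.0, 8.5)], duration=10.0)
# -> [None, 'A', 'A', None, None, None, 'F', 'F', 'F', None]
```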
The lists of labeled and unlabeled videos are displayed on the left and right sides respectively, enabling straightforward comparison. Unlabeled videos are ranked based on their similarities to labeled videos by default. The similarity (_Sim\({}_{total}\)_) between an unlabeled video and a labeled video is modeled by a linear combination of similarities of both event sequence (_Sim\({}_{k}\)_) and video embedding (_Sim\({}_{v}\)_): \(Sim_{total}=w\cdot Sim_{E}+(1-w)\cdot Sim_{V}\). The \(Sim_{E}\) is measured using editing distance to compare the discrete event sequences of the two videos, while _Sim\({}_{V}\)_ is measured using cosine similarity to compare their video embeddings. The weight factor \(w\) balances the assessment of patterns of interest (_Sim\({}_{E}\)_) and overall video visual similarities (_Sim\({}_{V}\)_). Users can adjust the similarity slider to control video similarity, and retrieve similar unlabeled videos by selecting corresponding labeled videos and clicking the retrieval button. In the video list, each row represents a single video. Each event within the video sequence is succinctly summarized using the extracted keyframe. The position of the keyframe on the horizontal timeline encodes the event's occurrence time, while the border color indicates the event type (core, focus, or context event). Users can hover over the keyframe to browse the complete frame sequences of the event. They can also click on the row to play the original videos for detailed inspection and bookmarking. This design allows efficient video content digestion and intuitive comparison of the event temporal distribution. Users can apply a label to multiple selected videos at once by checking corresponding selection boxes in an efficient and user-friendly manner. Users can also check the labeling history and resolve labeling conflicts in the upper-left labeling history panel. ### Info View The _Info View_ (Fig. 1C) provides comprehensive information about data embedding distribution and model iterations (**R4**). The _Projection_ design (Fig. 1C-1) provides an overview of data instances by displaying their label status and latent space similarity. High-dimensional latent embeddings are projected onto a 2D plane using the UMAP algorithm [45], resulting in data instances with similar embeddings positioned close to each other. Labeled and unlabeled data instances are differentiated using two distinct colors. Users can select to view all data instances or focus on partial(_i.e._, labeled or unlabeled) data instances from the top menu. A heatmap is added in the background to Fig. 3: The design for event sequential pattern summarization. (**A**) our current design based on the periphery plots [48]. (**B**) an illustration of the original event sequence. (**C**) an icicle plot alternative design. encode the prediction error of data instances, with the redder shades indicating higher prediction errors. The _Model Iteration_ part (Fig. 1C-2) serves to update users about the impact of each iteration of programming on the model training progress. It includes an overall model accuracy line chart and a confusion matrix for model performance evaluation. The line chart shows how overall model accuracy changes with the number of labeled instances. The x-axis indicates the number of labeled instances while the y-axis indicates model accuracy. To ensure computation efficiency, retraining occurs when the number of newly labeled instances reaches the batch threshold, and the line chart will be updated accordingly. 
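Returning briefly to the retrieval ranking defined in the _Labeling View_ above, a plain sketch of the \(Sim_{total}\) combination might look as follows; the weight, toy sequences, and embeddings are illustrative placeholders rather than the system's actual values, and the event-sequence similarity is approximated here with a matching ratio.

```python
from difflib import SequenceMatcher
from typing import List, Sequence
import math


def event_similarity(a: List[str], b: List[str]) -> float:
    """Sim_E: similarity of two discrete event sequences (edit-distance-style matching ratio)."""
    return SequenceMatcher(None, a, b).ratio()


def embedding_similarity(u: Sequence[float], v: Sequence[float]) -> float:
    """Sim_V: cosine similarity between two video embeddings."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0


def total_similarity(seq_a, seq_b, emb_a, emb_b, w: float = 0.5) -> float:
    """Sim_total = w * Sim_E + (1 - w) * Sim_V, used to rank unlabeled candidates."""
    return w * event_similarity(seq_a, seq_b) + (1 - w) * embedding_similarity(emb_a, emb_b)


# Rank hypothetical unlabeled candidates against one labeled video:
labeled = (["A", "F", "A"], [0.1, 0.9, 0.3])
candidates = [(["A", "F"], [0.2, 0.8, 0.4]), (["C", "G"], [0.9, 0.1, 0.0])]
ranked = sorted(candidates, reverse=True,
                key=lambda c: total_similarity(labeled[0], c[0], labeled[1], c[1]))
```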
The confusion matrix, color-coded with a sequential colormap, shows the proportion of correctly classified video instances per class. The rows and columns represent ground truth classes and predicted classes respectively. Users can analyze classifier performance across classes, guiding template selection and data supplementation in subsequent programming iterations. ### Cross-view Interactions The _VideoPro_ system offers diverse interactions for seamless coordination of different views with on-demand access to details. **Clicking** Users can double-click on a specific template to inspect labeled and unlabeled video instances belonging to the template in the _Labeling View_ and highlight them in the _Info View_ projection plane. The reset buttons can be used to undo any operations. **Lasso and zooming** Users can leverage the lasso and zoom interactions in the _Info View_ projection to inspect and select instance groups of interest. The corresponding templates will be computed and updated in the _Template View_. ## 5 Evaluation In this section, we demonstrate the efficiency and effectiveness of our system through two case studies and domain expert feedback. The first case study is conducted on a real-world online education video dataset provided by our collaborated speaking training company. This dataset was used to build a robust classification model for assessing students' engagement levels in online classes, as no related public datasets or models were available. The second case study is performed on the UCF101 dataset [57], a representative public action recognition dataset, for the action classification task. The primary goal of these two case studies is to facilitate experts in efficiently supplementing high-quality data samples using _VideoPro_, achieving satisfactory model performance with minimal effort. ### Case One: Engagement Classification We invited expert **E1** to conduct the case study. As a member of the collaborated project, **E1** has been responsible for developing a classification model on this dataset and involved in the prototype design of our system. He thus has a good understanding of the task, dataset, workflow, and system design. **Dataset** The whole video dataset contains 5,788 videos in total, including 1,774 videos with four-class ground-truth labels and 4,014 videos without labels. For the labeled videos, the label falls into four classes: _Highly Disengaged (HD)_, _Disengaged (DE)_, _Engaged (EN)_, and _Highly Engaged (HE)_. This classification scheme is established according to the experts' requirements and previous work practices [62, 3]. The class distribution of the labeled videos is as follows: _HD_ (8.68%), _DE_ (23.96%), _EN_ (52.03%), and _HE_ (15.33%). Following MS COCO [41], we further split the videos with ground-truth labels at the proportion around 2:1 into the training and test sets. In the splitting process, we maintain the label distribution of four classes to be the same in both the training and testing sets. The training set contains 1,182 videos, and the test set contains 592 videos. **Initial Setting** To understand typical events for assessing student engagement levels, we interviewed three experienced teachers from our collaborating company. These teachers, with rich domain knowledge, are also responsible for labeling a small subset of the dataset. 
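For the dataset split described above (a roughly 2:1 training/test split that preserves the four-class proportions), a standard stratified split suffices; the sketch below uses scikit-learn, with per-class counts approximated from the reported percentages rather than the exact figures.

```python
from sklearn.model_selection import train_test_split

# Placeholder labeled set: video ids with one of the four engagement classes per video.
videos = [f"video_{i:04d}" for i in range(1774)]
labels = ["HD"] * 154 + ["DE"] * 425 + ["EN"] * 923 + ["HE"] * 272  # ~8.7/24.0/52.0/15.3%

train_ids, test_ids, train_y, test_y = train_test_split(
    videos, labels,
    test_size=1 / 3,      # roughly 2:1 train/test -> about 1,182 / 592 videos
    stratify=labels,      # keep the HD/DE/EN/HE proportions identical in both splits
    random_state=42,
)
```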
Ultimately, the consolidated event set \(E\) consisted of seven types of events: active hand movement, look away, look center, smile, look down, move away from the screen, and move close to the screen. We leveraged several state-of-the-art CV techniques [4, 8, 46] to extract these representative events from videos. Initially, we trained a baseline classifier that integrated spatiotemporal features extracted by ISD [10], a state-of-the-art pre-trained model, and event features represented by one-hot encoding. We use the accuracy for each class and the overall F1 score to evaluate the model performance. It achieved an overall F1 score of 66.78% on the test set, where its performance is recorded in the first row of Tab. 1. **Iteration One: Distinguish between _DE_ class and _EN_ class** After the initial round of training, **E1** observed that the model performance was unsatisfactory in distinguishing between the _DE_ class and _EN_ class. He suspected that the model struggled to effectively differentiate some videos within these two classes that share similar event sequential patterns. To address this issue, **E1** aimed to identify common templates with low accuracy that were shared between the _DE_ class and _EN_ class. While examining the projection in the _Info View_ (Fig. 1C-1), **E1** identified a group of video embeddings highlighted with a red-colored background, indicating high errors. To further investigate these videos, **E1** utilized the lasso tool to select them, and the corresponding templates that characterized these videos were shown in the _Template View_ (**R2**). **E1** observed that the template "AF" exclusively contained videos from the _DE_ and _EN_ classes, as indicated by the two-color distribution chart (Fig. 1A-1). This template also contained a relatively large number of labeled and unlabeled videos. Therefore, he decided to further investigate the "AF" template by double-clicking on it to examine its contained videos in the _Labeling View_. Upon observing the flows between these clusters and their corresponding classes in the _Labeling View_, **E1** found that two clusters exclusively contained videos from the _DE_ class (Fig. 4A-1) and _EN_ class (Fig. 4A-2) respectively. **E1** also inspected the representative sequential patterns and event distributions in the left summary figure to better understand the relationship between the sequence orders within the same template and class results (**R3**). The _DE_ cluster was characterized by the sequence "AAFA", while the _EN_ cluster exhibited the sequential pattern "FGAF". Through analyzing the event distribution histogram, **E1** also noticed that the _DE_ cluster had a higher occurrence of events involving moving far away and looking away, while the _EN_ cluster had a higher occurrence of events such as looking center and moving closer to the screen. After examining the labeled videos by clicking on the colored bars (Fig. 4A-(3-4)), **E1** observed that participants classified as _DE_ frequently looked down, appeared preoccupied with their own work, and only occasionally directed their attention to the center of the screen. In contrast, participants classified as _EN_ listened attentively with their eyes focused on the center of the screen most of the time, looked down for a short time, and exhibited positive behaviors like smiling. These observations led **E1** to conclude that these two summarized sequential patterns effectively characterized the _DE_ and _EN_ classes. 
Consequently, **E1** felt confident in using these two refined templates for data supplementation to highlight the differences between the _DE_ and _EN_ classes. By clicking on the Retrieve button (Fig. 4B), **E1** obtained the unlabeled Figure 4: The _Label View_ in the case one.(A) the sub-templates within the selected “AF” template. (B) the corresponding labeled videos and retrieved unlabeled videos when clicking on the green colored bars. videos exhibiting similar patterns for efficient labeling (**R3**). Through browsing the keyframes and their border colors, **E1** quickly identified the videos that closely matched the two representative patterns (**R1**). He then selected these videos by checking the selection boxes and applying the corresponding class label to them all at once (Fig. 4C). **E1** then initiated model retraining in the _Info View_. The results of this iteration are shown in the second row of Tab. I. Compared with the initial baseline, the performance of the _DE_ and _EN_ classes improved +3.86% and +2.29% respectively. This result indicated the effective utilization of the acquired knowledge about the distinction between classes in supervising model training, which was achieved by supplementing high-quality labels using refined templates. Meanwhile, **E1** noticed that the performance of the _HD_ and _HE_ classes significantly dropped. Considering the absence of supervision for the other two classes in this round, he thought this outcome was reasonable. As a result, **E1** planned to augment the model's understanding of the other two classes in the next iteration (**R4**). **Iteration Two: Balance dataset distribution** After examining the labeled data distributions, **E1** noticed an imbalance across the four classes, where the _HD_ and _HE_ classes had only a few video samples. To improve the model's robustness and stability, **E1** decided to use _VideoPro_ to supplement more samples from the two minority classes. To identify representative templates for quick labeling of the _HE_ class (**R2**), **E1** sorted the templates in the _Template View_ based on the descending purity value by clicking the distribution column. The top-ranked template "CGF" displayed a red distribution bar, indicating that all labeled videos in the template belonged to the _HE_ class (Fig. 5A). After randomly selecting labeled videos and browsing the original videos in the _Labeling View_, **E1** observed that this behavior sequence frequently occurred when students were deeply engaged in the teaching content. They tended to approach the screen, respond with smiles, and maintain focused attention on the screen (**R1**). As the unlabeled videos were ranked based on similarity, **E1** directly checked the last retrieved unlabeled video by hovering over its keyframes for quick validation. He found that the scenarios in this video aligned well with the labeled ones, which increased his confidence in the template. Therefore, **E1** labeled these retrieved unlabeled videos as _HE_ class at scale (**R3**). In the sorted templates based on purity value, **E1** failed to find a template with a pure blue bar in the distribution column, indicating the absence of exclusively labeled videos for the _HD_ class. This observation reinforced the need to supplement more data from this class to achieve dataset balance. 
Drawing on his general knowledge and previous discussions with domain experts, **E1** recalled that the pattern of moving away from the screen and then consistently looking away is often associated with high disengagement. Thus he searched the corresponding template "BE" directly in the search box in the _Template View_ (Fig. 5B). As a result, the template "BE" appeared at the top with a distribution bar that has a large portion of blue. Most of the retrieved unlabeled videos had a good match where he applied the _HD_ label in a similar manner. After adding more samples to these two minority classes following a similar process, **E1** proceeded to send the newly supplemented samples for model retraining. The outcomes of the second round of iteration are shown in the third row of Tab. I. It is evident that following the introduction of additional knowledge and supplementation of data for the two under-represented classes, the model exhibited significant improvements in its performance in these classes (+7.24% and +5.65% for _HD_ class and _HE_ class respectively). Moreover, the overall model performance has also been improved. After 10 programming iterations, **E1** noticed that the overall accuracy ceased to increase and instead stabilized at around 75.4%. This result also satisfied the project objective of achieving an overall classification accuracy above 70%. The final overall accuracy and each class all improved compared with the initial baseline. Consequently, **E1** is satisfied with this programming result and decided to stop programming (**R4**). **Post Analysis** Following the case study, a quantitative experiment was conducted to compare the labeling efficiency of _VideoPro_ with an active learning-based labeling baseline approach. The baseline approach utilized the uncertainty-based strategy, a widely adopted technique in active learning [56], that selects the most uncertain videos for labeling at each time. The experiment results are summarized in Tab. II. It showed that the active learning-based approach required labeling 2081 video samples to achieve an overall accuracy of 75.38%. In contrast, _VideoPro_ enabled the expert to label 10 iterations and 452 samples in total, achieving an overall accuracy of 75.43%. Furthermore, we compared the time cost of the two approaches. The time cost for the baseline approach was estimated based on the average time needed for labeling a single video by domain experts (i.e., teachers) using the labeling tool provided by the collaborated company. The average labeling time was half a minute per video as recorded, resulting in a total time cost of 17.3 hours. In comparison, _VideoPro_ recorded the total operation time, where the expert took 1 hour to finish all the labeling. The experiment results show that _VideoPro_ incurs lower labeling and time cost than the baseline approach to attain comparable levels of accuracy. It demonstrates that _VideoPro_ significantly improves labeling efficiency. (7.35%), Basketball Shooting (10.29%), High Jump (11.76%), Javelin Throw (14.71%), Tennis Swing (7.35%), PullUps (4.41%), PushUps (11.76%), Lanes (8.82%), Body Weight Squats (13.24%)._ The number in the bracket indicates the corresponding class distribution in the dataset. To identify the fine-grained semantic events associated with these activities, we conducted interviews with **E6** and his colleagues, and extensively reviewed relevant literature in the field. 
Drawing from experts' insights and borrowing concepts from relevant sports biomechanics research [1], we defined a set of fine-grained semantic events that include arm flexion (A), arm extension (B), arm abduction (C), arm adduction (D), leg flexion (E), leg extension (F), leg abduction (G), and leg adduction (H), body elevation (I), and body depression (J). To detect these events, we first adopted the advanced pose detection model [9] for body keypoint and part detection. We then utilized rule-based heuristics [25] to detect these events. Specifically, we calculated the displacement of body parts along and perpendicular to the body's midline to detect abduction/adduction and elevation/depression events. Additionally, we measure angle changes between body parts to detect flexion/extension events. We followed the original training-test split of the UCF101 dataset on the selected 10 classes. The constructed 10-class dataset thus contains 1,016 videos in total, with 733 videos in the training set and 283 videos in the test set. We further split the training set into the labeled dataset with 68 videos and the unlabeled dataset with 665 videos to simulate the scenarios with very few labeled videos at the beginning. The label distribution in the original dataset is preserved during the splitting process. We adopt the state-of-the-art uniFormer backbone [39] to train a baseline classification model on the constructed labeled dataset (with 68 videos). It achieves an overall F1 score of 82.69% on the test set. **Programming Process** After analyzing the performance of the baseline model on the test set, **E6** observed that the model performed poorly on the _High Jump_ and _Javelin Throw_ classes. The confusion matrix further indicated the model's inability to distinguish between these two classes. Therefore, **E6** decided to supplement more labels for these two classes. Looking at the _Template View_ sorted by prediction accuracy, **E6** discovered the template "EEFFEF" (Fig. 6A), which indicates repetitive leg flexion and extension movements, contained a large portion of videos from these two classes (**R2**). Drawing from domain knowledge, **E6** pointed out the distinct stages within the _High Jump_ and _Javelin Throw_ activities. The _High Jump_ can be roughly divided into approach, takeoff, and landing stages, while the _Javelin Throw_ activity involves stages such as approach, windup, and release. These two actions shared common initial event sequences involving repetitive leg movements to generate momentum during the approach stage. Recognizing the potential value of this template in representing these two classes, **E6** proceeded to explore its contained sub-templates in the _Labeling View_ by clicking on the template. In the _Labeling View_, three sub-templates were identified. By observing the flow width and color, **E6** noticed that the sub-template "EEFFEFIJ" (Fig. 6B-1) predominantly contained videos from the _High Jump_ class, while the sub-template "EEFFEFBD" (Fig. 6B-2) mainly included videos from the _Javelin Throw_ class (**R3**). This finding aligned with **E6**'s knowledge, as the event sequences following the approach stage captured the distinguishing characteristics of these two classes. The _High Jump_ class exhibited a body elevation event for takeoff, followed by a body depression event for landing. On the other hand, the _Javelin Throw_ action involved arm extension and adduction for the delivery and then release of the javelin. 
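As a side note on the rule-based event heuristics used for this dataset, the following minimal sketch illustrates how a flexion event could be flagged from per-frame pose keypoints by tracking joint-angle changes; the joint names, threshold, and frame format are assumptions for illustration, not the exact rules used in the paper.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def joint_angle(a: Point, b: Point, c: Point) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2) + 1e-9)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))


def detect_arm_flexion(frames: List[Dict[str, Point]], delta: float = 25.0) -> List[int]:
    """Flag frames where the elbow angle decreases by more than `delta` degrees."""
    angles = [joint_angle(f["shoulder"], f["elbow"], f["wrist"]) for f in frames]
    return [i for i in range(1, len(angles)) if angles[i - 1] - angles[i] > delta]


# Two hypothetical frames: the elbow angle drops from ~180 to ~90 degrees -> flexion at frame 1.
frames = [
    {"shoulder": (0.0, 0.0), "elbow": (1.0, 0.0), "wrist": (2.0, 0.0)},
    {"shoulder": (0.0, 0.0), "elbow": (1.0, 0.0), "wrist": (1.0, 1.0)},
]
print(detect_arm_flexion(frames))  # [1]
```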
**E6** then clicked on the colored bars at the end of the two clusters respectively to retrieve similar unlabeled videos for labeling from the below video list. **E6** randomly checked several videos (Fig. 7), observing the event sequence through the frame border color and hovering on the keyframes to unfold the frame sequences for quick video browsing (**R1**). After selecting the matched videos, he applied the labels to them at once (**R3**). After supplementing more samples for the _High Jump_ class and _Javelin Throw_ class, **E6** initiated model retraining, resulting in noticeable improvements in performance for these two classes (+1.25% for the _High Jump_ class and +3.47% for the _Javelin Throw_ class). **E6** proceeded to program other classes with relatively low performance, such as_Tennis Swing_ and _Lunges_. After 8 iterations with 304 videos being labeled, the model finally achieved an overall F1 score of 93.98% on the test set (**R4**). **Post Analysis** We followed a similar practice in case one to quantitatively evaluate the labeling efficiency. On this public dataset, the active learning-based labeling approach requires labeling 496 samples to achieve an overall F1 score of 93.92%. For time cost estimation, we referred to Ma _et al._[43], which reported an average of 45s for video-level action labels in a 60s video. Therefore, the time cost for the active learning-based approach is computed as 0.75 * (total time length of 496 labeled samples), which is 0.8h. In contrast, **E6** finished the whole programming in 0.5h. The results are listed in Tab. 2, which further validates the efficiency of _VideoPro_. ### Expert Interviews We further conducted semi-structured individual interviews with three ML practitioners (**P1-P3**) from the project development team, who have more than four years of experience in developing and operating ML models for video applications. While familiar with the project context, non of them had known or tried the system before the interview. The interviews began with an introduction to the research background and system designs. Then we demonstrated system workflow and usage with specific examples [74]. After the demonstration, we asked the practitioners to freely explore and try the system for programming on the real dataset, and express their thoughts, findings and suggestions in a think-aloud protocol. We also collected feedback from **E6** during the second case study. The feedback collected was categorized into the following three perspectives: **System workflow** All participants confirmed the effectiveness of using human-understandable events to represent video data, which is "_intuitive and useful to understand video content_". They also appreciated the idea of extracting event sequential patterns as programming guide templates. **E6** commented, "_This tool is pretty helpful for labeling and analyzing sports tactics, as the event order directly determines the tactic type_." Furthermore, the participants valued the tool's ability to efficiently search and retrieve videos from large-scale video datasets through flexible event composition and assembly. They emphasized Fig. 6: (A) The template shared by _High Jump_ class and _Javelin Throw_ class. (B) The sub-templates in the _Labeling View_ with event temporal distribution of corresponding labeled videos. Fig. 7: Two examples of retrieved videos for _High Jump_ class and _Javelin Throw_ class. (A) the video labeled as _High Jump_. (B) the video labeled as _Javelin Throw_. 
that this is particularly "_important and needed_" in real-world work scenarios, which allows them to retrieve video data samples at scale for model building and steering with minimal effort and cost. **Visual designs and interactions** Overall, the practitioners reported the system is "_easy to use_" with intuitive visual designs and smooth interactions. The _Labeling View_ is favored by all participants, where they can "_grasp the video content by glancing at the keyframes_". **P2** appreciated the sorting of videos based on similarity, making it easy to identify the most matched videos and apply labels in patches conveniently. The design of _Template View_ is also well-received, especially for its rich interactions, enabling "_efficient template exploration based on different metrics_." **P3** expressed a liking for the projection design in the _Info View_, with the intuitive background error heatmap and useful lasso interaction for selecting video groups of interest. Nevertheless, the participants found the _Labeling View_ design somewhat complex, as it contained a lot of information, requiring some time for them to grasp. **Suggestions for improvement PI** expressed the need to save the history of all selected templates for future reference. He also proposed that more events could be included to provide a more exhaustive summarization of the video content, while acknowledging the importance of focusing on critical ones. **P2** suggested that the system could provide real-time operation guidance and suggestions to reduce the learning curve. He also mentioned that more advanced strategies are needed to resolve label conflicts, which are currently being handled manually. **P3** recommended that the system should support adjusting more parameters such as learning rate and batch size on the interface. **E6** suggested the use of semantic meaningful icons or abbreviations to enhance the intuitive understanding of events. ## 7 Discussion During the development process of _VideoPro_, we have gained insights, identified limitations, and got inspirations for future exploration. **Data-centric approach for video data programming with labeling templates** Data programming adopts a data-centric perspective to enhance data quality at scale, enabling model steering under the supervision of users' domain knowledge. While previous works have focused on temporal pattern labeling [36] or static spatial relationships in images [26], they fall short in handling the rich spatial and temporal semantic information present in videos. Our work overcomes these limitations by utilizing semantic-rich events to compose labeling functions. We employ compact labeling templates to summarize diverse events and their intricate temporal relationships, helping users to understand video data characteristics and identify semantic meaningful ones for labeling target data classes. This "video-event-template" abstraction process effectively elicits users' high-level domain knowledge for data labeling and model training. Currently, our templates mainly consider the semantics of event types and temporal orders. Future works can consider more complex semantics involving event characteristics like duration and object interactions. Meanwhile, when exploring different templates to distill meanings of event compositions, there often exists a trade-off between coverage and meaningfulness. Some templates may cover a large number of instances but introduce some noisy and meaningless ones, requiring greater effort for validation. 
On the contrary, some templates can accurately reflect the semantic meaning of a target data class but cover only a few instances. Future systems can also consider adaptive designs of templates that strike a balance between coverage and meaningfulness.

**System generalizability** The proposed generic labeling workflow is capable of accommodating various tasks beyond classification with minor modifications. For example, for temporal action localization (which seeks to identify the interval of a specific activity in untrimmed videos), _VideoPro_ can match the activity with its representative event sequences to provide a rough estimate of time spans. Then, the _Labeling View_ can be revised to enable zooming in on the fine-grained components of the start and end events for precise start and end timestamp annotation. For tasks such as video retrieval [76] and generation [51], the template mining algorithm and the _Template View_ design can be directly employed to define and compose sequential event relationships flexibly. Moreover, considering the fundamental role of event sequences in video data, _VideoPro_ is transferable to a wide range of applications and video types. For example, domains such as social science and behavior psychology share similar requirements for building models to analyze user behaviors and interactions in recorded experiment videos. The system also readily supports needs such as sports tactics analysis, tutorial video understanding, and surgical video comparison. Furthermore, events can be compiled in different ways to create new classes flexibly for new use cases. For instance, altering the cooking order of ingredients can build templates for new recipes.

**Event extraction effectiveness** Defining atomic events properly relies on domain knowledge, task specifics, and suitable algorithms, considering the hierarchical nature of events. For instance, a cooking video for "preparing a salad" involves atomic events such as chopping vegetables, tossing, and dressing the salad. These high-level events can either be detected by action recognition [17], or further decomposed as hand movements and object manipulations that can be deduced through heuristics [30]. Visual-language models have also emerged as a powerful tool for tagging semantic concepts [65, 21] (_e.g._, objects, actions, and scenes), offering new possibilities for capturing complex higher-level semantic events. Apart from employing more powerful models, visualizations should be designed for summarizing high-level event semantics and facilitating intuitive reviews of detailed video frames. High-level semantics representations (_e.g._, object-scene graphs [49]) can also benefit from novel visual designs to analyze event relationships. Considering the impact of imperfect algorithms, camera movements, and view occlusions on event detection, incorporating more robust algorithms and uncertainty visualization techniques can enhance the system's resilience and reliability.

**System scalability** In terms of visual design, when the number of classes and event categories reaches the tens or hundreds, it will lead to a long template list and visual clutter in the distribution bar charts in the _Template View_. To address this, _VideoPro_ offers sorting and filtering options based on multiple thresholds and metrics, allowing users to quickly explore and locate templates of interest. Similarly, the Sankey diagram design in the _Labeling View_ may become visually cluttered with a large number of classes.
However, since experts often need to compare only a few classes at the same time, the _Labeling View_ can satisfy their requirements. In the future, we plan to implement multi-level grouping strategies, together with hierarchical visualization and interaction techniques, to further enhance visual scalability. For example, we can group classes and events based on taxonomies, themes, or model performance. Then users can explore and program different video data subsets that contain a few categories of interest.

**Limitations and future works** Currently, _VideoPro_ is designed for discrete events and could face challenges with datasets featuring longer, overlapping events. In such cases, events could be weighted based on their significance or aggregated into compounds (_e.g._, \(A(AB)B\to ACB\)) to retain sequential patterns. However, these adjustments may complicate template mining, making additional research necessary for overlapping events [79]. Additionally, the system currently supports video programming through visual channels only, while some video analyses could benefit from incorporating concurrent audio and speech information [71]. Therefore, efficient methods of encoding and complementing multimodal information [66] for programming are worth exploring. Furthermore, the current system is designed for single-person operation. To enable collaborative programming, it is important to explore methods for efficiently resolving label conflicts and maintaining consistent labeling quality in future work.

## 8 Conclusion

This paper presents _VideoPro_, a novel visual analytics approach that extracts and externalizes video event composition knowledge to streamline video data programming. The two case studies and expert interviews conducted validate the system's efficiency and effectiveness for video data supplementation and model steering. Meanwhile, the development and evaluation of _VideoPro_ reveal several promising future research directions, including integrating more complex event attributes, balancing template coverage and meaningfulness, and exploring multi-modal and collaborative video programming techniques.

## Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive and insightful comments. This work is partially supported by Hong Kong ITF grant PRP/001/21FX.
2303.02164
Eryn : A multi-purpose sampler for Bayesian inference
In recent years, methods for Bayesian inference have been widely used in many different problems in physics where detection and characterization are necessary. Data analysis in gravitational-wave astronomy is a prime example of such a case. Bayesian inference has been very successful because this technique provides a representation of the parameters as a posterior probability distribution, with uncertainties informed by the precision of the experimental measurements. During the last couple of decades, many specific advances have been proposed and employed in order to solve a large variety of different problems. In this work, we present a Markov Chain Monte Carlo (MCMC) algorithm that integrates many of those concepts into a single MCMC package. For this purpose, we have built {\tt Eryn}, a user-friendly and multipurpose toolbox for Bayesian inference, which can be utilized for solving parameter estimation and model selection problems, ranging from simple inference questions, to those with large-scale model variation requiring trans-dimensional MCMC methods, like the LISA global fit problem. In this paper, we describe this sampler package and illustrate its capabilities on a variety of use cases.
Nikolaos Karnesis, Michael L. Katz, Natalia Korsakova, Jonathan R. Gair, Nikolaos Stergioulas
2023-03-03T12:45:03Z
http://arxiv.org/abs/2303.02164v2
# Eryn: A multi-purpose sampler for Bayesian inference1 ###### Abstract In recent years, methods for Bayesian inference have been widely used in many different problems in physics where detection and characterization are necessary. Data analysis in gravitational-wave astronomy is a prime example of such a case. Bayesian inference has been very successful because this technique provides a representation of the parameters as a posterior probability distribution, with uncertainties informed by the precision of the experimental measurements. During the last couple of decades, many specific advances have been proposed and employed in order to solve a large variety of different problems. In this work, we present a Markov Chain Monte Carlo (MCMC) algorithm that integrates many of those concepts into a single MCMC package. For this purpose, we have built Eryn, a user-friendly and multipurpose toolbox for Bayesian inference, which can be utilized for solving parameter estimation and model selection problems, ranging from simple inference questions, to those with large-scale model variation requiring trans-dimensional MCMC methods, like the LISA global fit problem. In this paper, we describe this sampler package and illustrate its capabilities on a variety of use cases. ## I Introduction In physics, and in science in general, one of the most encountered problems is the one of model calibration and comparison. We test our models of the physical world against the measured data, to estimate their parameters and to robustly determine the most suitable model that describes our observations. A crucial first step in this direction, is to efficiently explore the posterior distribution of the parameters given the measured data. Markov Chain Monte Carlo (MCMC) algorithms have proven to be very successful in this regard [1; 2; 3; 4], being one of the few methods which can efficiently perform Bayesian inference when the posterior is not analytically tractable and without solving exactly for the marginal likelihood. This is compared to, for example, grid methods, which are often computationally unfeasible. This is especially true in the field of Gravitational-Wave (GW) astronomy, where MCMC methods have been extensively used in order to find physical parameters for signals buried in the data (see e.g. [5; 6; 7; 8; 9; 10]), as well as to hierarchically infer the properties of the underlying astrophysical populations (e.g. [11; 12; 13]). MCMC approaches can also compute the _marginal likelihood or evidence_ (see section II), by using techniques such as _Thermodynamic Integration_ (see section II.4). In a Bayesian framework, the evidence difference between two models can be used to compute the Bayes Factor, which is used to select between different models that could describe the observations. Thermodynamic integration (or other approximations, see section II.5 or [14; 15]), is ideal for cases where the number of competing models is small. However, in situations where the number of potential models becomes too large, the task of iteratively and hierarchically computing the marginal likelihood can become computationally inefficient, or even practically unachievable. Such is the case for future signal-dominated GW observatories, such as the Laser Interferometer Space Antenna (LISA) [16] or other proposed space-borne GW observatories [17; 18; 19]. LISA will observe different types of GW sources, the most numerous of them being the Ultra Compact Binaries (UCBs) within the Milky Way [16; 20; 21; 22; 23; 24; 25; 26; 27]. 
Those are mostly comprised of a population of Double White Dwarfs (DWD), although a small fraction of neutron star - white dwarf (NS-WD) or double neutron stars (NS-NS) binaries are expected [28; 29]. In fact, LISA is going to detect GWs from the complete population of \(\mathcal{O}(10^{7})\) sources simultaneously, but only a small fraction of them are going to be individually resolvable (\(\mathcal{O}(10^{4})\)). The large majority of signals will generate an anisotropic and non-stationary "confusion" type of signal, which will dominate the LISA band between \(0.05\) and \(\sim 0.2\) mHz. In the above context, computing the marginal likelihood for such a large parameter space and for all possible numbers of events that could be in the population becomes computationally prohibitive. Instead, we can employ _dynamical trans-dimensional_ MCMC methods [30]. This family of methods can be quite challenging to tune, but it has proven to yield satisfactory results, even for such demanding problems as the LISA data [21; 23]. There are also implementation challenges, which arise from technical aspects of the algorithm; one example being the _dimension matching_ requirement when proposing moves between models with different dimensionality. In terms of algorithm efficiency, it is also crucial to choose proposal distributions that allow smooth transitions on a dynamical parameter space, a task which in many cases requires substantial effort. For these technical reasons, all the available software tools of this kind have been specifically developed for the particular problem they intend to solve. In this work, we present Eryn, a reversible jump MCMC algorithm, capable of _efficiently sampling dynamical parameter spaces, while remaining generic and usable by a large community_. We build upon various ideas from statistics, astronomy, etc., in order to develop an efficient statistical toolbox that can be applied to the majority of problems involving detection and characterization of signals. Our primary goal, however, is to utilize Eryn as a basic ingredient for a data analysis pipeline [31] to perform the LISA _Global Fit_[21; 23]. The Global Fit is a data analysis strategy required to tackle the problem of multiple source detection, separation, and characterization in LISA data. For demonstration purposes in this work, we use Eryn to analyze a "reduced" scenario of the LISA data in section IV. This paper is organized as follows: in section II, we begin with explaining the foundations of the MCMC algorithm, as well as some of the relevant methods that we have adopted for our implementation. In section III, we describe how the methods introduced in section II are combined into the actual toolbox implemented in Eryn. In section III.3, we demonstrate the capabilities of Eryn through some toy examples, while in section IV we apply our machinery to more demanding applications in Gravitational-Wave astronomy. In particular, we demonstrate Eryn by performing model selection on a simulated population of Ultra Compact Galactic Binaries (UCBs) as measured by the future LISA observatory. Finally, in section V, we summarize our work and discuss future applications. We should state again here, that Eryn is available as open source software in [https://github.com/mikekatz04/Eryn](https://github.com/mikekatz04/Eryn). 
## II Markov chain Monte Carlo algorithms

Nowadays, MCMC methods are considered to be a cornerstone of Bayesian Inference, being very effective in finding solutions to problems encountered across wide-ranging disciplines [e.g. 32; 33; 34; 6; 35]. These include the sampling of the posterior densities of parameters of interest, the numerical marginalisation over nuisance parameters, and providing a framework to compute the marginal posterior distributions (or evidences) that can be used for model selection. The Bayesian framework is based around Bayes' Theorem: \[p(\vec{\theta}|y,\mathcal{M})=\frac{p(y|\vec{\theta},\mathcal{M})p(\vec{\theta}|\mathcal{M})}{p(y|\mathcal{M})}, \tag{1}\] where \(y\) is the measured data and \(\mathcal{M}\) our chosen model of analysis. The \(p(\vec{\theta}|y,\mathcal{M})\) term is the posterior distribution of the parameter set \(\vec{\theta}\), which is related to the likelihood function of the data \(p(y|\vec{\theta},\mathcal{M})\) and the prior densities of the parameters \(p(\vec{\theta}|\mathcal{M})\). The evidence \(p(y|\mathcal{M})\) is the marginal posterior over the parameter space \(\vec{\theta}\in\Theta\): \[\mathcal{Z}\equiv p(\vec{y}|\mathcal{M})=\int_{\Theta}p(\vec{\theta},\vec{y}|\mathcal{M})\mathrm{d}\vec{\theta}=\int_{\Theta}p(\vec{y}|\vec{\theta},\mathcal{M})p(\vec{\theta}|\mathcal{M})\mathrm{d}\vec{\theta}. \tag{2}\] For parameter estimation purposes, the evidence acts as a normalization constant and can be ignored. However, it is really important if one wants to perform model selection over the measured data. We shall describe in detail how one can numerically approximate the integral of eq. (2) in section II.5. MCMC algorithms work by constructing a Markov Chain sequence, whose elements, \(\vec{\theta}(t_{i})\), for \(i=0,1,\ldots\), are samples distributed according to the target distribution, \(f(\vec{\theta})\). Under fairly general assumptions, the distribution of samples in the chain will converge to the target distribution provided the algorithm satisfies _detailed balance_: \[f(\vec{\theta})p(\vec{\theta}\rightarrow\vec{\theta}^{\prime})=f(\vec{\theta}^{\prime})p(\vec{\theta}^{\prime}\rightarrow\vec{\theta}). \tag{3}\] Here \(p(\vec{\theta}\rightarrow\vec{\theta}^{\prime})\) is the probability that the Markov chain moves from point \(\vec{\theta}\) to point \(\vec{\theta}^{\prime}\). The most widely-used MCMC algorithm is Metropolis-Hastings [1; 2], which is explained in algorithm box 1. The first step of the algorithm is to define an initial state, \(\vec{\theta}(t_{0})\). Then, at each subsequent step \(i\), a new state is proposed by randomly drawing from a given proposal distribution \(q(\vec{\theta}^{\prime}|\vec{\theta}(t_{i}))\). The newly proposed state is then accepted with a certain probability, given by eq. (4). If the move is accepted we set \(\vec{\theta}(t_{i+1})=\vec{\theta}^{\prime}\), otherwise we set \(\vec{\theta}(t_{i+1})=\vec{\theta}(t_{i})\). Any reasonable choice of the proposal density will generate a Markov chain with the correct stationary distribution. However, a good choice of \(q\) is critical for its efficiency, i.e. achieving the convergence of the MCMC chains within a reasonable computational time. For the special case of a symmetric proposal distribution, such as the widely used multivariate Gaussian distribution, the ratio of eq. (4) in algorithm box 1 becomes simply the ratio of the target densities at the current \(\vec{\theta}(t_{i})\) and proposed \(\vec{\theta}^{\prime}\) points.
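For concreteness, a minimal sketch of the Metropolis-Hastings loop with a symmetric Gaussian proposal is given below. The function names and tuning values are ours, chosen purely for illustration; this is not code taken from Eryn.

```python
import numpy as np

def metropolis_hastings(log_target, theta0, n_steps, step_size=0.5, seed=0):
    """Minimal Metropolis-Hastings sampler with a symmetric Gaussian proposal.

    With a symmetric proposal, the acceptance probability reduces to the
    ratio of the target densities at the proposed and current points.
    """
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_p = log_target(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        # Draw theta' from q(theta'|theta) = N(theta, step_size^2 I).
        proposal = theta + step_size * rng.standard_normal(theta.size)
        log_p_new = log_target(proposal)
        # Accept with probability min(1, p(theta')/p(theta)).
        if np.log(rng.uniform()) < log_p_new - log_p:
            theta, log_p = proposal, log_p_new
        chain[i] = theta
    return chain

# Example: sample a correlated 2D Gaussian target density.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
samples = metropolis_hastings(lambda x: -0.5 * x @ cov_inv @ x,
                              np.zeros(2), n_steps=20_000)
```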
For high-dimensional problems, the multivariate Gaussian proposal can be tuned during the burn-in period of sampling to improve efficiency [36; 37; 15; 38], or even scaled according to the Cramer-Rao bound, estimated from the Information matrix [39]. Although the MH algorithm has been quite successful in tackling inference problems, there are practical implementation issues to overcome. Improving acceptance rate is crucial for convergence, and sometimes improvements in the proposal distribution are not sufficient to efficiently sample the parameter space. To tackle these issues, various MCMC enhancements have been proposed. A prime example is the Hamiltonian Monte Carlo (HMC) algorithms that utilize local gradients in order to generate proposal points [40; 41]. One variant of HMC is the No-U-Turn sampler which automates part of the required tuning of the HMC [42] sampler. Another alternative to the "standard" MH is the Gibbs sampling algorithm, which is particularly useful if the conditional distributions of the parameters of the model are known [43; 44; 45]. All of the above developments have been shown to be useful in various disciplines [46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. Finally, there have recently been numerous proposals that aim to enhance sampling with machine learning techniques. At their core, many of these methods optimize the exploration of the likelihood surface, either by learning it directly (see for example [56]) or by sampling it in a simpler latent space (for example [57]). In this work, we introduce \(\mathtt{Eryn}\), which is built around the emcee package [58], enhanced with a variety of sampling mechanisms that allow us to perform inference on dynamical parameter spaces with minimal tuning. We expand on the most important features in the sections below. ### Affine-invariant samplers An affine transformation is one of the form \(\vec{\theta}\rightarrow\vec{\zeta}=A\vec{\theta}+b\), where \(A\) and \(b\) are a constant matrix and vector respectively. Under an affine transformation a probability density \(p(\vec{\theta}|y)\) transforms to \[p_{A,\,b}(\vec{\zeta}|y)=p(A^{-1}(\vec{\zeta}-b)|y)/\text{det}(A). \tag{5}\] Such transformations can help to transform difficult-to-sample distributions into easier-to-sample ones. A simple example is a multi-variate Normal distribution. If the dynamical range of the eigenvalues of the covariance matrix is very large, then sampling can be difficult, but any multi-variate Normal distribution can be transformed into a spherical distribution via an affine transformation. Affine-invariant MCMC is a class of samplers that are designed to have equal sampling efficiency for all distributions that are related by an affine transformation [58; 59]. The sequence of samples in a Markov chain, \(\{X(t)\}\), can be written deterministically as a function of a sequence of random variables, \(\xi(t)\), which represent the random draws used to propose new points and evaluate the accept/reject decision. Specifically we can always write \[X(t+1)=R(X(t),\xi(t),p) \tag{6}\] where \(p\) denotes the target density. An affine-invariant sampler has the property \[R(AX(t)+b,\xi(t),p_{A,b})=AR(X(t),\xi(t),p)+b, \tag{7}\] i.e., the sequence of points visited when sampling an affine-transformed density are the affine transformations of the states visited when sampling the original density. 
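To make the simple multivariate-Normal example above concrete, the snippet below (our own illustration, independent of Eryn) constructs the affine map that turns a strongly correlated Gaussian density into a spherical one; an affine-invariant sampler is, by construction, equally efficient on both the original and the transformed densities.

```python
import numpy as np

# A strongly correlated 2D Gaussian is awkward for an isotropic random-walk proposal.
mean = np.array([3.0, -2.0])
cov = np.array([[1.0, 0.99], [0.99, 1.0]])

# zeta = A theta + b with A = L^{-1} (L the Cholesky factor of cov) and b = -A mean
# maps the density to a zero-mean, unit spherical Gaussian.
L = np.linalg.cholesky(cov)
A = np.linalg.inv(L)
b = -A @ mean

rng = np.random.default_rng(1)
theta = rng.multivariate_normal(mean, cov, size=100_000)  # samples of the original density
zeta = theta @ A.T + b                                    # affine-transformed samples

print(np.round(zeta.mean(axis=0), 2))             # ~ [0, 0]
print(np.round(np.cov(zeta, rowvar=False), 2))    # ~ identity matrix
```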
If an affine transformation exists that maps the given target density to one which is more straightforward to sample from, an affine-invariant sampler should sample it as efficiently as it could the simpler distribution, so the convergence of affine-invariant samplers is less affected by correlations between the parameters [58]. In practice, this goal is achieved by following an ensemble of points, called walkers, and basing proposed moves on the distribution of other points in the ensemble. In [58], the primary update move is the so-called _stretch-move_ proposal. Each walker at state \(X_{i}(t)\) is updated by randomly selecting another walker \(j\) and proposing a new value \(Y=X_{j}(t)+Z[X_{i}(t)-X_{j}(t)]\), where \(Z\) is a random variable drawn from the distribution [59] \[g(z)\propto\left\{\begin{array}{ll}\frac{1}{\sqrt{z}}&z\in\left[\frac{1}{a},a\right]\\ 0&\text{otherwise}\end{array}\right.. \tag{8}\] The parameter \(a\) can be tuned to improve convergence, but \(a=2\) works well in the majority of applications [58]. The proposed point is accepted with probability \[\alpha(X_{i}(t),Y)=1\wedge\left\{z^{d-1}\frac{p(Y)}{p(X_{i}(t))}\right\} \tag{9}\] where \(p\) is the target density and \(d\) is the dimension of the parameter space. This acceptance probability is specific to the stretch proposal distribution given by Eq. (8). For other stretch proposals, the term \(z^{d-1}\) must be replaced by \(z^{d-2}g(1/z)/g(z)\). Following this scheme, detailed balance is maintained, and it can be proven that affine-invariant samplers converge faster to their target distribution [58]. Below in section III we discuss the extension of the stretch-move proposal to Reversible-Jump MCMC methods. The benefits of running MCMC chains in parallel, combined with a proposal distribution that requires almost no tuning, have contributed to an increasing popularity of affine-invariant samplers. In particular, the emcee package [58] has been used widely in Astrophysics and Cosmology [see, e.g., 60; 61; 62; 63].

### Delayed Rejection

The Delayed Rejection (DR) scheme of sampling was devised in order to improve two aspects of MCMC algorithms. First, it allows for improvements in the acceptance rate of the proposals, yielding "healthier" parameter chains, with better mixing. Secondly, it is more robust against becoming trapped in local maxima of the posterior surface [64; 65; 66; 67]. The strategy, as the name suggests, is that at each iteration, instead of immediately rejecting the newly proposed point based on algorithm 1, we keep proposing new points while maintaining detailed balance by computing both the forward and backward transition probabilities. Suppose we are at a point \(\vec{\theta}_{0}\) and use a proposal \(q(\vec{\theta}_{1}|\vec{\theta}_{0})\) to propose a new point \(\vec{\theta}_{1}\). The usual acceptance probability, following the notation of eq. (1), is \[\alpha_{1}(\vec{\theta}_{0},\vec{\theta}_{1})=1\wedge\left\{\frac{p(\vec{\theta}_{1}|y)q(\vec{\theta}_{0}|\vec{\theta}_{1})}{p(\vec{\theta}_{0}|y)q(\vec{\theta}_{1}|\vec{\theta}_{0})}\right\}, \tag{10}\] as per eq. (4). If \(\vec{\theta}_{1}\) is rejected, then instead of going back to step 1 of algorithm 1, we propose a new point, \(\vec{\theta}_{2}\), drawn from a proposal distribution \(q(\vec{\theta}_{2}|\vec{\theta}_{1},\vec{\theta}_{0})\).
This proposal distribution may depend only on \(\vec{\theta}_{1}\), but we write it more generally here to allow for the case that the proposal is adapted based on the sequence of steps that have been rejected. The acceptance probability for \(\vec{\theta}_{2}\), \(\alpha_{2}(\vec{\theta}_{0},\vec{\theta}_{1},\vec{\theta}_{2})\), is computed using \[\alpha_{2}(\vec{\theta}_{0},\vec{\theta}_{1},\vec{\theta}_{2})=\] \[1\wedge\left\{\frac{p(\vec{\theta}_{2}|y)q(\vec{\theta}_{1}| \vec{\theta}_{2})q(\vec{\theta}_{0}|\vec{\theta}_{1},\vec{\theta}_{2})\left[1 -\alpha_{1}\left(\vec{\theta}_{2},\vec{\theta}_{1}\right)\right]}{p(\vec{ \theta}_{0}|y)q(\vec{\theta}_{1}|\vec{\theta}_{0})q(\vec{\theta}_{2}|\vec{ \theta}_{1},\vec{\theta}_{0})\left[1-\alpha_{1}\left(\vec{\theta}_{0},\vec{ \theta}_{1}\right)\right]}\right\}. \tag{11}\] If \(\vec{\theta}_{2}\) is rejected, further steps can be included and each step adds additional proposal and rejection-probability terms to the numerator and denominator of the acceptance probability. For example, the three step acceptance probability, \(\alpha_{3}(\vec{\theta}_{0},\vec{\theta}_{1},\vec{\theta}_{2},\vec{\theta}_{3})\) is the minimum of one and \[\frac{p(\vec{\theta}_{3}|y)q(\vec{\theta}_{2}|\vec{\theta}_{3})q( \vec{\theta}_{1}|\vec{\theta}_{2},\vec{\theta}_{3})q(\vec{\theta}_{0}|\vec{ \theta}_{1},\vec{\theta}_{2},\vec{\theta}_{3})}{p(\vec{\theta}_{0}|y)q(\vec{ \theta}_{1}|\vec{\theta}_{0})q(\vec{\theta}_{2}|\vec{\theta}_{1},\vec{\theta}_ {0})q(\vec{\theta}_{3}|\vec{\theta}_{2},\vec{\theta}_{1},\vec{\theta}_{0})}\] \[\quad\times\frac{\left[1-\alpha_{1}\left(\vec{\theta}_{3},\vec{ \theta}_{2}\right)\right]\left[1-\alpha_{2}\left(\vec{\theta}_{3},\vec{\theta }_{2},\vec{\theta}_{1}\right)\right]}{\left[1-\alpha_{1}\left(\vec{\theta}_{0 },\vec{\theta}_{1}\right)\right]\left[1-\alpha_{2}\left(\vec{\theta}_{0},\vec{ \theta}_{1},\vec{\theta}_{2}\right)\right]} \tag{12}\] The proposal \(q\) can be different at each step, as long as the relevant proposal density is used in eq.11. For example, in [64] the proposal is built upon a Gaussian mixture model that tries further points in the parameter space with the aim of efficiently exploring multiple modes of the posterior distribution. As the number of steps in the DR scheme becomes arbitrarily large the acceptance probability slowly approaches zero. This algorithm is also limited in practice by high computational requirements, since at every delayed rejection step we need to evaluate a new likelihood and compute the backwards probability (the \(\alpha_{1}(\vec{\theta}_{2},\vec{\theta}_{1})\) from eq.11). Nevertheless, the DR scheme offers many advantages, and despite the computational cost, it is very useful when the posterior surface exhibits high dimensionality, and when acceleration techniques are available. These, for example, might include the use of Graphical Processing Units (GPUs), and/or heterodyned likelihoods [68]. In our implementation here, we follow closely the one in [64], for improving the acceptance rate of the _between-model step_ of the Reversible Jump algorithm (see sectionII.6). As already mentioned, the Reversible Jump MCMC allows for sampling dynamical parameter spaces. In the special case of nested models, such as the case of searching multiple signals in the LISA data, proposing the 'birth' of a signal out of a very wide prior can be very inefficient. 
A delayed rejection scheme alleviates this problem, by effectively performing a small search around the first set of rejections, increasing the chances of finding a good signal candidate, and thus improving the mixing of the chains. ### Multiple Try Metropolis The Multiple Try Metropolis (MTM) [69; 70; 71; 72] is a subclass of the implementation of the MH algorithm, which is based on the idea of generating a number of proposals for each individual current state, and then selecting one of them based on their importance weight. In proposing a move from \(\vec{\theta}_{t-1}\), a set of \(N\) possible new points, \(\{y_{k}\}\), are drawn from a proposal distribution \(q(y|\vec{\theta}_{t-1})\) and are assigned weights \(w_{i}=w(y_{i}|\vec{\theta}_{t-1})\) using a weight function \(w(\cdot|\vec{\theta}_{t-1})\). One of these proposed points, \(y_{J}\), is selected with probability given by the normalised weight \[\bar{w}_{i}=\frac{w_{i}}{\sum_{k=1}^{N}w_{k}}. \tag{13}\] To compute the acceptance probability we need to draw \(N-1\) points, \(\{x_{i},i=1,\ldots,N-1\}\), for the reverse move from the proposal \(q(x|y_{J})\), and assign weights \(w(x|y_{J})\). We then set \(\vec{\theta}_{t}=y_{J}\) with probability \[\alpha(\vec{\theta}_{t-1},y_{J})=1\wedge\left\{\frac{w(y_{J}|\vec{\theta}_{t-1 })+\sum_{k=1,k\neq J}^{N}w(y_{k}|\vec{\theta}_{t-1})}{w(\vec{\theta}_{t-1}|y_{J })+\sum_{k=1}^{N-1}w(x_{k}|y_{J})}\right\}. \tag{14}\] and set \(\vec{\theta}_{t}=\vec{\theta}_{t-1}\) otherwise [69]. This procedure will satisfy detailed balance if the weight function is chosen such that \[p(\vec{\theta}_{0}|y)q(\vec{\theta}_{1}|\vec{\theta}_{0})w(\vec{\theta}_{1}| \vec{\theta}_{0})=p(\vec{\theta}_{1}|y)q(\vec{\theta}_{0}|\vec{\theta}_{1})w( \vec{\theta}_{0}|\vec{\theta}_{1}). \tag{15}\] This will be satisfied by a weight function of the form \[w(\vec{\theta}_{t}|\vec{\theta}_{t-1})=p(\vec{\theta}_{t}|y)q(\vec{\theta}_{t-1 }|\vec{\theta}_{t})\xi(\vec{\theta}_{t-1},\vec{\theta}_{t}), \tag{16}\] where \(\xi(\vec{\theta}_{t-1},\vec{\theta}_{t})\) is any symmetric function, i.e., \(\xi(\vec{\theta}_{t-1},\vec{\theta}_{t})=\xi(\vec{\theta}_{t},\vec{\theta}_{t-1})\), \(\forall\vec{\theta}_{t},\vec{\theta}_{t-1}\in\mathcal{D}\subseteq\mathbb{R}^{d}\), with \(d\) being the dimensionality of the problem at hand. The detailed balance condition can also be satisfied by a weight function of the form \[w(\vec{\theta}_{t}|\vec{\theta}_{t-1})=\frac{p(\vec{\theta}_{t}|y)}{q(\vec{ \theta}_{t}|\vec{\theta}_{t-1})}. \tag{17}\] Making this choice and additionally using a proposal function that is independent of the current point, \(q(\vec{\theta}_{t}|\vec{\theta}_{t-1})=q(\vec{\theta}_{t})\) only, we obtain the _Independent MTM_ algorithm [69]. When using the independent MTM algorithm detailed balance is maintained when the same set of points is used for the reverse proposal as for the forward proposal, which saves the evaluation of \(N-1\) posterior densities. The base MTM is currently implemented in Eryn with options for the Independent MTM algorithm and symmetric proposals. For a symmetric proposal distribution, \(q(\vec{\theta}_{t-1}|\vec{\theta}_{t})=q(\vec{\theta}_{t}|\vec{\theta}_{t-1})\), eq. (15) can be satisfied using the weight function \(w(\vec{\theta}_{1}|\vec{\theta}_{0})=p(\vec{\theta}_{1}|y)\). In this case, we still need to draw separate samples for the reverse step (unlike in the Independent MTM case). Generating a large number of candidate points yields certain advantages. 
As expected, the first advantage is the fact that there is usually very good coverage of the parameter space. The second is that the implementation of the MTM usually results in chain states with very low correlation between them. Nevertheless, as for Delayed Rejection, this algorithm requires increased computational resources, since multiple likelihoods/posterior densities have to be evaluated at each iteration of the chain. This cost can be offset in cases where the computations can be parallelized, for example using either CPU or GPU acceleration. ### Adaptive Parallel Tempering The concept of Parallel Tempering was introduced in order to efficiently sample surfaces with high multi-modality [73, 74, 75]. The idea is based on a transformation of the posterior density to a density with a different temperature, \(T\), defined by \[p_{T}(\vec{\theta}|y)\propto p(y|\vec{\theta})^{1/T}p(\vec{\theta}). \tag{18}\] For \(T=1\) this is the target posterior density. In the limit \(T\to\infty\) it is the prior density. Intermediate temperatures "smooth out" the posterior by reducing the contrast between areas of high and low likelihood. In parallel tempering, a set of Markov chains are constructed in parallel, each one sampling the transformed posterior for a different temperature \(T\). These chains periodically exchange information. The idea is that the hottest chains explore the parameter space more widely, and information about areas of high likelihood that they encounter propagate to the colder chains. Information is exchanged by proposing swaps of the states between the different chains. If two chains are sampling from target densities \(p_{1}(\vec{\theta})\) and \(p_{2}(\vec{\theta})\) respectively, then the transition probability for chain 1 in the swap is \(p_{1}(\vec{\theta}_{0}\to\vec{\theta}_{1})=p_{2}(\vec{\theta}_{1})\alpha( \vec{\theta}_{0},\vec{\theta}_{1})\). Detailed balance is thus maintained by accepting the swap with probability \[\alpha(\vec{\theta}_{0},\vec{\theta}_{1})=1\wedge\left\{\frac{p_{1}(\vec{ \theta}_{1})p_{2}(\vec{\theta}_{0})}{p_{1}(\vec{\theta}_{0})p_{2}(\vec{\theta }_{1})}\right\}, \tag{19}\] which for the specific case of swapping between two tempered chains \(i\) and \(j\) when doing parallel tempering is \[\alpha_{i,j}=1\wedge\left\{\left(\frac{p(y|\vec{\theta}_{i})}{p(y|\vec{ \theta}_{j})}\right)^{\beta_{j}-\beta_{i}}\right\}, \tag{20}\] with \(\beta_{i}=1/T_{i}\) being the inverse temperature, and \(\vec{\theta}_{i}\) the given parameter state for the \(i\)-th chain. The temperature ladder \(T_{i}\) should be chosen in order to maximize the information flow between chains of different temperatures, so as to encourage the efficient exploration of the complete parameter space. Typically, this ladder can be static or dynamically adjusted during the sampling procedure. In Eryn we have adopted the procedure of [75], which adapts the temperature ladder based on the swap acceptance rate calculated directly from the chains. Ideally, one should aim for equal acceptance ratio between every pair of neighboring tempered chains, thus tuning their log-temperature-difference \(S_{i}\equiv\log(T_{i}-T_{i-1})\), according to the swap acceptance rate from eq. (20): \[\frac{\mathrm{d}S_{i}}{\mathrm{d}t}=\kappa(t)\left[\alpha_{i,i-1}(t)-\alpha_{i +1,i}(t)\right], \tag{21}\] where \(\kappa(t)\) tunes the timescale of the evolution of the temperatures. The function \(\kappa(t)\) can be chosen depending on the desired behavior of the procedure. 
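To make the adaptation step concrete, the following is a schematic rendering of eqs. (20) and (21) in code, using the hyperbolic choice \(\kappa(t)=t_{0}/\left(\nu(t+t_{0})\right)\) quoted later in section III.3.1. It is our own simplified illustration rather than Eryn's actual implementation; in particular, how the ends of the ladder are pinned is a detail we have chosen for the example.

```python
import numpy as np

def swap_acceptance(log_like_i, log_like_j, beta_i, beta_j):
    """Probability of swapping the states of tempered chains i and j, eq. (20)."""
    return min(1.0, np.exp((beta_j - beta_i) * (log_like_i - log_like_j)))

def adapt_ladder(temps, swap_rates, t, t0=1e4, nu=1e2):
    """One adaptation step of the temperature ladder, following eq. (21).

    temps      : sorted ladder T_0 = 1 < T_1 < ... (the coldest value is kept fixed)
    swap_rates : swap_rates[i] is the measured acceptance rate between chains i and i+1
    """
    kappa = t0 / (nu * (t + t0))            # hyperbolic kappa(t), decaying with time
    S = np.log(np.diff(temps))              # S_i = log(T_i - T_{i-1})
    # dS_i/dt = kappa * (alpha_{i,i-1} - alpha_{i+1,i}); the outermost spacing is left as is.
    S[:-1] += kappa * (swap_rates[:-1] - swap_rates[1:])
    return np.concatenate(([temps[0]], temps[0] + np.cumsum(np.exp(S))))
```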
In [75] a hyperbolic dependence on the \(t\) state is chosen, in order to suppress large dynamic adjustments on long timescales. This setup is the default option in Eryn, but it can be customized. This process is more straightforward for ensemble samplers, where multiple walkers are used, simply because one can get an estimate of the acceptance rate directly from the particular state of the walkers at any given time step \(t\). Otherwise, the acceptance rate is computed after iterating for a predefined number of steps, which can be chosen by the user for the given problem at hand. It can be proven [75], that the temperature ladder will converge to a particular stable configuration. One should only use this scheme for the initial burn-in stage of sampling, and then continue with a stationary ladder for the rest of the analysis. ### Marginal posterior calculation for model selection One of the most frequently encountered problems in physics, and in science in general, is that of model or variable selection, i.e., identifying the model best supported by the observed data. Working in a Bayesian framework, comparison between different hypotheses may be done by computing their evidences or marginal posteriors [46] and evaluating the Bayes Factor: \[B_{12}=\frac{p(\vec{y}|\mathcal{M}_{1})p(\mathcal{M}_{1})}{p(\vec{y}|\mathcal{M} _{2})p(\mathcal{M}_{2})}, \tag{22}\] where the term \(p(\mathcal{M}_{i})\), is the prior probability assigned to the model \(\mathcal{M}_{i}\). The marginal posterior density, or evidence, is given by the integral of eq. (2) and is in general quite challenging to compute. For some high signal-to-noise ratio (SNR) cases it can be approximated if the covariance matrix \(\Sigma\) of the parameters for all candidate models \(\mathcal{M}\) are known. This approach is called the _Laplace approximation_[46, 76]. However, this is only an approximation, and it sometimes fails for models with weak support from the data [77] (in particular when the posterior cannot be approximated by a multivariate Gaussian at \(\vec{\theta}_{\rm MAP}\)). When using parallel tempering II.4, it is possible to compute the evidence by a procedure known as _thermodynamic integration_[78]. We define a continuous distribution of evidences based on the target distribution for a chain with inverse temperature \(\beta=1/T\) via \[\mathcal{Z}_{i,\beta}=\int p(y|\vec{\theta},\mathcal{M}_{i})^{\beta}p(\vec{ \theta})\,\mathrm{d}\vec{\theta}. \tag{23}\] For \(\beta=0\) the chain is sampling the prior and therefore \(\log\mathcal{Z}_{i,0}=0\). For \(\beta=1\) we are sampling the target distribution and \(\log\mathcal{Z}_{i,1}=\log\mathcal{Z}_{i}\). Additionally we have \[\frac{\mathrm{d}\log\mathcal{Z}_{\beta}}{\mathrm{d}\beta} =\int\log[p(y|\vec{\theta},\mathcal{M}_{i})]\,p(y|\vec{\theta}, \mathcal{M}_{i})^{\beta}p(\vec{\theta})\mathrm{d}\vec{\theta}\] \[\equiv\mathbb{E}_{\beta}[\log p(y|\vec{\theta},\mathcal{M}_{i})]. \tag{24}\] From this we deduce \[\log\mathcal{Z}_{i}=\int_{0}^{1}\mathbb{E}_{\beta}\left[\log p\left(y|\vec{ \theta},\mathcal{M}_{i}\right)\right]d\beta. \tag{25}\] The expectation value is over the distribution being sampled by the chain at temperature \(\beta\) and so can be computed by averaging over the posterior samples [75, 78, 79]. The integral can then be evaluated using standard methods, for example the trapezium rule, using the full ladder of temperatures. 
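As an illustration of eq. (25), the short sketch below (our own, purely illustrative) estimates \(\log\mathcal{Z}\) by averaging the log-likelihood within each tempered chain and applying the trapezium rule over the \(\beta\) ladder.

```python
import numpy as np

def log_evidence_thermodynamic(betas, log_like_chains):
    """Thermodynamic-integration estimate of log Z, eq. (25).

    betas           : inverse temperatures, ideally spanning beta = 0 (prior) to beta = 1.
    log_like_chains : log_like_chains[i] holds the log-likelihood values sampled by
                      the chain running at betas[i].
    """
    betas = np.asarray(betas, dtype=float)
    # E_beta[log p(y|theta)] estimated by the mean over each tempered chain.
    mean_log_like = np.array([np.mean(ll) for ll in log_like_chains])
    order = np.argsort(betas)
    # Trapezium rule over the full temperature ladder.
    return np.trapz(mean_log_like[order], betas[order])
```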
This approach generates reliable evidences, with accuracy limited only by the number of temperatures being sampled, and the efficiency of the sampling of the parameter space \(\Theta\) by the chains. Since its first introduction, there have been many applications of this approach, and in particular, there is extensive usage in GW astronomy [e.g., 75; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90].

### Reversible Jump MCMC

After running RJ-MCMC, the Bayes Factor can be approximated by the ratio of the number of iterations spent within each model: \[B_{12}=\frac{\text{\# of iterations in model}\;\mathcal{M}_{1}}{\text{\# of iterations in model}\;\mathcal{M}_{2}}. \tag{29}\] This algorithm has proven to be robust for evaluating high-dimensional competing models, and has been quite successful in tackling data analysis problems in GW astronomy [77; 22; 87] as well as areas spanning physics and signal processing [e.g., 88; 89; 90]. However, designing an efficient RJ-MCMC algorithm can be quite challenging.
The first challenge is to choose suitable proposal distributions, which can greatly affect the convergence of the algorithm. In situations where the models are nested, it is both tempting and convenient to take the proposal to be the same as the prior distribution of the parameters. As an illustrative example, we refer to section III.3.1, which describes a toy problem of searching for Gaussian pulses in noisy data. There, the parameters of the individual pulses are the amplitude and location of the pulse described by their \((x,\,y)\) coordinates. In order to search for those signals, the prior on their location must be wide enough to include the complete data set (see figure 1a). A birth proposal based on the prior would inevitably be quite inefficient, simply because the chance of proposing a good source candidate is small, especially if the proposal distribution is flat across \((x,\,y)\). We treat the above problem as a motivation to adopt efficient proposals with minimal tuning in Eryn, which we further discuss in section III. The second major challenge, which of course depends on the given problem at hand, arises from the samplers' capability to explore a multi-modal dynamical parameter space. We discuss our strategy to overcome that challenge in section III. ## III **Eryn:** Gathering all the pieces together All the different algorithms described in previous sections can be extremely useful in tackling different kinds of problems that require sampling. In Gravitational Wave Astronomy, we encounter such problems far too often, where dynamical parameter spaces require vast computational resources in order to be explored efficiently. Motivated by those problems, we have implemented a new toolbox that combines all these techniques to enhance the capabilities of an MCMC sampler. We have named this package Eryn[91], borrowing the name from the Tolkien mythos [92]. The analogy has its basis in the idea of a forest: within a forest you have trees which correspond to different walkers, a.k.a. _Ents_, in an ensemble MCMC sampler. On each tree there are branches that represent the various types of models used to fit the data. For example, in the case of GW global fitting for LISA, we can imagine using the Galactic binaries as one branch and massive black hole binaries as another branch. Each branch has leaves, which represent the individual instances of each model. In the LISA example, leaves would represent the individual Galactic binaries or massive black hole binaries. And finally, to zoom out, when adding tempering capabilities, we can think of groups of walkers in each temperature taking the form of many forests (of walkers) located within different temperature climates. We adopt the architecture of ensemble samplers, and in particular the one of emcee[58]. Having multiple walkers running in parallel is ideal for efficiently sampling the parameter spaces using techniques such as parallel tempering, as described in II.4 (also see section II.1). In this setting, we evolve \(n_{w}\) walkers per temperature \(T_{i}\), where each walker follows a Reversible Jump MCMC (see section II.6), mapping a parameter space of altering dimensionality. In practice, walkers in higher temperatures sample the dynamic parameter space with fewer model components as the penalty from higher prior volume is not compensated by the smoother annealed likelihood. In other words higher temperatures have a sharper Occam's razor: the data can be explained with models that are simpler, or lower-dimensional. 
The highest temperature chain samples the prior on the model space (provided that \(T_{\text{max}}=\infty\)). More details will be given in section III.3. As already mentioned in section II.6, Reversible Jump algorithms are extremely challenging to tune, even for simpler classes of problems. One of the major challenges is the low acceptance rate for the between-model proposal, i.e., when we propose a new state where the parameter dimensionality differs. In cases of signal search and detection (which is a nested model situation), it is convenient to set the proposal corresponding to a "birth" move to be the same as the prior distribution. In order to accommodate all possibilities for the signals present, the prior densities are usually quite wide, and thus accepting a new higher dimensional state becomes quite improbable. For that reason, within Eryn, we have implemented a Delayed Rejection scheme with the aim of improving this acceptance ratio. When proposing \(\vec{\theta}_{l}\) for a higher-dimensional model \(l\), we do not reject immediately, but rather make new delayed rejection proposals around the first rejected point \(\vec{\theta}_{l}\), using the given _in-model step_ update proposal. This, in principle, allows the sampler to explore around \(\vec{\theta}_{l}\) before rejecting the new state [64], which in turn improves the between-model step acceptance rate and produces healthier MCMC chains. The Delayed Rejection scheme, as described in section II.2, requires a serial computation of the delayed rejection acceptance ratio for walkers where the newly proposed state has been rejected. This scheme of calculating costly likelihoods sequentially in a loop during the between-model step can lead to a computational bottleneck of the MCMC process. This is especially true for the LISA Global Fit problem, where multiple binary waveform signals are present in the data-stream. Then, the computational time for each RJ-MCMC iteration is significantly increased, since the progress will be halted until all walkers have gone through their respective Delayed Rejection process, which requires evaluation of new waveforms at each step. For the reasons summarized here, we have not used the Delayed-Rejection scheme for our analysis in section IV, and have resorted to the Multiple Try scheme. However, the Delayed-Rejection scheme, as explained in section II.2, has been implemented in the \(\mathtt{Eryn}\) package. The Multiple Try scheme was essentially implemented in order to facilitate use of a parallelized likelihood framework. Parallelization is naturally compatible with Multiple Try MCMC as multiple proposals are made for each individual walker, which allows for the parallelized evaluation of proposal distributions, likelihood functions, and acceptance probabilities. Under these parallelized settings, one proposal can act as many when compared to the usual serial evaluation of proposals, allowing for better chain mixing in situations where proposals are infrequently accepted. That being said, it is still important to choose a good proposal distribution for both the in-model and between-model RJ-MCMC steps, which we discuss further below.

### Choosing efficient Proposal distributions

In the previous sections, we briefly discussed some of the challenges in choosing efficient proposal distributions for both the in-model and between-model steps of the RJ algorithm.
For the in-model case, the challenge arises from the fact that it is sometimes impractical, or even unfeasible, to define a well-tuned proposal for each of the possible models that could represent the data. Using again the example of LISA data, one would need to tediously design an effective proposal distribution for the thousands of overlapping binary signals in the data. On the other hand, for the case of the between-model step, choosing proposals from the prior distribution, especially if it is highly uninformative, can be very inefficient for Reversible Jump sampling. For \(\mathtt{Eryn}\), in order to tackle those issues, we have implemented the _Group Proposals_ explained below for addressing the within-model proposals in RJ, as well as a scheme to design an efficient proposal for birth moves during the between-model step, which is based on normalizing flows.

#### iii.1.1 Group Proposals

In section II.1, the stretch-move proposal was introduced and discussed. One of the obvious advantages of such a scheme of proposing new MCMC samples is that it requires minimal tuning [58]. However, it does not extend well in its simplest form to the generalized Reversible Jump MCMC. The stretch proposal is based on the idea that the ensemble of points \((X_{j})\) is sitting on the same posterior mode as the current point \((X_{i})\). In a nested model situation where both the model count and the individual model parameters change, each point may lie on a different posterior mode representing a different set of leaves (sources) in the data. This can be alleviated by applying the stretch move to individual leaves within each branch of each walker, but there is still an issue of identifying leaves in different walkers that lie on the same posterior mode. The stretch proposal will technically still work when mixing leaves in different posterior modes, but the acceptance fraction will be negatively affected. However, within the stretch proposal formalism, the choice of \(X_{j}\) is customizable. The key to maintaining detailed balance is that \(X_{j}\) cannot depend on \(X_{i}\), and \(X_{j}\) cannot be updated in the same iterative step as \(X_{i}\) [59]. We leverage this property to design a new type of stretch move that can handle Reversible Jump setups while maintaining a small number of tuning parameters. We call this proposal the "group" proposal. The group proposal chooses a single leaf from a stationary group, \(\{X_{j}\}\), that is fixed for many proposed updates. The stationary group is updated after a large number of sampler iterations and we make sure that detailed balance is maintained during the update. We update every leaf within every branch of every walker at each iteration and repeat many iterations between updates of the stationary group. The appropriate stationary group varies from problem to problem.

Figure 1: Searching for 2D Gaussian pulses in the presence of Gaussian noise. Panel (a): the simulated data, which consists of injections of 25 pulses in Gaussian noise with \(\sigma_{n}=0.2\). Panel (b): the distribution of the model order, obtained by exploring the dynamical parameter space with \(\mathtt{Eryn}\). The true value is marked with a dashed red line. For this toy investigation, the correct number of simulated components is recovered. Panel (c): the 2D posterior densities for the parameters of the \(k\) Gaussian peaks (see text for details).
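A schematic rendering of these mechanics is given below; it is our own simplified illustration (Eryn's actual implementation handles branches, leaves and the associated bookkeeping that we omit here). The complementary point is drawn from the fixed stationary group rather than from the live ensemble, and the usual stretch factor \(z^{d-1}\) of eq. (9) is returned for use in the acceptance probability.

```python
import numpy as np

def group_stretch_proposal(x_current, stationary_group, a=2.0, rng=None):
    """Stretch-type update Y = X_j + z (X_i - X_j) with X_j taken from a fixed
    stationary group instead of the current ensemble.

    Returns the proposed point and log(z^(d-1)), the factor entering the
    acceptance probability of eq. (9).
    """
    rng = rng or np.random.default_rng()
    d = x_current.size
    # Inverse-transform draw of z from g(z) ~ 1/sqrt(z) on [1/a, a], cf. eq. (8).
    z = (1.0 + (a - 1.0) * rng.uniform()) ** 2 / a
    # In practice the draw would be restricted to stationary points close to
    # x_current (e.g. nearby in frequency); here we draw uniformly for brevity.
    x_j = stationary_group[rng.integers(len(stationary_group))]
    proposal = x_j + z * (x_current - x_j)
    return proposal, (d - 1) * np.log(z)
```

The stationary group itself is held fixed over many such updates and only refreshed periodically, so that the draw of \(X_{j}\) never depends on the point currently being updated.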
The goal is to set a group that resembles as best as possible the posterior modes of the current leaves and then draw from it strategically so that the drawn point is likely (but not guaranteed) to lie on the same mode as the leaf that is currently being updated, \(X_{i}\). In the example of the LISA Galactic binaries analysis, we set our stationary group to the full set of leaves (binaries) across all walkers at a specific temperature of the sampler at the end of a given iteration. Then, at proposal time, we efficiently locate the \(\sim n_{w}\) points in the stationary group that are closest to \(X_{i}\) based on their initial frequency parameter. We then draw \(X_{j}\) from this group. The hope is that some percentage of the \(n_{w}\) drawn points will lie on the posterior mode on which \(X_{i}\) sits. The exact percentage will vary depending on how close the posterior modes are to each other and how many model instances exist in the sampler that include this specific mode. For low SNR binaries, for example, a source may exist in some walkers and not others, making it harder to map its posterior mode with the current group of stationary points. The performance of group proposals is highly situation- and/or model-dependent. With individual source posterior modes that are well separated and easy to define in terms of separation, the performance will approach the performance of the base stretch proposal in non-Reversible Jump MCMC because the stationary group will well represent the specific posterior mode on which \(X_{i}\) is located. As the parameter space becomes more crowded and/or separation (distance) metrics become harder to define, the performance of group proposals will worsen.

#### iii.1.2 Learning from the data

The second improvement concerns the between-model step of the RJ-MCMC. As mentioned earlier, for the case of nested models, it is often convenient to draw "birth" candidates directly from the prior distribution of parameters of the given model. This practice can be quite ineffective in terms of acceptance rate. As an example we can again use the LISA data-set case. The UCBs are distributed within the Galactic disk, congregated mostly around its Center [93]; therefore, adopting a proposal based on an uninformative uniform prior across the sky would waste computational resources exploring a part of the parameter space with low probability mass. A proposed solution is to use an informative prior derived from the spatial distribution of binaries in the Galaxy [21]. In our work here, we have chosen an alternative data-driven route, based on the actual residual data after a burn-in period of the RJ-MCMC, which we describe below. After a sufficient number of RJ-MCMC iterations, we can extract a subset of sources from nested models which are consistently present in almost all walkers of our cold chain. In other words, we can find and subtract the brightest sources from the data, and then allow for another burn-in period on the resulting residuals. This allows the sampler to explore the remaining parameter space more easily, thus providing a good initial estimate for the weak signals possibly buried in the noisy residual data. We can then use those samples to construct a proposal density which will help us search for good candidates for those weak signals, without excluding the rest of the parameter space. This can be accomplished by learning the posterior distribution of the parameters from a run on the residual data mentioned above.
The most efficient way to fit such a generic distribution is to use invertible transformations such as normalizing flows (for example [94; 95]). The methods work in the following way: we sample from the base distribution (which is usually chosen to be Normal) and we fit the transform by optimizing the Kullback-Leibler divergence between the transformed distribution and the distribution that we want to estimate. After the fit has converged we can draw samples from the Normal distribution and transform these to samples from the distribution fitted to all residuals and use it as a proposal. We will cover this method in more detail in a separate paper.

### Implementation

In this section, we discuss the main implementation details of Eryn. We refer the interested reader, or user, to the Eryn documentation for more exhaustive information and examples [96]. The goal of Eryn is to produce a sampler that can handle all (or most) cases of MCMC sampling, ranging from basic, non-tempered, single-model type, single-model instance posterior estimation to the full reversible jump MCMC with tempering, multiple model types, and adjustable model counts, as well as everywhere in between. In the basic case, Eryn aims to be a close replica of emcee, trying to maintain as much simplicity as possible. At the complicated end of the spectrum, Eryn attempts to provide a common interface and underlying infrastructure for the variety of problems that may arise, allowing the user to maintain usage of the majority of the code from project to project, focusing on changing only the specific parts of the code that are difficult to implement or require special treatment for each specific problem. Since \(\mathtt{Eryn}\) is effectively an enhanced version of the emcee package, the overall structure of emcee is strongly maintained. Like in emcee, "State" objects move coordinate and likelihood information around the ensemble sampler, storing information in a similar back-end object either in memory or HDF5 files. Additionally, the interface used for adding proposals has remained. The various enhancements discussed in this work, including tempering, reversible jump moves, multiple try MCMC, etc., are all implemented within the emcee-like structure. This involved two main changes. First, the State objects have been scaled to hold information necessary for reversible jump MCMC: temperature information, prior information, and efficient and concise containers for multiple types of models with an adjustable number of individual model instances. Second, the reversible-jump proposal has been added as a proposal base, similar to the use of the 'MH' or 'RedBlue' moves within emcee. Beyond these main enhancements, there are also a variety of smaller, but useful, additions to \(\mathtt{Eryn}\) that help the user build a variety of analysis pipelines. These include stopping or convergence functions, functions to periodically update the sampler setup while running, objects to carry special information through the sampler, and aids for coordinate transformation.

### Toy Examples

In this section, we present a series of working examples for \(\mathtt{Eryn}\). We begin with simple problems, such as searching for simple signals in noisy data, with the aim of demonstrating the performance of this toolbox in a dynamical parameter space. The impact of the different enhancements discussed in section III will be assessed and discussed. Finally, in section IV, we will apply this machinery to more realistic problems in Gravitational-Wave astronomy.
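Before turning to the examples, it may help to fix ideas with a schematic of the kind of log-likelihood the first toy problem below requires: a sum of 2D Gaussian pulses compared to the data under Gaussian noise. This is our own illustrative code, not the exact implementation used for the runs presented here.

```python
import numpy as np

SIGMA_P = 0.2  # fixed pulse width used in the first toy problem

def pulse_model(params, X, Y):
    """Sum of 2D Gaussian pulses; params is a (k, 3) array of rows (A, x0, y0)."""
    model = np.zeros_like(X)
    for A, x0, y0 in params:
        model += A * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * SIGMA_P ** 2))
    return model

def log_likelihood(params, sigma_n, data, X, Y):
    """Gaussian log-likelihood; the noise level sigma_n is estimated along with the fit."""
    resid = data - pulse_model(params, X, Y)
    return -0.5 * np.sum(resid ** 2 / sigma_n ** 2 + np.log(2.0 * np.pi * sigma_n ** 2))
```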
#### iii.3.1 Searching for pulse signals in Gaussian noise In this first example, we explore the capabilities of \(\mathtt{Eryn}\) in a simplified application, commonly encountered in physical sciences. We perform a search for Gaussian pulses in a simulated 2D data-set, in the presence of Gaussian noise with variance \(\sigma_{n}=0.2\). We generate 25 pulses randomly distributed on the \(x-y\) plane with all pulses contained within \(x,y\in[-10,10]\) (see figure 1), and amplitude uniformly drawn from \(\mathcal{U}[0.7,\,1.5]\). The amplitude of each pulse is considered a free parameter to be estimated, in addition to the Cartesian coordinates of the centres. The pulses' width was kept fixed to \(\sigma_{p}\times\delta_{ij}\), with \(\sigma_{p}=0.2\), for the sake of simplicity. Thus, we are required to estimate \(N_{p}\), the total number of pulses in the data, and also estimate the parameters for each individual signal \(k\): \(\vec{\theta}_{k}=\{A_{k},\,x_{k},\,y_{k}\}\). The noise variance \(\sigma_{n}\) is estimated as part of the fit. The analysis of this problem is performed using the adaptive parallel tempering scheme of section II.4 and the Reversible Jump MCMC proposals (section III.6). The in-model proposals for each model component are Gaussians, with a diagonal covariance matrix \(\Sigma=10^{-4}\delta_{ij}\). This proposal is not tuned during sampling. The priors for the parameters are quite wide, covering the entire range of the data, while the prior on the number of pulses \(k\) is set to \(k\sim\mathcal{U}[0,\,50]\). With the above settings, we obtain the results summarized in figures 1 and 2. In figure 1 we plot the most probable number of Gaussian pulses present in the data, or in other words, the most probable model given this particular data-set. It is clear that for the given level of noise, it is straightforward to recover the true number of signals. The noise variance is also estimated accurately as \(\sigma_{n}=0.2\pm 2\times 10^{-3}\). In figure 1 we also plot the posterior densities for the parameters of all pulses recovered, while we also mark the true injected values. Figure 1 shows the trans-dimensional MCMC chains "stacked" over all samples of both model order and model parameters. As already mentioned, in this simplified scenario all signals have a similar value for the amplitude, thus the almost uni-modal marginal on \(A_{k}\). This illustrative example is useful as an introductory application to the more complicated case of detection in Gravitational Wave astronomy presented below, in section IV.
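For concreteness, the simulated data and likelihood of this toy problem can be written down in a few lines; the sketch below uses the parameter values quoted above, while the grid resolution and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_n, sigma_p, n_pulses = 0.2, 0.2, 25

# 2D grid and injected pulse parameters, as described in the text
x, y = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
amps    = rng.uniform(0.7, 1.5, n_pulses)
centres = rng.uniform(-10, 10, (n_pulses, 2))

def pulse(A, cx, cy):
    return A * np.exp(-((x - cx)**2 + (y - cy)**2) / (2.0 * sigma_p**2))

signal = sum(pulse(A, cx, cy) for A, (cx, cy) in zip(amps, centres))
data = signal + sigma_n * rng.standard_normal(x.shape)

def log_like(params, log_sigma_n):
    """params is a flat array [A_1, x_1, y_1, ..., A_k, x_k, y_k] for model order k;
    the noise level is estimated as part of the fit through log_sigma_n."""
    model = np.zeros_like(x)
    for A, cx, cy in params.reshape(-1, 3):
        model += pulse(A, cx, cy)
    s2 = np.exp(2.0 * log_sigma_n)
    return -0.5 * np.sum((data - model)**2 / s2 + np.log(2.0 * np.pi * s2))
```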
In figure 2, three diagnostic quantities for this run are shown. In the top panel, the evolution of temperatures is presented. Following the recipe of [75], we control the distances between each temperature chain based on their in-between swap acceptance rate, computed from eq. (21). The tuning term \(\kappa(t)\) is set to \(\kappa(t)=t_{0}/\left(\nu(t+t_{0})\right)\), with the _adaptation lag_ \(t_{0}=10^{4}\) and the _adaptation time_ \(\nu=10^{2}\). The middle panel shows the evolution of the swap acceptance rate per number of walkers between the chains running at different temperatures. After \(\sim 10^{5}\) sampler iterations, the system converges to an equilibrium, where the rate of swapping states reaches a single value across the temperature range. In the bottom panel we show the acceptance rate for the in-model step of the algorithm, for all temperatures. As expected, after temperature equilibrium at \(\sim 10^{5}\) samples, the acceptance rate converges to a (different) value for each temperature, which is higher for higher temperatures (smoother posterior surfaces are easier to explore). Finally, it is interesting to investigate how the sampled dimensionality of the problem varies at different temperatures. In figure 3, we plot the posterior on the number of pulses at each temperature. As expected, higher temperature chains tend to favour lower model dimensionality and the \(T_{\infty}\) chain samples the prior on \(k\). This can be attributed to the choice of priors and "birth proposal" distributions for both the signal parameters and \(k\). The likelihood is down-weighted at higher temperatures, making it harder to overcome the Occam penalty from including extra parameters in the model. This means quieter sources are less likely to be added and the preferred models have fewer sources.

Figure 2: _Top panel_: The evolution of the temperature chains running in parallel for the toy problem of searching for 2D Gaussian pulses in Gaussian noise. The different colors indicate the initial temperature chain index. Following the parallel tempering recipe of [75], the temperature ladder is tuned according to eq. (21), and the chains start to converge after \(\sim 10^{4}\) iterations. _Middle panel_: The evolution of the swap acceptance rate \(\alpha_{i,j}\) described in eq. (20), per number of walkers \(n_{w}\). For this run we have used \(n_{w}=10\) walkers. After \(10^{5}\) iterations, the swap acceptance rate converges to a single (different) value for every temperature chain. _Bottom panel_: The "in-model" acceptance rate per temperature chain, given by eq. (4).

Figure 3: Posterior on the number of Gaussians, \(k\), at each temperature \(T_{i}\), for the toy problem of section III.3.1. The different colors indicate the initial temperature chain index. Darker colors correspond to colder chains and vice versa.

#### iii.3.2 Modelling power spectra: Searching for the optimal number of B-spline knots One of the most common problems in signal processing is the characterization of the spectra of the data. This is often done by adopting spectral models and fitting the spectra directly in the frequency domain. This methodology is used when the signal of interest has stochastic properties. Examples from GW astronomy include the measurement of stochastic signals with astrophysical or cosmological origin [97; 98]. There are many examples of possible stochastic signals for LISA [16; 20; 93; 98]. Searching for signals with stochastic properties requires flexible spectral models, both for the observatory instrumental noise and the measured stochastic signal. For these reasons, it is sometimes convenient to adopt a versatile model, such as one that is based on B-spline interpolation schemes. B-splines are a geometrical modeling tool, and have proven to be very useful for modelling or generating smoother representations of data. They are piecewise polynomial curves with a certain number of continuous derivatives, and can be parameterised in various ways [101]. For this application, we follow [102], and choose to work with cubic-spline interpolation, using the corresponding SciPy library [103]. The procedure starts by selecting a number of control points, or _knots_, with a given position and amplitude, which the smooth polynomial curve crosses and at which there is a change in the first non-continuous derivative. One of the challenging problems when using such methods is to choose the optimal number of spline knots for fitting the data without overfitting. This is a model selection problem that can be easily solved with dynamical algorithms such as the one presented here.
For our next example, we generate time-series data directly from a theoretical model PSD. The simulated data are shown in figure 4(a). We then use the machinery of Eryn to find the optimal dimensionality for the problem, together with the best-fit parameters for the knots. To ease the computational complexity, we compute the PSD of the time series using the methodology presented in [99]. In more detail, we begin by choosing a new frequency grid, on which we compute the PSD using the _optimal_ number of averaged segments for each given frequency. We essentially split the time-series data at the maximum number of segments that the given choice of window and percentage of data overlap permits, which will allow us to estimate the PSD at each frequency bin with minimal variance. By carefully choosing the window function and distance between the data points, one can generate a numerical spectrum estimate with minimal correlations between them (see red data points in figure 4). For more details we refer the reader to [99; 100]. Finally, we also keep two knots fixed at the edges of the spectra, allowing the sampler to estimate only their amplitude, while the rest of the knot parameters (and their number) are left to be estimated from the data. For the spline knot positions, \(\{\log f_{j,k}\}\), and amplitudes, \(\{\log S_{j,k}\}\), we adopt uniform priors that cover the complete parameter space. Here, the \(j\) index corresponds to the knot number for the given model order \(k\). We also use a ladder of 10 temperatures, with 10 walkers each, while maintaining the same settings for the adaptivity of the temperatures as in section III.3.1. Each walker is initialized at a random point on the parameter space, after drawing the dimensionality \(k\) of the model from \(k\sim\mathcal{U}[1,20]\). We adopt a Gaussian likelihood, with its logarithm written as \[\log p(D|\vec{\theta}_{k})\propto-\frac{1}{2}\sum_{i}N_{i}\left(\frac{D_{i}}{S_{i,k}(\vec{\theta}_{k})}+\log S_{i,k}(\vec{\theta}_{k})\right), \tag{30}\] where \(D_{i}\) is the PSD data value for the given frequency \(f_{i}\), as computed by the method presented in [99; 100], using \(N_{i}\) averaged segments. The \(S_{i,k}(\vec{\theta}_{k})\) is the spline model of order \(k\) evaluated at \(f_{i}\), that depends on a parameter set \[\vec{\theta}_{k}=\{\log f_{1,k},\cdots,\log f_{k,k},\,\log S_{0},\cdots,\log S_{k,k},\,\log S_{k+1}\}, \tag{31}\] in which the \(\log S_{0}\) and \(\log S_{k+1}\) parameters refer to the logarithm of the PSD amplitude of the two fixed knots at the "edges" of the spectrum. Those two parameters correspond to our zeroth model order (\(k=0\)), and are thus always being explored by the walkers of Eryn.
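A minimal sketch of how such a spline-based PSD model and the likelihood of eq. (30) can be evaluated with SciPy is given below; the parameter-packing convention is ours and need not match the one used internally by Eryn, and the sketch assumes at least one interior knot.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_psd(theta_k, log_f_eval, log_f_edges):
    """Cubic-spline PSD model of eqs. (30)-(31), evaluated on a log-frequency grid.
    theta_k packs interior knot positions, interior knot amplitudes, and the two
    fixed-edge amplitudes (one possible packing convention)."""
    k = (len(theta_k) - 2) // 2
    log_f_int = np.asarray(theta_k[:k])
    log_S_int = np.asarray(theta_k[k:2 * k])
    log_S_lo, log_S_hi = theta_k[2 * k:]
    order = np.argsort(log_f_int)                       # knots must be increasing
    knots_f = np.concatenate(([log_f_edges[0]], log_f_int[order], [log_f_edges[1]]))
    knots_S = np.concatenate(([log_S_lo], log_S_int[order], [log_S_hi]))
    return np.exp(CubicSpline(knots_f, knots_S)(log_f_eval))

def log_like(theta_k, D, N, log_f_eval, log_f_edges):
    """Whittle-like likelihood of eq. (30): D and N are the PSD estimates and the
    number of averaged segments per frequency bin."""
    S = spline_psd(theta_k, log_f_eval, log_f_edges)
    return -0.5 * np.sum(N * (D / S + np.log(S)))
```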
The results are shown in figure 4. In particular, in figure 4(b) we show the histogram of the recovered number of knots for the particular data-set. It is clear that 8 spline knots are preferred, two of them being fixed at the edges of the spectrum, and the other six knots free to take any position in the given frequency range. In figure 4(c) we show the 2D sliced posteriors for the spline parameters, \(\{\log S_{j,k}\}\) and \(\{\log f_{j,k}\}\). In this figure, we again stack all the MCMC samples across model orders.

Figure 4: Results for power spectra modelling with a shape-agnostic model. (a) The simulated data (gray), generated from the theoretical model (dashed black line). The PSD computed on an equally-spaced logarithmic grid with the method of [99; 100] is also shown (red data points and their 1-\(\sigma\) errors). The pink solid lines represent models drawn randomly from the posterior chains. (b) The optimal B-spline knots estimated by the dynamical parameter estimation procedure. As shown from this histogram, the optimal interior knot count for this data converges to six, corresponding to eight total knots including the two edge knots. (c) Stacking the MCMC chains for all models. It is evident from this figure that we essentially "scan" the true noise shape (pink solid line), by placing knots across the frequency range (see text for more details). The true spectrum is indicated by the orange solid line.

There is an interesting outcome of this toy investigation: while there is a preferable dimensionality of the model, there is only a weak constraint on the actual positions of the knots. We find that the sampler is virtually "scanning" the PSD data, showing a slightly higher preference for locations between \(-6\) and \(-4\) in log-frequency, where the spectrum follows a more complicated shape. Finally, in figure 4(a), the data (gray solid line and red data points) are shown together with model evaluations drawn from the posterior samples (pink solid lines). ## IV Examples from gravitational wave astronomy In recent years, we have witnessed the beginning of Gravitational Wave Astronomy. Since the first detection [104], dozens of waveform signatures have been measured with the current network of observatories. At the time of the writing of this paper, more than 90 events have been recorded [7]; the vast majority of them are black hole (BH) binary mergers, with a few of them being binary neutron star (NS) and BH-NS mergers. At the same time, detector networks are being improved [105, 106] and there are plans to expand them with the addition of new observatories, such as the Einstein Telescope [107, 108] or Cosmic Explorer [109, 110]. Those detectors will unlock the sky to larger redshifts \(z\), allowing access to a vast number of potential sources. In addition, space missions, such as LISA [16, 20], are predicted to be signal-dominated observatories, with many types of sources populating their data streams. In fact, we expect that source confusion will be one of the primary challenges in future data analysis efforts in gravitational wave astronomy. In a typical data-set, we expect an unknown number of signals, originating from sources that generate waveforms with different characteristics. Those range from the stellar-mass BH binaries now frequently observed by ground-based detectors, to the supermassive BH binaries, extreme mass ratio inspirals, ultra compact Galactic binaries (UCB), and stochastic GW signals from both astrophysical and possibly cosmological origin [16, 20, 98]. For this final example, we will focus on the LISA mission, and in particular on the case of discriminating UCB signals. ### Application to LISA data and the Ultra Compact Galactic Binaries LISA is going to measure GW signals in the mHz regime, accessing sources of all the aforementioned types. As already discussed, the most numerous of them are going to be the UCBs, which will be almost monochromatic in the LISA band. 
Out of the millions of sources, only \(\sim\mathcal{O}(10^{4})\) will be individually resolvable, and the rest will generate a confusion signal. As a consequence, for the duration of the mission, we will need to disentangle tens of thousands of sources which will be overlapping in both time and frequency domains. This is no trivial task, but various strategies have already been proposed for analyzing such challenging data-sets. For example, Gaussian Processes can be employed [24], or Swarm Optimization techniques [25], or hybrid swarm-based algorithms [112]. Pipelines based on MCMC methods have been tested extensively [21, 23, 27, 113], and have been demonstrated to be able to tackle complex cases where signals are overlapping. Here, we will focus on the same problem, employing Eryn to solve a down-scaled version of the UCB challenge. It is down-scaled because we focus only on a single narrow frequency band, containing several overlapping signals. In addition, we focus solely on demonstrating the capabilities of Eryn on dynamic parameter estimation for UCB type sources and no other types of signals are contained in the data (e.g., chirping signatures from supermassive BH binaries). At the same time, we have access to the level of instrumental noise, which is shown in both panels of figure 5.

Figure 5: _Top panel_: The simulated data used for demonstrating the capabilities of Eryn in tackling a high-dimensional problem. A total of 10 Ultra Compact Binaries in the vicinity of our Galaxy emitting Gravitational Wave signals at the mHz range. Here we plot the power spectral density of the \(A\) data channel of LISA. The catalogue of sources is taken from the second LISA Data Challenge [111]. Each signal is represented by a different colour. _Bottom panel_: The same data-set, now comparing the injected signal against the solution yielded by Eryn (see text for more details). We have plotted the shaded area by sampling the joint posterior on model order \(k\) and the corresponding parameters.

Searching for the UCB signals across the complete LISA band requires a more elaborate implementation of this simplified pipeline. This pipeline will be focusing on solving the complete second LISA Data Challenge [111], and is going to be presented in future work [31]. We choose to work on the frequency segment between 3.997 and 4 mHz, which contains 10 UCB objects, drawn directly from the LDC2 catalogue [111]. Those are shown in the top panel of figure 5, which shows the power spectrum of the \(A\) data channel of LISA. We use the noise-orthogonal \(A\), \(E\), and \(T\) Time Delay Interferometry variables [114; 115; 116], which are linear combinations of the LISA relative frequency TDI Michelson measurements \(X\), \(Y\), and \(Z\) as: \[\begin{split} A&=\frac{1}{\sqrt{2}}(Z-X),\quad E =\frac{1}{\sqrt{6}}(X-2Y+Z),\\ T&=\frac{1}{\sqrt{3}}(X+Y+Z).\end{split} \tag{32}\] In ideal conditions (equal noises across spacecrafts, and equal LISA arms), the noise in \(A\) and \(E\) is independent, while the \(T\) data stream can be used as a signal-insensitive _null_ channel, useful for instrument noise calibration. Since we perform analysis on a noise-free injection, we will be neglecting the \(T\) channel altogether.
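For reference, eq. (32) translates directly into code; the short sketch below assumes the \(X\), \(Y\), \(Z\) streams are given as NumPy arrays.

```python
import numpy as np

def aet_from_xyz(X, Y, Z):
    """Eq. (32): noise-orthogonal TDI combinations from the Michelson variables."""
    A = (Z - X) / np.sqrt(2.0)
    E = (X - 2.0 * Y + Z) / np.sqrt(6.0)
    T = (X + Y + Z) / np.sqrt(3.0)
    return A, E, T
```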
We simulated the injection data for an observation time of \(\mathrm{T_{obs}}=1\) year. The _optimal_ SNR for each injected source, \(\rho_{\mathrm{opt}}\), is given in table 1. The \(\rho_{\mathrm{opt}}\) quantity refers to the SNR of each source in _isolation_, with respect only to the instrumental noise, and can be calculated as \[\rho_{\mathrm{opt}}^{2}=\sum_{C}\left(h_{C}|h_{C}\right)_{C}, \tag{33}\] with \(C\in\{A,E\}\) the noise-orthogonal TDI channels of eq. (32), while the \(\left(\cdot|\cdot\right)_{C}\) notation represents the noise-weighted inner product expressed for two time series \(a\) and \(b\) as \[\left(a|b\right)_{C}=2\int\limits_{0}^{\infty}\mathrm{d}f\left[\tilde{a}^{*}(f)\tilde{b}(f)+\tilde{a}(f)\tilde{b}^{*}(f)\right]/\tilde{S}_{n,C}(f). \tag{34}\] The tilde represents the data in the Fourier frequency domain, and the asterisk indicates complex conjugate. The \(\tilde{S}_{n,C}(f)\) is the one-sided PSD of the noise in TDI channel \(C\). Under our assumptions \(S_{n,A}(f)=S_{n,E}(f)\). For our investigation we chose to analyze noiseless data (no noise realization), while in the likelihood we are using the PSD noise levels taken from the LISA design studies [117]. For the signals, we utilize the fast frequency-domain UCB waveform model of [14]. Then, the two polarizations of an emitting UCB can be written as \[\begin{split} h_{+}(t)&=\frac{2\mathcal{M}}{D_{L}}\left(\pi f_{\mathrm{gw}}(t)\right)^{2/3}\left(1+\cos^{2}\iota\right)\cos\phi(t),\\ h_{\times}(t)&=-\frac{4\mathcal{M}}{D_{L}}\left(\pi f_{\mathrm{gw}}(t)\right)^{2/3}\cos\iota\sin\phi(t),\end{split} \tag{35}\] where \(\mathcal{M}\) is the chirp mass, \(f_{\mathrm{gw}}\) is the instantaneous gravitational wave frequency, \(D_{L}\) is the luminosity distance, \(\iota\) is the inclination of the binary orbit, and \(\phi(t)\) is the gravitational wave phase. The phase can be expressed as \(\phi(t)=\phi_{0}+2\pi\int^{t}f_{\mathrm{gw}}(t^{\prime})\,dt^{\prime}\), with \(\phi_{0}\) being an initial arbitrary phase shift. For more details about the waveform model, we refer the reader to [118; 14; 80]. In our simplified scenario, each binary signal in the Solar System Barycenter is determined by a set of eight parameters. Those are \(\vec{\theta}=\{\mathcal{A},\,f_{\mathrm{gw}}\,[\mathrm{mHz}],\,\dot{f}_{\mathrm{gw}}\,[\mathrm{Hz/s}],\,\phi_{0},\,\cos\iota,\psi,\lambda,\sin\beta\}\), where \(\mathcal{A}\) is the overall amplitude, \(\dot{f}_{\mathrm{gw}}\) is the first derivative of the gravitational-wave frequency, \(\psi\) the polarization, \(\lambda\) is the ecliptic longitude, and \(\beta\) the ecliptic latitude of the binary. The amplitude of the signal is calculated as \[\mathcal{A}=\left(2\mathcal{M}^{5/3}\pi^{2/3}f_{\mathrm{gw}}^{2/3}\right)/D_{L}, \tag{36}\] which can be used to obtain a rough SNR estimate, via [21] \[\rho^{2}=\frac{\mathcal{A}^{2}\mathrm{T_{obs}}\sin^{2}(f_{\mathrm{gw}}/f_{*})}{4S_{n}(f_{\mathrm{gw}})}, \tag{37}\] with \(S_{n}(f_{\mathrm{gw}})\) being the instrumental noise power spectral density at frequency \(f_{\mathrm{gw}}\), and \(f_{*}=1/(2\pi L)\), where \(L\) is the LISA arm length. Given eq. (36) and (37), we find it convenient to directly sample on \(\rho\) instead of \(\mathcal{A}\), which also yields a more illustrative measure of the amplitude of each binary.
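A small sketch of eqs. (36)-(37) is given below; the numerical arm-length value and the unit conventions (the caller must supply chirp mass and distance in units consistent with the frequency) are our assumptions, not prescriptions from the text.

```python
import numpy as np

C_LIGHT = 299792458.0          # m/s
L_ARM = 2.5e9                  # assumed LISA arm length in m
F_STAR = C_LIGHT / (2.0 * np.pi * L_ARM)   # transfer frequency f_* = 1/(2 pi L) in SI units

def ucb_amplitude(Mc, f_gw, D_L):
    """Eq. (36): overall amplitude from chirp mass, GW frequency and luminosity
    distance, in geometrized/consistent units chosen by the caller."""
    return 2.0 * Mc**(5.0 / 3.0) * (np.pi * f_gw)**(2.0 / 3.0) / D_L

def rough_snr(A, f_gw, T_obs, S_n):
    """Eq. (37): quick SNR estimate, the quantity rho sampled instead of A."""
    return np.sqrt(A**2 * T_obs * np.sin(f_gw / F_STAR)**2 / (4.0 * S_n))

def amplitude_from_snr(rho, f_gw, T_obs, S_n):
    """Invert eq. (37), which is what sampling directly on rho requires."""
    return rho * np.sqrt(4.0 * S_n / (T_obs * np.sin(f_gw / F_STAR)**2))
```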
We use wide uniform priors for all the rest of the binary parameters, covering essentially the complete parameter space. The exception is again the amplitude (SNR), \(\rho\), where we adopt a prior which was first introduced in [119] and then adapted in [21]. The prior density can be expressed as \[p(\rho)=\frac{3\rho}{4\rho_{*}\left(1+\rho/(4\rho_{*})\right)}, \tag{38}\] where \(\rho_{*}\) is a given constant that specifies the peak of the above distribution.

\begin{table} \begin{tabular}{|c|c c|} \hline \# & \(f_{\mathrm{gw}}\) [mHz] & \(\rho_{\mathrm{opt}}\) \\ \hline \hline 1 & 3.99780 & 9.98 \\ 2 & 3.99781 & 46.70 \\ 3 & 3.99784 & 4.55 \\ 4 & 3.99854 & 39.45 \\ 5 & 3.99873 & 13.02 \\ 6 & 3.99882 & 8.47 \\ 7 & 3.99919 & 10.88 \\ 8 & 3.99939 & 19.07 \\ 9 & 3.99964 & 20.00 \\ 10 & 3.99965 & 7.99 \\ \hline \end{tabular} \end{table} Table 1: The optimal SNR \(\rho_{\mathrm{opt}}\) for each of the 10 injected sources, computed for the given duration of the mission (see eq. (37)). The dominant emission frequency \(f_{\mathrm{gw}}\) is also given for reference.

This distribution is designed to prevent the proposal of sources with very small SNR in the model, as it drops sharply for \(\rho\to 0\). Those weak sources do not significantly affect the likelihood, and so their inclusion must be penalised by the prior [120]. This prior choice forces the sampler to explore only potential sources with non-zero SNR, avoiding populating the chains with numerous undetectable signals. This prior performs adequately in this problem, but there are other solutions one could adopt in order to keep control of the number of very weak sources. This discussion, which sets the grounds for a _global-fit_ analysis pipeline for the LISA data [21], is out of the scope of this paper, but a more detailed description will be presented in a future work [31]. Search Phase: Before parameter estimation, we initiate a _search phase_ of our analysis, with the aim of getting the walkers to a better starting point on the posterior surface. This phase consists of an iterative brute-force procedure, based on drawing a very large number of proposals, then maximizing the likelihood over the initial phase \(\phi_{0}\), and finally performing a rapid MCMC sampling over the parameter space, using only a one-source model (therefore there is no dynamical parameter space). In particular, we draw \(5\times 10^{6}\) points in the parameter space, and after phase maximization, we use them as starting points for a parallel tempered MCMC run with \(N_{T}=10\) temperatures, each running with \(n_{w}=500\) walkers. When this step concludes, we keep the 100 best samples in terms of likelihood value and use their corresponding parameter estimates as starting points for the parameter estimation portion of the analysis. We also use the maximum likelihood solution to subtract the source found from the data. We then use the residual data to search for another source, and this process repeats until there is no signal found with SNR \(\rho>5\). In between successive iterations of the single-binary search, we run another MCMC over all sources found so far in order to adjust the parameters to account for correlations and overlap between sources. After convergence, we found eight sources in our data-set with an optimal SNR greater than 5. We take these found sources and add them to all walkers in the sampler at the beginning of the full MCMC run described below. Parameter estimation: During this step, we perform hybrid MCMC sampling, where we both update the found sources (in-model) and dynamically search for new and weaker sources in the data employing Reversible Jump sampling. For the number of signals \(k\), we adopt a uniform prior \(k\sim\mathcal{U}[6,\,20]\). 
For the sake of convenience in this simple application, we keep the six loudest binaries found during the search phase as _fixed_. This means that we still sample their waveform parameters, but they are not allowed to be removed by the Reversible Jump process. We chose this setup in order to accelerate the convergence of the algorithm, being confident that these sources are part of the solution. In future work, this will be adjusted to deal with the much larger complexity of the full problem. Concerning the sampler settings, we use the adaptive parallel tempering scheme of section II.4, building a temperature ladder of \(N_{T}=10\) temperatures, with 100 walkers for each temperature. For this run, we have also utilized the Multiple Try Metropolis algorithm (see section II.3) in order to improve the acceptance rate in the reversible jump proposal. We have also tried the Delayed Rejection scheme which is implemented in Eryn, but we found that the Multiple-Try strategy yields more efficient sampling. Finally, we have utilized the basic stretch and group stretch proposals that were described in section II.1.

Figure 6: _Left panel_: In this figure, we show the posterior on the number of UCB sources in the data. The true injected number is shown with the red dashed line. It is clear that, for the given measurement duration of the particular data set, we manage to confidently resolve eight binaries out of a total of ten. _Right panel_: Corner plot for two of the eight parameters characterising each UCB source. These are the amplitude, expressed as an SNR \(\rho\), and the dominant emission (or initial) frequency, \(f_{\rm gw}\) [mHz] (see text for more details). The violet crosses represent the injected parameter values. A corner plot for more parameters is shown in figure A.7 in the Appendix.

After convergence, the result is shown in figures 5 and 6. In figure 6, the sampled posterior on the number of sources \(k\) is presented. In this histogram, we have added the six fixed binaries to the actual number of signals being sampled via the Reversible Jump algorithm. It is fairly obvious that we have managed to confidently resolve eight out of the ten injected binary signals. The fact that we do not favour 10 sources can be explained partly by the low SNR of the signals (see table 1) and partly by confusion from source overlap (also shown in figure 5). Additionally, the result of figure 6 depends on the given observation duration. The greater the \(T_{\text{obs}}\), the better our ability to resolve the confused sources. Thus, in that case, we should expect more Reversible Jump iterations across the higher dimensional models. On the right panel, in figure 6, the ensemble 2D posterior slice is shown, for two selected parameters. We call it ensemble because we are again "stacking" all the chains for these two parameters for all sources for all model orders \(k\). We chose to show only the amplitude (the \(\rho\) parameter explained in eq. (37) above) and the dominant emission frequency \(f_{\text{gw}}\), which illustrates the number of sources resolved, and how they overlap in frequency. A corner plot for more parameters is shown in figure A.7 in Appendix A. We also show the true injection values, marked as crosses, on top of the 2D posterior. From this plot alone, one can see that the sampler is exploring efficiently the parameter space, converging to the true values of the resolvable binaries that were injected. 
## V Discussion We have implemented Eryn, a Bayesian sampling package capable of performing efficient trans-dimensional inference, by employing different techniques that improve its acceptance rate. These techniques are the affine invariant sampling, the adaptive parallel tempering, the delayed rejection, and multiple try metropolis, in combination with the construction of informative proposal distributions for the parameters of the models. The structure of Eryn is based on the widely used software emcee [58], enhanced with the ability of performing Reversible Jumps [30] between different model spaces. The sampler capabilities have been demonstrated with toy models that are commonly encountered in different data analysis problems. We have begun with an application to signal detection, and in particular to searching for simple signals in the form of Gaussian pulses in the presence of Gaussian noise (see section III.3.1). In section III.3.2, we applied our algorithm to a problem of modeling power spectra with arbitrary shapes in the frequency domain. In such cases, it is convenient to define models based on B-splines, which are able to faithfully capture the shape of any spectral data. However, in order to avoid over-fitting situations, the optimal order of the model (_i.e._ the optimal number of spline knots) needs to be estimated from the data. This can be done either sequentially, by trying models of different dimensionality and then comparing their performance, or dynamically, by using trans-dimensional algorithms such as Eryn. This class of problems is often encountered in cosmology [121, 122], where the signal of interest is stochastic in nature, and sometimes the prior knowledge on its shape is very limited. As already discussed, this is especially true for future GW observatories, which open the possibility of detecting such signals from both astrophysical and cosmological origin [16, 98, 102]. The different theoretical models produce spectra with distinct shapes, increasing the need for shape-agnostic spectral models, such as the B-spline used here. Finally, in section IV.1, we demonstrated Eryn in a more complicated problem, that of the analysis of ultra compact binary signals measured by the future LISA detector. These objects are going to produce the majority of the signals in the LISA data, each emitting almost monochromatic radiation. Their vast number will generate a confusion foreground, while only a few thousand of them will be resolvable from the data. We employ our tools described in this work, together with a search phase that is based on iteratively running the sampler on "static models" (no trans-dimensional moves) with phase-maximized likelihoods. We do these runs on the residuals of each iteration, with the aim of extracting all bright sources. In order to achieve faster convergence of our parameter estimation run, we choose to keep the brightest sources found during the search as fixed (the minimum model order is \(k=6\)), while the Reversible Jump algorithm is used to search for weaker signals in the data. This is purely a choice that allows quick convergence in this fully-controlled and simplified LISA data-set. We perform this analysis for a mission duration of \(T_{\text{obs}}=1\) year and only on a single narrow frequency band around 4 mHz, which contains a total of 10 binary signals. It is worth noting here that the synthetic data were produced assuming idealized conditions. 
This means that we do not consider any data irregularities, such as data gaps, glitches, and spectral lines, or any other contamination originating from the mixing of signals of different types (such as supermassive BH binaries). In the end, as shown in figure 6, we manage to recover eight out of ten injected signals. This result makes sense given the relative strength of the injections and their waveform overlap. Many of the injected sources have an optimal SNR in isolation which is rather low (see table 1), so these are more susceptible to deterioration when we account for signal overlap. The above investigations demonstrate that the dynamical parameter estimation capabilities of Eryn are suitable for these types of computationally demanding problems. Eryn has already been used in several works that have already been published [80, 102, 123], or are going to appear soon. The work presented in this paper is the initial part of our efforts toward implementing a data analysis pipeline for LISA data. This pipeline will be demonstrated on the LDC2 data-set [111], which contains multiple types of signals overlapping in both time and frequency domains. That being said, Eryn is a generic and versatile sampler, which can be used in any investigation that requires Reversible Jump sampling, and to our knowledge is one of the very few statistical tools of this kind that is not specialized to a single type of analysis (see discussion in section II.6). ###### Acknowledgements. We wish to thank S. Babak, M. Le Jeune, S. Marsat, T. Littenberg, and N. Cornish for their useful comments and very fruitful discussions. N. Stergioulas and N. Karnesis acknowledge support from the Gr-PRODEX 2019 funding program (PEA 4000132310). ## Appendix A In figure A.7 we show the triangle plot of the stacked posterior points as sampled by Eryn, for the investigation of section IV. The difference to figure 6(b) is that here we include more parameters of the sources, but we still do not include all parameters for the sake of clarity.
2310.05733
Polyhedral approach to weighted connected matchings in general graphs
A connected matching in a graph G consists of a set of pairwise disjoint edges whose covered vertices induce a connected subgraph of G. While finding a connected matching of maximum cardinality is a well-solved problem, it is NP-hard to determine an optimal connected matching in an edge-weighted graph, even in the planar bipartite case. We present two mixed integer programming formulations and a sophisticated branch-and-cut scheme to find weighted connected matchings in general graphs. The formulations explore different polyhedra associated to this problem, including strong valid inequalities both from the matching polytope and from the connected subgraph polytope. We conjecture that one attains a tight approximation of the convex hull of connected matchings using our strongest formulation, and report encouraging computational results over DIMACS Implementation Challenge benchmark instances. The source code of the complete implementation is also made available.
Phillippe Samer, Phablo F. S. Moura
2023-10-09T14:05:46Z
http://arxiv.org/abs/2310.05733v1
# Polyhedral approach to weighted connected matchings in general graphs ###### Abstract A connected matching in a graph \(G\) consists of a set of pairwise disjoint edges whose covered vertices induce a connected subgraph of \(G\). While finding a connected matching of maximum cardinality is a well-solved problem, it is NP-hard to determine an optimal connected matching in an edge-weighted graph, even in the planar bipartite case. We present two mixed integer programming formulations and a sophisticated branch-and-cut scheme to find weighted connected matchings in general graphs. The formulations explore different polyhedra associated to this problem, including strong valid inequalities both from the matching polytope and from the connected subgraph polytope. We conjecture that one attains a tight approximation of the convex hull of connected matchings using our strongest formulation, and report encouraging computational results over DIMACS Implementation Challenge benchmark instances. The source code of the complete implementation is also made available. **Acknowledgements**. P. Samer gratefully acknowledges the work of the institute administration at UiB, according to the Working Environment Act § 14-7 of the Royal Norwegian Ministry of Labour and Social Inclusion, which enabled the mobility period leading to the research results presented in this paper, as well as the support by the Research Council of Norway through the research project 249994 CLASSIS. ## 1 Introduction A \(\mathsf{P}\)-matching in a graph \(G\) consists of a matching \(M\) such that the subgraph induced by vertices covered by \(M\) has some property \(\mathsf{P}\), _e.g._ being connected. This paper is devoted to the problem of computing maximum weight _connected matchings_ in a general graph: a set of pairwise disjoint edges of maximum total weight, whose covered vertices induce a connected subgraph of \(G\). Exciting results on the computational tractability of determining connected matchings attract justified attention in the literature around this appealing generalization of classical matchings, which are already such a fundamental structure in bridging theory and sophisticated applications across domains that range from early logistics, as the postperson problem illustrates, to novel programs in kidney paired exchange (Lam and Mak-Hau, 2020). A striking dichotomy here is that finding a maximum cardinality connected matching is a well-solved problem, while the edge-weighted counterpart is NP-hard even in very restricted graph classes. As we outline below, our contributions represent a step forward in sharpening our ability to face that challenge, proposing a polyhedral combinatorics framework to actually determine maximum weight connected matchings in practice. We remark that work on \(\mathsf{P}\)-matching problems dates back at least to Stockmeyer and Vazirani (1982) on induced matchings. Increased attention is due to thorough advances by Golumbic et al. (2001) on uniquely restricted matchings, and Goddard et al. (2005) contemplating acyclic, connected, and disconnected matchings. More recently, a number of fine-grained complexity results about the weighted connected matching (WCM) problem were presented by Gomes et al. (2022, 2023). They establish the NP-hardness of finding maximum weight connected matchings, for instance, even in planar graphs of maximum vertex degree 3 and edge weights in \(\{-1,1\}\), and in planar bipartite graphs with edge weights in \(\{0,1\}\). 
In light of that complexity barrier, our hope is to bring the machinery of polyhedral studies and mixed-integer linear programming (MILP) to bear on the investigation of WCM in general graphs. We seek to contribute both to the theoretical study in understanding and approximating the polytope \(\mathfrak{C}(G)\) of connected matchings (_i.e._ the convex hull of characteristic vectors in \(\mathbb{R}^{|E(G)|}\) of connected matchings in \(G\)), and to practical algorithms and their computer implementation. We now stand on decades' worth of progress in matching theory, in combinatorial optimization problems around connectivity and network design, and in mathematical programming computation. It is therefore expected that we are able to harness the polyhedral point of view, and evaluate to what extent it leads to the practical solution of the WCM problem. The main idea in this paper is that there are powerful, elegant polyhedral descriptions of WCM in general graphs, in the sense that we may expect a strong foundation of polyhedral results and progressively more effective MILP solvers for this problem. We defend the standpoint that only the combination of theoretical and applied results from communities in combinatorics and mathematical programming may truly settle (and push) the limitations around finding optimal weighted connected matchings. From this perspective, the carefully designed formulations and the open-source software that we propose are useful ingredients towards that end. In summary, our main contributions are the following. 1. Polyhedral descriptions yielding exact integer programming algorithms to find weighted connected matchings in general graphs. We present both a compact, extended formulation that can be easily fed to a black-box solver, and an exponential formulation on the space of natural variables only, using blossom inequalities from the matching polytope, and minimal separators and indegree inequalities from the connected subgraph polytope. 2. Detailed presentation of a sophisticated branch-and-cut scheme based on the exponential formulation. The resulting algorithm, as well as the solver based on the compact formulation, attain encouraging computational performance on four different sets of benchmark instances of connected subgraph problems from the 11th DIMACS Implementation Challenge, and settles a state-of-the-art baseline for future work. 3. Free, open-souce implementation of the complete algorithms, including a series of useful, general-purpose algorithmic components - all of the separation procedures more prominently. ## 2 Polyhedral descriptions of weighted connected matchings In this section we present the main idea of the paper. We concentrate here on the polyhedra leading to MILP formulations for WCM. Section 3 continues with algorithmic aspects, including our particular design choices based on preliminary computational evaluations. Our terminology and notation are standard in algorithmics and graph theory. Note that we write \([k]\stackrel{{\text{def}}}{{=}}\{1,\ldots,k\}\), and that we denote by \(2^{S}\) the power set of \(S\), that is, the set of all subsets of \(S\). ### Extended formulation We begin with a compact, extended formulation. That is, a system of inequalities in higher dimensional space which (i) has a number of variables and constraints that is polynomial in the input size, and (ii) whose orthogonal projection into the original space contains all (and only those) lattice points corresponding to integer feasible solutions. 
In particular, we use the well-known approach of modelling the flow of a commodity in an auxiliary network to impose the connectivity of the induced subgraph; see Magnanti and Wolsey (1995) for a thorough introduction. We denote the flow network by \(\mathcal{D}=\left(V(G)\cup\left\{s\right\},\mathcal{A}\right)\) where \(s\) is an artificial source vertex, and \(\mathcal{A}\) contains both orientations of each original edge in \(G\), besides an arc from \(s\) to each other vertex. That is, \(\mathcal{A}\stackrel{{\mathrm{def}}}{{=}}\left\{(u,v),(v,u):\text{ for }\left\{u,v\right\}\in E(G)\right\}\cup\left\{(s,u):u\in V(G)\right\}\). As usual, let \(n\stackrel{{\mathrm{def}}}{{=}}|V(G)|\), \(m\stackrel{{\mathrm{def}}}{{=}}|E(G)|\), \(\delta:V(G)\to 2^{E}\) denote the set of edges incident to a vertex of graph \(G\), and let \(\delta^{+},\delta^{-}:V(\mathcal{D})\to 2^{\mathcal{A}}\) denote the set of arcs leaving (resp. entering) a vertex of network \(\mathcal{D}\). The model we propose imposes connectivity of the solution by requiring that there be an arborescence rooted in \(s\), so that there is an arc reaching a given vertex if and only if it is saturated by a matching edge. To accomplish that, we note that each matching \(M\) covers \(2\cdot|M|\) vertices, and so determine that exactly \(2\cdot|M|\) _units of flow_ leave the artificial source \(s\), and that each covered vertex _absorbs_ a flow unit. Specifically, a first MILP formulation of the WCM problem is given by \[\max\left\{\sum_{e\in E(G)}w_{e}x_{e}:(\mathbf{x},\mathbf{y},\mathbf{f})\in \mathcal{P}_{\mathrm{ext}}(G)\cap\left\{0,1\right\}^{m}\times\left\{0,1\right\}^{2m+n}\times\mathbb{Q}_{+}^{2m+n}\right\}, \tag{1}\] where \(\mathcal{P}_{\mathrm{ext}}(G)\) is the following polyhedral region: \[\sum_{e\in\delta(u)}x_{e}\leq 1\qquad\text{for each }u\in V(G), \tag{2}\] \[\sum_{a\in\delta^{-}(u)}y_{a}=\sum_{e\in\delta(u)}x_{e}\qquad\text{for each }u\in V(G), \tag{3}\] \[\sum_{u\in V(G)}y_{su}\leq 1, \tag{4}\] \[y_{uv}\leq\sum_{a\in\delta^{-}(u)}y_{a}\qquad\text{for each }u\in V(G)\text{ and each }uv\in\delta^{+}(u), \tag{5}\] \[f_{a}\leq n\cdot y_{a}\qquad\text{for each }a\in\mathcal{A}, \tag{6}\] \[\sum_{u\in V(G)}f_{su}=2\cdot\sum_{e\in E(G)}x_{e}, \tag{7}\] \[\sum_{a\in\delta^{-}(u)}f_{a}-\sum_{a\in\delta^{+}(u)}f_{a}=\sum_{a\in\delta^{-}(u)}y_{a}\qquad\text{for each }u\in V(G), \tag{8}\] \[x_{e}\geq 0\qquad\text{for each }e\in E(G), \tag{9}\] \[y_{a}\geq 0\qquad\text{for each }a\in\mathcal{A}, \tag{10}\] \[f_{a}\geq 0\qquad\text{for each }a\in\mathcal{A}. \tag{11}\] Note that variables \(\mathbf{x}\) determine which edges of \(G\) are in the solution matching, variables \(\mathbf{y}\) determine an orientation by allowing arcs of \(\mathcal{D}\) to carry flow, and variables \(\mathbf{f}\) give the actual flow running on each arc. The classical degree inequalities in (2) are enough to have integer points where at most one edge reaches each vertex. Constraints (3) link \(\mathbf{x}\) and \(\mathbf{y}\) variables, by setting the number of (directed) arcs entering \(u\) as the number of (undirected) edges incident to it - either zero or one, as enforced by the previous constraints. Constraint (4) establishes that the artificial flow source \(s\) should be linked to at most one vertex in \(G\); note that we expect exactly one arc leaving \(s\) in interesting examples, but the model still allows an empty solution (_e.g._ when the objective coefficients \(\mathbf{w}\) are negative everywhere). Constraints (5), which capture that we may only open an out arc from \(u\) to \(v\) if some in arc arrives at \(u\), are actually implied by the other sets of inequalities but generally perceived as helping solve the LP relaxation faster. The remaining constraints concern the network flow. Inequalities (6) bind \(\mathbf{y}\) and \(\mathbf{f}\) variables: nonzero flow is only allowed in open arcs, and the maximum flow is \(n=|V(G)|\). Constraint (7) establishes that the flow leaving the artificial source \(s\) is exactly the number of vertices saturated by the matching (_i.e._ twice as many as there are edges in the matching). Lastly, flow balance constraints (8) impose a single connected component in the solution: vertices in the arborescence (namely, those whose number of incoming arcs in the right-hand side is one) consume one unit of flow, while others may not interfere either consuming or creating flow.
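Although the implementation accompanying this paper is written in C++, the compact model above can be stated almost verbatim through a modelling interface. The sketch below uses Gurobi's Python API on a tiny, made-up instance; it omits the implied constraints (5) and is meant only as an illustration of formulation (1)-(11), not as the paper's solver.

```python
import gurobipy as gp
from gurobipy import GRB

# Small illustrative instance (a 4-cycle with a chord); weights are arbitrary.
V = [0, 1, 2, 3]
E = {(0, 1): 2.0, (1, 2): 1.5, (2, 3): 2.0, (0, 3): -1.0, (0, 2): 0.5}
arcs = [(u, v) for (u, v) in E] + [(v, u) for (u, v) in E] + [("s", u) for u in V]

m = gp.Model("wcm_extended")
x = m.addVars(E.keys(), vtype=GRB.BINARY, name="x")
y = m.addVars(arcs, vtype=GRB.BINARY, name="y")
f = m.addVars(arcs, lb=0.0, name="f")

def delta(u):
    # undirected edges incident to vertex u
    return [e for e in E if u in e]

m.setObjective(gp.quicksum(E[e] * x[e] for e in E), GRB.MAXIMIZE)
m.addConstrs((gp.quicksum(x[e] for e in delta(u)) <= 1 for u in V), "degree")              # (2)
m.addConstrs((y.sum('*', u) == gp.quicksum(x[e] for e in delta(u)) for u in V), "link")    # (3)
m.addConstr(gp.quicksum(y["s", u] for u in V) <= 1, "root")                                # (4)
m.addConstrs((f[a] <= len(V) * y[a] for a in arcs), "capacity")                            # (6)
m.addConstr(gp.quicksum(f["s", u] for u in V) == 2 * x.sum(), "supply")                    # (7)
m.addConstrs((f.sum('*', u) - f.sum(u, '*') == y.sum('*', u) for u in V), "balance")       # (8)

m.optimize()
print("connected matching:", [e for e in E if x[e].X > 0.5])
```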
The main advantage of having a MILP formulation with polynomial number of variables and constraints is the practicality of just feeding it to a black-box solver, automatically benefiting from increased performance due to software and hardware improvement. On the other hand, while an extended formulation may have a much smaller number of facets than its projection, decades of mathematical programming computation led to numerous examples where superior performance is attained by branch-and-cut algorithms that dynamically identify and add cutting planes violated by relaxation solutions. That is the path we now tread. First, designing a formulation for WCM anchored in solid knowledge of the underlying connected subgraph polytope and the classical matching polytope. Next, in Section 3, filling in the details of the best-performing branch-and-cut scheme we devised and offer in our accompanying software repository. ### Formulation in the original space of variables The guiding principle in the design of our second formulation for WCM is to waive the concern about model size and build on strong valid inequalities leading to the best-performing solvers for closely related problems, defined over the larger polytopes of classical matchings and connected subgraphs. The classical matching polytope is well-known since the very birth of the polyhedral combinatorics field - tied to the celebrated results of Edmonds (1965), and better understood in light of the combinatorial proof of Balinski (1972). Namely, Edmonds showed that, together with degree inequalities in (2) above, _blossom inequalities_ give all the facets missing in a complete characterization of the matching polytope, and can be separated efficiently. On the other hand, as the maximum-weight connected subgraph (MWCS) problem is NP-hard even in very restricted particular cases (Johnson, 1985), there is no hope for an ideal formulation of the connected subgraph polytope under the assumption that \(\mathsf{P}\neq\mathsf{NP}\) and the equivalence of separation and optimization. 
While there are many options for modelling induced connectivity, a recent performance breakthrough in exact solvers for problems like the MWCS (Wang et al., 2017; Alvarez-Miranda et al., 2013) and Steiner trees (Fischetti et al., 2017) is attributed to _minimal separator inequalities_ (MSI) on vertex choosing variables \(y\in\{0,1\}^{|V(G)|}\): \[y_{a}+y_{b}-\sum_{u\in S}y_{u}\leq 1, \tag{12}\] for each pair of non-adjacent vertices \(a\) and \(b\), and each \((a,b)\)-separator \(S\subseteq V\backslash\left\{a,b\right\}\), _i.e._ there are no paths connecting \(a\) to \(b\) if we remove \(S\) from \(G\). The eminently readable paper by Wang et al. (2017) includes a proof that (12) defines a facet of the connected subgraph polytope if and only if the separator \(S\) is minimal. It is worth remarking that the breakthrough we refer to in practical evidence (runtime of resulting MILP solvers) does not agree with the theoretical intuition given by inclusion of different polyhedral relaxations. In particular, the recent work of Rehfeldt et al. (2022) proves that the LP bound from the MSI relaxation for induced connectivity is weaker than earlier alternatives based on combining vertex and edge-variables. That is in stark contrast to the experimental results mentioned above. In particular, the praised solver of Fischetti et al. (2017) won most of the categories at the 11th DIMACS Implementation Challenge (DIMACS'11). In line with our guiding principle in this section, we follow the standpoint of those authors, who conclude that a simpler model defined in the natural space of variables and a careful implementation framework enabled their superior performance. To define a system of inequalities combining the separation of MSI (12) and only using natural design variables \(\mathbf{x}\in\{0,1\}^{|E(G)|}\) in the original space of polytope \(\mathfrak{C}(G)\), we use the fact that vertex \(u\) belongs to the subgraph induced by matching \(M\) if and only if there is exactly one edge in \(M\) incident to \(u\). Hence, projecting the MSI onto the space of \(\mathbf{x}\) variables using \(y_{u}\mapsto\sum_{e\in\delta(u)}x_{e}\), we derive the first IP formulation to find maximum weight connected matchings using MSI, \[\max\left\{\sum_{e\in E(G)}w_{e}x_{e}:\mathbf{x}\in\mathcal{P}_{\mathrm{sep}} (G)\cap\{0,1\}^{m}\right\}, \tag{13}\] with \(\mathcal{P}_{\mathrm{sep}}(G)\) defined by degree inequalities (2), non-negativity bounds (9), and the projection of separator inequalities (12): \[\sum_{e\in\delta(a)}x_{e}+\sum_{e\in\delta(b)}x_{e}-\sum_{u\in S}\sum_{e\in \delta(u)}x_{e}\leq 1\quad\text{ for each }a,b\in V(G),\{a,b\}\not\in E(G),\text{ and each }S\in\mathcal{C}(a,b),\] where \(\mathcal{C}(a,b)\subset 2^{V(G)}\) denotes the set of all minimal \((a,b)\)-separators in \(G\). We reinforce this formulation with two exponential families of valid inequalities. The first is an additional class of facets of the connected subgraph polytope studied by Wang et al. (2017). The other was already mentioned earlier: blossom inequalities, which define all remaining facets of the classical matching polytope. **Indegree inequalities**: A vector \(d\in\mathbb{Z}_{+}^{n}\) is called an _indegree_ vector of graph \(G\) if there exists an orientation of its edges such that the indegree of each vertex \(u\) is \(d_{u}\). Introduced decades earlier in the context of greedoid optimization and only in the particular case where \(G\) is a tree (Korte et al., 1991, Chapter XI, Theorem 3.6), it was later shown by Wang et al. 
(2017) that, for each indegree vector \(d\) of an _arbitrary_ graph \(G\), inequality \[\sum_{u\in V(G)}(1-d_{u})\cdot y_{u}\leq 1\] is valid for the connected subgraph polytope of \(G\). Interestingly, the indegree inequalities provide a minimal, complete characterization of that polytope when \(G\) is a tree: each indegree inequality defines a facet, and none is missing. More importantly in the context of our problem, Wang et al. (2017) prove that the indegree inequalities can still define facets in the general case, and may be separated in time proportional to \(O(n+m)\). Again, we shall use the projection of those inequalities in the original space of the connected matching polytope \(\mathfrak{C}(G)\) by the linear map \(y_{u}\mapsto\sum_{e\in\delta(u)}x_{e}\). **Blossom inequalities**: Finally, to ensure that our formulation is within the tightest possible description of the (classical) matching relaxation, one would naturally expect the inclusion of blossom inequalities. Namely, for each _handle_ \(H\subset V(G)\) of odd cardinality, the inequality \[\sum_{e\in E(G[H])}x_{e}\leq\frac{|H|-1}{2}\] is valid for the matching polytope of \(G\). Besides being sufficient to determine the convex hull of incidence vectors of matchings in an arbitrary graph, each blossom inequality is also necessary when \(G\) is a complete graph, for example. They are also an important ingredient in state-of-the-art solvers for the travelling salesperson problem (Conforti et al., 2014, Section 7.4). Putting the inequalities together, the complete formulation on which we base our enhanced branch-and-cut algorithm to find weighted connected matchings is \[\max\left\{\sum_{e\in E(G)}w_{e}x_{e}:\mathbf{x}\in\mathcal{P}_{\text{full}}(G)\cap\left\{0,1\right\}^{m}\right\}, \tag{14}\] where \(\mathcal{P}_{\text{full}}(G)\) is the following polyhedral region: \[\sum_{e\in\delta(u)}x_{e}\leq 1\qquad\text{for each }u\in V(G), \tag{15}\] \[\sum_{e\in\delta(a)}x_{e}+\sum_{e\in\delta(b)}x_{e}-\sum_{u\in S}\sum_{e\in\delta(u)}x_{e}\leq 1\qquad\text{for each non-adjacent }a,b\in V(G),\text{ and each }S\in\mathcal{C}(a,b), \tag{16}\] \[\sum_{u\in V(G)}(1-d_{u})\sum_{e\in\delta(u)}x_{e}\leq 1\qquad\text{for each indegree vector }d\text{ of }G, \tag{17}\] \[\sum_{e\in E(G[H])}x_{e}\leq\frac{|H|-1}{2}\qquad\text{for each }H\subset V(G)\text{ with }|H|\text{ odd}, \tag{18}\] \[x_{e}\geq 0\qquad\text{for each }e\in E(G). \tag{19}\] ## 3 Branch-and-cut scheme for the exponential formulation The enhanced WCM formulation (14) is only useful in practice if an implementation of efficient separation procedures for the (exponentially many) inequalities in (16), (17) and (18) defining \(\mathcal{P}_{\text{full}}(G)\) is available. This section completes our main contribution, filling in the algorithmic details and presenting our free, open-source software package with the resulting solver for WCM in general graphs. We designed our C++ code with attention to time and space efficiency, and fairly tested it for correctness along months of development. It is available in the wcm-branch-and-cut repository on GitHub ([https://github.com/phillippesamer/wcm-branch-and-cut](https://github.com/phillippesamer/wcm-branch-and-cut)), thus welcoming collaboration towards extensions and facilitating the direct comparison with eventual algorithms designed for the WCM problem in the future. Moreover, the code can be forked and turned into useful algorithmic components for different problems and applications. 
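Before turning to the separation routines, it may help to make the feasibility notion explicit. The following small utility (our own illustration, not part of the C++ repository) checks whether a set of edges is a connected matching, and is handy for sanity-checking incumbents.

```python
from collections import defaultdict

def is_connected_matching(edges, M):
    """Return True iff M (a collection of edges of G) is a matching whose covered
    vertices induce a connected subgraph of G; 'edges' is the full edge list of G."""
    covered = [u for e in M for u in e]
    if len(covered) != len(set(covered)):
        return False                      # two matching edges share an endpoint
    cov = set(covered)
    if not cov:
        return True                       # the empty matching is vacuously connected
    adj = defaultdict(set)                # adjacency of the induced subgraph G[covered]
    for u, v in edges:
        if u in cov and v in cov:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(cov))               # depth-first search over covered vertices
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == cov
```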
### Separation procedures Efficient separation algorithms are at the core of a successful branch-and-cut scheme, as they are executed a number of times in each node of the enumeration tree partitioning the search space. Since the classes of inequalities (16), (17) and (18) grow exponentially with the size of the input graph, it is not practical to add them in an explicit model _a priori_ for reasonably sized problems. Except for the last method presented below, the following are _exact_ separation procedures: oracles that, given a relaxation solution \(\mathbf{x}^{*}\), either identify a specific inequality valid for \(\mathcal{P}_{\text{full}}(G)\) that is violated at \(\mathbf{x}^{*}\) (which can then be added to the formulation and _cut off_\(\mathbf{x}^{*}\) to continue the search), or certify implicitly that no such inequality exists. In contrast, when it comes to blossom inequalities (18), we use both an exact and a _heuristic_ separation procedure, _i.e._ a faster method to search for a blossom cut, that may fail even when one exists. #### MSI cuts We followed the description of (Fischetti et al., 2017, Section 2.1) for two exact separation algorithms for MSI in the award-winning solver mentioned in Section 2.2. The first algorithm is based on the computation of maximum flows in a support digraph \(H\), whose arc capacities are defined according to the current relaxation solution. For each pair of non-adjacent vertices \(u,v\in V(G)\) such that \(x_{u}^{*}+x_{v}^{*}>1\) (which is necessary for an MSI to be violated), we calculate a maximum \((u,v)\)-flow \(\overline{f}\) in \(H\). If \(\overline{f}<x_{u}^{*}+x_{v}^{*}-1\), we may determine a violated separator inequality from the corresponding minimum cut. Two implementation tweaks are worth mentioning. 1. As first observed by (Miyazawa et al., 2021, Section 3.1), we often have a large number of variables in the relaxation solution \(\mathbf{x}^{*}\) at either \(0\) or \(1\), and none of these appear in a \((u,v)\)-separator that gives a violated inequality. We may therefore contract the corresponding arcs and vertices in \(H\) to run the separation algorithm in considerably smaller support digraphs. Implementing such contractions requires special care to keep track of the original variables that make up the violated inequality we eventually find. 2. When we identify a minimum cut \(C\) yielding a violated \((u,v)\)-separator inequality, it is in general not a _minimal_ separator. As mentioned in Section 2.2, we thus have the opportunity to _lift_ the left-hand side towards a minimal separator \(S\subset C\) to derive a non-dominated inequality. This can be achieved with a simple graph traversal procedure, as formalized by (Fischetti et al., 2017, Section 2.1). While they refer to a breadth-first search (BFS) and use an edge-deletion operation, we observe faster runtimes combining (i) a depth-first search that avoids an explicit BFS queue, and (ii) a boolean mask of _active/inactive_ edges passed as a reference parameter instead of modifying the graph. The worst-case time complexity of the whole procedure is in \(O(n^{2}\cdot g(\tilde{n},\tilde{m}))\), where \(n\stackrel{{\text{def}}}{{=}}|V(G)|\), \(m\stackrel{{\text{def}}}{{=}}|E(G)|\), and \(g(\tilde{n},\tilde{m})\) denotes the complexity of a single maximum-flow computation in a digraph with \(\tilde{n}\) vertices and \(\tilde{m}\) arcs. 
We use the highly tuned implementation of the preflow-push algorithm of Goldberg and Tarjan (1988) in COIN-OR LEMON 1.3.1 - the Library for Efficient Modeling and Optimization in Networks (Dezso et al., 2011), as the responsible team reports that best computational runtimes are attained with that algorithm. Its time complexity \(g(\tilde{n},\tilde{m})\) is in \(O(\tilde{n}^{2}\sqrt{\tilde{m}})\). Since the digraph \(H\) above is such that \(\tilde{n}\leq 2n\) and \(\tilde{m}\leq 2m+n\) before the contractions due to integer-valued variables, the runtime of our separation is within \(O(n^{4}\sqrt{n+m})\). Contrary to what one could expect from such a high worst-case time complexity, we had very encouraging numerical results in practice, which we attribute to the digraph contractions and the particular branch-and-cut scheme that we outline in Section 3.2 below. An alternative, more efficient separation procedure is readily available in the particular case where the relaxation solution \(\mathbf{x}^{*}\) is actually integer-valued. We may resort to a simple depth-first search (DFS) to check connectivity of the induced solution. In a disconnected solution, inspecting the neighbourhood \(C\) of any connected component with a vertex \(u\) such that \(x_{u}^{*}=1\) gives a violating \((u,v)\)-separator, for some \(v\) in a different component with \(x_{v}^{*}=1\). It is important to stress that implementation tweak (B) above applies here as well, and that we still need to derive a minimal separator \(S\subset C\) to add a stronger inequality. The time complexity of the separation in this particular case is dominated by the DFS step, and is thus in \(O(n+m)\). It is fair to remark that MSI first appeared in two, earlier papers in applied operations research. Fugenschuh and Fugenschuh (2008) introduced this class of inequalities and their separation algorithm in an intricate, non-linear programming problem arising in the sheet metal industry. Their work was picked up by Carvajal et al. (2013), who extended on the role of MSI when imposing connectivity in a forestry planning problem. The MSI were also introduced independently in the polyhedral studies of the convex recoloring problem (Campelo et al., 2013). We also praise, again, the thorough, instructive chapter of Alvarez-Miranda et al. (2013) on the maximum weight connected subgraph problem more generally. #### Indegree cuts The separation of indegree inequalities is remarkably simple. We implemented the procedure exactly as presented by Wang et al. (2017). The main point is to consider an orientation maximizing the left-hand side over all indegree vectors, namely: taking \(\overrightarrow{uv}\) for \(\{u,v\}\in E(G)\) if and only if \(x_{u}^{*}\geq x_{v}^{*}\). The worst-case time complexity of the algorithm is in \(O(n+m)\). #### Blossom cuts We consider two separation algorithms for blossom inequalities. See Section 3.2 below for the detailed scheme in which they are used. The first method is the exact separation procedure of Padberg and Rao (1982). We strictly followed its presentation _cf._Letchford et al. (2008), who introduced the state-of-the-art algorithm for the more general version of \(b\)-matchings with edge capacities. The separation works on a support graph \(G^{\prime}\) with \(n+1\) vertices and \(m+n\) edges, whose capacities are determined by the current relaxation solution \(\mathbf{x}^{*}\). 
It boils down to determining a _cut tree_\(\mathcal{T}(G^{\prime})\): an elegant data structure introduced by Gomory and Hu (1961) to encode minimum cuts between all pairs of vertices in the graph, at the expense of computing only \(|V(G^{\prime})|-1\) maximum flow computations. We use COIN-OR LEMON 1.3.1 (Dezso et al., 2011) here as well to build the cut tree \(\mathcal{T}(G^{\prime})\). Then, it suffices to verify the blossom inequality \(\sum_{e\in E(G[H])}x_{e}^{*}\leq\nicefrac{{(|H|-1)}}{{2}}\) for at most one handle \(H\subset V(G)\) per edge of the tree. Constructing the data structure dominates the worst-case time complexity of the complete algorithm, and is within \(O(n^{4})\)(Letchford et al., 2008). We remark that an implementation here may construct the support graph \(G^{\prime}\) only once, and just update edge capacities according to the current relaxation values. Our second method is inspired by the work of (Bicalho et al., 2016, Section 4.1.2), who observed comparable results using the exact method above and a separation heuristic for blossom inequalities in a row-and-column generation algorithm for a different network design problem. We devised a simpler algorithm to try to find blossom cuts in linear time as follows. Let \(H\) denote the support graph induced only from fractional variable in \(\mathbf{x}^{\star}\), and let \(H_{i}\) denote the connected components of \(H\). For each \(H_{i}\) of odd cardinality, we simply inspect the corresponding blossom inequality for violation. The complexity of this separation heuristic is dominated by the step finding connected components in \(H\), which is in \(O(n+m)\) in the worst case. ### Further algorithmic aspects We are now ready to depict our complete branch-and-cut scheme. We use the overall framework in the Gurobi 10.0.2 solver (Gurobi Optimization, LLC, 2023), with callbacks to implement separation procedures. It is important to distinguish between the specific callback from a new MIP incumbent, where only _lazy constraints_ are added (in our case, MSI tailored for integer-valued points), and the standard callback from the search at a given MIP node, where we add _user cuts_ (all of MSI, indegree, and blossom inequalities). In the beginning, only degree inequalities (15) are included _a priori_ in the model. Instead of relying in the solver standard behavior concerning how long to explore the root node relaxation before branching, we designed a **strengthened root node LP relaxation**. Here, we dedicate up to 300 seconds or 10% of the specified time limit (whichever is shorter) prior to the main call to the solver, and alternate the solution of an LP relaxation and cut generation observing that: 1. All MSI violated in the current relaxation solution \(\mathbf{x}^{\star}\) are added; 2. The exact separation of blossom inequalities is attempted only if \(\mathbf{x}^{\star}\) is fractional, no MSI cut was found and the separation heuristic failed; 3. No indegree cuts are added unless all other separation algorithms failed, to prevent the inclusion of an excessively large number of constraints. The reinforced model resulting from this initialization consistently showed the best computa tional performance across a range of configurations in our preliminary computational evaluation. In particular, we explain in Section 4.2 that a number of instances could be solved without resorting to branching - which was not the case before we devised this strengthened root node relaxation. 
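For concreteness, the linear-time blossom separation heuristic of Section 3.1 can be sketched as follows (a simplified Python illustration, not the production C++ code); recall that, at the root node, the exact Padberg–Rao procedure is only attempted when this heuristic fails and no MSI cut was found.

```python
import networkx as nx

def separate_blossom_heuristic(G, xstar, tol=1e-5):
    """Heuristic separation of blossom inequalities (18): inspect only the odd
    connected components of the support graph of fractional variables."""
    H = nx.Graph([e for e, val in xstar.items() if tol < val < 1 - tol])
    violated_handles = []
    for comp in nx.connected_components(H):
        if len(comp) % 2 == 0:
            continue
        lhs = sum(val for e, val in xstar.items()
                  if e[0] in comp and e[1] in comp)       # x*(E(G[H]))
        if lhs > (len(comp) - 1) / 2 + tol:
            violated_handles.append(set(comp))
    return violated_handles
```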
The general algorithm continues with the branch-and-cut enumeration tree, partitioning the search space by fixing a single binary variable, \(x_{u}=0\) or \(x_{u}=1\), in each branch. In each remaining node below the root relaxation, more attention is paid to limit the complexity of cut generation, by observing the following: 1. The MSI separation algorithm concludes as soon as any inequality violated at the current relaxation is found, iterating over the initial source vertex considered for maximum flow computations; 2. The exact separation of blossom inequalities is turned off, and only the heuristic is run; 3. The separation of indegree inequalities is turned off. ## 4 Experimental evaluation We conclude our work with the first experimental evaluation of an algorithm for the weighted connected matching (WCM) problem. The main goal here is to indicate that our polyhedral descriptions and the resulting algorithms constitute a practical approach to find optimal connected matchings in non-trivial inputs. ### Benchmark design We hope to set a judicious baseline towards progress in the computation of WCM. Namely, one that is anchored in the _reproducibility_ of materials and methods, and that reports experimental evidence from respectable, interesting testbeds. #### Interlude At an early stage of our experiments, we considered using binomial (Erdos-Renyi) graphs \(\mathcal{G}_{n,p}\) as reported by Wang et al. (2017) when studying the performance of minimal separator and indegree inequalities for the maximum weight connected subgraph (MWCS) problem. To our surprise, the random graphs in this model resulted in quite simple WCM instances. Both the compact extended formulation and the exponential one in the original space give so tight bounds that problems in the order of \(10^{5}\) vertices were solved in negligible time on a desktop computer - whether on sparse or dense \(\mathcal{G}_{n,p}\) graphs (\(p\in[0.01,0.6]\)), whether with Gaussian or uniform random weights. The script to produce such instances using the robust NetworkX Python package is still available in our GitHub repository. Nevertheless, we do not go further in evaluating those examples in our research. Instead, we choose to proceed by borrowing credibility from a certified source of benchmark instances of MWCS and similar problems. #### Benchmark instances Our computational evaluation is carried over benchmark instances from the 11th DIMACS/ICERM Implementation Challenge. The competition covered several variants around the Steiner tree problem, and we chose to use all three sets of instances of the MWCS problem, and the one set available for the Generalized Maximum-Weight Connected Subgraph (GMWCS) problem. Specifically, there are 39 instances in set MWCS-GAM, 72 in MWCS-JMPALMK, 8 in MWCS-ACTMOD, and 63 instances in set GMWCS-GAM. Table 1 in the supplementary material (Appendix A) contains the full names, sizes, and a numerical ID for ease of reference of the 182 instances. The smallest instances have less than a thousand vertices and edges; the largest ones exceed 5.000 vertices and 90.000 edges. See DIMACS'11 for more information. Since MWCS instances contain only vertex weights, we determined \(w(e)\stackrel{{\mathrm{def}}}{{=}}w(u)+w(v)\) for each \(e=\{u,v\}\in E(G)\). For GMWCS problems we used the edge weights included in the input instance, and ignored vertex weights. We note that 10 out of the 63 GMWCS instances have only negative weights, and so the resulting WCM problem has null optimum. 
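The instance conversion just described is straightforward; the sketch below illustrates it (our own code: attribute names such as `weight` are an illustrative convention, and parsing of the actual STP-format files is omitted).

```python
import networkx as nx

def to_wcm_instance(G, problem="MWCS"):
    """Derive WCM edge weights from a DIMACS'11 instance as described above:
    w(e) = w(u) + w(v) for MWCS (vertex weights), and the given edge weights
    for GMWCS (vertex weights ignored)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v, data in G.edges(data=True):
        if problem == "MWCS":
            w = G.nodes[u].get("weight", 0.0) + G.nodes[v].get("weight", 0.0)
        else:  # "GMWCS"
            w = data.get("weight", 0.0)
        H.add_edge(u, v, weight=w)
    return H
```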
We decided to keep those instances in the benchmark for the sake of completeness. #### Platform and settings We tested the implementation in a desktop machine with an Intel Core i5-8400 processor, with 6 CPU cores at 2.80GHz, and 16GB of RAM, runnning GNU/Linux kernel 6.2.0-33 under the Ubuntu 22.04.3 LTS distribution. All the code is compiled with g++ 11.4.0. As mentioned in Section 3.2, we use the Gurobi Optimization, LLC (2023) MILP solver. We set a time limit of 3600 seconds in all executions, while noting that the solver process may exceed that by a negligible amount. All solver parameters are used in their default values, except for setting an extra effort on MIP heuristics when using the exponential formulation with our branch-and-cut scheme (MIPFocus = 1 and GRB_DoubleParam_Heuristics = 0.2). In our implementation, we require a violation by at least \(10^{-5}\) to add a cutting plane in all separation procedures. ### Discussion Table 2 in the supplementary material in Appendix A contains the detailed results of the solver using the extended formulation (1), _i.e._ optimizing over polyhedron \(\mathcal{P}_{\text{ext}}\), and our enhanced branch-and-cut scheme with formulation (14), which is based on the exponential polyhedral description in \(\mathcal{P}_{\text{full}}\). We include information both on the LP relaxation and the full integer program on the table, and discuss our findings next. Overall performance.A first note is about the actual practicality of the implementations. We were satisfied that the enhanced branch-and-cut scheme over \(\mathcal{P}_{\text{full}}\) concludes with an optimality certificate for 168 out of 182 instances. The corresponding number for the compact formulation is 151 cases. Taking into account that we use at most one hour of processing by a regular desktop computer, we consider this a rather encouraging conclusion. Practitioners and applications with a connected matching subproblem should therefore be able to derive improved runtimes by taking advantage of more powerful computing platforms. Empirical approximation of the ideal formulation.Concerning how tightly we approximate the convex hull \(\mathfrak{C}(G)\) with our formulations, it is remarkable that the LP relaxation bound of \(\mathcal{P}_{\text{full}}\) matches the optimum in 98 out of 168 instances for which the optimum is known, and the enhanced branch-and-cut algorithm is able to prove optimality in the root node relaxation in 111 problems. More generally, the optimum is within 5% of LP bound in 145 of those 168 instances. We observe that the LP relaxation bound of \(\mathcal{P}_{\text{full}}\) is stronger or equal to that of \(\mathcal{P}_{\text{ext}}\) in all instances. It is 23.9% tighter on average, and up to 84% tighter (recall that we impose a time limit for the LP relaxation, so it could be even stronger). Comparing bounds between formulations.As expected, the dual bounds attained with the enhanced branch-and-cut algorithm over \(\mathcal{P}_{\text{full}}\) are consistently stronger than that of the compact extended formulation, and there is only a single case where the latter is stronger (namely, the instance of ID 16). Concerning primal bounds, we were surprised positively with 8 examples where the compact extended formulation does find an integer feasible solution better than the exponential one. We refer to the results on instances with identifiers \(3,16,21,35,36,42,44,45\). 
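For reference, the run configuration described under "Platform and settings" above amounts to the following sketch, applied to a gurobipy model such as the one outlined in Section 3 (the experiments themselves use the C++ API; only standard Gurobi parameter names appear here).

```python
def configure_solver(model, exponential_formulation=True):
    """Apply the run configuration used in the experiments (illustrative sketch)."""
    model.setParam("TimeLimit", 3600)          # one hour per run
    if exponential_formulation:
        model.setParam("LazyConstraints", 1)   # MSI added lazily on incumbents
        model.setParam("MIPFocus", 1)          # extra effort on MIP heuristics
        model.setParam("Heuristics", 0.2)      # 20% of effort on finding incumbents
    return model

# cutting planes are only added when violated by more than this tolerance
CUT_VIOLATION_TOLERANCE = 1e-5
```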
Superiority of exponential formulation in harder instances.Finally, seeking a classification of the instances with respect to computational hardness, we find that 123 out of 182 instances could be solved to optimality by both formulations within 5 minutes. In the remaining 59 instances, we find good clues of the superiority of \(\mathcal{P}_{\text{full}}\). 1. In 30 of those 59 cases, the exponential formulation does finish within 5 minutes. 2. The LP relaxation bound of \(\mathcal{P}_{\text{full}}\) is up to 65.0% stronger than that of \(\mathcal{P}_{\text{ext}}\) in this subset; it is 34.7% stronger on average. 3. In 11 instances, the exponential formulation solves the problem at the root relaxation node, whereas the compact one struggles to finish: not even proving optimality before 3600 seconds in 3 cases; taking 1510 or 2388 seconds in 2 cases; and 6 other cases taking longer than 5 minutes. 4. While there are 14 instances where the solver with the exponential formulation could not prove optimality (_i.e._ exceeds the one hour time limit with an open gap), there are 31 such cases for the compact formulation. ## 5 Final remarks The standing argument behind our work is that polyhedra and MILP should lead to an interesting, useful perspective to study WCM both in theory and in practice. This complements other methodologies that are currently available to find weighted connected matchings, which assume restricted input graph classes. Moreover, we hope that our approach also determines a solid baseline of comparison for further research crafting WCM algorithms. Besides their appealing polyhedral structures and intrinsic connections with established problems in combinatorics and optimization, both formulations considered here had encouraging computational performance, and could be considered for eventual applications of the WCM problem as well. On the one hand, having good results from the compact extended formulation is an achievement in its own right (we remark that it was able to find better primal feasible solutions in a few examples), as performance improvements in the underlying MILP solver usually leads to better runtimes in such "simpler" models automatically. Still, all the work in designing an enhanced formulation did pay off, and we are proud to contribute yet another success story where the theoretical insight gathered from careful polyhedral relaxations translates to strides in practical computing experience. Using the exponential description and the resulting branch-and-cut scheme, we provide optimality certificates for 168 out of 182 instances. Most noticeable, that formulation solves 111 out of 182 instances in the root relaxation, without resorting to branching. We believe that further research should consider only the subset of harder instances discussed here, and investigate features that characterize the most challenging ones before proposing new benchmark sets. It should also be possible to strengthen the compact extended polyhedron as well, so that more instances could be solved to proven optimality within limited runtimes. Finally, our software repository includes not only the implementation of all the methods presented in this paper, but also a simple tool using polymake(Gawrilow and Joswig, 2000; Assarf et al., 2017) to assist one in inspecting the connected matching polytope and finding new classes of strong valid inequalities. 
We had some progress in this direction (Samer, 2023), and trust that many fruitful results could be derived by further research translating the polyhedral insight to improved WCM algorithms.
2301.13610
Non-Singular Bouncing Model in Energy Momentum Squared Gravity
This work is concerned to study the bouncing nature of the universe for an isotropic configuration of fluid $\mathcal{T}_{\alpha\beta}$ and Friedmann-Lema\^{i}tre-Robertson-Walker metric scheme. This work is carried out under the novel $f(\mathcal{G},\mathcal{T}_{\alpha \beta} \mathcal{T}^{\alpha \beta})$ gravitation by assuming a specific model i.e, $f(\mathcal{G},\mathcal{T}^2)=\mathcal{G}+\alpha \mathcal{G}^2+2\lambda \mathcal{T}^2$ with $\alpha$ and $\lambda$ are constants, serving as free parameters. {The terms $\mathcal{G}$ and $\mathcal{T}^2$ served as an Gauss-Bonnet invariant and square of the energy-momentum trace term as an inclusion in the gravitational action respectively, and is proportional to $\mathcal{T}^2=\mathcal{T}_{\alpha \beta} \mathcal{T}^{\alpha \beta}$.} A specific functional form of the Hubble parameter is taken to provide the evolution of cosmographic parameters. A well known equation of state parameter, $\omega(t)=-\frac{k \log (t+\epsilon )}{t}-1$ is used to represent the dynamical behavior of energy density, matter pressure and energy conditions. A detailed graphical analysis is also provided to review the bounce. Furthermore, all free parameters are set in a way, to make the supposed Hubble parameter act as the bouncing solution and ensure the viability of energy conditions. Conclusively, all necessary conditions for a bouncing model are checked.
Z. Yousaf, M. Z. Bhatti, H. Aman, P. K. Sahoo
2023-01-30T15:57:22Z
http://arxiv.org/abs/2301.13610v1
# Non-Singular Bouncing Model in Energy Momentum Squared Gravity ###### Abstract This work is concerned to study the bouncing nature of the universe for an isotropic configuration of fluid \({\cal T}_{\alpha\beta}\) and Friedmann-Lemaitre-Robertson-Walker metric scheme. This work is carried out under the novel \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) gravitation by assuming a specific model i.e, \(f({\cal G},{\cal T}^{2})={\cal G}+\alpha{\cal G}^{2}+2\lambda{\cal T}^{2}\) with \(\alpha\) and \(\lambda\) are constants, serving as free parameters. The terms \({\cal G}\) and \({\cal T}^{2}\) served as an Gauss-Bonnet invariant and square of the energy-momentum trace term as an inclusion in the gravitational action respectively, and is proportional to \({\cal T}^{2}={\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta}\). A specific functional form of the Hubble parameter is taken to provide the evolution of cosmographic parameters. A well known equation of state parameter, \(\omega(t)=-\frac{k\log(t+\epsilon)}{t}-1\) is used to represent the dynamical behavior of energy density, matter pressure and energy conditions. A detailed graphical analysis is also provided to review the bounce. Furthermore, all free parameters are set in a way, to make the supposed Hubble parameter act as the bouncing solution and ensure the viability of energy conditions. Conclusively, all necessary conditions for a bouncing model are checked. **Keywords:** cosmography; Hubble Parameter; \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\). **PACS:** 98.80.-k; 04.20.Cv; 04.50.Kd. Introduction According to the big bang hypothesis, the whole universe was created by a single explosion, with all matter in the cosmos as an infinite speck [1, 2]. This hypothesis works well in order to study the beginning, but lack to define different cosmological problems. These problems include the horizon problem, the flatness problem, the singularity problem, etc. In order to resolve these big cosmic challenges, different cosmic theories have been developed in literature [3, 4, 5]. The bouncing hypothesis is one of the major independent theories that came up with the answers related to the starting of the universe and should be enough to resolve the major cosmic problem of singularity. The bouncing cosmology works on the scheme of an oscillatory universe, i.e, a universe that came into being from the pre-existing universe without undergoing the singularity [6, 7, 8]. This whole transition of the universe not only explains the big-bang cosmology but also reduces one of the major issues. For the bouncing, the universe moves into the contraction phase as a matter-dominated the era of the universe. After the contraction, the universe starts to expand in a nonsingular manner for which gravity dominates the matter [9, 10]. Also, density perturbations can be produced during the bounce era. This idea of the origination of the universe is highly accepted and appreciated in literature. General relativity (\(\mathcal{GR}\)) was presented by Einstein and it was thought to be one of the best theories to explain different cosmological issues. It explains the gravity under the fabric of space-time. However, to understand gravity much more effectively and to provide the answers to the effect of gravity, dark energy, and accelerated expansion of the universe under the addition of different scalar fields, different attempts have been made in past to modify \(\mathcal{GR}\). 
These modifications change the geometric or matter or both parts of the Einstein field equations accordingly. These could help to discuss the effects of couplings of matter and curvature terms on the above-described items. Roshan and Shojai [11] presented the nonlinear form of matter term i.e, \(\mathcal{T}^{2}=\mathcal{T}_{\alpha\beta}\mathcal{T}^{\alpha\beta}\), naming it \(f(\mathcal{R},\mathcal{T}^{\in})\). They further indicated that the use of nonlinear terms may provide the prevention of early time singularities. Since the functional form of curvature terms has helped to introduce new gravitational theories, so it was considered to be effective to modify the generic action integral of \(\mathcal{GR}\) as corrections. These modifications give light to the \(f(\mathcal{G})\) theory, for which the term \(\mathcal{G}\) is defined as \(\mathcal{G}=\mathcal{R}_{\xi\zeta\alpha\beta}\mathcal{R}^{\xi\zeta\alpha\beta }-4\mathcal{R}_{\xi\zeta}\mathcal{R}^{\xi\zeta}+\mathcal{R}^{2}\). Nojiri and Odintsov [12] introduced this \(f(\mathcal{G})\) theory for the first time in their work. They tested solar systems for this formalism and reported the phase change of acceleration to deceleration for the achievement of phantom line, which cooperated to study dark energy. Odintsov and Oikonomou [13] considered \(\mathcal{R}+f(\mathcal{G})\) form of the gravitational theory to provide their contribution to the study of gravitational baryogenesis. Their work included the higher-order derivatives of Gauss-Bonnet terms that work in order to produce the baryon asymmetry. Sharif and Ikram [14] gives rise to a new theory by following the footsteps of Harko. They coupled the matter part \({\cal T}\) with the geometric part of the \(f({\cal G})\) theory, making it \(f({\cal G},{\cal T})\) cosmology. They investigated the validity of their theory with the help of energy conditions. Later on, Bhatti _et al._[15] worked on the \(f({\cal G},{\cal T})\) theory to carry out the investigation of some physically feasible features of compact star formation. They inferred that the compactness of a star model grows at the core whereas the energy conditions remain constant. Yousaf and his mates [16] inspired by [17], have recently developed a novel \(f({\cal G},{\cal T}^{2})\) to present the complexity of structural scalars from the use of Herrera's method of splitting scalars. They considered the exponential coupling of Gauss-Bonnet terms as a functional form as \(f({\cal G},{\cal T}^{2})=\alpha{\cal G}^{n}(\beta{\cal G}^{m}+1)+\eta{\cal T}^{2}\), to explore the validity of their solutions for the Darmois and Israel conditions. They also worked on the non-static complex structures under the same theory to describe the effects of an electromagnetic field. They used specific model configuration i.e, \(f({\cal G},{\cal T}^{2})=k_{1}{\cal G}^{m}(k_{2}{\cal G}^{n}+1)+\lambda{\cal T }^{2}\), in their work. Bouncing cosmology has gained much reputation over the past few years, because of its independent hypothetical nature from different standard comic problems. Guth [18] during \(1980^{\prime}s\), had put forward his inflationary theory to tackle early and late time cosmic evolutionary problems. He remained successful in solving some related problems, but the answer to the initial singularity is still under concern. One of the best hypotheses to answer the singularity problem is the bouncing nature of the universe. 
The nature of the bouncing universe allows a certain universe model to transit from a pre-big crunch (contracted) phase into a new big bang (expanded) phase with the exclusion of singularity during the whole event [19]. Steinhardt and Ijjas [20] are considered to be the pioneers of the bouncing hypothesis. They devised a wedge diagram for a smooth bouncing method to explore the consequences of some cosmological problems. Sahoo _et al._[21] worked on the non-singular bouncing by assuming the specific coupling of \({\cal R}\) and \({\cal T}\) as \(f({\cal R},{\cal T})={\cal R}+\chi{\cal R}{\cal T}\), for \(0<\chi<\frac{\pi}{4}\). They allowed such a parametric approach for the Hubble parameter to provide no singularity during the bounce. They used quintom and phantom scalar field configurations for the bouncing paradigm. Bamba and his collaborators [22] inspected the singularity-free concept of bounce by considering an exponential form of scale factor \(a(t)=\sigma\exp(\lambda t)+\tau\exp(\lambda t)\) under the effect of \(f({\cal G})\) gravity. They checked the stability of their assumed solution under the restricted parametric scheme. Yousaf _et al._[23, 24] explored the bouncing universe with a specific functional form of Hubble parameter by taking exponential \(f({\cal G},{\cal T})\) form. Different cosmic models are under consideration for the scale factor in order to determine the value of expansion and contraction at the current cosmic phase and also to predict the current phase equation of state. These models predicted different results in the literature. However, cosmography provided us a benefit in processing cosmological data for explaining the universal kinematics without the involvement of the gravity model and hence provided that the cosmography can be employed with the Taylor expansions as an alternative scheme. Also, the cosmographic analysis for the \({\cal FLRW}\) universe, is helpful in such a way that it can put aside the effect of the dynamical field equations [25]. Gruber _et al_. [26] studied an alternative approach to describe cosmography by extending the conventional methodology. They resulted from numerical values of the cosmographic parameters by applying the \(Pade\acute{e}\) approximations. The testing of the \(\Lambda\)CDM model had been conveyed by Busti _et al_. [27] with the use of cosmographical analysis. Capozziello _et al_. provided cosmography as a non-predictive phenomenon when the redshift parameter becomes \(z\approx 1\). They used the pad\(\acute{e}\) approximations for the fifth order and resulted the divergence of data at the higher levels of the approximations. Lobo _et al_. [28] evaluated the dynamics of the redshift drift. They used the expanding \(FLRW\) universe to produce a general matter and low redshift model with the use of different variables. However, the cross-correlation of large-scale quasars can be used and translated with the CMB and **BAO** scale data to produce the best for Hubble parameter \(H(z)\) and angular diametric distance \(S_{A}\). Also, the cosmic chronometers approach can be done to predict the model independent \(H(z)\) measurements which have been extensively used for cosmological applications [29, 30, 31]. The low redshift data set with the inclusion of the megamasers and chronometers had been presented by Krishnan and others [32]. They result that the Hubble constant \({\cal H}_{0}\), showed descending behavior with the redshift and having non-zero slop when fitted on the line by statistical means. Font _et al_. 
[33] studied correlation technique for quasars by using Lya absorption and produced the best line of fit for Planck's data. They generated different results on the measurements of the Hubble parameter and the angular distance. One important thing is to develop such a cosmic Hubble parameter that comes from early to late span in such a way that it changes from a low to a high value. The Gaussian method helped to predict but provided a non-transitional behavior for both \(\Lambda\) and \(\omega\) epochs. The null energy condition also proved to be an important restriction for the cut-off model, when compared with Hubble parameter data [34]. King _et al_. [35] studied the future approximations of the redshift by the inclusion of dark energy. They tested the equation of state by the linear parametrization technique. Hu _et al_. [34] reported different values of the Hubble constant by the Gaussian method. Their research produced an effective reduction in the Hubble crisis and proposed the non-transitional behavior of the Hubble constant. Different dark energy models respective to holography and agegraphy had been conducted by Zhang _et al_. [36]. They produced different energy conditions for different red shift values and resulted in an effective role of energy conditions for different cosmic ages. In this article, we implemented a functional form of the Hubble parameter that evolves periodically with cosmic time \(t\) and investigate the bouncing nature of the universe in \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) gravity using a flat \({\cal FLRW}\) peacetime. This analysis of the bouncing universe involves one of the most important forms of \(EoS\) parameter proposed in the literature [37, 38, 39]. The outline is given as: Sect.**2** provides a brief introduction to \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) gravity with the necessary formalism of \({\cal FLRW}\) metric and modified field equations. Sect.**3** builds the Hubble parameter as a bouncing solution for the produced field equations. The cosmographic parameters are also evaluated in this section. We provide the mathematical expressions of energy density and matter pressure for the assumed \(EoS\) parameter form in Sect.**4**. The energy conditions are also formulated in the same fashion. Detailed graphical profiles of energy conditions are represented in the same section to discuss the evolution of the universe under the influence of restricted free parameters. Finally, the concluding remarks are made in Sect.**5**. ## 2 \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) Formalism The modified action for the \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) gravity theory is defined as [16] \[{\mathbb{A}}_{f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})}=\frac{ \sqrt{-g}}{2\kappa^{2}}\left(\int d^{4}x[f({\cal G},{\cal T}_{\alpha\beta}{\cal T }^{\alpha\beta})+{\cal R}]+\int d^{4}x{\cal L}_{m}\right), \tag{1}\] where \({\cal R}\) and \({\cal G}\) symbolize the Ricci scalar and the Gauss-Bonnet scalar terms, respectively and are provided as \[{\cal R}\equiv g_{\alpha\beta}{\cal R}^{\alpha\beta},\ \ \ \ {\cal G}\equiv{ \cal R}_{\xi\zeta\alpha\beta}{\cal R}^{\xi\zeta\alpha\beta}-4{\cal R}_{\xi \zeta}{\cal R}^{\xi\zeta}+{\cal R}^{2}, \tag{2}\] and \(\kappa^{2}=8\pi\)G (G be the gravitational constant) and \({\cal L}_{m}=-p\). 
Also, the term \(g\) implies the trace of the metric tensor \(g_{\alpha\beta}\), while \(T_{\alpha\beta}\), \(R_{\xi\zeta\alpha\beta}\) and \(R_{\alpha\beta}\) indicate the stress energy-momentum tensor, the Riemannian tensor, and the Ricci tensor, respectively. The expression for \({\cal T}_{\alpha\beta}\) is given as \[{\cal T}_{\alpha\beta}=\frac{-2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}{\cal L}_{m})}{\delta g^{\alpha\beta}}. \tag{3}\] Equation (3) yields the following expression, due to the dependency of the matter Lagrangian \({\cal L}_{m}\) on the \(g_{\alpha\beta}\) components \[{\cal T}_{\alpha\beta}=g_{\alpha\beta}{\cal L}_{m}-\frac{2\partial{\cal L}_{m}}{\partial g^{\alpha\beta}}. \tag{4}\] Now, by taking the variation of Eq.(1) with respect to the term \(g_{\alpha\beta}\), we get the following field equations for the \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) theory as \[{\cal R}_{\alpha\beta}-\frac{1}{2}{\cal R}g_{\alpha\beta}={\cal T}^{\it eff}_{\alpha\beta}, \tag{5}\] where the term \({\cal T}^{\it eff}_{\alpha\beta}\) takes the following form \[{\cal T}^{\it eff}_{\alpha\beta} = \kappa^{2}{\cal T}_{\alpha\beta}-\Theta_{\alpha\beta}f_{{\cal T}^{2}}({\cal G},{\cal T}^{2})+\frac{1}{2}g_{\alpha\beta}f({\cal G},{\cal T}^{2})-(2{\cal R}{\cal R}_{\alpha\beta}-4{\cal R}^{\varepsilon}_{\alpha}{\cal R}_{\varepsilon\beta}-4{\cal R}_{\alpha\varepsilon\beta\eta}{\cal R}^{\varepsilon\eta}\] \[+2{\cal R}_{\alpha}^{\,\varepsilon\eta\delta}{\cal R}_{\beta\varepsilon\eta\delta})f_{\cal G}({\cal G},{\cal T}^{2})-(2{\cal R}\nabla^{2}g_{\alpha\beta}-2{\cal R}\nabla_{\alpha}\nabla_{\beta}-4{\cal R}_{\alpha\beta}\nabla^{2}-4g_{\alpha\beta}{\cal R}^{\varepsilon\eta}\nabla_{\varepsilon}\nabla_{\eta}\] \[+4{\cal R}_{\alpha}^{\,\varepsilon}\nabla_{\beta}\nabla_{
\varepsilon}+4\nabla_{\varepsilon}\nabla_{\alpha}{\cal R}_{\beta}^{\, \varepsilon}+4{\cal R}_{\alpha\varepsilon\beta\eta}\nabla^{\varepsilon}\nabla^ {\eta})f_{\cal G}({\cal G},{\cal T}^{2}), \tag{6}\] where, \[\Theta_{\alpha\beta}\equiv\frac{\delta({\cal T}_{\mu\nu}{\cal T}^{\mu\nu})}{ \delta g^{\alpha\beta}}=2{\cal T}_{\alpha}^{\xi}{\cal T}_{\beta\xi}-{\cal T}{ \cal T}_{\alpha\beta}-4{\cal T}^{\mu\nu}\frac{\partial^{2}{\cal L}_{m}}{ \partial g^{\alpha\beta}g^{\mu\nu}}-2{\cal L}_{m}({\cal T}_{\alpha\beta}-\frac {1}{2}{\cal T}g_{\alpha\beta}) \tag{7}\] \[{\cal T}^{2}={\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta},\ \ \ \ \ \nabla^{2}=\nabla_{\alpha}\nabla^{\alpha} \tag{8}\] The terms \(f_{\cal G}({\cal G},{\cal T}^{2})\) and \(f_{{\cal T}^{2}}({\cal G},{\cal T}^{2})\) used above are defined as \[f_{\cal G}({\cal G},{\cal T}^{2})\equiv\frac{df({\cal G},{\cal T}^{2})}{d{\cal G }},\ \ \ \ and\ \ f_{{\cal T}^{2}}({\cal G},{\cal T}^{2})\equiv\frac{df({\cal G},{\cal T}^{2}) }{d{\cal T}^{2}}. \tag{9}\] The trace of the above-defined field equations is produced as \[{\cal T}-\Theta f_{{\cal T}^{2}}({\cal G},{\cal T}^{2})+2{\cal G}f_{\cal G}({ \cal G},{\cal T}^{2})-2{\cal R}\nabla^{2}f_{\cal G}({\cal G},{\cal T}^{2})+4{ \cal R}_{\alpha\beta}\nabla^{\alpha}\nabla^{\beta}f_{\cal G}({\cal G},{\cal T} ^{2})=0. \tag{10}\] Equation (10) shows the non-conversed situation of the stress energy-momentum tensor. Also, the properties of \({\cal GR}\) can be recovered for \(f({\cal G},{\cal T}^{2})=0\). Similarly if we put \(f({\cal G},{\cal T}^{2})=f({\cal G})\), we get the properties of \(f({\cal G})\) gravity. Now, as we are concerned to study the bouncing nature of the universe, so we consider the fluid distribution to be perfect throughout the cosmic evolution. For this, we take \[{\cal T}_{\alpha\beta}=(\rho+p)V_{\alpha}V_{\beta}-pg_{\alpha\beta}, \tag{11}\] here, the four-vector velocity is defined by \(V^{\beta}\) with \[V^{\beta}=(1,0,0,0),\ \ V^{\beta}V_{\beta}=1\,\ V^{\beta}\nabla_{\zeta}V_{ \zeta}=0. \tag{12}\] In addition, \(\rho\) defines the energy density part and \(p\) defines the pressure part of the stress energy-momentum tensor. Also the geometric background considered to be in a \({\cal FLRW}\) space time [40], so it implies \[ds^{2}=dt^{2}-a^{2}(t)\Sigma_{i}dx_{i}^{2},\ \ \ \ \ \ \ \ \ \ \ i=1,2,3. \tag{13}\] The metric component \(a(t)\) symbolizes the scale factor, that contributes to the Hubble parameter as \({\cal H}=\frac{\dot{a}(t)}{a(t)}\). Using Eq.(13) and Eq.(7) in Eq.(5), we get the following field equations \[6\left(\frac{\dot{a}}{a}\right)^{2}-24\left(\frac{\dot{a}}{a}\right)^{3}\dot{f }_{\cal G}+24\left(\frac{\ddot{a}}{a}\right)\left(\frac{\dot{a}}{a}\right)^{2 }f_{\cal G}-f-2(\rho^{2}+3p^{2}+4\rho p)f_{{\cal T}^{2}}=2\rho\kappa^{2}, \tag{14}\] \[-2\left(2\frac{\ddot{a}}{a}+\left(\frac{\dot{a}}{a}\right)^{2}\right)+16\left( \frac{\ddot{a}\dot{a}}{a^{2}}\right)\dot{f_{G}}+8\left(\frac{\dot{a}}{a}\right)^ {2}\ddot{f_{G}}-24\left(\frac{\ddot{a}}{a}\right)\left(\frac{\dot{a}}{a}\right) ^{2}f_{G}+f=2p\kappa^{2}. \tag{15}\] To draw the conclusions on the field equations, we just need some functional form of \(f(\mathcal{G},\mathcal{T}^{2})\). As, there are many functional forms regarding the interaction of matter with the curvature terms, in order to deal with the issues of cosmic evolution. 
Various coupling models can be used to evaluate the formations of both energy density and matter pressure, like one can take \(f(\mathcal{G},\mathcal{T}^{2})=\mathcal{G}+2f(\mathcal{T}^{2})\) that may help to provide an analysis about \(\Lambda CDM\) epoch. However, the other choice is \(f(\mathcal{G},\mathcal{T}^{2})=f_{1}(\mathcal{G})+f_{2}(\mathcal{T}^{2})\) that may be worked as a correction to \(f(\mathcal{G})\) gravity theory because of \(f_{2}(\mathcal{T}^{2})\). Similar forms have been explored in [41, 42] and provided some distinct results due to the direct minimal curvature matter coupling. Also, \(f(\mathcal{G},\mathcal{T}^{2})=f_{1}(\mathcal{G})+f_{2}(\mathcal{G})f_{3}( \mathcal{T}^{2})\) can be taken because of an explicit non-minimally coupling nature between geometric parameters and matter variables [43]. So, we considered the following form to produce the validating results. \[f(\mathcal{G},\mathcal{T}^{2})=f_{1}(\mathcal{G})+f_{2}(\mathcal{T}^{2}). \tag{16}\] To produce a bouncing universe, we need some functional forms of \(f_{1}\) and \(f_{2}\) that not only describe the accelerating expansion of the universe but also explain inflation to a great extent. For this, the higher power curvature terms perform well to eliminate such issues. Elizalde [44] introduced the power forms of the curvature scalar as \(\eta\mathcal{R}^{n}\) (\(n\geq 1\)) and produced the cosmological dynamics, so we consider the specific form of the \(f_{1}\) as the quadratic power model, so \[f_{1}(\mathcal{G})=\mathcal{G}+\alpha\mathcal{G}^{2}. \tag{17}\] Also, we take \(\chi_{2}\) as \[f_{2}(\mathcal{T}^{2})=2\lambda\mathcal{T}^{2}. \tag{18}\] So, by using Eq.s (17) and (18) in the field equations, we get \[6\mathcal{H}^{2}-48\alpha\mathcal{H}^{3}\mathcal{G}\dot{\mathcal{G}}+\alpha \mathcal{G}^{2}=2\kappa^{2}\rho+6\lambda\rho^{2}+18\lambda p^{2}+16\lambda \rho p, \tag{19}\] and \[-2(2\dot{\mathcal{H}}+3\mathcal{H}^{2})+32(\dot{\mathcal{H}}+\mathcal{H}^{2}) \alpha\mathcal{G}\dot{\mathcal{G}}+16\alpha\mathcal{H}^{2}(\dot{\mathcal{G}} ^{2}+\mathcal{G}\ddot{\mathcal{G}})-\alpha\mathcal{G}^{2}=2\kappa^{2}p-2 \lambda\rho^{2}-6\lambda p^{2}. \tag{20}\] In order to reduce the complexity of the Eq.(19) and Eq.(20), we utilize \(p=\omega\rho\), as the _EoS_ used in [37, 38, 39]. So we get the relations, \[(3\lambda+9\lambda\omega^{2}+8\lambda\omega)\rho^{2}+\kappa^{2}\rho-(3 \mathcal{H}^{2}-24\alpha\mathcal{H}^{3}G\dot{\mathcal{G}}+\frac{\alpha}{2} \mathcal{G}^{2})=0 \tag{21}\] \[(-\frac{\lambda}{\omega^{2}}-3\lambda)p^{2}+\kappa^{2}p+((2\dot{\cal H}+3H^{2})-16( \dot{\cal H}+{\cal H}^{2})\alpha{\cal G}\dot{\cal G}-8\alpha{\cal H}^{2}(\dot{ \cal G}^{2}+{\cal G}\ddot{\cal G})+\frac{\alpha}{2}{\cal G}^{2})=0. \tag{22}\] where, \({\cal G}=24{\cal H}^{2}(\dot{\cal H}+{\cal H}^{2})\). Yousaf and his collaborators checked the stability of cosmic models in various modified gravity theories [45, 46, 47, 48]. ## 3 Hubble Parameter and Cosmography This section mainly focuses on describing the evolutionary behavior of these above-described dynamical terms. Hence, we consider a trigonometric form of the \({\cal H}(t)\) which feasibly provides a bounce solution [44, 49], as follows \[{\cal H}(t)=\zeta\sin(\phi t)h(t). \tag{23}\] This parameterized form of \({\cal H}(t)\) includes \(\zeta\) and \(\phi\), which are considered to be constants here. 
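Anticipating the exponential choice of \(h(t)\) made just below, the qualitative behaviour of this ansatz can be illustrated with a short numerical sketch (ours, not part of the original analysis). Since \(\zeta\) is not fixed in the text, its value and sign here are an arbitrary normalisation that merely selects on which side of the zero of \(\sin(\phi t)\) the contraction occurs.

```python
import numpy as np

# Numerical illustration (ours) of the ansatz H(t) = zeta*sin(phi*t)*h(t) with the
# exponential choice h(t) = exp(varphi*t) adopted below.  phi and varphi follow
# the figures; zeta = -1 is an arbitrary normalisation.
zeta, phi, varphi = -1.0, 0.01, 0.001

t = np.linspace(1.0, 600.0, 60001)
H = zeta * np.sin(phi * t) * np.exp(varphi * t)

# scale factor a(t) = exp( int H dt ), integrated with the trapezoidal rule
a = np.exp(np.concatenate(([0.0],
        np.cumsum(0.5 * (H[1:] + H[:-1]) * np.diff(t)))))

# H vanishes where sin(phi*t) does, i.e. at t = pi/phi (about 314 here); around
# that instant H runs from negative (contraction) through zero to positive
# (expansion) while a(t) stays finite and non-zero: a non-singular bounce.
t_bounce = np.pi / phi
print("H shortly before / after the bounce:",
      np.interp(t_bounce - 10, t, H), np.interp(t_bounce + 10, t, H))
print("scale factor at the bounce (non-zero):", np.interp(t_bounce, t, a))
```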
The choice of \(h(t)\) depends on the periodic values of the function \(\sin(\phi t)\), so the form of \(h(t)\) can be chosen as periodic, that cooperates with the non-vanishing values of the above trigonometric function. This artificial approach of choosing such an ansatz can be considered as a numerical analysis of making the bouncing solution. One interesting feature is possessed by the term \(\zeta\), which can work well as a phase changer for the value of \({\cal H}(t)\). We consider \(h(t)\) as \[h(t)=\exp(\varphi t), \tag{24}\] where \(\varphi\) acts as a constant. Finally, we have \[{\cal H}(t)=\zeta\sin(\phi t)\exp(\varphi t). \tag{25}\] This functional form of the Hubble parameter is helpful to study cosmic evolutionary expansion and contraction. This form of the Hubble parameter gives us the bounce at \(t=313\), depending upon the values of \(\varphi=0.001\) and \(\phi=0.01\) provided in Fig.1. We have restricted the values of \({\cal H}(t)\) in the positive era of time. The basic scale factor form for this parameterized Hubble parameter becomes \[a(t)=\exp\left(\frac{\zeta\exp(\varphi t)(\varphi\sin(\phi t)-\phi\sin(\phi t ))}{\varphi^{2}+\phi^{2}}\right). \tag{26}\] Similarly, the set of dynamical parameters that are derived from the Taylor series expansion of the scale factor is termed as cosmographic factors. These factors helped to obtain the cosmological concordance with the assumptions of the universal homogeneity and isotropy on large cosmic scales [27, 50]. These include deceleration, jerk and snap parameters. These factors allow us to check the compatibility of the scale factor and the Hubble parameter. The negative value of the deceleration parameter \(q\) describes the accelerated expansion of the universe. Similarly, jerk \(j\) and snap \(s\) determine the expansion rate of the toy universe model. The mathematical interpretation for these cosmography elements are defined as \[q=-\frac{1}{{\cal H}^{2}}\frac{1}{a}\frac{d^{2}a}{dt^{2}}=-1-\frac{1}{\zeta}(e^{ -\varphi t}\csc(\phi t)(\phi\cot(\phi t)+\varphi)), \tag{27}\] \[j=\frac{1}{{\cal H}^{3}}\frac{1}{a}\frac{d^{3}a}{dt^{3}} = 1+\frac{1}{\zeta^{2}}(e^{-2\varphi t}\csc(\phi t)(3\zeta e^{ \varphi t}(\phi\cot(\phi t)+\varphi) \tag{28}\] \[+\csc(\phi t)(2\varphi\phi\cot(\phi t)+\varphi^{2}-\phi^{2}))),\] and \[s=\frac{1}{{\cal H}^{4}}\frac{1}{a}\frac{d^{4}a}{dt^{4}}=-\frac{ 1}{3\zeta(3\zeta e^{\varphi t}+2\csc(\phi t)(\phi\cot(\phi t)+\varphi))}(2e^{ -\varphi t}\csc(\phi t)(\csc(\phi t) \tag{29}\] \[(3\zeta\varphi e^{\varphi t}\sin(\phi t)+\varphi^{2}-\phi^{2})+ \phi\cot(\phi t)(3\zeta e^{\varphi t}+2\varphi\csc(\phi t)))).\] Fig.1 shows the progression of the Hubble (left panel) and scale parameters (right panel) along the positive time axis. Similarly, the development of jerk (left panel) and snap factors (right panel) are provided in the fig.2. The evolution of the deceleration parameter towards the negative value i.e, \(q\rightarrow-1\), before the bouncing point, provided in fig.6, shows the accelerating universe. ## 4 Energy Conditions under the EoS Parameter For a specific cosmology model, energy conditions play an important role to make its validation for the restricted free parameters. These energy conditions help to maintain the specifications of the certain cosmic model [51, 52, 53, 54, 55]. Similarly, these energy conditions also work for the bouncing cosmology and provide a reasonable approach to validate the procedure for our toy bouncing model. 
These conditions are described as * Dominant energy condition (\({\cal DEC}\))\(\Leftrightarrow\)\(\rho\geq 0\), \(\rho\pm p\geq 0\). * Strong energy condition (\({\cal SEC}\))\(\Leftrightarrow\)\(\rho+3p\geq 0\), \(\rho+p\geq 0\). * Weak energy condition (\({\cal WEC}\))\(\Leftrightarrow\)\(\rho\geq 0\), \(\rho+p\geq 0\). Figure 1: The illustrations of Hubble parameter and scale factor with fixed values of \(\varphi=0.001\) and \(\phi=0.01\). Figure 2: The illustration of jerk and snap factors with fixed values of \(\varphi=0.001\) and \(\phi=0.01\). * Null energy condition (\(\mathcal{NEC}\))\(\Leftrightarrow\)\(\rho+p\geq 0\). * Trace energy condition (\(\mathcal{TEC}\))\(\Leftrightarrow\)\(\rho-3p\geq 0\). The positivity of \(\mathcal{DEC}\), \(\mathcal{SEC}\) and \(\mathcal{WEC}\) passes on the validity and necessity of the bouncing concept. However, the violation of \(\mathcal{NEC}\) has a major role. This violation is different in the \(\mathcal{GR}\) context. Universal bouncing scenario is one of those ideas that provides a chance to discuss the singularity-free universal beginning. Many proposals in the literature suggested avoiding this singularity through quantum aspects, but these don't have such reliability to fit in the gravitational theory. So, at this point gravitational theories allow a specific mechanism to check the validity of the bounce model and as well its own. Null energy condition is one such tool to help achieve the task. Also, it has been proved that in the context of \(\mathcal{GR}\), the violation of \(\mathcal{NEC}\) is extremely difficult to be achieved for local-field models. So, effective field theories provide a chance to recognize the violation of the \(\mathcal{NEC}\) and to allow a non-singular bounce [56, 57, 58, 59]. One such effective field is \(f(\mathcal{G},\mathcal{T}_{\alpha\beta}\mathcal{T}^{\alpha\beta})\) theory that provides a chance to study the quadratic nature of the energy terms i.e, energy density and matter pressure [16, 60]. However, it also allows getting a non-singular bounce for the assumed gravity model form. For an excellent bouncing model, the value of \(\mathcal{H}(t)\) turns out to be \(\mathcal{H}=-4G\rho\pi(1+\omega)>0\) for the formulation of \(\mathcal{GR}\). However, if the \(\mathcal{NEC}\) gets violated, we have the surety to get a bouncing scenario. To provide the mathematical formulation of the energy conditions, we consider Eqs. (21) and (22). Also, the \(EoS\) parameter in the negative regime provides the present cosmic evolution [61, 62, 63] and becomes favorable in the bouncing context with \(\omega(t)\approx-1\). However, bouncing cosmology provides the possible geodesic evolution of the universe by avoiding the singularity along with the resolution of the horizon problem, flatness problem, entropy problem and many more [5]. For the modified gravity, \(EoS\) parameter enables us to study the universal dynamics. In this study, we used \(EoS\) parameter [44] to obtain the possible chance of obtaining a bounce solution in \(f(\mathcal{G},\mathcal{T}^{2})\) as \[\omega(t)=-\frac{k\log(t+\epsilon)}{t}-1, \tag{30}\] here \(k\) is assumed to be a constant. This particular form of the \(EoS\) parameters allows us to study the contracting and expanding behavior without involving the Hubble parameter as well as the scale factor. Elizalde _et al_. [44] produces cosmological dynamics by considering \(\mathcal{R}^{2}\) gravity and logarithmic trace terms. 
They checked the effects of the \(\lambda\) parameter in the gravity model \(f(\mathcal{R},\mathcal{T})=\mathcal{R}+\lambda\mathcal{R}^{2}+2\beta\ln( \mathcal{T})\) along with the bouncing solution depending on the two \(EOS\) parameters. Our work first described the choice of Hubble parameter and its effects on the dynamical field equations and then involves the \(EOS\) parameter. We only took one of the \(\omega(t)\) value, because this state factor after the bouncing point remains negative and becomes \(\omega(t)\approx-1\). Also, the current cosmic expansion and \(\Lambda-CDM\) can be verified by this state factor. However, the dynamic properties are greatly affected under the influence of this \(EoS\) parameter form. Hence, the general forms of the Eqs.(21) and (22), under the influence of Eq.30, are presented as \[\rho = -\frac{1}{2\lambda(9\omega^{2}+8\omega+3)}(\kappa^{2}+(\kappa^{4} -12\zeta^{2}\lambda(9\omega^{2}+8\omega+3)e^{2\varphi t}\sin^{2}(\phi t)(2304 \alpha\zeta^{7}e^{7\varphi t}\sin^{4}(\phi t) \tag{31}\] \[(\sin(\phi t)(\zeta e^{\varphi t}\sin(\phi t)+\varphi)+\phi\cos( \phi t))(4\zeta\varphi e^{\varphi t}\sin^{3}(\phi t)+2\zeta\phi e^{\varphi t} \sin(2\phi t)\sin(\phi t)\] \[- (\phi^{2}-3\varphi^{2})\sin^{2}(\phi t)+2\phi^{2}\cos^{2}(\phi t )+3\varphi\phi\sin(2\phi t))-96\alpha\zeta^{4}e^{4\varphi t}\sin^{2}(\phi t)\] \[(\sin(\phi t)(\zeta e^{\varphi t}\sin(\phi t)+\varphi)+\phi\cos( \phi t))^{2}-1))^{\frac{1}{2}}\] \[p = \frac{1}{2(3\lambda\omega^{2}+\lambda)}(\kappa^{2}\omega^{2}+( \kappa^{4}\omega^{4}+4\zeta\omega^{2}(3\lambda\omega^{2}+\lambda)e^{\varphi t} (18432\alpha\zeta^{9}e^{9t\varphi}(2\varphi(2\varphi+1)-\phi^{2}) \tag{32}\] \[\sin^{10}(t\phi t)+4608\alpha\zeta^{8}e^{8t\varphi}(\varphi^{2}(2 5\varphi+22)-(13\varphi+2)\phi^{2})\sin^{9}(\phi t)+9216\alpha\zeta^{7}\phi^{3 }e^{7\varphi t}\] \[\sin^{5}(\phi t)\cos^{3}(\phi t)(7\zeta e^{\varphi t}\sin(\phi t) 10\varphi+8)+288\alpha\zeta^{7}e^{7\varphi t}(144\varphi^{4}+320\varphi^{3}-16 (9\varphi+4)\] \[\varphi\phi^{2}-1)\sin^{8}(\phi t)+576\alpha\zeta^{6}\phi^{4}e^{ \phi\varphi t}\sin^{4}(2\phi t)(\zeta e^{\varphi t}+2\csc(\phi t))+576\alpha \zeta^{6}\varphi e^{6\varphi t}\] \[(48\varphi^{3}-16\varphi\phi^{2}-1)\sin^{7}(\phi t)-96\alpha \zeta^{5}\varphi(3\varphi-8)e^{5\varphi t}\sin^{6}(\phi t)+192\alpha\zeta^{4} e^{4\varphi t}(3\varphi^{2}-\phi^{2})\] \[\sin^{5}(\phi t)+2\sin(\phi t)(5760\alpha\zeta^{6}\varphi\phi^{3 }e^{6\varphi t}\sin^{3}(2\phi t)+48\alpha\zeta^{4}\phi^{2}e^{4\varphi t}\sin^ {2}(\phi t)-\varphi)+288\alpha\] \[\zeta^{5}\phi^{2}e^{5\varphi t}\sin^{4}(\phi t)\cos^{2}(\phi t)( 192\zeta^{4}e^{4\varphi t}\sin^{4}(\phi t)+32\zeta^{3}(31\varphi+10)e^{3 \varphi t}\sin^{3}(\phi t)\] \[+16\zeta^{2}e^{2\varphi t}(\varphi(45\varphi+56)-7\phi^{2})\sin^ {2}(\phi t)+32\zeta e^{\varphi t}(17\varphi^{2}-\phi^{2})\sin(\phi t)-1)-2 \phi\cos(\phi t)\] \[(-18432\alpha\zeta^{9}(4\varphi+1)e^{9\varphi t}\sin^{9}(\phi t)-2 304\alpha\zeta^{8}e^{8\varphi t}(\varphi(75\varphi+44)-11\phi^{2})\sin^{8}( \phi t)-9216\alpha\zeta^{7}\] \[e^{7\varphi t}(3\varphi^{2}(3\varphi+5)-(4\varphi+1)\phi^{2}) \sin^{7}(\phi t)-288\alpha\zeta^{6}e^{6\varphi t}(192\varphi^{3}-32\varphi \phi^{2}-1)\] \[\sin^{6}(\phi t)+96\alpha\zeta^{5}(3\varphi-4)e^{5\varphi t}\sin^ {5}(\phi t)-576\alpha\zeta^{4}\varphi e^{4\varphi t}\sin^{4}(\phi t)+1)\] \[-3\zeta e^{\varphi t}\sin^{2}(\phi t)))^{\frac{1}{2}}\] Now, the profiles of energy density and pressure under the presence of Eq.(30), are provided in fig.3. 
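A schematic way to regenerate such profiles is sketched below (our own illustration, not the authors' code): Eq.(21) is solved as a quadratic in \(\rho\) for the parameter values quoted in the figures, with \(p=\omega\rho\) from the EoS substitution used to derive (21)–(22). Since \(\zeta\) is not quoted in the text, its value is an assumption and the resulting curves are qualitative only.

```python
import numpy as np

# Schematic reconstruction (ours) of the profiles behind Figs. 3-5.
alpha, lam, kappa, k, eps = 0.005, -0.005, 1.0, 0.5, 0.001
zeta, phi, varphi = 1.0, 0.01, 0.001          # zeta = 1 is an assumption

t = np.linspace(1.0, 600.0, 60001)
H = zeta * np.sin(phi * t) * np.exp(varphi * t)
Hdot = np.gradient(H, t)
G = 24.0 * H**2 * (Hdot + H**2)               # Gauss-Bonnet invariant, flat FLRW
Gdot = np.gradient(G, t)

omega = -k * np.log(t + eps) / t - 1.0        # EoS parameter, Eq.(30)

# Eq.(21):  A*rho^2 + kappa^2*rho - B = 0
A = lam * (3.0 + 9.0 * omega**2 + 8.0 * omega)
B = 3.0 * H**2 - 24.0 * alpha * H**3 * G * Gdot + 0.5 * alpha * G**2
disc = kappa**4 + 4.0 * A * B                 # instants with disc < 0 give NaN and are skipped
rho = (-kappa**2 - np.sqrt(disc)) / (2.0 * A) # root matching the closed form (31)
p = omega * rho

for name, val in {"WEC: rho": rho, "NEC: rho+p": rho + p, "SEC: rho+3p": rho + 3*p,
                  "DEC: rho-p": rho - p, "TEC: rho-3p": rho - 3*p}.items():
    print(name, " min/max over the range:", float(np.nanmin(val)), float(np.nanmax(val)))
```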
The plots indicate that the energy density suffers a positive behavior for the assumed values of free parameters. Similarly, the negative behavior for the pressure term indicates that the universe is in the accelerated expansion phase. However, the positive density proves a strong validation for the verification of the energy conditions. Also, one can get the positive and alternate trends of the both terms for different time periods due to the oscillatory behavior of the assumed Hubble parameter. We restrict our work for the positive density and negative pressure behavior to ascertain the energy conditions. The evolutionary profiles of the energy conditions are provided in the figs. 4 and 5. The \(\cal NEC\) plot shows the violation with in the bouncing regime and confirms the major verification for the universe to attain a bounce with in the framework of \({\cal FLRW}\) spacetime. The violated \({\cal WEC}\) and \({\cal SEC}\) are given in the left plots of the figs. 3 and 4. The violated \({\cal SEC}\) also maintains the recent observations for the accelerating universe [52]. One important energy condition i.e, \({\cal TEC}\) has also been given in this recent study. The positive profiles for the \({\cal DEC}\) and \({\cal TEC}\) are given in the fig.5. The evolution of these energy conditions is strictly dependent on the values of the free parameters used in this study. However, one can get another configuration of these physical factors by implementing the different free parameters. The evolution of \(EoS\) parameter is provided in fig.6 to encounter the negative value i.e, \(\omega(t)\approx-1\), for the current expansion phase of the universe. ## 5 Discussions This work involves the study of bouncing cosmology for an isotropic configuration of fluid \({\cal T}_{\alpha\beta}\) and \({\cal FLRW}\) metric. We comprehend this work under \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) theory of gravitation by assuming a specific model i.e, \(f({\cal G},{\cal T}^{2})={\cal G}+\alpha{\cal G}^{2}+2\lambda{\cal T}^{2}\) with \(\alpha\) and \(\lambda\) are constants, serving as free parameters. This is the first-ever attempt to cover bouncing cosmology in the \(f({\cal G},{\cal T}_{\alpha\beta}{\cal T}^{\alpha\beta})\) theory. By the consideration of a specific functional form of the Hubble parameter, we discuss the evolution of cosmographic parameters. The assumption of a well-known equation of state (_EoS_) parameter, \(\omega(t)=-\frac{k\log(t+\epsilon)}{t}-1\), is used as a direct implementation to represent the dynamical behavior of energy density, matter pressure, and energy conditions. The free parameters are restricted to the special values provided in each Figure 4: The illustration of \(\mathcal{NEC}\) and \(\mathcal{SEC}\) with fixed values of \(\alpha=0.005\), \(k=0.5\), \(\varphi=0.001\), \(\epsilon=0.001\), \(\phi=0.01\), \(\kappa=1\) and \(\lambda=-0.005\). graph plot and are used for \(\mathcal{H}(t)\) to act as the bouncing solution. The viability of energy conditions is studied with the help of a graphical approach. Following are the concluding remarks for this present work. * The Hubble parameter \(\mathcal{H}(t)\) used in this study is considered to have a trigonometric functional form. The evolutionary behavior of different cosmographic factors is described under the same form of \(\mathcal{H}(t)\). This parameterized form of \(\mathcal{H}(t)\) depends on the periodic values of the function \(\sin(\phi t)\) and \(h(t)\). 
We considered this \(h(t)\) to be a nonvanishing function over the periodic values of \(\sin(\phi t)\). A proper bouncing model requires the Hubble parameter to show a contraction phase, \(\mathcal{H}<0\), and an expansion phase, \(\mathcal{H}>0\); between these two phases there is a point at which \(\mathcal{H}(t)\) becomes zero. In order to produce such a scenario, we have set the constants \(\phi\) and \(\varphi\) in the Hubble parameter \(\mathcal{H}(t)=\zeta\sin(\phi t)\exp(\varphi t)\) to specific values and observe the bounce at \(t=313\). The epoch \(t=313\) is significant in the sense that, for these values of \(\phi\) and \(\varphi\), all the energy conditions necessary for the bounce are satisfied up to \(t=313\). Other bounce epochs can be produced by choosing other values of \(\phi\) and \(\varphi\). The plot of \(\mathcal{H}(t)\) is given in fig.1. The Hubble parameter gives the bounce at \(t=313\), which corresponds to the future singularity in the scale factor, see fig.1. The mathematical forms of the deceleration, jerk, and snap parameters are evaluated with the same \(\mathcal{H}(t)\). The deceleration parameter shows a negative trend, with \(q(t)\) approaching \(-1\), as can be seen in fig.6. Similarly, the trends of the jerk and snap are given in fig.2, with \(j(t)\) approaching \(1\) and \(s(t)\) approaching \(0\). All these values show a deflection at the bouncing point, which is consistent with a bouncing universe (a short numerical sketch of this choice of \(\mathcal{H}(t)\) is given after the concluding remarks below).

Figure 6: The illustration of _EoS_ and deceleration parameters with fixed values of \(k=0.5\) and \(\epsilon=0.001\).

* We ensure the bouncing configuration of the cosmology by studying the energy conditions. These energy conditions are expressed in terms of the energy density and matter pressure derived from the modified field equations. We assumed a specific _EoS_ parameter of the form \(\omega(t)=-\frac{k\log(t+\epsilon)}{t}-1\). This _EoS_ parameter helped maintain the positive and negative growth of the energy density and matter pressure over the limited bouncing time period. The profiles of \(\rho\) and \(p\) are provided in fig.3, and the mathematical expressions for these terms are given in Eqs.(31) and (32).
* Under the restricted values of the free parameters, \(\alpha=0.005\), \(k=0.5\), \(\varphi=0.001\), \(\epsilon=0.001\), \(\phi=0.01\), \(\kappa=1\) and \(\lambda=-0.005\), we obtain the violation of the \(\mathcal{NEC}\) and \(\mathcal{SEC}\). The violated \(\mathcal{NEC}\) drives the bouncing nature of the universe, while the violated \(\mathcal{SEC}\) and \(\mathcal{WEC}\) indicate the phase of cosmic expansion, in agreement with the observational data. The left plots of figs.3 and 4 show the violated \(\mathcal{SEC}\) and \(\mathcal{WEC}\). Similarly, the positive behavior of the \(\mathcal{DEC}\) and \(\mathcal{TEC}\) assures that the assumed model configuration is valid. Figure 5 represents the illustration of \(\mathcal{DEC}\) and \(\mathcal{TEC}\). Also, the evolution of the _EoS_ can be seen in fig.6, showing that \(\omega(t)\rightarrow-1\). This value of \(\omega(t)\) favors the current accelerated expansion phase of the universe [61, 62, 63].
* The above discussion shows that the bouncing evolution of the universe, studied in the framework of \(f(\mathcal{G},\mathcal{T}^{2})=\mathcal{G}+\alpha\mathcal{G}^{2}+2\lambda\mathcal{T}^{2}\), agrees with the recent astronomical observations [64, 65]: the energy conditions exhibit the behavior required for a bounce, and a strongly negative pressure behavior is observed, which helps in studying the late-time accelerated universe [44]. This study can be extended in the future to different models of the scale factor and Hubble parameter.
* We finally conclude that the bouncing evolution of the universe can be studied effectively with the oscillating nature of the scale factor under the flat \(\mathcal{FLRW}\) regime.
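The following short numerical sketch (not part of the original analysis) illustrates the choice of the Hubble parameter \(\mathcal{H}(t)=\zeta\sin(\phi t)\exp(\varphi t)\) and the _EoS_ parameter \(\omega(t)=-\frac{k\log(t+\epsilon)}{t}-1\). The values of \(\phi\), \(\varphi\), \(k\), and \(\epsilon\) follow the figure captions; \(\zeta\) is not fixed in the text and is set to \(1\) here as an assumption (it only rescales \(\mathcal{H}(t)\) and does not move its zero).

```python
import numpy as np

# Parameter values taken from the figure captions; zeta = 1 is an assumption.
zeta, phi, varphi = 1.0, 0.01, 0.001     # H(t) = zeta * sin(phi*t) * exp(varphi*t)
k_eos, eps = 0.5, 0.001                  # omega(t) = -k*ln(t+eps)/t - 1

def H(t):
    return zeta * np.sin(phi * t) * np.exp(varphi * t)

def omega(t):
    return -k_eos * np.log(t + eps) / t - 1.0

t = np.linspace(1.0, 600.0, 60000)
h = H(t)

# First sign change of H(t): the transition between expansion and contraction.
idx = np.where(np.sign(h[:-1]) != np.sign(h[1:]))[0][0]
print("H(t) changes sign near t =", round(t[idx], 1))   # ~ pi/phi = 314.2, close to the bounce epoch t = 313 quoted above

# The EoS parameter stays close to -1 at late times, as stated in the text.
for tt in (50.0, 200.0, 500.0):
    print(f"omega({tt:.0f}) = {omega(tt):+.3f}")
```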
2302.05522
Weissler and Bernoulli type inequalities in Bergman spaces
We consider Weissler type inequalities for Bergman spaces with general radial weights and give conditions on the weight $w$ in terms of its moments ensuring that $\|f_r\|_{A^{2n}(w)}\leq \|f\|_{A^2(w)}$ whenever $n\in \mathbb{N}$ and $0< r\le 1/\sqrt{n}$. For noninteger exponents a special case of this inequality is proved which can be considered as a certain analog of the Bernoulli inequality. An example of a monotonic weight is constructed for which these inequalities are no longer true.
Anton D. Baranov, Ilgiz R. Kayumov, Diana M. Khammatova, Ramis Sh. Khasyanov
2023-02-10T22:02:55Z
http://arxiv.org/abs/2302.05522v1
# Weissler and Bernoulli type inequalities in Bergman spaces ###### Abstract We consider Weissler type inequalities for Bergman spaces with general radial weights and give conditions on the weight \(w\) in terms of its moments ensuring that \(\|f_{r}\|_{A^{2n}(w)}\leq\|f\|_{A^{2}(w)}\) whenever \(n\in\mathbb{N}\) and \(0<r\leq 1/\sqrt{n}\). For noninteger exponents a special case of this inequality is proved which can be considered as a certain analog of the Bernoulli inequality. An example of a monotonic weight is constructed for which these inequalities are no longer true. keywords: Bergman space, Weissler inequality, Bernoulli inequality ## 1 Introduction We consider Weissler type inequalities for Bergman spaces in the disc with general radial weights. While such questions were studied extensively for the classical weights, the case of general weights apparently was not previously addressed. The aim of this note is to propose regularity/convexity conditions on the weight, expressed in terms of its moments, ensuring that some special cases of the Weissler type inequality hold true. We show that certain regularity is required since there exist monotonic weights for which even these special cases of the Weissler type inequality are false. To describe our results, let us recall some definitions and classical inequalities. A function \(f\), analytic in the unit disk \(\mathbb{D}\), belongs to the Hardy space \(H^{p}\), \(0<p<\infty\), if \[\|f\|_{H^{p}}:=\sup_{0<r<1}\left(\frac{1}{2\pi}\int\limits_{0}^{2\pi}|f(re^{i\theta})|^{p}\,d\theta\right)^{\frac{1}{p}}<\infty.\] For \(r\in(0,1)\) let \(f_{r}(z)=f(rz)\). A well-known result of F. B. Weissler [9] states that for the Hardy spaces \(H^{p}\) and \(H^{q}\) (\(0<p<q\)) \[\|f_{r}\|_{H^{q}}\leq\|f\|_{H^{p}}\ \ \text{for any }f\in H^{p}\qquad\Longleftrightarrow\qquad r\leq\sqrt{\frac{p}{q}}\leq 1.\] Given a summable nonnegative function \(w\) on \([0,1)\), we say that a function \(f\), analytic in the unit disk \(\mathbb{D}\), belongs to the weighted Bergman space \(A^{p}(w)\) if \[\|f\|_{A^{p}(w)}:=\left(\frac{1}{2\pi}\int\limits_{\mathbb{D}}|f(z)|^{p}w(|z|)\,dA(z)\right)^{1/p}<\infty,\] where \(A\) denotes the planar Lebesgue measure. We define the moments of the weight \(w\) as \[h_{m}=\int\limits_{0}^{1}\rho^{m+1}w(\rho)\,d\rho,\qquad m\geq 0.\] In what follows we always assume the normalization \(h_{0}=1\). The weights \(w_{\alpha}(r)=2(\alpha-1)(1-r^{2})^{\alpha-2}\), where \(\alpha>1\), are called classical weights. We will denote Bergman spaces with the weight \(w_{\alpha}\) as \(A^{p}_{\alpha}\). For the theory of Bergman spaces see, e.g., [4]. The classical Carleman inequality \[\left(\sum\limits_{n=0}^{\infty}\frac{|a_{n}|^{2}}{n+1}\right)^{\frac{1}{2}}\leq\|f\|_{H^{1}}\] for \(f(z)=\sum\limits_{n=0}^{\infty}a_{n}z^{n}\in H^{1}\) implies that \(\|f\|_{A^{2}_{2}}\leq\|f\|_{H^{1}}\), whence it is easy to deduce that \[\|f\|_{A^{2p}_{2}}\leq\|f\|_{H^{p}},\qquad 0<p<\infty.\] The following generalization of the Carleman inequality was proved by J. Burbea [3]. Let \(k\in\mathbb{N}\), \(k\geq 2\), and \(p=\frac{2}{k}\). Then, for \(f(z)=\sum\limits_{n=0}^{\infty}a_{n}z^{n}\in H^{p}\), \[\|f\|_{A^{2}_{2/p}}=\left(\sum\limits_{n=0}^{\infty}\frac{|a_{n}|^{2}}{c_{2/p}(n)}\right)^{\frac{1}{2}}\leq\|f\|_{H^{p}},\qquad c_{\beta}(n)=C^{n}_{n+\beta-1}. \tag{1}\] It was conjectured in [2] that inequality (1) holds for all \(0<p\leq 2\).
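For orientation, the following short numerical sketch (not part of the original argument; it uses Python with scipy, and the choice of \(\alpha\) values is purely illustrative) computes the first few moments \(h_{2n}\) of the classical weights \(w_{\alpha}\) by quadrature and compares them with the closed form \(h_{2n}=\Gamma(\alpha)\Gamma(n+1)/\Gamma(\alpha+n)\) obtained at the end of Section 2, confirming in particular the normalization \(h_{0}=1\).

```python
from math import gamma
from scipy.integrate import quad

def h(m, alpha):
    """Moment h_m = int_0^1 rho^{m+1} w_alpha(rho) d rho for the classical
    weight w_alpha(rho) = 2(alpha-1)(1-rho^2)^{alpha-2}."""
    w = lambda r: 2 * (alpha - 1) * (1 - r**2) ** (alpha - 2)
    val, _ = quad(lambda r: r ** (m + 1) * w(r), 0.0, 1.0)
    return val

for alpha in (2.0, 2.5, 3.0):                       # illustrative choices of alpha > 1
    print("alpha =", alpha, " h_0 =", round(h(0, alpha), 10))    # normalization h_0 = 1
    for n in range(1, 5):
        closed = gamma(alpha) * gamma(n + 1) / gamma(alpha + n)  # Gamma-form of Section 2
        print(f"  h_{2*n}: quadrature = {h(2*n, alpha):.8f}, Gamma formula = {closed:.8f}")
```

The same routine can be used to test the moment conditions introduced below for any given radial weight.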
An inequality similar to (1), but with a constant slightly worse than \(1\) on the right, was proved in [2, 6]. Contractive inequalities for Bergman spaces \(A^{p}_{\alpha}\) were also studied by many authors. In particular, F. Bayart, O. F. Brevig, A. Haimi, J. Ortega-Cerda and K.-M. Perfekt [1] proved the following counterpart of the Weissler inequality. Let \(0<p\leq q<\infty\) and \(\alpha=\frac{n+1}{2}\) for some \(n\in\mathbb{N}\). Then for \(f\in A^{p}_{\alpha}\) \[\|f_{r}\|_{A^{q}_{\alpha}}\leq\|f\|_{A^{p}_{\alpha}}\ \ \mbox{for any}\ f\in A^{p}_{\alpha}\quad\Longleftrightarrow\quad r\leq\sqrt{\frac{p}{q}}\leq 1. \tag{2}\] Remarkable progress in these problems was achieved in 2022 (after the first version of the present note was finished). A. Kulikov [5] proved the conjectured inequality (1) for all \(0<p\leq 2\); he also showed that it is sharp in every coefficient. Using the results and methods of the paper by Kulikov, P. Melentijevic [7] proved that (2) is true for any \(0<p<q<\infty\) such that \(q\geq 2\) and for any \(\alpha>1\). Moreover, (2) holds for \(q<2\) as well if we assume that \(f\) is zero-free, while in general only a lower bound for \(r\) was found in this case. In the present note we are interested in analogs of (2) for Bergman spaces with general radial weights. Our goal is to find conditions on the weight for which a Weissler type inequality is true. We give such conditions in terms of the moments \(h_{2m}\) of the weight \(w\). While our present results deal with very special situations (even integer exponents or a specific choice of a function) we conjecture that under these or similar conditions on the weight the results can be extended to the case of general exponents. Note that by the Cauchy inequality, one always has \(h_{2m}^{2}\leq h_{2(m-1)}h_{2(m+1)}\). Our condition in the next theorem is a converse (in a sense) inequality. **Theorem 1.1**.: _Let \(w\) be a Bergman weight satisfying_ \[\frac{h_{2m}}{h_{2(m-1)}}\geq\frac{m}{m+1}\frac{h_{2(m+1)}}{h_{2m}} \tag{3}\] _for all \(m\geq 1\). Then for any \(n\in\mathbb{N}\) we have_ \[\|f_{r}\|_{A^{2n}(w)}\leq\|f\|_{A^{2}(w)}\ \ \mbox{for any}\ f\in A^{2}(w)\quad\Longleftrightarrow\quad r\leq\frac{1}{\sqrt{n}}\leq 1. \tag{4}\] It is easy to see that (3) holds for all classical weights \(w_{\alpha}\). We now turn to Weissler type inequalities between \(A^{2}(w)\) and \(A^{2q}(w)\) where \(q>1\) is an arbitrary real number. This problem is much more complex, because now one cannot use combinatorics. In fact, a natural conjecture arises that the inequality \(\|f_{r}\|_{A^{2q}(w)}\leq\|f\|_{A^{2}(w)}\) holds for \(r=\frac{1}{\sqrt{q}}\). In other words, \[\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}\left|f^{q}\left(\frac{\rho e^{i\theta}}{\sqrt{q}}\right)\right|^{2}\rho w(\rho)\,d\theta\,d\rho\leq\left(\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}|f(\rho e^{i\theta})|^{2}\rho w(\rho)\,d\theta\,d\rho\right)^{q}. \tag{5}\] To have the possibility of taking the powers we consider functions nonvanishing in \(\mathbb{D}\). They can be represented as \(f(z)=e^{\varphi(z)}\). Let \(\varphi(z)=\sum_{k=0}^{\infty}a_{k}z^{k}\). Since the norms in \(A_{\alpha}^{n}\) are expressed in terms of the moduli of the coefficients, we can assume that all \(a_{k}\geq 0\).
For such functions \[f^{q}\left(z\right)=e^{q\varphi(z)}=\sum_{n=0}^{\infty}\frac{q^{n}}{n!}\left( \sum_{k=0}^{\infty}a_{k}z^{k}\right)^{n}=\sum_{n=0}^{\infty}\frac{q^{n}}{n!} \sum_{k=0}^{\infty}\sum_{j_{1}+\ldots+j_{n}=k}a_{j_{1}}\cdot\ldots\cdot a_{j_ {n}}z^{k}.\] Changing the order of summation and substituting the value of the argument, we have \[f^{q}\left(\frac{\rho e^{i\theta}}{\sqrt{q}}\right)=\sum_{n=0}^{\infty}\left( \sum_{k=0}^{\infty}\frac{q^{k}}{k!}\sum_{j_{1}+\ldots+j_{k}=n}a_{j_{1}}\cdot \ldots\cdot a_{j_{k}}\right)\frac{\rho^{n}e^{i\theta n}}{q^{n/2}}.\] We introduce the following notation. \[g_{nk}=\frac{1}{k!}\sum_{j_{1}+\ldots+j_{k}=n}a_{j_{1}}\cdot\ldots\cdot a_{j_ {k}}.\] Then the left-hand side of (5) can be represented as \[\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}\left|f^{q}\left(\frac{ \rho e^{i\theta}}{\sqrt{q}}\right)\right|^{2}\rho w(\rho)\,d\theta\,d\rho=\sum \limits_{n=0}^{\infty}\frac{1}{q^{n}}\left(\sum\limits_{k=0}^{\infty}q^{k}g_{ nk}\right)^{2}h_{2n}.\] Similarly, for the right-hand side of (5) we have \[\left(\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}|f(\rho e^{i \theta})|^{2}\rho w(\rho)\,d\theta\,d\rho\right)^{q}=\left(\sum\limits_{n=0}^ {\infty}\left(\sum\limits_{k=0}^{\infty}g_{nk}\right)^{2}h_{2n}\right)^{q}.\] Therefore, the desired inequality takes the following form: \[\sum\limits_{n=0}^{\infty}\frac{1}{q^{n}}\left(\sum\limits_{k=0}^{\infty}q^{k }g_{nk}\right)^{2}h_{2n}\leq\left(\sum\limits_{n=0}^{\infty}\left(\sum\limits _{k=0}^{\infty}g_{nk}\right)^{2}h_{2n}\right)^{q}.\] This inequality looks, in general, quite complicated, so it seems logical to consider some particular cases. For example, one can take the function \(f(z)=e^{z}\). In this case \(g_{nk}=\frac{\delta_{nk}}{k!}\) and the desired inequality is \[\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}\leq\left(\sum\limits_{ n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)^{q}. \tag{6}\] Although this is only a particular case, this inequality seems to be of independent interest because it can be considered as an analog of the classical Bernoulli inequality for the moment sequences. It should be emphasized that it is essential here that \(h_{2n}\) is a sequence of moments of some weight and also that this inequality need not be true for an arbitrary (even monotonic) weight. The following theorem gives a sufficient condition for the inequality (6). We have to replace condition (3) from Theorem 1.1 by a stronger condition on the moments. **Theorem 1.2**.: _Let \(w\) be a Bergman weight. If the inequality_ \[\frac{h_{2m}}{h_{2(m-1)}}\geq\frac{h_{2(m+1)}}{(m+1)h_{2(m-1)}}+\frac{m}{m+1} \frac{h_{2(m+1)}}{h_{2m}} \tag{7}\] _holds for all \(m\geq 1\), then_ \[\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}\leq\left(\sum\limits_ {n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)^{q},\qquad q\geq 1. \tag{8}\] In particular, the estimate (7) holds for all standard Bergman weights \(w_{\alpha}\); for them it turns into equality. Thus, it looks plausible that the weights satisfying (7) are a correct class for generalizations of (2) to general weights. However, (4) and (8) do not hold for arbitrary weights and we indeed need to impose some regularity conditions. 
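As a quick numerical sanity check of (8) (and hence of the particular case (6)), one can evaluate both sides for the classical weight \(w_{2}\equiv 2\), whose moments are \(h_{2n}=\int_{0}^{1}2\rho^{2n+1}\,d\rho=1/(n+1)\). The sketch below is not part of the paper; the truncation at \(N=60\) terms and the selection of \(q\) values are assumptions. Its output is consistent with Theorem 1.2, since the classical weights satisfy (7).

```python
from math import factorial

N = 60
h2n = [1.0 / (n + 1) for n in range(N)]           # moments of the classical weight w_2 = 2
coef = [h2n[n] / factorial(n) ** 2 for n in range(N)]
S = sum(coef)                                     # S = sum_n h_{2n}/(n!)^2

def lhs(q):
    """Left-hand side of (8): sum_n q^n h_{2n}/(n!)^2 (truncated)."""
    return sum(q ** n * c for n, c in enumerate(coef))

for q in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(f"q = {q}:  LHS = {lhs(q):.6f}   RHS = {S ** q:.6f}   LHS <= RHS: {lhs(q) <= S ** q}")
```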
**Theorem 1.3**.: _There exists a monotonically decreasing weight \(w\) such that the inequality_ \[\|f_{r}\|_{A^{2q}(w)}\leq\|f\|_{A^{2}(w)}\] _does not hold for \(f(z)=e^{z}\), \(r=\frac{1}{\sqrt{q}}\) and \(q=2\) as well as for \(q\in(1,1+\varepsilon)\) for some \(\varepsilon>0\)._ The paper is organized as follows. In Section 2 we prove Theorem 1.1 which gives the counterpart of Weissler inequality for Bergman spaces when \(q\) is even and \(p=2\). In Section 3 two auxiliary lemmas are proved, while Section 4 is devoted to the proof of Theorems 1.2 and 1.3. ## 2 Proof of Theorem 1.1 We need to prove the following inequality. \[\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}\left|f^{n}\left(\frac{ \rho e^{i\theta}}{\sqrt{n}}\right)\right|^{2}\rho w(\rho)\,d\theta\,d\rho\leq \left(\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}|f(\rho e^{i \theta})|^{2}\rho w(\rho)\,d\theta\,d\rho\right)^{n}. \tag{9}\] Let \(f(z)=\sum\limits_{k=0}^{\infty}a_{k}z^{k}\). Then \(f^{n}(z)\) can be represented as \[f^{n}(z)=\sum\limits_{k=0}^{\infty}\left(\sum\limits_{j_{1}+\ldots+j_{n}=k}a_{ j_{1}}\cdot\ldots\cdot a_{j_{n}}\right)z^{k}.\] The integrals in (9) can be calculated using Parseval identity. Thus, the left-hand side of (9) is \[\int\limits_{0}^{1}\rho w(\rho)\sum\limits_{k=0}^{\infty}\left(\sum\limits_{ j_{1}+\ldots+j_{n}=k}a_{j_{1}}\cdot\ldots\cdot a_{j_{n}}\right)^{2}\frac{\rho^{2k} }{n^{k}}\,d\rho=\sum\limits_{k=0}^{\infty}\frac{1}{n^{k}}\left(\sum\limits_{j_ {1}+\ldots+j_{n}=k}a_{j_{1}}\cdot\ldots\cdot a_{j_{n}}\right)^{2}h_{2k}\] and the right-hand side of (9) is \[\left(\int\limits_{0}^{1}\rho w(\rho)\sum\limits_{k=0}^{\infty}a_ {k}^{2}\rho^{2k}\,d\rho\right)^{n}=\left(\sum\limits_{k=0}^{\infty}a_{k}^{2} \int\limits_{0}^{1}\rho^{2k+1}w(\rho)\,d\rho\right)^{n}=\\ =\sum\limits_{k=0}^{\infty}\sum\limits_{j_{1}+\ldots+j_{n}=k}a_{ j_{1}}^{2}\cdot\ldots\cdot a_{j_{n}}^{2}h_{2j_{1}}\cdot\ldots\cdot h_{2j_{n}}.\] From the Cauchy inequality \[\sum\limits_{k=0}^{\infty}\frac{h_{2k}}{n^{k}}\left(\sum\limits_{ j_{1}+\ldots+j_{n}=k}a_{j_{1}}\cdot\ldots\cdot a_{j_{n}}\right)^{2}\leq\\ \leq\sum\limits_{k=0}^{\infty}\frac{h_{2k}}{n^{k}}\sum\limits_{j_ {1}+\ldots+j_{n}=k}a_{j_{1}}^{2}\cdot\ldots\cdot a_{j_{n}}^{2}h_{2j_{1}}\cdot \ldots\cdot h_{2j_{n}}\cdot\sum\limits_{j_{1}+\ldots+j_{n}=k}\frac{1}{h_{2j_{1 }}\cdot\ldots\cdot h_{2j_{n}}}.\] Now it is sufficient to prove that \[\sum_{j_{1}+\ldots+j_{n}=k}\frac{1}{h_{2j_{1}}\cdot\ldots\cdot h_{2j_{n}}}\leq \frac{n^{k}}{h_{2k}},\] and then the desired inequality will hold term by term. Inequality (3) implies that for \(m\geq n\) \[h_{2n}h_{2m}\geq\frac{n}{m+1}h_{2(n-1)}h_{2(m+1)}.\] We need to estimate the product \(h_{2j_{1}}\cdot\ldots\cdot h_{2j_{n}}\), where \(j_{1}+\ldots+j_{n}=k\). Without loss of generality we assume that \(j_{1}\leq j_{2}\). Then \[h_{2j_{1}}h_{2j_{2}}\geq\frac{j_{1}}{j_{2}+1}h_{2(j_{1}-1)}h_{2( j_{2}+1)}\geq\frac{j_{1}(j_{1}-1)}{(j_{2}+1)(j_{2}+2)}h_{2(j_{1}-2)}h_{2(j_{2}+2)}\geq\\ \geq\frac{j_{1}!}{(j_{2}+1)\ldots(j_{2}+j_{1})}h_{0}h_{2(j_{2}+j_ {1})}=\frac{j_{1}!j_{2}!}{(j_{1}+j_{2})!}h_{0}h_{2(j_{2}+j_{1})}\] Next, assume, without loss of generality, that \(j_{1}+j_{2}\geq j_{3}\). 
Then \[h_{2j_{1}}h_{2j_{2}}h_{2j_{3}}\geq\frac{j_{1}!j_{2}!}{(j_{1}+j_{2})!}h_{2(j_{1 }+j_{2})}h_{2j_{3}}\geq\frac{j_{1}!j_{2}!j_{3}!}{(j_{1}+j_{2}+j_{3})!}h_{2(j_{ 1}+j_{2}+j_{3})}.\] Proceeding in this way we finally get \[h_{2j_{1}}\cdot\ldots\cdot h_{2j_{n}}\geq\frac{j_{1}!j_{2}!\ldots j_{n}!}{(j_ {1}+j_{2}+\ldots+j_{n})!}h_{2(j_{1}+j_{2}+\ldots+j_{n})}=\frac{j_{1}!j_{2}! \ldots j_{n}!}{k!}h_{2k}\] Therefore, \[\sum_{j_{1}+\ldots+j_{n}=k}\frac{1}{h_{2j_{1}}\cdot\ldots\cdot h_{2j_{n}}} \leq\frac{k!}{h_{2k}}\sum_{j_{1}+\ldots+j_{n}=k}\frac{1}{j_{1}!\cdot\ldots \cdot j_{n}!}=\frac{k!}{h_{2k}}\cdot\frac{n^{k}}{k!}=\frac{n^{k}}{h_{2k}}.\] The equality \(\sum\limits_{j_{1}+\ldots+j_{n}=k}\frac{1}{j_{1}!\ldots\cdot j_{n}!}=\frac{n^ {k}}{k!}\) is obtained by comparing the coefficients of \[e^{nz}=\sum_{k=0}^{\infty}\frac{n^{k}z^{k}}{k!}=\sum_{k=0}^{\infty}\sum_{j_{1 }+\ldots+j_{m}=k}\frac{z^{k}}{j_{1}!\cdot\ldots\cdot j_{n}!}.\] This finishes the proof of (9). It remains to show the sharpness of the value \(r=\frac{1}{\sqrt{n}}\). For this we use the standard test function \(f(z)=1+\varepsilon z\) where \(\varepsilon\) can be taken arbitrarily small. Then the left-hand side of the desired inequality (i.e., \(\|f_{r}\|_{A^{2n}(w)}^{2n}\)) will be equal to \[\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}\big{|}1+\varepsilon \tau\rho e^{i\theta}\big{|}^{2n}\rho w(\rho)\,d\theta\,d\rho=1+n^{2} \varepsilon^{2}r^{2}\int\limits_{0}^{1}\rho^{3}w(\rho)\,d\rho+O(\varepsilon^{4})\] and the right-hand side (i.e., \(\|f\|_{A^{2}(w)}^{2n}\)) will be given by \[\left(\frac{1}{2\pi}\int\limits_{0}^{1}\int\limits_{0}^{2\pi}\big{|}1+\varepsilon \rho e^{i\theta}\big{|}^{2}\rho w(\rho)\,d\theta\,d\rho\right)^{n}=1+n \varepsilon^{2}\int\limits_{0}^{1}\rho^{3}w(\rho)\,d\rho+O(\varepsilon^{4}).\] Comparing these values with \(\varepsilon\to 0\), we see that if the desired inequality holds then \(nr^{2}\leq 1\), so \(r\leq\frac{1}{\sqrt{n}}\). Let us show that the theorem applies to all classical weights \(w_{\alpha}\). In this case \[h_{2n}=\frac{\Gamma(\alpha)\Gamma(n+1)}{\Gamma(\alpha+n)},\] whence \[\frac{h_{2(n+1)}}{(n+1)h_{2(n-1)}}+\frac{n}{n+1}\frac{h_{2(n+1)}}{h_{2n}}= \frac{n}{n+\alpha-1}=\frac{h_{2n}}{h_{2(n-1)}}.\] Thus, inequality (7) turn into equality and, in particular, the weaker condition (3) is satisfied. Theorem 1.1 is proved. ## 3 Two lemmas **Lemma 3.1**.: _Let \(g_{n}\) be a sequence of real numbers, such that \(0<g_{n}\leq g_{n-1}\) for \(n\geq 1\), \(g_{0}=1\) and_ \[\frac{g_{n}}{g_{n-1}}\geq\frac{n}{n+1}\frac{g_{n+1}}{g_{n}},\qquad n\geq 1. \tag{10}\] _Then the inequality_ \[\sum_{k=0}^{n}\frac{g_{n-k}\cdot g_{k+2}}{((n-k)!)^{2}(k!)^{2}(k+1)(k+2)}\leq \sum_{k=0}^{n}\frac{g_{n-k+1}\cdot g_{k+1}}{((n-k)!)^{2}(k!)^{2}(n-k+1)(k+1)}.\] _holds for all \(n\geq 0\)._ Proof.: We can write \[T_{n}:=\sum_{k=0}^{n}\frac{g_{n-k}\cdot g_{k+2}}{((n-k)!)^{2}(k!) ^{2}(k+1)(k+2)}-\\ -\sum_{k=0}^{n}\frac{g_{n-k+1}\cdot g_{k+1}}{((n-k)!)^{2}(k!)^{2} (n-k+1)(k+1)}=\\ =\frac{g_{0}\,g_{n+2}}{(n!)^{2}(n+1)(n+2)}-\frac{g_{1}\,g_{n+1}} {(n!)^{2}(n+1)}+\sum_{k=0}^{n-1}\frac{g_{n-k}\,g_{k+2}(2k-n+1)}{((n-k)!)^{2}( (k+1)!)^{2}(k+2)}.\] **Case 1.** Let \(n\) be even, i.e. \(n=2m\). 
Then it can be represented as \[T_{2m}:=\frac{g_{0}\,g_{2m+2}}{((2m)!)^{2}(2m+1)(2m+2)}-\frac{g_ {m+1}^{2}}{(m!)^{2}\left((m+1)!\right)^{2}(m+1)}+\\ +\sum_{k=0}^{m-1}\frac{g_{m+k+2}\,g_{m-k}}{((m+k+2)!)^{2}\left((m- k)!\right)^{2}}(-2m+4k^{2}+8k+2).\] We denote \[t_{k}=\frac{g_{m+k+2}\,g_{m-k}}{\left((m+k+2)!\right)^{2}\left((m-k)!\right)^{2}} (-2m+4k^{2}+8k+2)\] and introduce the sequence \(s_{k}\) the following way. \[s_{0}=-\frac{g_{m+1}^{2}}{\left(m!\right)^{2}\left((m+1)!\right)^{2}\left(m+1 \right)},\] \[s_{k}=s_{k-1}\cdot\frac{(m-k+2)}{(m+k+1)}\frac{g_{m-k+1}\,g_{m+k+1}}{g_{m-k+2} \,g_{m+k}}+t_{k-1},\qquad 0\leq k\leq m.\] It is easy to show by induction that \[s_{k}=-\frac{(2k+1)g_{m+k+1}\,g_{m-k+1}}{\left((m-k)!\right)^{2}\left((m+k+1)! \right)^{2}\left(m-k+1\right)}.\] Therefore, \(s_{k}<0\). One can note that at each iteration, the previously obtained result is multiplied by some coefficient and added to \(t_{k-1}\), starting with the term \(s_{0}\), which participates in \(T_{2m}\). From the condition (10) one has \[\frac{p}{q+1}g_{p-1}\,g_{q+1}\leq g_{p}\,g_{q}\leq\frac{q}{p+1}g_{p+1}g_{q-1}, \qquad p<q. \tag{11}\] Hence, \[g_{m-k+1}\,g_{m+k+1}\leq\frac{(m+k+1)}{(m-k+2)}g_{m-k+2}\,g_{m-k}.\] We conclude that \[s_{k}\geq s_{k-1}+t_{k-1}\geq s_{k-2}+t_{k-2}+t_{k-1}\geq\ldots\geq s_{0}+ \sum_{q=0}^{k-1}t_{k}.\] Therefore, \[T_{2m}\leq\frac{g_{0}\,g_{2m+2}}{((2m)!)^{2}(2m+1)(2m+2)}+s_{m}.\] From the property (11) \[s_{m}=-\frac{(2m+1)g_{2}\,g_{2m+1}}{((2m+1)!)^{2}}\leq-\frac{g_{0}\,g_{2m+2}}{ ((2m)!)^{2}(2m+1)(2m+2)}\] and \(T_{n}=T_{2m}\leq 0\). **Case 2.** Now let \(n\) be odd. In this case \[T_{n}:=\frac{g_{0}\,g_{n+2}}{(n!)^{2}(n+1)(n+2)}+\sum_{k=0}^{\frac{n-1}{2}} \frac{g_{\frac{n+3}{2}+k}\,g_{\frac{n+1}{2}-k}}{\left(\left(\frac{n+3}{2}+k \right)!\right)^{2}\left(\left(\frac{n+1}{2}-k\right)!\right)^{2}}(-n+4k^{2}+ 4k-1).\] As before, we denote \[t_{k}=\frac{g_{\frac{n+3}{2}+k}\,g_{\frac{n+1}{2}-k}}{\left(\left(\frac{n+3}{ 2}+k\right)!\right)^{2}\left(\left(\frac{n+1}{2}-k\right)!\right)^{2}}(-n+4k^{ 2}+4k-1)\] and introduce the sequence \[s_{0}=t_{0}=-\frac{2g_{\frac{n+3}{2}}\,g_{\frac{n+1}{2}}}{\left(\left(\frac{n+3}{ 2}\right)!\right)^{2}\left(\left(\frac{n-1}{2}\right)!\right)^{2}\left(\frac{n +1}{2}\right)},\] \[s_{k}=s_{k-1}\cdot\frac{\left(\frac{n+3}{2}-k\right)}{\left(\frac{n+3}{2}+k \right)}\cdot\frac{g_{\frac{n+3}{2}+k}\,g_{\frac{n+1}{2}-k}}{g_{\frac{n+1}{2}+ k}\,g_{\frac{n+3}{2}-k}}+t_{k}.\] It can be shown by induction that \[s_{k}=-\frac{2(k+1)g_{\frac{n+3}{2}+k}\,g_{\frac{n+1}{2}-k}}{\left(\left(\frac {n+3}{2}+k\right)!\right)^{2}\left(\left(\frac{n-1}{2}-k\right)!\right)^{2} \left(\frac{n+1}{2}-k\right)}.\] Using the same considerations as for the even case, we get \[T_{n}\leq\frac{g_{0}\,g_{n+2}}{(n!)^{2}(n+1)(n+2)}+s_{\frac{n-1}{2}}.\] The property (11) gives \[s_{\frac{n-1}{2}}=-\frac{(n+1)g_{n+1}\,g_{1}}{((n+1)!)^{2}}\leq-\frac{g_{0}\, g_{n+2}}{(n!)^{2}(n+1)(n+2)}.\] Therefore, \(T_{n}\leq 0\) for both cases. This completes the proof. **Lemma 3.2**.: _If the Bergman weight \(w\) satisfies inequality (7) for all \(n\geq 1\), then_ \[\frac{h_{2(n+1)}}{n+1}\leq h_{2n}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}}\right),\qquad n\geq 1. \tag{12}\] Proof.: Let \(n=1\). From the Holder inequality, we get the estimate \[h_{2}^{n}=\left(\int\limits_{0}^{1}\rho^{3}w(\rho)\,d\rho\right)^{n}\leq\int \limits_{0}^{1}\rho^{2n+1}w(\rho)\,d\rho\cdot\left(\int\limits_{0}^{1}\rho w( \rho)\,d\rho\right)^{n-1}=h_{2n}\] (recall that \(h_{0}=1\)). 
It also follows from condition (7) with \(m=1\), that \[h_{4}\leq\frac{2h_{2}^{2}}{h_{2}+1}. \tag{13}\] Hence, \[2h_{2}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}}\right) -h_{4}\geq 2h_{2}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2}^{k}}{(k!)^{2}} \right)-\frac{2h_{2}^{2}}{h_{2}+1}\geq\\ \geq 2h_{2}\ln I_{0}(2\sqrt{h_{2}})-2h_{2}^{2}\left(1-\frac{h_{2}} {4}\right).\] Here \(I_{0}\) is the modified Bessel function. Recall that the modified Bessel functions \(I_{n}\), defined as \[I_{n}(z)=\sum_{m=0}^{\infty}\frac{(z/2)^{2m+n}}{m!(m+n)!},\] are connected with the classical Bessel functions \(J_{n}\) by the equality \(I_{n}(z)=e^{-\frac{in\pi}{2}}J_{n}(ze^{\frac{i\pi}{2}})\). In what follows we will use the following three properties of the functions \(I_{n}\) which can be found in [8, SS2.12]: \[I_{n}^{\prime}(x)=I_{n+1}(x)+\frac{n}{x}I_{n}(x), \tag{14}\] \[\frac{2n}{x}I_{n}(x)=I_{n-1}(x)-I_{n+1}(x), \tag{15}\] \[2I_{n}^{\prime}(x)=I_{n-1}(x)+I_{n+1}(x). \tag{16}\] We denote \(2\sqrt{h_{2}}=t\) and consider the function \[u_{1}(t)=16\ln I_{0}(t)-4t^{2}+\frac{t^{4}}{4}.\] Since \(0<h_{2}\leq 1\), our goal is to show that \(u_{1}(t)\geq 0\) for \(0\leq t\leq 2\). By the property (14) for \(n=0\), we have \[u_{1}^{\prime}(t)=\frac{16I_{1}(t)}{I_{0}(t)}-8t+t^{3}.\] Since \(I_{0}(t)>0\) for \(t\geq 0\), we need to consider \[16I_{1}(t)-8tI_{0}(t)+t^{3}I_{0}(t).\] Using the property (15) with \(n=1\), we get the expression \[t^{3}I_{0}(t)-8tI_{2}(t)=:tu_{2}(t)\] We take the derivative of \(u_{2}(t)\) and use the properties (14)-(16) to obtain that \[u_{2}^{\prime}(t)=2tI_{0}(t)+t^{2}I_{1}(t)-4(I_{1}(t)+I_{3}(t)) \geq 2tI_{0}(t)-4I_{1}(t)-4I_{3}(t)=\\ =2tI_{2}(t)-4I_{3}(t)=2tI_{2}(t)-\frac{2}{3}tI_{2}(t)+\frac{2}{3} tI_{4}(t)=\frac{4}{3}tI_{2}(t)+\frac{2}{3}tI_{4}(t).\] Since \(I_{n}(x)>0\), \(x>0\), it follows that \[u_{2}(t)\geq u_{2}(0)=0\] and so \(u_{1}\) also increases. Then \[u_{1}(t)\geq u_{1}(0)=0.\] Thus, we proved that \[\frac{h_{4}}{2}\leq h_{2}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}} \right).\] Let us suppose that the inequality (12) is proved for \(n-1\). Obviously, (7) implies \[\frac{h_{2n}}{h_{2(n-1)}}\geq\frac{n}{n+1}\frac{h_{2(n+1)}}{h_{2n}},\qquad n\geq 1.\] Hence, \[\frac{h_{2(n+1)}}{n+1}\leq\frac{h_{2n}^{2}}{nh_{2(n-1)}}\leq\frac{h_{2n}}{h_{2( n-1)}}\cdot h_{2(n-1)}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}} \right),\] which is exactly the desired inequality. 
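As an aside (not part of the original proof), the Bessel-function identity (14) and the positivity of the auxiliary function \(u_{1}(t)=16\ln I_{0}(t)-4t^{2}+\frac{t^{4}}{4}\) claimed above can be checked numerically. The sketch below uses scipy's modified Bessel routines; the evaluation grids are arbitrary choices.

```python
import numpy as np
from scipy.special import iv, ivp   # modified Bessel functions I_n and their derivatives

x = np.linspace(0.1, 5.0, 50)

# Identity (14): I_n'(x) = I_{n+1}(x) + (n/x) I_n(x), checked for n = 0, 1, 2.
for n in range(3):
    err = np.max(np.abs(ivp(n, x) - (iv(n + 1, x) + n / x * iv(n, x))))
    print(f"identity (14), n = {n}: max error = {err:.2e}")

# Positivity of u_1(t) = 16 ln I_0(t) - 4 t^2 + t^4/4 on (0, 2], in line with the proof.
t = np.linspace(0.05, 2.0, 400)
u1 = 16 * np.log(iv(0, t)) - 4 * t**2 + t**4 / 4
print("min of u_1 on [0.05, 2]:", u1.min())   # positive; u_1 vanishes only at t = 0
```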
## 4 Proofs of Theorems 1.2 and 1.3 Proof of Theorem 1.2.: We consider the function \[\varphi(q)=\ln\left(\sum_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}\right)-q \ln\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right).\] Its derivative is \[\varphi^{\prime}(q)=\frac{\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}(n+1) }h_{2(n+1)}}{\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}}-\ln\left( \sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right).\] We also take the second derivative \[\varphi^{\prime\prime}(q)=\frac{\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{ 2}(n+1)(n+2)}h_{2(n+2)}\cdot\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_ {2n}-\left(\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}(n+1)}h_{2(n+1)} \right)^{2}}{\left(\sum\limits_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n} \right)^{2}}.\] Multiplying the series, we see that the first term in the numerator is \[\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}\frac{h_{2(n-k)}\cdot h_{2(k+2)}}{((n- k)!)^{2}(k!)^{2}(k+1)(k+2)}\right)q^{n},\] and the second is equal to \[\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}\frac{h_{2(n-k+1)}\cdot h_{2(k+1)}}{(( n-k)!)^{2}(k!)^{2}(n-k+1)(k+1)}\right)q^{n}.\] Since (7) implies (10) for \(g_{n}=h_{2n}\), we can apply Lemma 3.1 and see that for all \(n\geq 0\) \[\sum_{k=0}^{n}\frac{h_{2(n-k)}\cdot h_{2(k+2)}}{((n-k)!)^{2}(k!)^{2}(k+1)(k+2) }\leq\sum_{k=0}^{n}\frac{h_{2(n-k+1)}\cdot h_{2(k+1)}}{((n-k)!)^{2}(k!)^{2}(n- k+1)(k+1)}.\] It follows that \(\varphi^{\prime\prime}(q)\leq 0\). Therefore, \(\varphi^{\prime}(q)\leq\varphi^{\prime}(1)\) when \(q\geq 1\). We denote \[\psi=\sum_{n=0}^{\infty}\frac{h_{2(n+1)}}{(n!)^{2}(n+1)}-\sum_{n=0}^{\infty} \frac{h_{2n}}{(n!)^{2}}\cdot\ln\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2} }\right).\] From Lemma 3.2 we know that \[\frac{h_{2(n+1)}}{n+1}\leq h_{2n}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k! )^{2}}\right),\qquad n\geq 1.\] We proved that all the terms of the series \[\psi=\sum_{n=0}^{\infty}\frac{1}{(n!)^{2}}\left(\frac{h_{2(n+1)}}{n+1}-h_{2n} \ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}}\right)\right),\] are negative except the case \(n=0\). To get rid of this exceptional term, we add it to the term with \(n=1\). \[h_{2}-\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}}\right) +\frac{h_{4}}{2} -h_{2}\ln\left(\sum_{k=0}^{\infty}\frac{h_{2k}}{(k!)^{2}}\right)=\] \[=h_{2}+\frac{h_{4}}{2}-(1+h_{2})\ln\left(\sum_{k=0}^{\infty} \frac{h_{2k}}{(k!)^{2}}\right)\leq\] \[\leq h_{2}+\frac{h_{4}}{2}-(1+h_{2})\ln\left(1+h_{2}+\frac{h_{4}} {4}\right)=:y(h_{2},h_{4}).\] The derivative of this expression with respect to \(h_{4}\) \[\frac{\partial y}{\partial h_{4}}=\frac{2+2h_{2}+h_{4}}{8+8h_{2}+2h_{4}}>0,\] so \(y\) increases with respect to \(h_{4}\). Recall that we have \(h_{4}\leq\frac{2h_{2}^{2}}{1+h_{2}}\) (see (13)). Therefore, it is sufficient to substitute this value into \(y\): \[y\left(h_{2},\frac{2h_{2}^{2}}{1+h_{2}}\right)=h_{2}+\frac{h_{2}^{2}}{1+h_{2} }-(1+h_{2})\ln\left(1+h_{2}+\frac{h_{2}^{2}}{2+2h_{2}}\right).\] The function \(v(h_{2})=y\left(h_{2},\frac{2h_{2}^{2}}{1+h_{2}}\right)/(1+h_{2})\) has the same sign as the expression \(y\left(h_{2},\frac{2h_{2}^{2}}{1+h_{2}}\right)\). Calculations show that its derivative equals to \[v^{\prime}(h_{2})=-\frac{h_{2}^{2}(2+3h_{2}+3h_{2}^{2})}{(1+h_{2})^{3}(2+4h_{2 }+3h_{2}^{2})}\leq 0.\] Then \[y\left(h_{2},\frac{2h_{2}^{2}}{1+h_{2}}\right)\leq y(0,0)=0\] for all \(h_{2}\in(0,1)\). Thus, the term with \(n=0\) does not affect the sign and \(\psi<0\). 
We proved that \(\varphi^{\prime}(q)<0\), and so \[\varphi(q)\leq\varphi(1)=\ln\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)-\ln\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)=0.\] Since the logarithm is an increasing function, we finally obtain the desired inequality \[\sum_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}\leq\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)^{q}.\] Proof of Theorem 1.3.: Consider the weight \[w^{*}(\rho)=\begin{cases}\frac{3}{2\rho},&0\leq\rho\leq\frac{1}{2},\\ \frac{1}{2\rho},&\frac{1}{2}<\rho\leq 1.\end{cases}\] Then \(h_{n}=\frac{1+2^{-n}}{2(1+n)}\). One can note that \(h_{0}=1\), so this weight is admissible. It is not continuous but we can always approximate it with a continuous or even smooth monotonically decreasing weight with arbitrarily close moments \(h_{n}\). However, one can check numerically that the derivative of the function \[\psi(q)=\sum_{n=0}^{\infty}\frac{q^{n}}{(n!)^{2}}h_{2n}-\left(\sum_{n=0}^{\infty}\frac{h_{2n}}{(n!)^{2}}\right)^{q}\] at \(q=1\) is positive (\(\approx 0.0048\)). Thus, the function \(\psi\) is equal to zero at \(q=1\) and then increases, which means that it is positive at least on some interval \((1,1+\varepsilon)\), \(\varepsilon>0\), so (6) does not hold. Also numerical calculations show that \(\psi(2)\approx 0.0105>0\). **Remark 4.1**.: As we have shown at the end of Section 2, for the classical weights \(w_{\alpha}\) inequality (7) turns into equality. Let us show that all power weights \(w(\rho)=(m+2)\rho^{m}\), \(m\geq 0\) (not necessarily integer), also satisfy (7). Indeed, we have \(h_{2n}=\frac{2+m}{2+m+2n}\) and \[\frac{h_{2n}}{h_{2(n-1)}}-\left(\frac{h_{2(n+1)}}{(n+1)h_{2(n-1)}}+\frac{n}{n+1}\frac{h_{2(n+1)}}{h_{2n}}\right)=\frac{2m}{(1+n)(2+m+2n)(4+m+2n)}\geq 0.\] #### Funding The results of Sections 3 and 4 (Theorem 1.2) were obtained with the support of Russian Science Foundation grant 22-71-10094. Other results of the paper were obtained with the support of Ministry of Science and Higher Education of the Russian Federation, agreement No 075-15-2021-602.
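As a supplement to the proof of Theorem 1.3, the numerical values quoted there can be reproduced with a few lines of code. The following sketch (not part of the original paper; the truncation at 60 terms is an assumption) evaluates \(\psi(1)\), \(\psi^{\prime}(1)\) and \(\psi(2)\) for the moments \(h_{n}=\frac{1+2^{-n}}{2(1+n)}\) of the weight \(w^{*}\).

```python
from math import factorial, log

# Moments of the counterexample weight w* from the proof of Theorem 1.3.
h = lambda n: (1 + 2.0 ** (-n)) / (2 * (n + 1))
N = 60

coef = [h(2 * n) / factorial(n) ** 2 for n in range(N)]
S = sum(coef)                                       # S = sum_n h_{2n}/(n!)^2

def psi(q):
    return sum(q ** n * c for n, c in enumerate(coef)) - S ** q

dpsi1 = sum(n * c for n, c in enumerate(coef)) - S * log(S)   # psi'(1)

print("psi(1)  =", psi(1.0))              # = 0 by construction
print("psi'(1) =", round(dpsi1, 4))       # ~ 0.0048, as quoted in the proof
print("psi(2)  =", round(psi(2.0), 4))    # ~ 0.0105, as quoted in the proof
```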
2308.10612
A Study of Morris-Thorne Wormhole in Einstein-Cartan Theory
This paper focuses on the Einstein-Cartan theory, an extension of general relativity that incorporates a torsion tensor into spacetime. The differential form technique is employed to analyze the Einstein-Cartan theory, which replaces tensors with tetrads. A tetrad formalism, specifically the Newmann-Penrose-Jogia-Griffiths formalism, is used to study the field equations. The energy-momentum tensor is also determined, considering a Weyssenhoff fluid with anisotropic matter. The spin density is derived in terms of the red-shift function. We also examine the energy conditions at the throat of a Morris-Thorne wormhole. The results shed light on the properties of wormholes in the context of the Einstein-Cartan theory, including the energy conditions at the throat.
Sagar V. Soni, A. C. Khunt, A. H. Hasmani
2023-08-21T10:22:52Z
http://arxiv.org/abs/2308.10612v1
# A Study of Morris-Thorne Wormhole in Einstein-Cartan Theory ###### Abstract This paper focuses on the Einstein-Cartan theory, an extension of general relativity that incorporates a torsion tensor into spacetime. The differential form technique is employed to analyze the Einstein-Cartan theory, which replaces tensors with tetrads. A tetrad formalism, specifically the Newmann-Penrose-Jogia-Griffiths formalism, is used to study the field equations. The energy-momentum tensor is also determined, considering a Weyssenhoff fluid with anisotropic matter. The spin density is derived in terms of the red-shift function. We also examine the energy conditions at the throat of a Morris-Thorne wormhole. The results shed light on the properties of wormholes in the context of the Einstein-Cartan theory, including the energy conditions at the throat. Wormhole, Differential Forms, Newmann-Penrose-Jogia-Griffiths Formalism, Energy conditions, Einstein-Cartan Theory ## 1 Introduction In 1915, Einstein devised a theory called general relativity, in which he demonstrated that gravitation is a geometric property rather than a force. Later, in 1924, Cartan [1] extended this theory by incorporating a torsion tensor into spacetime, which is known as the Einstein-Cartan theory. Because of the torsion tensor, spacetime is no longer described by Riemannian geometry; hence, one works in non-Riemannian geometry. This non-Riemannian aspect of spacetime is described by the affine connection \(\tilde{\Gamma}^{h}_{ij}\), defined as \[\tilde{\Gamma}^{h}_{ij}=\Gamma^{h}_{ij}-{K_{ij}}^{h}, \tag{1}\] where \(\Gamma^{h}_{ij}\) are the usual Christoffel symbols, which are symmetric in the lower indices, and \({K_{ij}}^{h}\) is the contortion tensor with \(K_{i(jh)}=0\), i.e., skew-symmetric in the last two indices. The torsion tensor in terms of the contortion tensor is given by \[{Q_{ij}}^{h}=-\frac{1}{2}({K_{ij}}^{h}-{K_{ji}}^{h}). \tag{2}\] It is worth noting that the torsion tensor is nothing but the anti-symmetric part of the affine connection (1), and hence from (2) the following relation holds \[{K_{ij}}^{h}={Q_{j}}^{h}{}_{i}-{Q^{h}}_{ij}-{Q_{ij}}^{h}. \tag{3}\] In all the modified theories of relativity, researchers are interested in finding exact solutions of the field equations and analyzing energy conditions in different spacetimes. Kibble [2] independently introduced spin and torsion into gravitation. Hehl et al. [3, 4, 5] developed the Einstein-Cartan theory of gravitation. In 1939, Tolman [6] developed an explicit solution to Einstein's field equations for static fluid spheres. Prasanna [7] presented solutions with a perfect fluid distribution in 1975, using Hehl's approach and Tolman's technique, and discovered that a space-time metric similar to the Schwarzschild interior solution will no longer represent a homogeneous fluid sphere in the presence of spin density, and that the hydrostatic pressure is discontinuous at the fluid sphere's boundary. Jogia and Griffiths [8] extended the Newmann-Penrose formalism to the Einstein-Cartan theory in 1980. In 2009, Katkar [9] used differential forms to derive Cartan's equations of structure, the Einstein field equations, and Bianchi's identities. The exact solution of the Einstein-Cartan field equations for a static, conformally flat spherically symmetric space-time has been derived by Katkar and Patil [10]. Katkar [11] used differential forms to determine the general observer values in 2015.
In the framework of the Einstein-Cartan theory, Bronnikov and Galialakhmetov [12] investigated the possibility of static traversable wormholes without the use of exotic matter. Di Grazia et al. [13] calculated the torsion tensor for matter fields with varying spins for traversable wormholes. Mehdizadeh and Ziaie [14, 15] have obtained wormhole solutions. Katkar and Phadatare [16, 17] have established a solution for non-static conformally flat spherically symmetric spacetimes and static spherically symmetric spacetimes with a Weyssenhoff fluid as the source in 2019. Recently, a consistent solution of the Einstein-Cartan equations with torsion outside matter was found by Morawetz [18]. Raja et al. [19] have extensively elaborated a particular choice of shape functions in the regime of \(f(R,L_{m})\) gravity. They performed a physical analysis using both the isotropic and anisotropic fluid equations of state (EoS) to investigate the physical plausibility of wormhole solutions in the framework of \(f(R,L_{m})\) gravity. In this paper, we obtain expressions for the Ricci tensor, Ricci scalar, and energy-momentum tensor in the tetrad frame using the Newmann-Penrose-Jogia-Griffiths formalism and Katkar's differential-form approach. The spin density is derived in terms of the red-shift function. Also, we analyze the energy conditions at the throat of the wormhole. Based on the wormhole study of Raja et al. [19], we adopt one particular choice of shape function. In this study, we explore the geometrical as well as physical conditions for wormholes in the framework of the Einstein-Cartan theory. ## 2 Einstein-Cartan theory in Differential Forms The differential form technique [20] is widely used in Einstein's general relativity for numerous calculations, particularly in finding exact solutions. In this approach, tetrads are used instead of tensors. There are many approaches for dealing with tetrads, such as the Newmann-Penrose formalism and the Geroch-Held-Penrose formalism; among these, the Newmann-Penrose formalism [21] is the most commonly used. McIntosh [22] derived the relationship between the Newmann-Penrose formalism and differential forms. This formalism was extended by Jogia and Griffiths [8] to the Einstein-Cartan theory and later became known as the Newmann-Penrose-Jogia-Griffiths formalism. The basis 1-forms and the usual basis are related by \[\theta^{\alpha}=e^{(\alpha)}{}_{i}dx^{i}, \tag{4}\] where \(e^{(\alpha)}{}_{i}\) represent the basis vectors of the Newmann-Penrose tetrad, consisting of real and complex null vector fields given by \[e^{(\alpha)}{}_{i}=(n_{i},l_{i},-\bar{m}_{i},-m_{i}). \tag{5}\] The Greek letters denote tetrad indices, whereas the Latin indices denote tensor indices; all of these indices go from 1 to 4, and this nomenclature will be used throughout the study. Einstein's summation convention is also employed. The metric tensor field is expressed as \[g_{ij}=\eta_{(\alpha)(\beta)}e^{(\alpha)}{}_{i}e^{(\beta)}{}_{j} \tag{6}\] where \(\eta_{(\alpha)(\beta)}=e^{i}{}_{(\alpha)}e_{(\beta)j}\). The Newmann-Penrose complex null vector fields \(l_{i},n_{i},m_{i},\bar{m}_{i}\) satisfy the conditions \[l_{i}n^{i}=1=-m_{i}\bar{m}^{i}, \tag{7}\] and \[l_{i}l^{i}=n_{i}n^{i}=m_{i}m^{i}=\bar{m_{i}}\bar{m^{i}}=0,\] \[m_{i}l^{i}=\bar{m}_{i}l^{i}=m_{i}n^{i}=\bar{m_{i}}n^{i}=0.
\tag{8}\] Cartan's first equation of structure in the Einstein-Cartan theory, as provided by Katkar [9, 11], is given by \[d\theta^{\alpha}=-\omega^{\alpha}{}_{\beta}\wedge\theta^{\beta}, \tag{9}\] where \[\omega^{\alpha}{}_{\beta}=(\gamma^{\alpha}{}_{\beta\delta}-K_{\delta\beta}{}^{\alpha})\theta^{\delta} \tag{10}\] are the connection 1-forms, which also depend on the torsion. Here \(\gamma^{\alpha}{}_{\beta\delta}\) are the Ricci rotation coefficients and \(K_{\delta\beta}{}^{\alpha}\) are the tetrad components of the contortion tensor. The covariant form of equations (10) can be written as \[\omega_{\alpha\beta}=\eta_{\alpha\epsilon}\omega^{\epsilon}{}_{\beta}. \tag{11}\] The non-vanishing tetrad components of the connection 1-forms are represented, using notations devised by Jogia and Griffiths and from equation (11), as \[\omega_{12} = -[(\epsilon+\bar{\epsilon}+\epsilon_{1}+\bar{\epsilon}_{1})\theta^{1}+(\gamma+\bar{\gamma}+\gamma_{1}+\bar{\gamma}_{1})\theta^{2} \tag{12}\] \[+(\bar{\alpha}+\beta+\bar{\alpha_{1}}+\beta_{1})\theta^{3}+(\alpha+\bar{\beta}+\alpha_{1}+\bar{\beta}_{1})\theta^{4}],\] \[\omega_{13} = -[(\kappa+\kappa_{1})\theta^{1}+(\tau+\tau_{1})\theta^{2}+(\sigma+\sigma_{1})\theta^{3}+(\rho+\rho_{1})\theta^{4}],\] \[\omega_{23} = (\pi+\pi_{1})\theta^{1}+(\bar{\nu}+\bar{\nu}_{1})\theta^{2}+(\bar{\lambda}+\bar{\lambda}_{1})\theta^{3}+(\bar{\mu}+\bar{\mu}_{1})\theta^{4},\] \[\omega_{34} = (\epsilon-\bar{\epsilon}+\epsilon_{1}-\bar{\epsilon}_{1})\theta^{1}+(\gamma-\bar{\gamma}+\gamma_{1}-\bar{\gamma}_{1})\theta^{2}\] \[-(\bar{\alpha}-\beta+\bar{\alpha}_{1}-\beta_{1})\theta^{3}+(\alpha-\bar{\beta}+\alpha_{1}-\bar{\beta}_{1})\theta^{4},\] where the symbols \(\epsilon,\kappa,\nu,...\) indicate the Ricci rotation coefficients in Einstein's theory of relativity and the symbols with a subscript denote the contortion tensor components. In the Einstein-Cartan theory, Cartan's second equation of structure is \[\Omega^{\alpha}{}_{\beta}=d\omega^{\alpha}{}_{\beta}+\omega^{\alpha}{}_{\sigma}\wedge\omega^{\sigma}{}_{\beta}+(\gamma^{\alpha}{}_{\beta\sigma}-K_{\sigma\beta}{}^{\alpha})K_{\epsilon\delta}{}^{\alpha}\theta^{\delta}\wedge\theta^{\epsilon} \tag{13}\] where \[\Omega^{\alpha}{}_{\beta}=-\frac{1}{2}R_{\delta\epsilon\beta}{}^{\alpha}\theta^{\delta}\wedge\theta^{\epsilon} \tag{14}\] are the components of the curvature 2-forms; they give the tetrad components of the Riemann-Christoffel tensor. ### Field Equations in Einstein-Cartan Theory Hehl et al. [4, 5] provide the field equations for the Einstein-Cartan theory of gravitation in the following format: \[R_{ij}-\frac{1}{2}Rg_{ij}=-Kt_{ij}, \tag{15}\] and \[Q_{ij}{}^{k}+\delta_{i}^{k}Q_{jl}{}^{l}-\delta_{j}^{k}Q_{il}{}^{l}=kS_{ij}{}^{k}, \tag{16}\] where \(t_{ij}\) is the energy-momentum tensor and \(S_{ij}{}^{k}\) is the spin angular momentum tensor. The spin angular momentum tensor \(S_{ij}{}^{k}\) was decomposed by Hehl et al. [4, 5] into the spin density tensor \(S_{ij}\) by \[S_{ij}{}^{k}=S_{ij}u^{k}, \tag{17}\] with Frankel's condition, that is, \[S_{ij}u^{j}=0. \tag{18}\] The spin density tensor \(S_{ij}\) comprises six distinct components and is anti-symmetric in nature. These components can be defined in terms of the tetrad as \[S_{0} = S_{13}=S_{ij}l^{i}m^{j},\] \[S_{1} = \frac{1}{2}(S_{12}-S_{34})=\frac{1}{2}S_{ij}(l^{i}n^{j}-m^{i}\bar{m}^{j}),\] \[S_{2} = S_{32}=S_{ij}m^{i}n^{j} \tag{19}\] and the other three are complex conjugates of the above three components.
Therefore, \(S_{ij}\) can be written in tetrad form as \[S_{ij}=2[(S_{1}+\bar{S}_{1})l_{[i}n_{j]}+(S_{1}-\bar{S}_{1})m_{[i}\bar{m}_{j]}+(\bar{S}_{2}l_{[i}m_{j]}+\bar{S}_{0}m_{[i}n_{j]})+C.C.], \tag{20}\] where \(C.C.\) denotes the complex conjugate of the preceding terms. The condition (18) gives \[S_{0}=S_{2},\ \ \ \ S_{1}=-\bar{S}_{1}. \tag{21}\] The equation (20) reduces to \[S_{ij}=2[2S_{1}m_{[i}\bar{m}_{j]}+\bar{S}_{0}(l_{[i}m_{j]}+m_{[i}n_{j]})+C.C.]. \tag{22}\] Katkar [9] transformed the field equations (16) into the tetrad form \[\pi_{1} = \tau_{1}=\lambda_{1}=\sigma_{1}=0,\] \[\rho_{1} = \mu_{1}=2\epsilon_{1}=2\gamma_{1}=-\sqrt{2}kS_{1},\] \[\bar{\nu}_{1} = \kappa_{1}=2\bar{\alpha}_{1}=2\beta_{1}=-\sqrt{2}kS_{0}. \tag{23}\] We suppose that the Einstein-Cartan spacetime contains a Weyssenhoff fluid with anisotropic matter, with energy-momentum tensor \[t_{ij}=(\rho+p_{t})u_{i}u_{j}-p_{t}g_{ij}+(p_{r}-p_{t})v_{i}v_{j}-S_{hi,k}u^{k}u^{h}u_{j}. \tag{24}\] The \(u_{i}\) and \(v_{i}\) can be chosen as \[u_{i}=\frac{1}{\sqrt{2}}(l_{i}+n_{i})\ \ \ \ \text{and}\ \ \ \ \ v_{i}=\frac{1}{\sqrt{2}}(l_{i}-n_{i}),\] and so equation (24) is written as \[t_{ij} = \frac{1}{2}(\rho+p_{t})(l_{i}l_{j}+l_{i}n_{j}+n_{i}l_{j}+n_{i}n_{j})-p_{t}(l_{i}n_{j}+n_{i}l_{j}-m_{i}\bar{m}_{j}-\bar{m}_{i}m_{j}) \tag{25}\] \[+\frac{1}{2}(p_{r}-p_{t})(l_{i}l_{j}-l_{i}n_{j}-n_{i}l_{j}+n_{i}n_{j})\] \[+\frac{1}{2\sqrt{2}}[\bar{S}_{0}\{(\bar{\epsilon}+\bar{\nu}-\kappa-\tau)+C.C\}(l_{i}l_{j}+l_{i}n_{j}-n_{i}l_{j}-n_{i}n_{j})\] \[+\{2S_{1}(\pi+\nu-\bar{\kappa}-\bar{\tau})-2\bar{S}_{0}(\epsilon+\bar{\epsilon}+\gamma+\bar{\gamma})\}(m_{i}l_{j}+m_{i}n_{j})+C.C].\] ## 3 Morris-Thorne Wormhole Using Differential Forms Wormhole solutions must obey Einstein's field equations and have a throat connecting two asymptotically flat regions of the universe. The metric for the Morris-Thorne wormhole [23, 24] is given by \[ds^{2}=e^{2\Phi(r)}dt^{2}-\frac{dr^{2}}{\left(1-\frac{b(r)}{r}\right)}-r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}, \tag{26}\] where \(\Phi(r)\) is the red-shift function and \(b(r)\) is the shape function. There should be no event horizon in the traversable wormhole, and the effect of tidal gravity forces on the traveler should be negligible. The shape function has to satisfy the following conditions in order for wormhole solutions to exist:

* \(b(r_{0})=r_{0}\)
* For all \(r>r_{0}\), \(\frac{b(r)-b^{\prime}(r)r}{b^{\prime}(r)}>0\); this is called the flare-out condition.
* \(b^{\prime}(r)<1\)
* \(\frac{b(r)}{r}<1\) for \(r>r_{0}\)
* \(\frac{b(r)}{r}\rightarrow 0\) as \(r\rightarrow\infty\).

Note that the above requirements signify that \(b(r)\leq r\) and \(b^{\prime}(r)\leq 1\) for all \(r\geq r_{0}\), with equality holding at the throat. We choose a set of four basis 1-forms \(\theta^{\alpha}\) as \[\theta^{1} = \frac{1}{\sqrt{2}}\left[e^{\Phi(r)}dt-\left(1-\frac{b(r)}{r}\right)^{-\frac{1}{2}}dr\right],\] \[\theta^{2} = \frac{1}{\sqrt{2}}\left[e^{\Phi(r)}dt+\left(1-\frac{b(r)}{r}\right)^{-\frac{1}{2}}dr\right],\] \[\theta^{3} = \frac{r}{\sqrt{2}}(d\theta-i\sin\theta d\phi),\] \[\theta^{4} = \frac{r}{\sqrt{2}}(d\theta+i\sin\theta d\phi). \tag{27}\] Consequently, the metric (26) becomes \[ds^{2}=2\theta^{1}\theta^{2}-2\theta^{3}\theta^{4}.
\tag{28}\] The non-vanishing covariant components of metric tensor for the metric (28) denoted by \(\eta_{(\alpha)(\beta)}\) are given by \[\eta_{12}=1=\eta_{21},\ \ \ \ \eta_{34}=-1=\eta_{43} \tag{29}\] By equations (4) and (27), the null tetrad vectors can be chosen as \[l_{i} = \frac{1}{\sqrt{2}}\left[e^{\Phi}(r),\left(1-\frac{b(r)}{r}\right)^{- \frac{1}{2}},0,0\right],\] \[n_{i} = \frac{1}{\sqrt{2}}\left[e^{\Phi}(r),-\left(1-\frac{b(r)}{r}\right) ^{-\frac{1}{2}},0,0\right],\] \[m_{i} = \frac{1}{\sqrt{2}}(0,0,-r,-ir\sin\theta),\] \[\bar{m}_{i} = \frac{1}{\sqrt{2}}(0,0,-r,ir\sin\theta). \tag{30}\] Using (23) the exterior derivatives of the basis 1-forms are, \[d\theta^{1} = \frac{-\Phi^{\prime}(r)}{\sqrt{2}}\left(1-\frac{b(r)}{r}\right)^{ \frac{1}{2}}\theta^{1}\wedge\theta^{2}-\sqrt{2}kS_{0}(\theta^{1}\wedge\theta^ {3}-\theta^{2}\wedge\theta^{3})\] \[-\sqrt{2}kS_{0}(\theta^{1}\wedge\theta^{4}-\theta^{2}\wedge \theta^{4})+2\sqrt{2}kS_{1}\theta^{3}\wedge\theta^{4},\] \[= d\theta^{2},\] \[d\theta^{3} = -\frac{1}{\sqrt{2}r}\left(1-\frac{b(r)}{r}\right)^{\frac{1}{2}} \theta^{1}\wedge\theta^{3}+\frac{1}{\sqrt{2}r}\left(1-\frac{b(r)}{r}\right)^{ \frac{1}{2}}\theta^{2}\wedge\theta^{3}-\frac{\cot\theta}{\sqrt{2}r}\theta^{3} \wedge\theta^{4},\] \[d\theta^{4} = -\frac{1}{\sqrt{2}r}\left(1-\frac{b(r)}{r}\right)^{\frac{1}{2}} \theta^{1}\wedge\theta^{4}+\frac{1}{\sqrt{2}r}\left(1-\frac{b(r)}{r}\right)^{ \frac{1}{2}}\theta^{2}\wedge\theta^{4}-\frac{\cot\theta}{\sqrt{2}r}\theta^{4} \wedge\theta^{3}.\] Comparing above equation with Cartan's first equation of structure (9) the non-vanishing Ricci rotation coefficients are, \[\rho = \mu=\frac{1}{\sqrt{2}r}\left(1-\frac{b(r)}{r}\right)^{\frac{1}{2}},\] \[\alpha = -\beta=\frac{-\cot\theta}{2\sqrt{2}r},\] \[\epsilon = \gamma=-\frac{\Phi^{\prime}(r)}{2\sqrt{2}}\left(1-\frac{b(r)}{r} \right)^{\frac{1}{2}}. \tag{32}\] The tetrad components of connection 1-forms are easily obtained by substituting these values in equations (12) to get \[\omega_{12} = \frac{\Phi^{\prime}(r)}{\sqrt{2}}\left(1-\frac{b(r)}{r}\right)^{ \frac{1}{2}}(\theta^{1}+\theta^{2})+\sqrt{2}kS_{0}\theta^{3}+\sqrt{2}kS_{0} \theta^{4},\] \[\omega_{13} = \sqrt{2}kS_{0}\theta^{1}+\left[-\frac{1}{\sqrt{2}r}\left(1-\frac{ b(r)}{r}\right)^{\frac{1}{2}}+\sqrt{2}kS_{1}\right]\theta^{4},\] \[\omega_{23} = -\sqrt{2}kS_{0}\theta^{2}+\left[\frac{1}{\sqrt{2}r}\left(1-\frac {b(r)}{r}\right)^{\frac{1}{2}}+\sqrt{2}kS_{1}\right]\theta^{4},\] \[\omega_{34} = -\sqrt{2}kS_{1}(\theta^{1}+\theta^{2})+\frac{\cot\theta}{\sqrt{2 }r}(\theta^{3}-\theta^{4}). 
\tag{33}\] Using Cartan's second equation of structure (13) the tetrad form of curvature 2-forms are obtained as, \[\Omega_{12}= -\left[\Phi^{\prime\prime}(r)\left(1-\frac{b(r)}{r}\right)+\frac {\Phi^{\prime}(r)}{2}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+ \Phi^{\prime}(r)^{2}\left(1-\frac{b(r)}{r}\right)+4k^{2}S_{0}S_{0}\right]\theta ^{1}\wedge\theta^{2}\] \[+\left[2k^{2}S_{0}S_{1}-kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1 /2}\right]\theta^{1}\wedge\theta^{3}+\left[-2k^{2}S_{0}S_{1}-k\bar{S}_{0,r} \left(1-\frac{b(r)}{r}\right)^{1/2}\right]\theta^{1}\wedge\theta^{4}\] \[+\left[2k^{2}S_{0}S_{1}+kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1 /2}\right]\theta^{2}\wedge\theta^{3}+\left[-2k^{2}\bar{S}_{0}S_{1}+k\bar{S}_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2}\right]\theta^{2}\wedge\theta^{4}\] \[+\left[\frac{k}{r}(\bar{S}_{0}-S_{0})\cot\theta-4kS_{1}\left(1- \frac{b(r)}{r}\right)^{1/2}\right]\theta^{3}\wedge\theta^{4},\] \[\Omega_{13}= -\left[2kS_{0}\Phi^{\prime}(r)\left(1-\frac{b(r)}{r}\right)^{1 /2}+kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{2}S_{0}S_{1}\right]\theta ^{1}\wedge\theta^{2}\] \[+\left[\frac{kS_{0}}{r}\cot\theta-2k^{2}S_{0}^{2}\right]\theta^{ 1}\wedge\theta^{3}+\left[\frac{1}{4r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime }(r)}{r}\right)-kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}\right.\] \[\left.+kS_{1}\Phi^{\prime}(r)\left(1-\frac{b(r)}{r}\right)^{1/2} -\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{b(r)}{r}\right)-\frac{kS_{0}}{r} \cot\theta-2k^{2}S_{1}^{2}-2k^{2}S_{0}\bar{S}_{0}\right]\theta^{1}\wedge \theta^{4}\] \[+\left[\frac{-1}{4r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)} {r}\right)+kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}kS_{1}\Phi^{\prime}(r) \left(1-\frac{b(r)}{r}\right)^{1/2}-\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{ b(r)}{r}\right)\right.\] \[\left.+\frac{2kS_{1}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}-2k^{2 }S_{1}^{2}\right]\theta^{2}\wedge\theta^{4}+\left[\frac{-kS_{0}}{r}\left(1- \frac{b(r)}{r}\right)^{1/2}+2k^{2}S_{0}S_{1}\right]\theta^{3}\wedge\theta^{4},\] \[\Omega_{23}= -\left[2k^{2}S_{0}S_{1}-kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2}- 2kS_{0}\Phi^{\prime}(r)\left(1-\frac{b(r)}{r}\right)^{1/2}\right]\theta^{1} \wedge\theta^{2}\] \[-\left[kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+\frac{1}{4r} \left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+kS_{1}\Phi^{\prime}(r) \left(1-\frac{b(r)}{r}\right)^{1/2}+\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{b( r)}{r}\right)\right.\] \[+\left.\frac{2kS_{1}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{ 2}S_{1}^{2}\right]\theta^{1}\wedge\theta^{4}-\left[\frac{kS_{0}}{r}\cot\theta +2k^{2}S_{0}^{2}\right]\theta^{2}\wedge\theta^{3}\] \[+\left[kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+\frac{1}{4r} \left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)-\frac{\Phi^{\prime}(r )}{2r}\left(1-\frac{b(r)}{r}\right)-kS_{1}\Phi^{\prime}(r)\left(1-\frac{b(r)} {r}\right)^{1/2}\right.\] \[+\left.\frac{kS_{0}}{r}\cot\theta-\frac{2kS_{1}}{r}\left(1-\frac{ b(r)}{r}\right)^{1/2}-2k^{2}S_{1}^{2}-2k^{2}S_{0}S_{0}\right]\theta^{2} \wedge\theta^{4}\] \[-\left[\frac{kS_{0}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{2} S_{0}S_{1}\right]\theta^{3}\wedge\theta^{4},\] \[\Omega_{34} =\left[2kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2kS_{1}\Phi^ {\prime}(r)\left(1-\frac{b(r)}{r}\right)^{1/2}\right]\theta^{1}\wedge\theta^{2}\] \[+\left[\frac{2kS_{1}}{r}\cot\theta-\frac{kS_{0}}{r}\left(1-\frac{ b(r)}{r}\right)^{1/2}+2k^{2}S_{0}S_{1}\right]\theta^{1}\wedge\theta^{3}\] \[+\left[\frac{2kS_{1}}{r}\cot\theta-\frac{kS_{0}}{r}\left(1-\frac{ 
b(r)}{r}\right)^{1/2}-2k^{2}S_{0}S_{1}\right]\theta^{2}\wedge\theta^{3}\] \[+\left[\frac{2kS_{1}}{r}\cot\theta+\frac{k\bar{S_{0}}}{r}\left(1 -\frac{b(r)}{r}\right)^{1/2}-2k^{2}\bar{S_{0}}S_{1}\right]\theta^{2}\wedge \theta^{4}+\left[\frac{b(r)}{r^{3}}-4k^{2}S_{1}^{2}\right]\theta^{3}\wedge \theta^{4}. \tag{34}\] The relation of curvature 2-forms and Riemann-Christoffel curvature tensor given in equation (14) and hence by equations (14) and (34) the independent tetrad component of Riemann tensor are obtained as, \[R_{1212} = -\left[\Phi^{\prime\prime}(r)\left(1-\frac{b(r)}{r}\right)+\frac{ \Phi^{\prime}(r)}{2}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+ \Phi^{\prime}(r)^{2}\left(1-\frac{b(r)}{r}\right)+4k^{2}S_{0}S_{0}\right],\] \[R_{1312} = \left[2k^{2}S_{0}S_{1}-kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2 }\right],\] \[R_{2312} = \left[2k^{2}S_{0}S_{1}+kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2 }\right],\] \[R_{3412} = \left[\frac{k}{r}(\bar{S}_{0}-S_{0})\cot\theta-\frac{4kS_{1}}{r} \left(1-\frac{b(r)}{r}\right)^{1/2}\right],\] \[R_{1213} = \left[-2kS_{0}\Phi^{\prime}\left(1-\frac{b(r)}{r}\right)^{1/2}- kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2}-2k^{2}S_{0}S_{1}\right],\] \[R_{1313} = \left[\frac{kS_{0}}{r}\cot\theta-2k^{2}S_{0}^{2}\right],\] \[R_{1413} = \left[\frac{1}{4r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{ r}\right)-kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+kS_{1}\Phi^{\prime}(r) \left(1-\frac{b(r)}{r}\right)^{1/2}\right.\] \[\left.-\left.\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{b(r)}{r} \right)-\frac{kS_{0}}{r}\cot\theta-2k^{2}S_{1}^{2}-2k^{2}S_{0}S_{0}\right],\] \[R_{2313} = 0,\] \[R_{2413} = \left[\frac{-1}{4r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r} \right)+kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+kS_{1}\Phi^{\prime}\left(1- \frac{b(r)}{r}\right)^{1/2}\right.\] \[\left.-\left.\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{b(r)}{r} \right)+\frac{2kS_{1}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}-2k^{2}S_{1}^{2} \right],\] \[R_{3413} = \left[\frac{-kS_{0}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{2} S_{0}S_{1}\right],\] \[R_{1223} = \left[kS_{0,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2kS_{0}\Phi^{ \prime}(r)\left(1-\frac{b(r)}{r}\right)^{1/2}-2k^{2}S_{0}S_{1}\right],\] \[R_{1323} = 0,\] \[R_{1423} = -\left[kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+\frac{1}{4r} \left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+kS_{1}\Phi^{\prime}(r )\left(1-\frac{b(r)}{r}\right)^{1/2}\right.\] \[\left.+\left.\frac{\Phi^{\prime}(r)}{2r}\left(1-\frac{b(r)}{r} \right)+\frac{2kS_{1}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{2}S_{1}^{2} \right],\] \[R_{2323} = -\left[\frac{kS_{0}}{r}\cot\theta+2k^{2}S_{0}^{2}\right],\] \[R_{2423} = \left[kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+\frac{1}{4r} \left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)-\frac{\Phi^{\prime}(r )}{2r}\left(1-\frac{b(r)}{r}\right)\right.\] \[\left.-\left.kS_{1}\Phi^{\prime}\left(1-\frac{b(r)}{r}\right)^{1/ 2}+\frac{kS_{0}}{r}\cot\theta-\frac{2kS_{1}}{r}\left(1-\frac{b(r)}{r}\right)^ {1/2}-2k^{2}S_{1}^{2}-2k^{2}S_{0}\bar{S_{0}}\right],\] \[R_{3423} = -\left[\frac{kS_{0}}{r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2k^{2} S_{0}S_{1}\right],\] \[R_{1234} = \left[2kS_{1,r}\left(1-\frac{b(r)}{r}\right)^{1/2}+2kS_{1}\Phi^{ \prime}\left(1-\frac{b(r)}{r}\right)^{1/2}\right],\] \[R_{1334} = \left[\frac{2kS_{1}}{r}\cot\theta-\frac{kS_{0}}{r}\left(1-\frac{ b(r)}{r}\right)^{1/2}+2k^{2}S_{0}S_{1}\right],\] \[R_{1434} = \left[\frac{2kS_{1}}{r}\cot\theta+\frac{kS_{0}}{r}\left(1-\frac{ 
b(r)}{r}\right)^{1/2}+2k^{2}\bar{S_{0}}S_{1}\right],\] \[R_{2334} = \left[\frac{2kS_{1}}{r}\cot\theta-\frac{kS_{0}}{r}\left(1-\frac{ b(r)}{r}\right)^{1/2}-2k^{2}S_{0}S_{1}\right],\] \[R_{2434} = \left[\frac{2kS_{1}}{r}\cot\theta+\frac{k\bar{S_{0}}}{r}\left(1- \frac{b(r)}{r}\right)^{1/2}-2k^{2}\bar{S_{0}}S_{1}\right],\] \[R_{3434} = \left[\frac{b(r)}{r^{3}}-4k^{2}S_{1}^{2}\right] \tag{35}\] The tetrad components of Ricci tensor and Ricci scalar are expressed as, \[R_{11} =\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right) -\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)-\frac{k}{r}\cot\theta( S_{0}+\bar{S}_{0})-4k^{2}S_{1}^{2}-4k^{2}S_{0}\bar{S}_{0},\] \[R_{12} =-\left[\left(\Phi^{\prime\prime}(r)+\Phi^{\prime}(r)^{2}+\frac{ \Phi^{\prime}(r)}{r}\right)\left(1-\frac{b(r)}{r}\right)+\left(\frac{\Phi^{ \prime}(r)}{2}+\frac{1}{2r}\right)\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r) }{r}\right)+4k^{2}S_{1}^{2}+4k^{2}S_{0}\bar{S}_{0}\right]\] \[=R_{21},\] \[R_{13} =\frac{2kS_{1}}{r}\cot\theta-\left[\frac{kS_{0}}{r}+kS_{0,r}+2kS _{0}\Phi^{\prime}(r)\right]\left(1-\frac{b(r)}{r}\right)^{1/2},\] \[R_{22} =\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r} \right)-\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)+\frac{k}{r} \cot\theta(S_{0}+\bar{S}_{0})-4k^{2}S_{1}^{2}-4k^{2}S_{0}\bar{S}_{0},\] \[R_{23} =\frac{2kS_{1}}{r}\cot\theta-k\left[2\Phi^{\prime}(r)S_{0}+\frac{ S_{0}}{r}+S_{0,r}\right]\left(1-\frac{b(r)}{r}\right)^{1/2},\] \[R_{31} =4k^{2}S_{0}S_{1}-\left[\frac{kS_{0}}{r}+kS_{0,r}\right]\left(1- \frac{b(r)}{r}\right)^{1/2},\] \[R_{32} =-\left[\left(kS_{0,r}+\frac{kS_{0}}{r}\right)\left(1-\frac{b(r)} {r}\right)^{1/2}+4k^{2}S_{0}S_{1}\right],\] \[R_{33} =0,\] \[R_{34} =\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r} \right)+\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)+\frac{b(r)}{r ^{3}}, \tag{36}\] and \[R =-2\left[\Phi^{\prime\prime}(r)+\Phi^{\prime}(r)^{2}+\frac{2\Phi^ {\prime}(r)}{r}\right]\left(1-\frac{b(r)}{r}\right)-\left(\Phi^{\prime}(r)+ \frac{2}{r}\right)\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)\] \[\quad-\frac{2b(r)}{r^{3}}-8k^{2}S_{0}\bar{S}_{0}-8k^{2}S_{1}^{2}. \tag{37}\] In the tetrad form Energy momentum tensor can be obtained by, \[t_{\alpha\beta}=t_{ij}e^{i}{}_{(\alpha)}e^{j}{}_{(\beta)}\] Hence by equation (25) we get, \[t_{11}=t_{22}=\frac{1}{2}(\rho+p_{r}),\ \ \ \ t_{12}=t_{21}= \frac{1}{2}(\rho-p_{r}),\] \[t_{34}=t_{43}=p_{t},\ \ \ \ t_{13}=t_{23}=t_{33}=0,\] \[t_{31}=t_{32}=-S_{0}\Phi^{\prime}(r)\left(1-\frac{b(r)}{r} \right)^{1/2}. 
\tag{38}\] ### Field Equations Field equations are expressed as, \[R_{ij}-\frac{1}{2}\eta_{ij}R=-kt_{ij},\] \[\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right) -\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)-\frac{k}{r}\cot\theta( S_{0}+\bar{S_{0}})-4k^{2}S_{1}^{2}-4k^{2}S_{0}\bar{S_{0}}=\frac{-k}{2}(\rho+p_{r}),\] \[\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)+\frac{1}{ 2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+\frac{b(r)}{r^{3}}= \frac{-k}{2}(\rho-p_{r}),\] \[\frac{2kS_{1}}{r}\cot\theta-\left[\frac{kS_{0}}{r}+kS_{0,r}+kS_{ 0}\Phi^{\prime}(r)\right]\left(1-\frac{b(r)}{r}\right)^{1/2}=0,\] \[\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r} \right)-\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)+\frac{k}{r} \cot\theta(S_{0}+S_{0})-4k^{2}S_{1}^{2}-4k^{2}S_{0}\bar{S_{0}}=\frac{-k}{2}( \rho+p_{r}),\] \[\frac{2kS_{1}}{r}\cot\theta-\left[\frac{kS_{0}}{r}+kS_{0,r}+2kS_{ 0}\Phi^{\prime}(r)\right]\left(1-\frac{b(r)}{r}\right)^{1/2}=0,\] \[4k^{2}S_{0}S_{1}-\left[kS_{0}\left(\frac{1}{r}+1\right)+kS_{0,r} \right]\left(1-\frac{b(r)}{r}\right)^{1/2}=0,\] \[4k^{2}S_{0}S_{1}+\left[kS_{0}\Phi^{\prime}(r)+S_{0,r}\right] \left(1-\frac{b(r)}{r}\right)^{1/2}=0,\] \[\left(\frac{\Phi^{\prime}(r)}{2}+\frac{1}{2r}\right)\left(\frac{ b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+\left(\Phi^{\prime\prime}(r)+\Phi^{ \prime}(r)^{2}+\frac{\Phi^{\prime}(r)}{r}\right)\left(1-\frac{b(r)}{r}\right)\] \[+4k^{2}S_{0}\bar{S_{0}}+4k^{2}S_{1}^{2}=kp_{t}. \tag{39}\] In equations (39) one can see that they are consistent only if \(S_{0}=0\) and so they reduce to \[\frac{1}{2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right) -\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)-4k^{2}S_{1}^{2}=\frac {-k}{2}(\rho+p_{r}),\] \[\frac{\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{r}\right)+\frac{1}{ 2r}\left(\frac{b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+\frac{b(r)}{r^{3}} =\frac{-k}{2}(\rho-p_{r}),\] \[\left(\frac{\Phi^{\prime}(r)}{2}+\frac{1}{2r}\right)\left(\frac{ b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+\left(\Phi^{\prime\prime}(r)+\Phi^{ \prime}(r)^{2}+\frac{\Phi^{\prime}(r)}{r}\right)\left(1-\frac{b(r)}{r}\right)\] \[+4k^{2}S_{1}^{2}=kp_{t}. \tag{40}\] By equations (40) density and pressures are written as, \[k\rho =4k^{2}S_{1}^{2}-\frac{1}{r}\left(\frac{b(r)}{r^{2}}-\frac{b^{ \prime}(r)}{r}\right)-\frac{b(r)}{r^{3}}, \tag{41}\] \[kp_{r} =4k^{2}S_{1}^{2}+\frac{2\Phi^{\prime}(r)}{r}\left(1-\frac{b(r)}{ r}\right)+\frac{b(r)}{r^{3}},\] (42) \[kp_{t} =\left(\frac{\Phi^{\prime}(r)}{2}+\frac{1}{2r}\right)\left(\frac{ b(r)}{r^{2}}-\frac{b^{\prime}(r)}{r}\right)+\left(\Phi^{\prime\prime}(r)+\Phi^{ \prime}(r)^{2}+\frac{\Phi^{\prime}(r)}{r}\right)\left(1-\frac{b(r)}{r}\right)\] \[\quad+4k^{2}S_{1}^{2}. \tag{43}\] However, in the above expressions there is a term of spin density \(S_{1}\) for that we used the law of conservation of energy-momentum tensor from which we obtained the equation, \[\Phi^{\prime}(r)(\rho+p_{r})+p_{r}^{\prime}+\frac{2}{r}(p_{r}-p_{t})-4[2\Phi^{ \prime}(r)S_{1}^{2}+(S_{1}^{2})^{\prime}]=0 \tag{44}\] In equation (44) the first three terms correspond to the matter part and the last term corresponds to the spin density part. The conservation of the spin part can be written independently as, \[2\Phi^{\prime}(r)S_{1}^{2}+(S_{1}^{2})^{\prime}=0 \tag{45}\] The solution to the above equation is \[S_{1}^{2}=Ce^{-2\Phi(r)} \tag{46}\] where \(C\) is a positive integration constant. 
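As a quick consistency check (our addition, not part of the original derivation), the following SymPy fragment verifies that the solution (46) satisfies the spin conservation equation (45) for an arbitrary red-shift function; the symbol names are ours.

```python
# Verify that S1^2 = C*exp(-2*Phi(r)) satisfies 2*Phi'*S1^2 + (S1^2)' = 0,
# i.e. equation (45), for an arbitrary red-shift function Phi(r).
import sympy as sp

r, C = sp.symbols('r C', positive=True)
Phi = sp.Function('Phi')(r)
S1sq = C * sp.exp(-2 * Phi)

lhs = 2 * sp.diff(Phi, r) * S1sq + sp.diff(S1sq, r)
print(sp.simplify(lhs))  # prints 0, so (46) solves (45)
```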
## 4 Physical analysis

An appropriate shape function \(b(r)\) and red-shift function \(\Phi(r)\) are needed to investigate the behavior of the matter distribution supporting the wormhole geometry. In the present investigation, we employed a particular choice of shape function (SF) and red-shift function to perform the physical analysis. The shape function is expressed as \(b(r)=re^{1-\frac{r}{r_{0}}}\) [19]. The variable \(r\) ranges from \(r_{0}\) to infinity, where \(r_{0}\) represents the minimum radius of the wormhole's throat. Employing this particular choice of SF, we investigate different cases. The features of the shape function and the trend of the red-shift function are shown in Fig. 1.

Figure 1: **Left Panel (a)** Profile of the shape function \(b(r)\) and the red-shift function (\(\Phi(r)=\frac{r_{0}}{r^{n}}\), with \(r_{0}=1\), \(n=3\)); **Right Panel (b)** The flare-out condition against the radial coordinate. The vertical grey band represents the throat radius region, \(r_{0}\leq 1\).

It is evident that the SF fairly meets the requirements for a wormhole, as it can be observed that \(b(r)<1\), \(b^{\prime}(r)<1\), and \(\frac{b(r)}{r}\to 0\) as \(r\rightarrow\infty\) hold. However, in Fig. 1a it can be seen that the \(b^{\prime}(r)<1\) criterion for this particular choice of shape function does not satisfy the geometrical requirement of a wormhole. Finally, it can be seen that all \(r\) satisfy the flare-out criterion in Fig. 1b. We therefore conclude that the Einstein-Cartan theory and the models at hand meet the essential criteria to characterize a traversable wormhole. The consequences of the Einstein-Cartan theory on the hydrodynamic balance at the throat and its surroundings, for the Morris-Thorne model with a non-zero red-shift function, are addressed in the next part.

### Energy Conditions

Energy conditions are critical for studying the behavior of matter inside the wormhole. In GR, the fundamental energy requirements are as follows:

* Null Energy Condition (NEC): Both \(\rho+p_{r}\) and \(\rho+p_{t}\) are non-negative. It depicts gravity's attractive nature.
* Weak Energy Condition (WEC): Besides the NEC requirements \(\rho+p_{r}\geq 0\) and \(\rho+p_{t}\geq 0\), the energy density itself must be non-negative. Thus, WEC demands non-negative energy density besides NEC.
* Strong Energy Condition (SEC): Requires \(\rho+p_{r}\geq 0\), \(\rho+p_{t}\geq 0\), and \(\rho+p_{r}+p_{t}\geq 0\). It originates from the spherically symmetric metric and the attractive character of gravity.
* Dominant Energy Condition (DEC): Both \(\rho-|p_{r}|\) and \(\rho-|p_{t}|\) have to be non-negative, in addition to a non-negative energy density; it encodes the requirement that energy cannot flow faster than the speed of light.

Now, we examine the above energy conditions for the particular choice of shape function and red-shift function for the wormhole.

_Case:_ \(b(r)=re^{1-\frac{r}{r_{0}}}\), \(\Phi(r)=\frac{r_{0}}{r^{n}}\)

We begin with the red-shift function \(\Phi(r)=\frac{r_{0}}{r^{n}}\) and the shape function \(b(r)=re^{1-\frac{r}{r_{0}}}\), where \(n\) is an arbitrary positive real number. The analysis of this kind of shape function for a constant red-shift can be found in [19]. In this instance, in order to solve the field equations (41)-(43), we took into account the exponential form of the shape function.
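The substitution of this choice into (41)-(43) can be sketched symbolically as below. This fragment is our own illustration: the symbol names are ours, and the choice of coupling \(k=8\pi\) is an assumption on our part, consistent with the \(1/8\pi\) prefactors in the expressions that follow.

```python
# Sketch: substitute b(r) = r*exp(1 - r/r0), Phi(r) = r0/r**n and
# S1^2 = C*exp(-2*Phi) from (46) into the general expressions (41)-(43).
# Assumption: the gravitational coupling k equals 8*pi.
import sympy as sp

r, r0, n, C = sp.symbols('r r_0 n C', positive=True)
kappa = 8 * sp.pi

b = r * sp.exp(1 - r / r0)
Phi = r0 / r**n
S1sq = C * sp.exp(-2 * Phi)

one_minus = 1 - b / r                    # the recurring factor (1 - b(r)/r)
shape = b / r**2 - sp.diff(b, r) / r     # the recurring factor (b/r^2 - b'/r)

rho = (4 * kappa**2 * S1sq - shape / r - b / r**3) / kappa                 # from (41)
p_r = (4 * kappa**2 * S1sq + 2 * sp.diff(Phi, r) * one_minus / r
       + b / r**3) / kappa                                                 # from (42)
p_t = ((sp.diff(Phi, r) / 2 + 1 / (2 * r)) * shape
       + (sp.diff(Phi, r, 2) + sp.diff(Phi, r)**2 + sp.diff(Phi, r) / r) * one_minus
       + 4 * kappa**2 * S1sq) / kappa                                      # from (43)

print(sp.simplify(rho), sp.simplify(p_r), sp.simplify(p_t), sep='\n')
```

The printed expressions can be compared term by term with the closed forms quoted next.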
Following the solution of these equations, the terms for the energy density, radial pressure, and tangential pressure are as follows:

\[\rho = \frac{1}{8\pi}\left[256\pi^{2}Ce^{-2\frac{r_{0}}{r^{n}}}-\frac{1}{rr_{0}}e^{1-\frac{r}{r_{0}}}-\frac{1}{r^{2}}e^{1-\frac{r}{r_{0}}}\right] \tag{47}\]
\[p_{r} = \frac{1}{8\pi}\left[256\pi^{2}Ce^{-2\frac{r_{0}}{r^{n}}}-\frac{2nr_{0}}{r^{n+2}}(1-e^{1-\frac{r}{r_{0}}})+\frac{1}{r^{2}}e^{1-\frac{r}{r_{0}}}\right] \tag{48}\]
\[p_{t} = \frac{1}{8\pi}\left[256\pi^{2}Ce^{-2\frac{r_{0}}{r^{n}}}+\left(\frac{1}{2rr_{0}}-\frac{nr_{0}}{2r_{0}r^{n+1}}\right)e^{1-\frac{r}{r_{0}}}\right.\left.+\left(\frac{n(n+1)r_{0}}{r^{n+2}}+\frac{n^{2}r_{0}^{2}}{r^{2(n+1)}}-\frac{nr_{0}}{r^{n+2}}\right)(1-e^{1-\frac{r}{r_{0}}})\right] \tag{49}\]

The pressure anisotropy is denoted by \(\Delta\) and is given by \[\Delta=p_{t}-p_{r} \tag{50}\]

We show the energy density and pressure profiles for this SF in Fig. 2. It can be observed that \(\rho\), \(p_{r}\), \(p_{t}\), and \(\Delta\) all display a positive magnitude right above the throat. In this study, we incorporate an anisotropic configuration, which offers the plausibility of such exotic matter. Moreover, far from the throat the anisotropy saturates, i.e., \(\Delta\to 0\) at large distances. As can also be seen, \(\rho\), \(p_{r}\) and \(p_{t}\) are finite for all \(r\in[r_{0},\infty)\). The results of Fig. 2 show that \(\rho\) and \(p_{t}\) yield the same sort of curve.

Figure 2: Variation of \(\rho\), \(p_{r}\), \(p_{t}\) and \(\Delta\) with radial coordinate \(r\) for the particular choice of SF with \(n=3\), \(r_{0}=1\) and \(C=0.9\). The vertical grey band represents the throat radius region, \(r_{0}\leq 1\).

In Fig. 3, we present the energy condition profile. It is important to note that all the energy conditions are strictly positive throughout the entire spacetime.

Figure 3: Variation of the energy conditions NEC, SEC and DEC against radial coordinate \(r\) with \(n=3\), \(r_{0}=1\) and \(C=0.9\). The vertical grey band represents the throat radius region, \(r_{0}\leq 1\).

## 5 Stability analysis

In this section, we investigate the stability of the wormhole within the framework of the Einstein-Cartan theory. By incorporating the spin of particles into the theory, we aim to explore how this additional geometric quantity affects the stability of the wormhole. To analyze the stability, we utilize an equilibrium condition derived from the Tolman-Oppenheimer-Volkov (TOV) equation [25, 26]. This equation provides insights into the equilibrium state of a self-gravitating system, enabling us to assess the stability of the wormhole under consideration. The four terms in Eq. (44), defined as follows, are used to calculate the equilibrium state of the structure:

1. The gravitational force \[F_{g}=\Phi^{\prime}(\rho+p_{r}),\] (51)
2. The hydrostatic force \[F_{h}=\frac{dp_{r}}{dr},\] (52)
3. The anisotropy force \[F_{a}=\frac{2(p_{r}-p_{t})}{r},\] (53)
4. The spin force \[F_{s}=-4[2\Phi^{\prime}S_{1}^{2}+(S_{1}^{2})^{\prime}],\] (54)

the last two arising due to the anisotropic pressure and the spin in a Morris-Thorne wormhole. Eq. (44) then yields the following equilibrium condition: \[F_{g}+F_{h}+F_{a}+F_{s}=0. \tag{55}\] From Fig. 4, we can visualize that the hydrostatic condition is satisfied at large distances, thereby concluding that the system is in static equilibrium. The spin force is almost negligible for the adopted shape function.
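The force balance (55) can also be tabulated numerically. The sketch below is our addition: it evaluates \(F_{g}\), \(F_{h}\), \(F_{a}\) and \(F_{s}\) for the parameter values \(n=3\), \(r_{0}=1\), \(C=0.9\) used in the figures, again assuming the coupling \(k=8\pi\). It only prints the four terms and their sum so the balance can be inspected, mirroring Fig. 4; it does not assert that the sum vanishes.

```python
# Numerically tabulate the TOV terms (51)-(54) for n = 3, r0 = 1, C = 0.9.
# Assumption: gravitational coupling k = 8*pi; rho, p_r, p_t built from (41)-(43).
import sympy as sp

r = sp.symbols('r', positive=True)
n, r0, C = 3, 1, sp.Rational(9, 10)
kappa = 8 * sp.pi

b = r * sp.exp(1 - r / r0)
Phi = r0 / r**n
S1sq = C * sp.exp(-2 * Phi)
one_minus = 1 - b / r
shape = b / r**2 - sp.diff(b, r) / r

rho = (4 * kappa**2 * S1sq - shape / r - b / r**3) / kappa
p_r = (4 * kappa**2 * S1sq + 2 * sp.diff(Phi, r) * one_minus / r + b / r**3) / kappa
p_t = ((sp.diff(Phi, r) / 2 + 1 / (2 * r)) * shape
       + (sp.diff(Phi, r, 2) + sp.diff(Phi, r)**2 + sp.diff(Phi, r) / r) * one_minus
       + 4 * kappa**2 * S1sq) / kappa

F_g = sp.diff(Phi, r) * (rho + p_r)                         # (51)
F_h = sp.diff(p_r, r)                                       # (52)
F_a = 2 * (p_r - p_t) / r                                   # (53)
F_s = -4 * (2 * sp.diff(Phi, r) * S1sq + sp.diff(S1sq, r))  # (54)

for rv in (1.5, 2, 3, 5, 10):
    terms = [float(f.subs(r, rv)) for f in (F_g, F_h, F_a, F_s)]
    print(rv, [round(t, 5) for t in terms], round(sum(terms), 5))
```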
## 6 Conclusion

With the help of Cartan's equations of structure and the Newman-Penrose-Jogia-Griffiths formalism, the tetrad form of various curvature tensors, such as the Riemann tensor and the Ricci tensor, has been computed. We have found a tetrad form of the energy-momentum tensor for the Weysenhoff fluid. Using the conservation law of the energy-momentum tensor, the spin density has been derived in terms of the red-shift function.

Figure 4: The hydrostatic balance of the structure under different forces.

The detailed analysis of the EC for different ranges of the throat radius is listed in Table 1. We studied a particular scenario in which the red-shift function is taken as \(\frac{r_{0}}{r^{n}}\) and the shape function is \(re^{1-\frac{r}{r_{0}}}\). In addition, geometric configurations and energy conditions are determined. From Table 1, we can see that DEC is violated everywhere. Within the framework of general relativity, traversable static wormholes are only possible with the use of exotic matter as the supporting matter for the wormhole geometry, and consequently the WEC is also violated. However, the existence of such wormholes is possible in ECT in the absence of exotic matter. The energy conditions WEC and NEC are valid for \(r>0.3\) and SEC is valid for \(r>0.1\), as mentioned in the above table and Fig. 3. For \(r\leq 0.3\) the energy conditions are violated and hence we get some exotic type of matter in that region. Thus, in our case the radius of the throat must be greater than \(0.3\) for an exotic-free wormhole. Our investigation into the stability of a wormhole within the framework of the Einstein-Cartan theory has provided valuable insights into the role of spin density in determining the equilibrium state of the wormhole structure. By utilizing the TOV equation, we were able to analyze the interplay of the various forces, namely the gravitational, hydrostatic, anisotropy, and spin forces acting on the wormhole. The stability analysis showed that the spin force has a negligible impact on the equilibrium of the wormhole for the particular choice of shape function. This finding suggests that, within the considered framework, the inclusion of spin density does not significantly affect the overall stability of the wormhole.

## Acknowledgments

SVS is thankful to the CSIR, India for providing financial support under CSIR Senior Research Fellowship (09/157(0059)/2021-EMR-I).
2310.04439
Fibonacci Cycles and Fixed Points
Let $S_b(n)$ denote the sum of the squares of the digits of the positive integer $n$ in base $b\geq2$. It is well-known that the sequence of iterates of $S_b(n)$ terminates in a fixed point or enters a cycle. Let $N=2n-1$, $n\geq2$. It is shown that if $b=F_{N+1}$, then a cycle of $S_b$ exists with initial term $F_{N}=F_{0}.F_{N}$, and terminal element $F_{n}.F_{n-1}$ if $n$ is even, or terminal element $F_{n-1}.F_{n}$ if $n$ is odd. Similarly, let $N=2n+1$, $n\geq1$. If $b=F_{N-1}$, then a cycle of $S_b$ exists with initial term $F_{N}=F_{2}.F_{N-2}$, and terminal element $F_{n}.F_{n+1}$ if $n$ is even, or terminal element $F_{n+1}.F_{n}$ if $n$ is odd. Furthermore, the cycles also admit extension as an arithmetic sequence of cycles of $S_b$ with base $b=F_{N+1}+F_{N+2}k$ and $b=F_{N-1}+F_{N-2}k$, respectively. Some fixed points of $S_b$ with $b$ a Fibonacci base are shown to exist. Lastly, both cycles and fixed points admit further generalization to Pell polynomials.
Walter A. Kehowski
2023-10-01T11:26:52Z
http://arxiv.org/abs/2310.04439v1
# Fibonacci cycles and fixed points ###### Abstract. Let \(S_{b}(n)\) denote the sum of the squares of the digits of the positive integer \(n\) in base \(b\geq 2\). It is well-known that the sequence of iterates of \(S_{b}(n)\) terminates in a fixed point or enters a cycle. Let \(N=2n-1\), \(n\geq 2\). It is shown that if \(b=F_{N+1}\), then a cycle of \(S_{b}\) exists with initial term \(F_{N}=F_{0}.F_{N}\), and terminal element \(F_{n}.F_{n-1}\) if \(n\) is even, or terminal element \(F_{n-1}.F_{n}\) if \(n\) is odd. Similarly, Let \(N=2n+1\), \(n\geq 1\). If \(b=F_{N-1}\), then a cycle of \(S_{b}\) exists with initial term \(F_{N}=F_{2}.F_{N-2}\), and terminal element \(F_{n}.F_{n+1}\) if \(n\) is even, or terminal element \(F_{n+1}.F_{n}\) if \(n\) is odd. Furthermore, the cycles also admit extension as an arithmetic sequence of cycles of \(S_{b}\) with base \(b=F_{N+1}+F_{N+2}k\) and \(b=F_{N-1}+F_{N-2}k\), respectively. Some fixed points of \(S_{b}\) with \(b\) a Fibonacci base are shown to exist. Lastly, both cycles and fixed points admit further generalization to Pell polynomials. ## 1. Introduction Let \(b\geq 2\) be any number base and let \(S_{b}(n)\) denote the sum of squares of the digits of the positive integer \(n\) in base \(b\). It is known that the iterates of \(S_{b}\) on any positive integer eventually enter a cycle or terminate in a fixed point. It is the purpose of this paper to demonstrate the existence of certain cycles and fixed points of \(S_{b}\), where \(b=F_{2n}\), and \(F_{n}\) refers to the Fibonacci sequence: \(F_{0}=0\), \(F_{1}=1\), \(F_{n}=F_{n-1}+F_{n-2}\), \(n\geq 2\). See Section 3. Digits in base \(b\) are separated by periods, for example, \(x.y|_{b}=xb+y\). The \(|_{b}\) will be omitted if the base \(b\) is understood. We will say "\(x\) is fixed in base \(b\)" to mean that \(x\) is a fixed point for \(S_{b}\), that is, \(S_{b}(x)=x\). Once an element of a cycle is designated as an initial element \(x\), the terminal element of the cycle will that element \(z\) such that \(S_{b}(z)=x\). ### Fundamental cycles **Example 1.1** (Fundamental cycle of type I).: _A cycle under \(S_{b}\) starting with \(F_{N}=0.F_{N}\) and \(b=F_{N+1}\), \(N=2n-1\), \(n\geq 2\), is called a fundamental cycle of type I. The elements of the cycle follow in general from identity (4.1), namely,_ \[F_{i}^{2}+F_{N-i}^{2}=F_{N-(2j+1)}F_{N+1}+F_{2j+1},\] _where \(j=\min(i,N-i)\) and \(0\leq i\leq n-1\). Furthermore, the cycle ends with \(F_{n}.F_{n-1}\) if \(n\) is even, and ends with \(F_{n-1}.F_{n}\) if \(n\) is odd. See Section 4. For example, suppose the initial term is \(F_{11}=89\), with \(b=F_{12}=144\). See Table 1. Observe that the indices of each element sum to \(N=11\). The terminal element \(F_{6}.F_{5}\) of the cycle follows from Lucas's identity (3.4)._ **Example 1.2** (Fundamental cycle of type II).: _A cycle under \(S_{b}\) starting with \(F_{N}\) and \(b=F_{N-1}\), \(N=2n+1\), \(n\geq 2\), is called a fundamental cycle of type II. The elements of the cycle follow in general from identity (5.1), namely,_ \[F_{i}^{2}+F_{N-i}^{2}=F_{N-(2j-1)}F_{N-1}+F_{2j-1},\] _where \(j=\min(i,N-i)\) and \(1\leq i\leq n+1\). Furthermore, the cycle ends with \(F_{n}.F_{n+1}|_{b}\) if \(n\) is even, and ends with \(F_{n+1}.F_{n}|_{b}\) if \(n\) is odd. See Section 5. For example, suppose the initial term is \(F_{13}=233\), with \(b=F_{12}=144\). 
A computation similar to that of Table 1 shows that the cycle is_ \[F_{13}=F_{2}.F_{11}|_{b},\ F_{10}.F_{3}|_{b},\ F_{8}.F_{5}|_{b},\ F_{4}.F_{9}|_{b},\ F_{6}.F_{7}|_{b}.\] _Observe that the indices of each term sum to \(N=13\). The terminal element \(F_{6}.F_{7}|_{b}\) of the cycle follows from Lucas's identity (3.4)._

### Fixed points

Fixed points occur in base \(b=F_{6n-2}\), with fixed point \(F_{2n}.F_{4n-1}|_{b}\), and in base \(b=F_{6n+2}\), with fixed point \(F_{2n}.F_{4n+1}|_{b}\). They are isolated fixed points, that is, they have no preimage under their respective \(S_{b}\). See Section 6.

### Arithmetic sequences of cycles

The cycles of type I admit an extension to an arithmetic sequence of cycles. Namely, if \(b=F_{N+1}+F_{N+2}k\), \(N=2n-1\), \(n\geq 2\), there exists a cycle with initial term \(F_{0}+F_{1}k.F_{N}+F_{N+1}k|_{b}\), and terminal element \(F_{n}+F_{n+1}k.F_{n-1}+F_{n}k|_{b}\) if \(n\) is even, and terminal element \(F_{n-1}+F_{n}k.F_{n}+F_{n+1}k|_{b}\) if \(n\) is odd. See Section 7.

The cycles of type II also admit an extension to an arithmetic sequence of cycles. Namely, if \(b=F_{N-1}+F_{N-2}k\), \(N=2n+1\), \(n\geq 2\), there exists a cycle with initial term \(F_{2}+F_{1}k.F_{N-2}+F_{N-3}k|_{b}\), and terminal element \(F_{n}+F_{n-1}k.F_{n+1}+F_{n}k|_{b}\) if \(n\) is even, and terminal element \(F_{n+1}+F_{n}k.F_{n}+F_{n-1}k|_{b}\) if \(n\) is odd. See Section 8.

### Pell cycles and fixed points

The results mentioned in Subsections 1.2 and 1.3 can be generalized to Pell polynomials. See Section 9.

**Theorem 1.3** (Theorem 9.9).: _Let \(n\) be a positive integer._

* _The polynomial_ \(p_{2n}(x).p_{4n-1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{6n-2}(x)\)_._
* _The polynomial_ \(p_{2n}(x).p_{4n+1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{6n+2}(x)\)_._
* _The polynomial_ \(p_{2n}(x)p_{2n-1}(x).p_{2n+1}(x)p_{2n-1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{4n}(x)\)_._

**Corollary 1.4** (Corollary 9.10).: _Assume \(n\geq 1\) and \(k\geq 0\). The polynomial \(p_{2n}(x)u.p_{2n+1}(x)u|_{b}\) is a fixed point in base \(b=p_{4n}(x)+p_{2n+1}(x)p_{4n+1}(x)k\), where \(u=p_{2n-1}(x)+p_{2n}(x)p_{2n+1}(x)k\)._

**Theorem 1.5** (Theorem 9.6).: _The iterates of \(S_{b}\), \(b=p_{2n}(x)\), \(n\geq 2\), on \(p_{0}(x).p_{2n-1}(x)|_{b}\) comprise a cycle with initial element \(p_{0}(x).p_{2n-1}(x)|_{b}\) and terminal element \(p_{n}(x).p_{n-1}(x)|_{b}\) if \(n\) is even, or terminal element \(p_{n-1}(x).p_{n}(x)|_{b}\) if \(n\) is odd._
Then the iterates of \(S_{b}\), \(b=p_{N-1}(x)+p_{N-2}(x)k\), yield via (9.11) a cycle with initial element_ \[p_{2}(x)+p_{1}(x)k.p_{N-2}(x)+p_{N-3}(x)k|_{b}\] _and terminal element_ \[p_{n}(x)+p_{n-1}(x)k.p_{n-1}(x)+p_{n-2}(x)k|_{b}\] _if \(n\) is even, or terminal element_ \[p_{n-1}(x)+p_{n-2}(x)k.p_{n}(x)+p_{n-1}(x)k|_{b}\] _if \(n\) is odd._ ## 2. A summary of Beardon The following results are from Beardon, [1]. **Lemma 2.1** ([1]).: _Suppose \(n\) has at least four digits in base \(b\). Then \(S_{b}(n)\) has fewer digits than \(n\)._ **Lemma 2.2** ([1]).: _If \(n\) has at most three digits in base \(b\), then so does \(S_{b}(n)\)._ **Theorem 2.3**.: _For any positive integer \(n\), the successive images of \(S_{b}\) either terminate in a fixed point or enter a cycle._ If \(n\) is a fixed point of \(S_{b}\), then \(n\) is _nontrivial_ if \(n>1\) and _trivial_ if \(n=1\). Since a fixed point can be regarded as a cycle of length one, a cycle will always be assumed to have length at least two. **Theorem 2.4** ([1]).: _Any nontrivial fixed point of \(S_{b}\) has exactly two digits in base \(b\)._ **Theorem 2.5** ([1]).: _The number \(x.y|_{b}\) is a fixed point of \(S_{b}\) if and only if \((x,y)\) is a solution to the equation_ \[(2x-b)^{2}+(2y-1)^{2}=1+b^{2}, \tag{2.1}\] _where \(0\leq x<b\), \(1\leq y<b\)._ Proof.: Complete the square on \(x^{2}+y^{2}=xb+y\). **Theorem 2.6** ([1]).: _Let \(\operatorname{Fix}_{b}\) be the set of fixed points of \(S_{b}\). Then \(|\operatorname{Fix}_{b}|=d(1+b^{2})-1\), where \(d(1+b^{2})\) is the number of divisors of \(1+b^{2}\)._ **Corollary 2.7** ([1]).: _For each base \(b\), \(S_{b}\) has only the trivial fixed point if and only if \(1+b^{2}\) is prime._ Let \((u,v)\) be a solution to \(u^{2}+v^{2}=1+b^{2}\), where \(u\) has the same parity as \(b\), \(-b\leq u<b\), and \(v\) is odd, \(0<v<b\). The fixed points \(x.y_{b}\) of \(S_{b}\) are then given by all \((x,y)\) such that \[2x-b=u,\quad 2y-1=v. \tag{2.2}\] **Example 2.8** (Fixed points in base \(b=12\)).: _If \(b=12\), then all solutions to \(u^{2}+v^{2}=1+12^{2}=145\) relevant to (2.2) are \((-12,1)\), \((-8,9)\), and \((8,9)\). The first possibility \((-b,1)\) always gives the trivial fixed point. The other two possibilities are easily mentally computed to be \((x,y)=(2,5)\) and \((x,y)=(10,5)\), so that \(\operatorname{Fix}_{12}=\{1,2.5,10.5\}_{12}=\{1,29,125\}\). By Theorem 2.6, the number of fixed points is \(d(5\cdot 29)-1=3\)._ **Theorem 2.9** ([1]).: _Any cycle of \(S_{b}\) is a subset of \(\{1,\dots,2b^{2}-1\}\)._ Let \(C\) be a cycle of \(S_{b}\), and let an element \(a\) of \(C\) be designated as the _initial element_. The _terminal element_ of \(C\) is then that element \(z\) of the cycle such that \(S_{b}(z)=a\). The initial element is chosen for convenience but will often be the smallest element of \(C\). If the initial element of a cycle is also the smallest element, then the cycle is said to be in _standard form_. The fundamental cycles in Sections 4 and 5 are in standard form. We define \(\mathbb{Z}|_{b}=\{1,\dots b^{2}-1\}\), \(b>2\), since all elements of a cycle in this paper will have at most two digits. **Remark 2.10**.: _Let \(\mathcal{C}_{b}\) be the set of all cycles of \(S_{b}\). 
There is no known formula to compute \(|\mathcal{C}_{b}|\) or algorithm besides direct search to determine \(\mathcal{C}_{b}\)._ **Example 2.11** (Cycles in base \(b=12\)).: _The set of cycles \(\mathcal{C}_{12}\) consists of the following:_ * \(\{5,2.1\}_{12}=\{5,25\}\)_;_ * \(\{8,5.4,3.5,2.10,8.8,10.8,1.1.8,5.6,5.1,2.2\}_{12}=\{8,64,41,34,104,128,164,66, 61,26\}\)_;_ * \(\{1.8,5.5,4.2\}_{12}=\{20,65,50\}\)_;_ * \(\{6.8,8.4\}_{12}=\{80,100\}\)_._ ## 3. Fibonacci numbers and identities The _Fibonacci sequence_\((F_{n})_{n=0}^{\infty}\) is recursively defined by \(F_{0}=0\), \(F_{1}=1\), \(F_{n}=F_{n-1}+F_{n-2}\), \(n\geq 2\), [7]. The following identities will be used frequently. The book [2] by T. Koshy is a comprehensive resource. See Section 9 for generalization of the identities to the Pell polynomials. **Cassini's identity:** \[F_{n}^{2}=F_{n-1}F_{n+1}-(-1)^{n} \tag{3.1}\] **Catalan's identity:** \[F_{n}^{2}=F_{n+r}F_{n-r}+(-1)^{n-r}F_{r}^{2} \tag{3.2}\] **Vajda's identity:** \[F_{n+r}F_{n+s}=F_{n}F_{n+r+s}+(-1)^{n}F_{r}F_{s} \tag{3.3}\] **Lucas's identity:** \[F_{2n+1}=F_{n+1}^{2}+F_{n}^{2} \tag{3.4}\] **d'Ocagne's identity:** \[F_{2n}=F_{n+1}^{2}-F_{n-1}^{2} \tag{3.5}\] ## 4. Fibonacci cycles of type I The author observed that cycles in base \(b=F_{2n}\), \(n\geq 2\), [9], and initial term \(F_{2n-1}=F_{0}.F_{2n-1}|_{b}\), [8], all have digits in the Fibonacci sequence. See Table 2 for some examples. We choose \(F_{1}=1\) so that the sum of the indices of the second term of each cycle is \(N=2n-1\). They are a consequence of the following generalization of Lucas's identity (3.4). **Theorem 4.1**.: _Let \(N=2n-1\), \(n\geq 1\). Then_ \[F_{N-i}^{2}+F_{i}^{2}=F_{N-(2j+1)}F_{N+1}+F_{2j+1}, \tag{4.1}\] _where \(j=\min(i,N-i)\) and \(0\leq i\leq n-1\). Observe that \(N-(2j+1)\) is always even, and that the indices \(N-(2j+1)\) and \(2j+1\) sum to \(N\). If \(i=n-1\), we have Lucas's identity (3.4)._ Proof.: Let us assume that \(j\) is the smaller of \(i\) and \(N-i\), \(0\leq i\leq n-1\). By Catalan's identity (3.2) and Lucas's identity (3.4), \[F_{N-i}^{2}+F_{i}^{2} =F_{N-j}^{2}+F_{j}^{2}\] \[=F_{(N+1)-2(j+1)}F_{(N+1)}+(-1)^{(N+1)-2(j+1)}F_{j+1}^{2}+F_{j}^{2}\] \[=F_{N-(2j+1)}F_{N+1}+F_{j+1}^{2}+F_{j}^{2}\] \[F_{N-i}^{2}+F_{i}^{2} =F_{N-(2j+1)}F_{N+1}+F_{2j+1},\quad(0\leq i\leq n-1).\qed\] Let \(\mathcal{P}_{+}(N)\) be the set of all pairs \([r,s]\) of nonnegative integers, where \(r+s=N\) and \(s\) is odd, so that \(r\) is necessarily even. Note that \(|\mathcal{P}_{+}(2n-1)|=n\). Define \(\psi_{+}:\mathcal{P}_{+}(N)\to\mathcal{P}_{+}(N)\) by \[\psi_{+}([r,s])=[N-(2t+1),2t+1],\quad t=\min(r,s).\] We write \(\psi_{+}\) instead of \(\psi_{+}^{N}\) since \(N\) is fixed, once chosen. Note that \(\psi_{+}([n,n-1])=[0,N]\) if \(n\) is even, or \(\psi_{+}([n-1,n])=[0,N]\) if \(n\) is odd. Now assume \(n\geq 2\). Define the map \(\Psi_{+}:\mathcal{P}_{+}(N)\to\mathbb{Z}|_{b}\) by \(\Psi_{+}([r,s])=F_{r}.F_{s}|_{b}\), where \(b=F_{N+1}\). Thus, we have \[S_{b}(F_{r}.F_{s})=\Psi_{+}(\psi_{+}([r,s])). \tag{4.2}\] Thus, we need only determine the cycles and fixed points of \(\psi_{+}\) on \(\mathcal{P}_{+}(N)\) to obtain the cycles and fixed points of \(S_{b}\), \(b=F_{N+1}\), on \(\mathbb{Z}|_{b}\). 
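To make the correspondence (4.2) concrete, the following short program (our addition; the function names are ours) iterates \(S_{b}\) directly and reproduces the fundamental cycle of Example 1.1 for \(N=11\), \(b=F_{12}=144\).

```python
# Reproduce the fundamental type-I cycle of Example 1.1: iterate S_b starting
# from F_11 = 89 in base b = F_12 = 144 and print each element with its digits.
def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def S(x, base):
    """Sum of the squares of the base-b digits of x."""
    total = 0
    while x > 0:
        x, d = divmod(x, base)
        total += d * d
    return total

N = 11
base = fib(N + 1)            # 144 = F_12
x = fib(N)                   # 89 = F_0.F_11 in base 144
seen = []
while x not in seen:
    seen.append(x)
    print(f"{x:5d} = {x // base}.{x % base} (base {base})")
    x = S(x, base)
# The last element printed is 1157 = 8.5 = F_6.F_5, whose image is 89 again.
```

Running it lists the six elements \(89,\,7921,\,3026,\,445,\,178,\,1157\), matching the row for \(F_{12}\) in Table 2.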
\begin{table} \begin{tabular}{c c c c c c c} \hline Base \(b\) & \multicolumn{4}{c}{Fundamental cycle} \\ \hline \(F_{4}\) & \(F_{0}.F_{3}\), & \(F_{2}.F_{1}\) & & & & \\ \(F_{6}\) & \(F_{0}.F_{5}\), & \(F_{4}.F_{1}\), & \(F_{2}.F_{3}\) & & & \\ \(F_{8}\) & \(F_{0}.F_{7}\), & \(F_{6}.F_{1}\), & \(F_{4}.F_{3}\) & & & \\ \(F_{10}\) & \(F_{0}.F_{9}\), & \(F_{8}.F_{1}\), & \(F_{6}.F_{3}\), & \(F_{2}.F_{7}\), & \(F_{4}.F_{5}\) & \\ \(F_{12}\) & \(F_{0}.F_{11}\), & \(F_{10}.F_{1}\), & \(F_{8}.F_{3}\), & \(F_{4}.F_{7}\), & \(F_{2}.F_{9}\), & \(F_{6}.F_{5}\) & \\ \(F_{14}\) & \(F_{0}.F_{13}\), & \(F_{12}.F_{1}\), & \(F_{10}.F_{3}\), & \(F_{6}.F_{7}\) & & \\ \(F_{16}\) & \(F_{0}.F_{15}\), & \(F_{14}.F_{1}\), & \(F_{12}.F_{3}\), & \(F_{8}.F_{7}\) & & \\ \(F_{18}\) & \(F_{0}.F_{17}\), & \(F_{16}.F_{1}\), & \(F_{14}.F_{3}\), & \(F_{10}.F_{7}\), & \(F_{2}.F_{15}\), & \(F_{12}.F_{5}\), & \(F_{6}.F_{11}\), & \(F_{4}.F_{13}\), & \(F_{8}.F_{9}\) \\ \(F_{20}\) & \(F_{0}.F_{19}\), & \(F_{18}.F_{1}\), & \(F_{16}.F_{3}\), & \(F_{12}.F_{7}\), & \(F_{4}.F_{15}\), & \(F_{10}.F_{9}\) & \\ \hline \end{tabular} \end{table} Table 2. Some Fibonacci cycles of type I in bases \(b=F_{2n}\) with initial terms \(F_{2n-1}=F_{0}.F_{2n-1}|_{b}\). See also Table 3. **Theorem 4.2** (\(\psi_{+}\) orbit).: _Let \(N=2n-1\), \(n\geq 1\). If \(n=1\), then \([0,1]\) is a fixed point of \(\psi_{+}\). If \(n\geq 2\), then the iterates of \(\psi_{+}\) on \([0,N]\) comprise a cycle with initial element \([0,N]\) and terminal element \([n,n-1]\) if \(n\) is even, or \([n-1,n]\) if \(n\) is odd._ Proof.: Observe that \((\psi_{+})^{-1}([r,2s+1])=[N-s,s]\) if \(s\) is odd, and \([s,N-s]\) if \(s\) is even, where \(r=N-(2s+1)\). Further, since \(\psi_{+}([n,n-1])=[0,2n-1]\) if \(n\) is even, and \(\psi_{+}([n-1,n])=[0,2n-1]\) if \(n\) is odd, we have that \([0,N]\) and \([n,n+1]\), \(n\) even, or \([n+1,n]\), \(n\) odd, are in the same \(\psi_{+}\)-orbit. Let us call the \(\psi_{+}\)-cycle generated by \([0,N]\) the _fundamental cycle_ of \(\psi_{+}\), since it occurs for every integer \(n\geq 2\). Furthermore, we require \(n\geq 2\) if \(F_{2n}\) is to be a meaningful base. See Table 3 for some cycles and fixed points. See Table 2 for some Fibonacci cycles. See also Section 6 on fixed points. **Corollary 4.3**.: _Every element of \(\mathcal{P}_{+}(N)\), \(N=2n-1\), \(n\geq 1\), is either a fixed point or in a cycle of \(\psi_{+}\)._ Proof.: Let \(z\) be an element of \(\mathcal{P}_{+}(N)\). If \(z\) is a fixed point, then we are done. Suppose \(z\) is not a fixed point and consider the iterates \(\{\psi_{+}^{(k)}(z)\}\) of \(z\). The following two theorems are immediate. **Theorem 4.4**.: _All fixed points and cycles of \(\psi_{+}\) on \(\mathcal{P}_{+}(N)\), \(N=2n-1\), \(n\geq 2\), generate via (4.1) Fibonacci fixed points and Fibonacci cycles._ **Theorem 4.5** (Fibonacci cycle of type I).: _Let \(N=2n-1\), \(n\geq 2\). The iterates of \(S_{b}\), \(b=F_{N+1}\), on \(F_{0}.F_{N}|_{b}\), comprise via (4.1) a cycle with initial element \(F_{0}.F_{N}|_{b}\), and terminal element \(F_{n}.F_{n-1}|_{b}\) if \(n\) is even, or terminal element \(F_{n-1}.F_{n}|_{b}\) if \(n\) is odd._ ## 5. Fibonacci cycles of type II The author observed that cycles in base \(b=F_{2n}\), \(n\geq 2\), and initial term \(F_{2n+1}=F_{2}.F_{2n-1}|_{b}\) all have digits in the Fibonacci sequence. See Table 4 for some examples. We choose \(F_{2}=1\) so that the sum of the indices of the first term of each cycle is \(N=2n+1\). 
They are a consequence of the following generalization of Lucas's identity (3.4). **Theorem 5.1**.: _Let \(N=2n+1\), \(n\geq 1\). Then_ \[F_{N-i}^{2}+F_{i}^{2}=F_{N-(2j-1)}F_{N-1}+F_{2j-1}, \tag{5.1}\] _where \(j=\min(i,N-i)\) and \(1\leq i\leq n+1\). Observe that \(N-(2j-1)\) is always even, and that the indices \(N-(2j-1)\) and \(2j-1\) sum to \(N\). If \(i=n+1\), we have Lucas's identity (3.4)._ Proof.: Let us assume that \(j\) is the smaller of \(i\) and \(N-i\), \(1\leq i\leq n+1\). By Catalan's identity (3.2) and Lucas's identity (3.4), \[F_{N-i}^{2}+F_{i}^{2} =F_{N-j}^{2}+F_{j}^{2}\] \[=F_{(N-1)-(j-1)}^{2}+F_{j}^{2}\] \[=F_{(N-1)-2(j-1)}F_{(N-1)}+(-1)^{(N-1)-2(j-1)}F_{j-1}^{2}+F_{j}^{2}\] \[=F_{N-(2j-1)}F_{N-1}+F_{j-1}^{2}+F_{j}^{2}\] \[F_{N-i}^{2}+F_{i}^{2} =F_{N-(2j-1)}F_{N-1}+F_{2j-1}.\qed\] Let \(\mathcal{P}_{-}(N)\) be the set of all pairs \([r,s]\) such that \(r+s=N\) and \(s\) is odd, so that \(r\) is necessarily even. Note that \(|\mathcal{P}_{-}(2n+1)|=n\). Define \(\psi_{-}:\mathcal{P}_{-}(N)\to\mathcal{P}_{-}(N)\) by \[\psi_{-}([r,s])=[N-(2t-1),2t-1],\quad t=\min(r,s).\] We write \(\psi_{-}\) instead of \(\psi_{-}^{N}\) since \(N\) is fixed, once chosen. Note that \(\psi_{-}([n,n+1])=[2,N-2]\) if \(n\) is even, or \(\psi_{-}([n+1,n])=[2,N-2]\) if \(n\) is odd. Define the map \(\Psi:\mathcal{P}_{-}(N)\to\mathbb{Z}|_{b}\) by \begin{table} \begin{tabular}{c c c c} \hline Base \(b\) & Fundamental cycle & \\ \hline \(F_{4}\) & \(F_{2}.F_{3}\) & & \\ \(F_{6}\) & \(F_{2}.F_{5}\), & \(F_{4}.F_{3}\) & & \\ \(F_{8}\) & \(F_{2}.F_{7}\), & \(F_{6}.F_{3}\), & \(F_{4}.F_{5}\) & \\ \(F_{10}\) & \(F_{2}.F_{9}\), & \(F_{8}.F_{3}\), & \(F_{6}.F_{5}\) & \\ \(F_{12}\) & \(F_{2}.F_{11}\), & \(F_{10}.F_{3}\), & \(F_{8}.F_{5}\), & \(F_{4}.F_{9}\), & \(F_{6}.F_{7}\) \\ \(F_{14}\) & \(F_{2}.F_{13}\), & \(F_{12}.F_{3}\), & \(F_{10}.F_{5}\), & \(F_{6}.F_{9}\), & \(F_{4}.F_{11}\), & \(F_{8}.F_{7}\) \\ \(F_{16}\) & \(F_{2}.F_{15}\), & \(F_{14}.F_{3}\), & \(F_{12}.F_{5}\), & \(F_{8}.F_{9}\) & \\ \(F_{18}\) & \(F_{2}.F_{17}\), & \(F_{16}.F_{3}\), & \(F_{14}.F_{5}\), & \(F_{10}.F_{9}\) & \\ \(F_{20}\) & \(F_{2}.F_{19}\), & \(F_{18}.F_{3}\), & \(F_{16}.F_{5}\), & \(F_{12}.F_{9}\), & \(F_{4}.F_{17}\), & \(F_{14}.F_{7}\), & \(F_{8}.F_{13}\), & \(F_{6}.F_{15}\), & \(F_{10}.F_{11}\) \\ \(F_{22}\) & \(F_{2}.F_{21}\), & \(F_{20}.F_{3}\), & \(F_{18}.F_{5}\), & \(F_{14}.F_{9}\), & \(F_{6}.F_{17}\), & \(F_{12}.F_{11}\) \\ \hline \end{tabular} \end{table} Table 4. Some fundamental Fibonacci cycles of type II in bases \(b=F_{2n}\) with initial terms \(F_{2n+1}=F_{2}.F_{2n-1}|_{b}\). See also Table 5. \(\Psi_{-}([r,s])=F_{r}.F_{s}|_{b}\), where \(b=F_{N-1}\), so that now we require \(N\geq 5\). Thus, we have \[S_{b}(F_{r}.F_{s})=\Psi_{-}(\psi_{-}([r,s])). \tag{5.2}\] Thus, we need only determine the cycles and fixed points of \(\psi_{-}\) on \(\mathcal{P}_{-}(N)\) to obtain the cycles and fixed points of \(S_{b}\), \(b=F_{N-1}\), on \(\mathbb{Z}|_{b}\), \(N\geq 5\). **Theorem 5.2** (\(\psi_{-}\) orbit).: _Let \(N=2n+1\), \(n\geq 1\). If \(n=1\) or \(n=2\), then \([2,N-2]\) is a fixed point of \(\psi_{-}\). If \(n\geq 3\), then the iterates of \(\psi_{-}\) on \([2,N-2]\) comprise a cycle with initial element \([2,N-2]\) and terminal element \([n,n+1]\) if \(n\) is even, or terminal element \([n+1,n]\) if \(n\) is odd._ Proof.: Observe that \((\psi_{-})^{-1}([r,2s-1])=[N-s,s]\) if \(s\) is odd, and \([s,N-s]\) if \(s\) is even, where \(r=N-(2s-1)\). 
Further, since \(\psi_{-}([n,n+1])=[2,N-2]\) if \(n\) is even, and \(\psi_{-}([n+1,n])=[2,N-2]\) if \(n\) is odd, we have that \([2,N-2]\) and \([n,n+1]\), \(n\) even, or \([n+1,n]\), \(n\) odd, are in the same \(\psi_{-}\)-orbit. Let us call the \(\psi_{-}\)-cycle generated by \([2,N-2]\) the _fundamental cycle_ of \(\psi_{-}\), since it occurs for every positive integer \(n\geq 1\). It is only meaningful for Fibonacci cycles when \(n\geq 2\), however. See Table 4 for some Fibonacci cycles, see Table 5 for some cycles and fixed points of \(\psi_{-}\) and see Section 6 on fixed points. **Corollary 5.3**.: _Every element of \(\mathcal{P}_{-}(N)\), \(N=2n+1\), \(n\geq 1\), is either a fixed point or in a cycle of \(\psi_{-}\)._ Proof.: Let \(z\) be an element of \(\mathcal{P}_{-}(N)\). If \(z\) is a fixed point, then we are done. Suppose \(z\) is not a fixed point and consider the iterates \(\{\psi_{-}^{(k)}(z)\}\) of \(z\). The following two theorems are immediate. **Theorem 5.4**.: _All fixed points and cycles of \(\psi_{-}\) on \(\mathcal{P}_{-}(N)\), \(N=2n+1\), \(n\geq 2\), generate via (5.1) Fibonacci fixed points and Fibonacci cycles._ **Theorem 5.5** (Fibonacci cycles of type II).: _The iterates of \(S_{b}\), \(b=F_{2n}\), \(n\geq 2\), on \(F_{2}.F_{2n-1}|_{b}\) comprise a cycle with initial element \(F_{2}.F_{2n-1}|_{b}\) and terminal element \(F_{n}.F_{n+1}|_{b}\) if \(n\) is even, or terminal element \(F_{n+1}.F_{n}|_{b}\) if \(n\) is odd._ ## 6. Fixed points **Definition 6.1** (Isolated fixed point).: _A fixed point with no preimage will be called an isolated fixed point._ **Theorem 6.2** (Isolated fixed points of \(\psi_{-}\)).: _Assume \(n\geq 1\)._ * _The pair_ \([2n,4n-1]\) _is an isolated fixed point of_ \(\psi_{-}\)_._ * _The pair_ \([2n,1]\) _is an isolated fixed point of_ \(\psi_{-}\)_._ Proof.: Suppose \([2n,N-2n]\) is a fixed point of \(\psi_{-}\) and \(2n=\min(2n,N-2n)\). Then \([2n,N-2n]=[N-(4n-1),4n-1]\) yields \(N=6n-1\) and \([2n,4n-1]\) is the fixed point. Suppose \([2n,N-2n]\) is a fixed point of \(\psi_{-}\) and \(N-2n=\min(2n,N-2n)\). Then \([2n,N-2n]=[N-(2(N-2n)-1),2(N-2n)-1]\) yields \(N=2n+1\) and \([2n,1]\) is the fixed point. If \(\psi_{-}([x,y])=[2n,4n-1]\) and \(x=\min(x,y)\), then \(2x-1=4n-1\) yields \(x=2n\). If \(\psi_{-}([x,y])=[2n,4n-1]\) and \(y=\min(x,y)\), then \(2y-1=4n-1\) yields \(y=2n\), but \(y\) is odd so this case does not occur. If \(\psi_{-}([x,y])=[2n,1]\), then \(2y-1=1\) yields \(y=1\). If \(x=\min(x,y)\), then \(2x-1=1\) yields \(x=1\), but \(x\) is even so this case does not occur. The following theorem is proven similarly. **Theorem 6.3** (Isolated fixed point of \(\psi_{+}\)).: _Assume \(n\geq 1\). The pair \([2n,4n+1]\) is an isolated fixed point of \(\psi_{+}\)._ Theorems 6.2 and 6.3 combine to give us the following theorem. **Theorem 6.4** (Fixed points in bases \(F_{6n-2}\) and \(F_{6n+2}\)).: _Assume \(n\geq 1\)._ * _The number_ \(F_{2n}.F_{4n-1}|_{b}\) _is an isolated fixed point in base_ \(b=F_{6n-2}\)_._ * _The number_ \(F_{2n}.F_{4n+1}|_{b}\) _is an isolated fixed point in base_ \(b=F_{6n+2}\)_._ **Remark 6.5**.: _Theorem 6.4 uses the following sequences in the OEIS [4]: \(F_{2n}\)[9], \(F_{4n+1}\)[15], \(F_{4n-1}\)[16], \(F_{6n-2}\)[27], \(F_{6n+2}\)[28]._ **Definition 6.6** (Companion base).: _If \(y.x|_{b}\) is fixed by \(S_{b}\) and \(x.y|_{b^{\prime}}\) is also fixed by \(S_{b^{\prime}}\), then \(b^{\prime}\) is said to be a companion base to \(b\). 
Equivalently, \(b\) and \(b^{\prime}\) are companion bases if and only if \(n=x^{2}+y^{2}=y.x|_{b}=x.y|_{b^{\prime}}\)._ \begin{table} \begin{tabular}{c l} \hline \(N\) & Cycles and fixed points \\ \hline 3 & [[2, 1]] \\ 5 & [[2, 3]] \\ & [[4, 1]] \\ 7 & [[2, 5], [4, 3]] \\ & [[6, 1]] \\ 9 & [[2, 7], [6, 3], [4, 5]] \\ & [[8, 1]] \\ 11 & [[2, 9], [8, 3], [6, 5]] \\ & [[4, 7]] \\ & [[10, 1]] \\ 13 & [[2, 11], [10, 3], [8, 5], [4, 9], [6, 7]] \\ & [[12, 1]] \\ 15 & [[2, 13], [12, 3], [10, 5], [6, 9], [4, 11], [8, 7]] \\ & [[14, 1]] \\ 17 & [[2, 15], [14, 3], [12, 5], [8, 9]] \\ & [[4, 13], [10, 7]] \\ & [[6, 11]] \\ & [[16, 1]] \\ 19 & [[2, 17], [16, 3], [14, 5], [10, 9]] \\ & [[4, 15], [12, 7], [6, 13], [8, 11]] \\ & [[18, 1]] \\ 21 & [[2, 19], [18, 3], [16, 5], [12, 9], [4, 17], [14, 7], [8, 13], [6, 15], [10, 11]] \\ & [[20, 1]] \\ 23 & [[2, 21], [20, 3], [18, 5], [14, 9], [6, 17], [12, 11]] \\ & [[4, 19], [16, 7], [10, 13]] \\ & [[8, 15]] \\ & [[22, 1]] \\ \end{tabular} \end{table} Table 5. The cycles and fixed points of type II are obtained by the iterates of \(\psi_{-}\) on \(\mathcal{P}_{-}(N)\), \(N=2n+1\), \(n\geq 1\). The iteration applies to Fibonacci cycles only when \(n\geq 2\), however. **Theorem 6.7** (Companion bases \(F_{4n-1}\) and \(F_{4n}\)).: _Assume \(n\geq 1\)._ * _The number_ \(F_{2n+1}F_{2n-1}.F_{2n}F_{2n-1}|_{b}\) _is fixed in base_ \(b=F_{4n-1}\)_._ * _The number_ \(F_{2n}F_{2n-1}.F_{2n+1}F_{2n-1}|_{b}\) _is fixed in base_ \(b=F_{4n}\)_._ **Remark 6.8**.: _Theorem 6.7 uses the following sequences in the OEIS [4]: \(F_{4n}\)[14], \(F_{4n-1}\)[16], \(F_{2n+1}F_{2n-1}\)[23], \(F_{2n}F_{2n-1}\)[24]._ Proof of Theorem 6.7.: Let \(y.x|_{b}=F_{2n+1}F_{2n-1}.F_{2n}F_{2n-1}|_{b}\). Thus, \[x^{2}+y^{2}-by-x =F_{2n+1}^{2}F_{2n-1}^{2}+F_{2n}^{2}F_{2n-1}^{2}-F_{2n+1}F_{2n-1}F _{4n-1}-F_{2n}F_{2n-1}\] Factor out and discard \(F_{2n-1}\) to obtain \[\approx F_{2n+1}^{2}F_{2n-1}+F_{2n}^{2}F_{2n-1}-F_{2n+1}F_{4n-1}-F _{2n}\] \[=\left(F_{2n+1}^{2}+F_{2n}^{2}\right)F_{2n-1}-F_{2n+1}F_{4n-1}-F _{2n}\] \[=F_{2n-1}F_{4n+1}-F_{2n+1}F_{4n-1}-F_{2n}\quad\text{(by (3.4))} \tag{6.1}\] Observe that the pairs of indices are symmetric about \(3n\) to obtain the identities \[F_{2n+1}F_{4n-1} =F_{3n}^{2}+F_{n-1}^{2}, \tag{6.2}\] \[F_{2n-1}F_{4n+1} =F_{3n}^{2}+F_{n+1}^{2}. \tag{6.3}\] Substituting (6.2) and (6.3) into (6.1), we obtain \[=F_{3n}^{2}+F_{n+1}^{2}-F_{3n}^{2}-F_{n-1}^{2}-F_{2n}\] \[=F_{n+1}^{2}-F_{n-1}^{2}-F_{2n}\] \[=F_{2n}-F_{2n}=0\quad\text{(by (3.5))}.\] The proof for the companion base is similar and left to the reader. Here are three interesting results and a corollary on companion bases. The proof of the first is left to the reader. **Theorem 6.9** (Companion bases for numbers of the form \(n.n\)).: _If \(n\geq 2\), then \(n.n|_{b}\) is a fixed point in base \(b=2n-1\). Thus, every odd base is its own companion base._ **Theorem 6.10** (Companion bases for numbers of the form \(nu.u\)).: _Let \(n\geq 1\), \(k\geq 0\), and let \(u=n+1+nk\). Then the number \(nu.u|_{b}\) is a fixed point in base \(b=n^{2}+n+1+(n^{2}+1)k\) and the number \(u.nu|_{b^{\prime}}\) is a fixed point in base \(b^{\prime}=n^{3}+n^{2}+1+n(n^{2}+1)k\). Thus, \(b\) and \(b^{\prime}\) are companion bases._ **Remark 6.11**.: _Theorem 6.10 uses the following sequences from the OEIS [4]: n [5], \(n^{2}+n+1\)[10], \(n^{2}+1\)[11], \(n(n^{2}+1)\)[17], \(n^{3}+n^{2}+1\)[26]._ Proof of Theorem 6.10.: If \(nu.u\) is fixed in base \(b\), then \[(nu)^{2}+(u)^{2} =nu\cdot b+u\] \[(n^{2}+1)u =nb+1. 
\tag{6.4}\] Similarly, if \(u.nu|_{b^{\prime}}\) is fixed in base \(b^{\prime}\), then \[(1+n^{2})u =b^{\prime}+n.\] If we reduce equation (6.4) modulo \(n\) then we obtain \(u\equiv 1\pmod{n}\). so that \(u=1+nk\) in general. Then we obtain \[b =\frac{(n^{2}+1)(1+nk)-1}{n}\] \[=n+(1+n^{2})k.\] However, this will not work since \(b=n\) and \(nu.u|_{b}\) makes no sense. However, if we take \(u=n+1+nk\), then we obtain \[b =\frac{(n^{2}+1)(n+1+nk)-1}{n}\] \[=n^{2}+n+1+(n^{2}+1)k,\] and we also obtain \[b^{\prime} =(n^{2}+1)(n+1+nk)-n\] \[=n^{3}+n^{2}+1+n(n^{2}+1)k.\] If \(n=3\), for example, then it is easily verified that \(12+9k.4+3k|_{b}\) is fixed in base \(b=13+10k\) and \(4+3k.12+9k|_{b^{\prime}}\) is fixed in base \(b^{\prime}=37+30k\). Even though the proof assumed \(n\geq 2\), the formulas work for \(n=1\) also, that is, \(2+k.2+k|_{b}\) is fixed in base \(b=3+2k\), as given by Theorem 6.9. **Theorem 6.12** (Companion bases for numbers of the form \(nu.mu\)).: _Let \(m\) and \(n\) be relatively prime integers such that \(n>m>1\). Then the number \(nu.mu\)\(|_{b}\) is a fixed point in base \(b=b_{0}+m(m^{2}+n^{2})k\), and the number \(mu.nu.nu\) is a fixed point in base \(b^{\prime}=b^{\prime}_{0}+n(m^{2}+n^{2})k\), where_ \[u =u_{0}+mnk,\] \[b_{0} =\frac{(m^{2}+n^{2})u_{0}-m}{n},\] \[b^{\prime}_{0} =\frac{(m^{2}+n^{2})u_{0}-n}{m},\] _and where, according to the Chinese Remainder Theorem, \(u_{0}\) the smallest solution of the congruences_ \[u\equiv n^{-1}\pmod{m}\quad\text{and}\quad u\equiv m^{-1}\pmod{n}.\] _Thus, \(b\) and \(b^{\prime}\) are companion bases._ Proof.: If \(nu.mu\)\(|_{b}\) is a fixed point in base \(b\), then \[(mu)^{2}+(nu)^{2} =nu\cdot b+mu\] \[(m^{2}+n^{2})u =nb+m \tag{6.5}\] Similary, If \(mu.nu\)\(|_{b^{\prime}}\) is a fixed point in base \(b^{\prime}\), then \[(m^{2}+n^{2})u=mb^{\prime}+n. \tag{6.6}\] If we reduce (6.5) modulo \(n\) and reduce (6.6) modulo \(m\), we obtain the congruences \[u\equiv m^{-1}\pmod{n}\quad\text{and}\quad u\equiv n^{-1}\pmod{m}.\] Let \(u_{0}\) be the smallest positive solution to this pair of congruences guaranteed by the Chinese Remainder Theorem, so that \(u=u_{0}+mnk\), \(k\geq 0\). The bases \(b\) and \(b^{\prime}\) are then given by the arithmetic sequences \[b =\frac{(m^{2}+n^{2})(u_{0}+mnk)-m}{n}\] \[=\frac{(m^{2}+n^{2})u_{0}-m}{n}+m(m^{2}+n^{2})k,\] and \[b^{\prime}=\frac{(m^{2}+n^{2})u_{0}-n}{m}+n(m^{2}+n^{2})k.\qed\] An immediate application of Theorem 6.12 is the following. **Corollary 6.13** (Companion bases for \((n+1)u.nu\)).: _Let \(n\geq 1\) and \(k\geq 0\). Then \((n+1)u.nu|_{b}\) is fixed in base \(b\) and \(nu.(n+1)u|_{b^{\prime}}\) is fixed in base \(b^{\prime}\), where_ \[u =2n+1+n(n+1)k,\] \[b =4n^{2}+2n+1+n(2n^{2}+2n+1)k,\] \[b^{\prime} =4n^{2}+6n+3+(n+1)(2n^{2}+2n+1)k.\] **Remark 6.14**.: _Corollary 6.13 uses the following sequences from the OEIS [4]: \(2n+1\)[13], \(n(n+1)\)[12], \(n(2n^{2}+2n+1)\)[19], \(4n^{2}+6n+3\)[20], \(4n^{2}+2n+1\)[21], \((n+1)(2n^{2}+2n+1)\)[22]._ Since \(\gcd(F_{m},F_{n})=F_{\gcd(m,n)}\) in general, Theorem 6.7 has an extension to arithmetic sequences of companion bases. **Corollary 6.15** (Arithmetic sequence of Fibonacci companion bases).: _Assume \(n\geq 1\) and \(k\geq 0\). 
The number \(F_{2n+1}u.F_{2n}u|_{b}\) is a fixed point in base \(b\), and \(F_{2n}u.F_{2n+1}u|_{b^{\prime}}\) is fixed in base \(b^{\prime}\), where_ \[u =F_{2n-1}+F_{2n}F_{2n+1}k,\] \[b =F_{4n-1}+F_{2n}F_{4n+1}k,\] \[b^{\prime} =F_{4n}+F_{2n+1}F_{4n+1}k.\] **Remark 6.16**.: _Theorem 6.15 uses the following sequences in the OEIS [4]: \(F_{2n-1}\)[8], \(F_{2n}\)[9], \(F_{4n}\)[14], \(F_{4n+1}\)[15], \(F_{4n-1}\)[16], \(F_{2n}F_{2n+1}\)[25]. The sequences \(F_{2n}F_{4n+1}\) and \(F_{2n+1}F_{4n+1}\) do not occur in the OEIS._ Let us provide one more example of natural interest. **Theorem 6.17** (Companion bases for triangular numbers).: _Let \(T_{n}=n(n+1)/2\), \(n\geq 2\), be a triangular number. Then \(T_{n}.T_{n+1}|_{b}\) is fixed in base \(b=n^{2}+n+1\) and \(T_{n+1}.T_{n}|_{b^{\prime}}\) is fixed in base \(b^{\prime}=(n+1)^{2}+(n+1)+1=n^{2}+3n+3\)._ **Remark 6.18**.: _Theorem 6.17 uses the following sequences from the OEIS [4]: \(T_{n}\)[6], \(n^{2}+n+1\)[10], \(T_{n^{2}}\)[18]. Recall that \(T_{n}^{2}+T_{n+1}^{2}=T_{(n+1)^{2}}\)._ Proof of Theorem 6.17.: The proof is left to the reader. If we have an arithmetic sequence of Fibonacci fixed points as in Theorem 6.15, then it should come as no surprise that we also have arithmetic sequences of Fibonacci cycles, and that's the topic of the next two sections. ## 7. Arithmetic progressions of Fibonacci cycles of type I This section shows that the fundamental cycles of type I admit extension to an arithmetic sequence of cycles in which the common differences are all Fibonacci numbers. **Theorem 7.1**.: _Let \(N=2n-1\), \(n\geq 2\), and \(k\geq 0\). Then_ \[S_{b}(x_{1}.x_{0})=y_{1}.y_{0}|_{b}, \tag{7.1}\] _where_ \[b =F_{N+1}+F_{N+2}k,\] \[x_{0} =F_{i}+F_{i+1}k,\] \[x_{1} =F_{N-i}+F_{N-i+1}k,\] \[y_{0} =F_{2i+1}+F_{2i+2}k,\] \[y_{1} =F_{N-(2i+1)}+F_{N-2i}k,\] _and where \(0\leq i\leq n-1\)._ We need a couple of lemmas to verify Theorem 7.1. **Lemma 7.2**.: _Let \(N=2n-1\), \(n\geq 2\). Then_ \[F_{N-2i}F_{N+1}=F_{N+2}F_{N-2i-1}+F_{2i+2}, \tag{7.2}\] _where \(0\leq i\leq n-1\)._ Proof.: Let \(m=N+1\) and \(n=N-2i-1\) in d'Ocagne's identity (3.5) to obtain \[F_{N+1}F_{(N-2i-1)+1}-F_{N+2}F_{N-2i-1} =(-1)^{N-2i-1}F_{N+1-N+2i+1}\] \[F_{N+1}F_{N-2i}-F_{N+2}F_{N-2i-1} =F_{2i+2}\] \[F_{N+1}F_{N-2i} =F_{N+2}F_{N-2i-1}+F_{2i+2}.\qed\] **Lemma 7.3**.: _Let \(N=2n-1\), \(n\geq 2\). Then_ \[F_{N-2i}F_{N+1}=F_{N-i}F_{N-i+1}+F_{i}F_{i+1} \tag{7.3}\] _where \(0\leq i\leq n-1\)._ Proof.: Observe that \(N+1-N+2i=2i+1\) so we let \(n=N-2i\), \(r=i\), \(s=i+1\) in Vadja's identity (3.3) to obtain \[F_{N-2i}F_{N+1} =F_{N-2i+i}F_{N-2i+i+1}-(-1)^{N-2i}F_{i}F_{i+1}\] \[=F_{N-i}F_{N-i+1}+F_{i}F_{i+1}.\qed\] Proof of Theorem 7.1.: Let \(N=2n-1\), \(n\geq 2\). Consider \(F_{N}\) in base \(F_{N+1}\). Thus, \[x_{0}^{2} +x_{1}^{2}-y_{1}b-y_{0}\] \[=(F_{i}+F_{i+1}k)^{2}+(F_{N-i}+F_{N-i+1}k)^{2}-(F_{N-2i-1}+F_{N-2i }k)(F_{N+1}+F_{N+2}k)\] \[\quad-(F_{2i+1}+F_{2i+2}k)\] \[=(F_{i}^{2}+F_{N-i}^{2}-F_{N-2i-1}F_{N+1}-F_{2i+1}) \tag{7.4}\] \[\quad+(2F_{i}F_{i+1}+2F_{N-i}F_{N-i+1}-F_{N-2i-1}F_{N+2}-F_{N-2i} F_{N+1}-F_{2i+2})k\] (7.5) \[\quad+(F_{i+1}^{2}+F_{N-i+1}^{2}-F_{N-2i}F_{N+2})k^{2} \tag{7.6}\] Let's take each term separately. The first term (7.4) is just Theorem 4.1 and has already been verified. Let us consider the \(k\)-term, (7.5). 
Thus, \[2F_{i}F_{i+1}+2F_{N-i}F_{N-i+1}-F_{N-2i-1}F_{N+2}-F_{N-2i}F_{N+1}-F_ {2i+2}\] \[=2\left(F_{i}F_{i+1}+F_{N-i}F_{N-i+1}\right)-F_{N-2i-1}F_{N+2}-F_ {N-2i}F_{N+1}-F_{2i+2}\] \[=2F_{N-2i}F_{N+1}-F_{N-2i-1}F_{N+2}-F_{N-2i}F_{N+1}-F_{2i+2}\quad \text{(by (\ref{eq:21}))}\] \[=F_{N-2i}F_{N+1}-F_{N-2i-1}F_{N+2}-F_{2i+2}\] \[=0.\quad\text{(by (\ref{eq:22}))}\] Let us consider the \(k^{2}\)-term, (7.6). Using Catalan's identity (3.2) with \(r=i+1\) we obtain \[F_{i+1}^{2}+F_{N-i+1}^{2}-F_{N-2i}F_{N+2} =F_{i+1}^{2}+F_{N-2i}F_{N+2}+(-1)^{N-2i}F_{i+1}^{2}-F_{N-2i}F_{N+2}\] \[=F_{i+1}^{2}+F_{N-2i}F_{N+2}-F_{i+1}^{2}-F_{N-2i}F_{N+2}\] \[=0.\qed\] Thus, iterates of a sum of squares map can be replaced by iterates on pairs of nonnegative integers. Consider pairs \([[r,r+1],[s,s+1]]\), where \(r+y=N\), \(N=2n-1\), \(n\geq 2\), and extend \(\psi_{+}\) defined in Section 5 by \(\psi_{+}([[r,r+1],[s,s+1]])=[[N-(2t+1),N-2t],[2t+1,2t+2]]\), where \(t=\min(r,s)\). The following theorem is proven similarly to Theorem 4.2. **Theorem 7.4** (Arithmetic \(\psi_{+}\)).: _Let \(N=2n-1\), \(n\geq 1\). If \(n=1\), then \([[0,1],[1,2]]\) is a fixed point of \(\psi_{+}\). If \(n\geq 2\), then the iterates of \(\psi_{+}\) on \([[0,1],[N,N+1]]\) comprise a cycle with initial element \([[0,1],[N,N+1]]\) and terminal element \([[n,n+1],[n-1,n]]\) if \(n\) is even, or terminal element \([[n-1,n],[n,n+1]]\) if \(n\) is odd._ Assume \(n\geq 2\) and let \(N=2n-1\). Define \[\Psi_{+}([[r,r+1],[s,s+1]])=F_{r}+F_{r+1}k.F_{s}+F_{s+1}k|_{b},\quad\text{where }b=F_{N+1}+F_{N+2}k,\] and \(k\) is any nonnegative integer. By Theorem 7.1, we have \[S_{b}(F_{r}+F_{r+1}k.F_{s}+F_{s+1}k|_{b})=\Psi_{+}(\psi_{+}([[r,r+1],[s,s+1]]) ),\quad(r+s=N).\] Consequently, we have the following theorem. **Theorem 7.5** (Arithmetic Fibonacci cycles of type I).: _Let \(N=2n-1\), \(n\geq 2\), and \(k\geq 0\). Then there is a cycle of \(S_{b}\), \(b=F_{N+1}+F_{N+2}k\), with initial element \(F_{0}+F_{1}k.F_{N}+F_{N+1}k|_{b}\) and terminal element \(F_{n}+F_{n+1}k.F_{n-1}+F_{n}k|_{b}\) if \(n\) is even, or terminal element \(F_{n-1}+F_{n}k.F_{n}+F_{n+1}k|_{b}\) if \(n\) is odd._ ## 8. Arithmetic progressions of Fibonacci cycles of type II This section shows that the fundamental cycles of type II admit extension to an arithmetic sequence of cycles in which the common differences are all Fibonacci numbers. **Theorem 8.1**.: _Let \(N=2n+1\), \(n\geq 1\), and \(k\geq 0\). Then_ \[S_{b}(x_{1}.x_{0})=y_{1}.y_{0}|_{b}, \tag{8.1}\] _where_ \[b =F_{N-1}+F_{N-2}k,\] \[x_{0} =F_{i}+F_{i-1}k,\] \[x_{1} =F_{N-i}+F_{N-i-1}k,\] \[y_{0} =F_{2i-1}+F_{2i-2}k,\] \[y_{1} =F_{N-(2i-1)}+F_{N-2i}k,\] _and where \(1\leq i\leq n\)._ Proof.: The proof is similar to that of Theorem 7.1, using Lemmas 8.2 and 8.3. **Lemma 8.2**.: \[F_{i}F_{i-1}+F_{N-i}F_{N-i-1}=F_{N-2i}F_{N-1}.\] (8.2) **Lemma 8.3**.: \[F_{N-2i}F_{N-1}=F_{N-(2i-1)}F_{N-2}+F_{2i-2}\] (8.3) Thus, iterates of a sum of squares map can be replaced by iterates on pairs of nonnegative integers. Consider pairs \([[r,r-1],[s,s-1]]\), where \(r+s=N\), \(N=2n+1\), \(n\geq 1\), and extend \(\psi_{-}\) defined in Section 4 by \(\psi_{-}([[r,r-1],[s,s-1]])=[[N-(2t-1),N-2t],[2t-1,2t-2]]\), where \(t=\min(r,s)\). The following theorem is proven similarly to Theorem 5.2. **Theorem 8.4** (Arithmetic \(\psi_{-}\)).: _Let \(N=2n+1\), \(n\geq 1\). 
The iterates of \(\psi_{-}\) on \([[2,1],[N-2,N-3]]\) comprise a cycle with initial element \([[2,1],[N-2,N-3]]\) and with terminal element \([[n,n-1],[n+1,n]]\) if \(n\) is even, or terminal element \([[n+1,n],[n,n-1]]\) if \(n\) is odd._ Define \[\Psi_{-}([[r,r-1],[s,s-1]])=F_{r}+F_{r-1}k.F_{s}+F_{s-1}k|_{b},\] where \(b=F_{2n}+F_{2n-1}k\), \(n\geq 2\), and \(k\geq 0\). By Theorem 7.1, we have \[S_{b}(F_{r}+F_{r-1}k.F_{s}+F_{s-1}k|_{b})=\Psi_{-}(\psi_{-}([[r,r-1],[s,s-1]]) ),\quad(r+s=N).\] Consequently, we have the following theorem. **Theorem 8.5** (Arithmetic Fibonacci cycles of type II).: _Let \(N=2n+1\), \(n\geq 2\), and \(k\geq 0\). Then there is a cycle of \(S_{b}\), \(b=F_{2n}+F_{2n-1}k\), with initial element \(F_{2}+F_{1}k.F_{2n-1}+F_{2n-2}k|_{b}\) and terminal element \(F_{n}+F_{n-1}k.F_{n+1}+F_{n}k|_{b}\) if \(n\) is even, or terminal element \(F_{n+1}+F_{n}k.F_{n}+F_{n-1}k|_{b}\) if \(n\) is odd._ ## 9. Generalizations to Pell polynomials **Definition 9.1** (Pell polynomials, [3]).: _The Pell polynomials are defined recursively by_ \[p_{0}(x) =0,\quad p_{1}(x)=1, \tag{9.1}\] \[p_{n}(x) =2xp_{n-1}(x)+p_{n-2}(x),\quad n\geq 2, \tag{9.2}\] _Note that \(p_{2}(x)=2x\). See Theorem 9.8._ **Remark 9.2**.: _The following results apply equally well to the Fibonacci polynomials defined by \(f_{n}(x)=p_{n}(x/2)\), \(n\geq 1\)[3]._ **Theorem 9.3** (Pell polynomial identities, [3]).: _The Pell polynomials satisfy the following identities:_ **Pell-Cassini identity:**__ \[p_{n}^{2}(x)=p_{n+1}(x)p_{n-1}(x)+(-1)^{n+1} \tag{9.3}\] **Pell-Catalan identity:**__ \[p_{n}^{2}(x)=p_{n+r}(x)p_{n-r}(x)+(-1)^{n-r}p_{r}^{2}(x) \tag{9.4}\] **Pell-Vajda identity:**__ \[p_{n+r}(x)p_{n+s}(x)=p_{n}(x)p_{n+r+s}(x)+(-1)^{n}p_{r}(x)p_{s}(x) \tag{9.5}\] **Pell-Lucas identity:**: \[p_{2n+1}(x)=p_{n+1}^{2}(x)+p_{n}^{2}(x)\] (9.6) **Pell-d'Ocagne identity:**: \[2x\,p_{2n}(x)=p_{n+1}^{2}(x)-p_{n-1}^{2}(x)\] (9.7) **Remark 9.4**.: _The "Pell" prefix is intended as descriptive and is unrelated to the historical discovery of the identities._ By Theorem 9.3, the same properties discovered in Sections 4 and 5 apply to the Pell polynomials. Consequently, we only summarize the results. **Theorem 9.5** (Pell identity of type I).: _Let \(N=2n-1\), \(n\geq 1\). Then_ \[p_{N-i}^{2}(x)+p_{i}^{2}(x)=p_{N-(2j+1)}(x)p_{N+1}(x)+p_{2j+1}(x), \tag{9.8}\] _where \(j=\min(i,N-i)\) and \(0\leq i\leq n-1\). Observe that \(N-(2j+1)\) is always even, and that the indices \(N-(2j+1)\) and \(2j+1\) sum to \(N\)._ **Theorem 9.6** (Pell cycles of type I).: _The iterates of \(S_{b}\), \(b=p_{2n}(x)\), yield a cycle with initial element \(p_{0}(x).p_{2n-1}(x)|_{b}\) and terminal element \(p_{n}(x).p_{n-1}(x)|_{b}\) if \(n\) is even, or terminal element \(p_{n-1}(x).p_{n}(x)|_{b}\) if \(n\) is odd._ **Theorem 9.7** (Pell identity of type II).: _Let \(N=2n+1\), \(n\geq 1\). Then_ \[p_{N-i}^{2}(x)+p_{i}^{2}(x)=p_{N-(2j-1)}(x)p_{N-1}(x)+p_{2j-1}(x), \tag{9.9}\] _where \(j=\min(i,N-i)\) and \(1\leq i\leq n+1\). 
Observe that \(N-(2j-1)\) is always even, and that the indices \(N-(2j-1)\) and \(2j-1\) sum to \(N\)._ **Theorem 9.8** (Pell cycles of type II).: _The iterates of \(S_{b}\), \(b=p_{2n}(x)\), yield via (9.9) a cycle with initial element \(p_{2}(x).p_{2n-1}(x)|_{b}\) and terminal element \(p_{n}(x).p_{n+1}(x)|_{b}\) if \(n\) is even, or terminal element \(p_{n+1}(x).p_{n}(x)|_{b}\) if \(n\) is odd._ **Theorem 9.9** (Pell fixed points).: _Let \(n\) be a positive integer._ * _The polynomial_ \(p_{2n}(x).p_{4n-1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{6n-2}(x)\)_._ * _The polynomial_ \(p_{2n}(x).p_{4n+1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{6n+2}(x)\)_._ * _The polynomial_ \(p_{2n}(x)p_{2n-1}(x).p_{2n+1}(x)p_{2n-1}(x)|_{b}\) _is a fixed point of_ \(S_{b}\)_, where_ \(b=p_{4n}(x)\)_._ Theorem 9.9 (c) generalizes Theorem 6.7 (b). Unfortunately, Theorem 6.7 (a) does not generalize to Pell polynomials because of the Pell-d'Ocagne identity, (9.7). Proof of 9.9 (c).: Thus, \[\left(p_{2n}(x)p_{2n-1}(x)\right)^{2} +\left(p_{2n+1}(x)p_{2n-1}(x)\right)^{2}-p_{2n}(x)p_{2n-1}(x)p_{ 4n}(x)-p_{2n+1}(x)p_{2n-1}(x)\] \[\approx p_{2n}(x)^{2}p_{2n-1}(x)+p_{2n+1}(x)^{2}p_{2n-1}(x)-p_{2n}( x)p_{4n}(x)-p_{2n+1}(x)\] \[=\left(p_{2n}(x)^{2}+p_{2n+1}(x)^{2}\right)p_{2n-1}(x)-p_{2n}(x)p_ {4n}(x)-p_{2n+1}(x)\] \[=p_{4n+1}(x)p_{2n-1}(x)-p_{2n}(x)p_{4n}(x)-p_{2n+1}(x)\] \[=p_{3n}(x)^{2}+p_{n+1}(x)^{2}-\left(p_{3n}(x)^{2}-p_{n}(x)^{2} \right)-p_{2n+1}(x)\] \[=p_{n+1}(x)^{2}+p_{n}(x)^{2}-p_{2n+1}(x)\] \[=0.\quad\text{(by (\ref{eq:pell}))}\] Theorem 9.9 (c) has an extension to arithmetic sequences of Pell polynomials. **Corollary 9.10** (Arithmetic sequence of fixed points in Pell polynomials).: _Assume \(n\geq 1\) and \(k\geq 0\). The polynomial \(p_{2n}(x)u.p_{2n+1}(x)u|_{b}\) is a fixed point in base \(b=p_{4n}(x)+p_{2n+1}(x)p_{4n+1}(x)k\), where \(u=p_{2n-1}(x)+p_{2n}(x)p_{2n+1}(x)k\)._ Proof.: The proof is similar to that of Theorem 9.9 (c). **Theorem 9.11**.: _Let \(N=2n-1\), \(n\geq 1\), and let \(k\geq 0\). Then_ \[S_{b}(x_{1}.x_{0})=y_{1}.y_{0}, \tag{9.10}\] _where_ \[b =p_{N+1}(x)+p_{N+2}(x)k,\] \[x_{0} =p_{i}(x)+p_{i+1}(x)k,\] \[x_{1} =p_{N-i}(x)+p_{N-i+1}(x)k,\] \[y_{0} =p_{2i+1}(x)+p_{2i+2}(x)k,\] \[y_{1} =p_{N-(2i+1)}(x)+p_{N-2i}(x)k.\] _and where \(0\leq i\leq n-1\)._ Proof.: The proof is similar to that of Theorem 7.1. A direct application of Theorem 9.11 yields a generalization of Theorem 4.5 to Pell polynomials. **Theorem 9.12** (Arithmetic Pell cycles of type I).: _Let \(N=2n-1\), \(n\geq 1\), and \(k\geq 0\). Then the iterates of \(S_{b}\), \(b=p_{N+1}(x)+p_{N+2}(x)k\), yield via (9.10) a cycle with initial element_ \[p_{0}(x)+p_{1}(x)k.p_{N}(x)+p_{N+1}(x)k|_{b}\] _and terminal element_ \[p_{n}(x)+p_{n+1}(x)k.p_{n-1}(x)+p_{n}(x)k|_{b}\] _if \(n\) is even, or terminal element_ \[p_{n-1}(x)+p_{n}(x)k.p_{n}(x)+p_{n+1}(x)k|_{b}\] _if \(n\) is odd._ Proof.: The proof is left to the reader. **Theorem 9.13**.: _Let \(N=2n+1\), \(n\geq 1\), and let \(k\geq 0\). Then_ \[S_{b}(x_{1}.x_{0})=y_{1}.y_{0}, \tag{9.11}\] _where_ \[b =p_{N-1}(x)+p_{N-2}(x)k,\] \[x_{0} =p_{i}(x)+p_{i-1}(x)k,\] \[x_{1} =p_{N-i}(x)+p_{N-i-1}(x)k,\] \[y_{0} =p_{2i-1}(x)+p_{2i-2}(x)k,\] \[y_{1} =p_{N-(2i-1)}(x)+p_{N-2i}(x)k,\] _and where \(1\leq i\leq n+1\)._ Proof.: The proof is similar to that of Theorem 8.1. A direct application of Theorem 9.13 yields a generalization of Theorem 5.5 to Pell polynomials. 
**Theorem 9.14** (Arithmetic Pell cycles of type II).: _Let \(N=2n+1\), \(n\geq 1\), and \(k\geq 0\). Then the iterates of \(S_{b}\), \(b=p_{N-1}(x)+p_{N-2}(x)k\), yield via (9.11) a cycle with initial element_ \[p_{2}(x)+p_{1}(x)k.p_{N-2}(x)+p_{N-3}(x)k|_{b}\] _and terminal element_ \[p_{n}(x)+p_{n-1}(x)k.p_{n+1}(x)+p_{n}(x)k|_{b}\] _if \(n\) is even, or terminal element_ \[p_{n+1}(x)+p_{n}(x)k.p_{n}(x)+p_{n-1}(x)k|_{b}\] _if \(n\) is odd._ Proof.: The proof is similar to that of Theorem 8.1.
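The Pell-polynomial identities invoked above are easy to check symbolically. The following sketch (an illustrative check added here, not part of the original argument; the function name `pell` is ours) builds the Pell polynomials from the recursion (9.1)--(9.2) and verifies the Pell-Cassini identity (9.3) and the type-I identity (9.8) for a few small indices.

```python
import sympy as sp

x = sp.symbols('x')

def pell(n):
    """Pell polynomial p_n(x): p_0 = 0, p_1 = 1, p_n = 2*x*p_{n-1} + p_{n-2}."""
    p = [sp.Integer(0), sp.Integer(1)]
    while len(p) <= n:
        p.append(sp.expand(2 * x * p[-1] + p[-2]))
    return p[n]

# Pell-Cassini identity (9.3): p_n^2 = p_{n+1} p_{n-1} + (-1)^{n+1}
for n in range(1, 8):
    assert sp.expand(pell(n)**2 - pell(n + 1) * pell(n - 1) - (-1)**(n + 1)) == 0

# Type-I identity (9.8): for N = 2n - 1 and 0 <= i <= n - 1, with j = min(i, N - i),
# p_{N-i}^2 + p_i^2 = p_{N-(2j+1)} p_{N+1} + p_{2j+1}
for n in range(1, 6):
    N = 2 * n - 1
    for i in range(0, n):
        j = min(i, N - i)
        lhs = pell(N - i)**2 + pell(i)**2
        rhs = pell(N - (2 * j + 1)) * pell(N + 1) + pell(2 * j + 1)
        assert sp.expand(lhs - rhs) == 0

print("Pell-Cassini (9.3) and type-I identity (9.8) hold for the tested indices.")
```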
2303.04501
Planetary computing for data-driven environmental policy-making
We make a case for "planetary computing" -- infrastructure to handle the ingestion, transformation, analysis and publication of global data products for furthering environmental science and enabling better informed policy-making. We draw on our experiences as a team of computer scientists working with environmental scientists on forest carbon and biodiversity preservation, and classify existing solutions by their flexibility in scalably processing geospatial data, and also how well they support building trust in the results via traceability and reproducibility. We identify research gaps in the intersection of computing and environmental science around how to handle continuously changing datasets that are often collected across decades and require careful access control rather than being fully open access.
Patrick Ferris, Michael Dales, Sadiq Jaffer, Amelia Holcomb, Eleanor Toye Scott, Thomas Swinfield, Alison Eyres, Andrew Balmford, David Coomes, Srinivasan Keshav, Anil Madhavapeddy
2023-03-08T10:46:53Z
http://arxiv.org/abs/2303.04501v2
# A Case for Planetary Computing ###### Abstract. We make a case for _planetary computing_: accessible, interoperable and extensible end-to-end systems infrastructure to process petabytes of global remote-sensing data for the scientific analysis of environmental action. We discuss some pressing scientific scenarios, survey existing solutions and find them incomplete, and present directions for systems research to help reverse the climate and biodiversity crises. ## 1. Introduction There are simultaneous crises across the planet due to rising CO\({}_{2}\) emissions (Hollcomb et al., 2017), rapid biodiversity loss (Sandhi et al., 2017), and desertification (Sandhi et al., 2017). Assessing progress on these complex and interlocked issues requires a global view on the effectiveness of our adaptations and mitigations. To succeed in the coming decades, we need a wealth of new data about our natural environment that we rapidly process into accurate indicators, with sufficient trust in the resulting insights to make decisions that affect the lives of billions of people worldwide. The scale of the problem demands that we shift beyond depending solely on governmental policies. Tackling the climate and biodiversity emergencies now involves ecologists, climate scientists, executives, journalists, and politicians -- all assessing the current environmental state of the world and predicting the impact of changes. They aim to provide information to both policy makers and the public about assessment of ongoing conservation interventions. A global view on planetary health is possible due to the availability of remote sensing data from satellites in orbit (Mohr et al., 2017), drones flying over natural habitats (Sandhi et al., 2017), and networks of ground-based measurement equipment (Hollcomb et al., 2017). However, the _systems_ required to effectively ingest, clean, collate, process, explore, archive, and derive policy decisions from the raw data are presently not usable by non-CS-experts, not reliable enough for scientific and political decision making, and not widely and openly available to all interested parties. As the climate crisis deepens, the feedback loop between environmental hypotheses and resulting policy action is happening faster than ever, which makes it ripe for abuse from bad actors who derive misleading interpretations of observations. We believe that computer systems have a vital role to play in not only powering the processing and understanding of planetary data, but also building public trust in the resulting policy actions by enforcing standards of transparency, reproducibility, accountability and timeliness in the decision making. We first motivate this with scenarios we have gathered from scientists working on environmental science (SS1.1) and distill some common requirements (SS1.2). We find that existing solutions only partially solve the systems problems (SS2), and so discuss directions towards a planetary computing platform that can be used non-CS-expert users (SS3). Our aim is to grow a federated ecosystem that will span individual organisations, and also be survivable beyond any one entity controlling it in the longer term, and be sensitive to the necessity of access control from malicious actors (SS4). ### Motivating Environmental Scenarios _Calculating Extinction Rates_. 
Ecologists assess areas of habitat data to generate worldwide extinction statistics (Sandhi et al., 2017), but must not reveal individual observation points or else species may come under threat from poachers (Sandhi et al., 2017). To generate this aggregate data they combine satellite data (Landsat, MODIS, Copernicus, GEDI (Mohr et al., 2017)) with readings collected manually over decades. The data is highly variable in quality and requires cleaning and normalisation, before machine learning is used to train models to interpolate missing data. Subsequently, the information gleaned from the data is used to direct habitat regeneration and protection efforts, but must be regenerated monthly as new data arrives. When challenged, it should be possible to reveal the provenance of conclusions to auditors, even from decades-old observations. _Land use policy_. Food and fibre production trades off against natural habitats, and understanding where to do this requires jurisdictional land management (Sandhi et al., 2017). A civil servant assessing different methods of evaluating the impact of land use changes on biodiversity needs to access datasets for their country that have a reasonable resolution (<100 metres/pixel and so 100GB/layer storage needed), across all the species on the IUCN extinction list (10000+ entries (Mohr et al., 2017)), and go back 30 years. Similarly, natural resource managers rely on being able to work on zoomed-out/cropped data for interactive and iterative exploration of potential land use policies, and then scale to cluster compute levels for a country-wide run. _Preserving tropical rainforests_. Consider a conservation project protecting millions of hectares of tropical rainforest from illegal logging. A park ranger might wish to subscribe to regularly-updated land use/land cover (LULC) data (LUC) and use an interactive explorer that overlays this with locally gathered information about threatened regions of the forest (such as where an illegal road was discovered (Luo et al., 2017)). They might then feed mobile alerts to security patrols monitoring illegal deforestation (Zhou et al., 2017). This requires continuous integration of remote sensing with local data to generate actionable triggers, and data provenance tracking to audit reliability. ### What our users need We advocate for a system that focuses on supporting environmental scientists, policy makers, journalists, and business specialists as its primary users, enabling them to work on spatial/temporal analysis for the scenarios in (SS1.1) without also having to resort to becoming computer systems experts. They need to access **large-scale input datasets** consisting of: _(i)_ primary observation data (e.g. from NASA (Luo et al., 2017)) that is petabyte-scale or direct measurements; _(ii)_ derived sources from AI-based inference; and _(iii)_ previous results derived by third parties or from earlier runs. They then **express computation** over these datasets that: _(i)_ is either algorithmic or machine learning-based, using a mix of CPUs and GPUs; _(ii)_ needs to autoscale to permit local development followed by global analysis; and _(iii)_ can be expressed by a non-CS expert, ideally with a visual interface or in a language like R or Python. 
The **derived results** must be archived for the long term, while: _(i)_ tracking provenance on input data and enforcing privacy constraints on output results; _(ii)_ being independently verifiable when given access to the source data; and _(iii)_ incrementally recalculating to allow for interactive exploration and incorporation of local data. These requirements translate to the data pipeline shown in Figure 1. This pipeline has the following phases: * **Ingestion**: is the acquisition of remote sensing datasets. Publishers often serve them via adhoc HTTP/FTP servers, requiring polling download scripts. The data formats vary (e.g. GeoJSON, TIFFs) and need to be normalised into a format such as a spatially indexed columnar store. * **Transform**: is the dynamic computation pipeline over the large datasets, usually expressed in multiple languages (e.g. Python, R, and Fortran). Machine learning is often used to interpolate sparse datasets, and dataset size is reduced by spatial-temporal slicing to focus on desired regions. * **Analyse**: is the foreign interface that can be used by external systems, either via API-driven endpoints (e.g. web-hooks), query interfaces, or AI-driven language models. * **Publish**: is the long-term storage of reproducible results (e.g. for scientific publications). It is also useful to provide online notebooks to give non-expert users a rapid development environment with all the data, code and tools. The pipeline is not useful unless it is attentive to core usability constraints of scientists and policy-makers who work across geographies and organisations: * **Extensibility:** Can the user (rather than a technologist) add new functionality to the system by importing libraries or tools to cover new techniques? Can the user easily incorporate the system into existing workflows? * **Accessibility:** Can it be used by anyone without specialised systems knowledge? Does it effectively conceal the system's rough edges from scientists and policy-makers, who should not have to worry about these details? Does it support the patterns most useful for scientists, such as _incremental_ data exploration and exploratory research? The system must also allow its results to survive durably as it will feed science and policy well into the future. * **Traceability:** Results must be traceable through to their data and code inputs using cryptographic techniques, whilst respecting not all inputs may be directly revealed, either because of governance constraints or sensitivity (e.g. species under threat are poaching targets (Zhou et al., 2017)). Constraints must be tracked across intermediate datasets, enforcing privacy requirements in outputs (Luo et al., 2017). * **Explainability:** In order to develop real-world policy based on computational results, the results and algorithms that led to them need to be understandable to a set of non-expert policymakers. This requires clear and concise expression of algorithms and introspection of ML models. Figure 1. Ideal dataflow pipeline for a planetary computing engine * **Reproducibility:** The results that come out of the system need to be independently rerun, which is surprisingly difficult with heterogenous workloads spanning CPU/GPU operations (Rasmal et al., 2017) and libraries varying internal algorithms across releases. ## 2. 
Current State of the Art When surveying existing systems to assist users in achieving the earlier requirements (SS1.2), there are two strategies: use existing end-to-end systems that cover as much of the lifecycle we have identified as possible, or pull together a custom system using off-the-shelf components. Our survey of existing approaches is summarized in Table 1. ### Existing end-to-end solutions Google's Earth Engine (GEE) (S on the GPU to amortise memory transfer costs. Interactions between frameworks can also cause significant performance issues: GDAL (Dalal, 2015), the standard library for reading and writing geo-data, typically suggests an efficient block-size for reading GeoTIFFs that is orientated incompatibly with CuPy's expectations, leading to just one ALU on the GPU being used unless the user knows to redimension their data first. _Publication and traceability._ Despite data lineage in science being of concern for some time (Stein * Tags each input source using Decentralised Information Flow Control (DIFC) labels (Zhou et al., 2017) ensuring that access control checks can be applied at any point later in the pipeline or in a query engine (Zhou et al., 2017). * Performs coordinate transforms and divides and hashes the data into spatial chunks, permitting subsequent version control (Zhou et al., 2017) of subsets. This layout strategy is compatible with both on-premises filesystems (e.g. ZFS (Zhou et al., 2017)) or hosted systems (e.g. S3 (Bahman et al., 2016)) due to being immutable and versioned with deduplication. ### Transformation Ark supports pipeline composition that can depend on earlier phases, using a dynamic dataflow graph (modelled on CIEL (Zhou et al., 2017) and Docker DataKit (Zhou et al., 2017)) to sequence computations that can depend on previous results. We use a dataflow library (Docker et al., 2017) that can perform incremental recomputation of functions. The library approach allows expressing multiple types of deployments ranging from interfacing with C/Python/R code, orchestrating container builds, GPU execution, and results retrieval from external systems (e.g. Google Earth Engine) where no local copy is available. AI-based inference of datasets using deep learning (e.g. Dynamic Earth (Kavli et al., 2017)) is a powerful mechanism to process multiresolution spatial data, but requires scheduling GPU resources (Zhou et al., 2017). Ark can track the inputs and outputs to the training sets for these, including labelled data and hyper-parameters. Although the GPU computation is often non-deterministic for performance reasons, tracking other inputs greatly aids with the reproducibility and explainability. ### Analysis Ark provides streams of results (typically much smaller than the source datasets) to external systems, via GraphQL/REST streaming to allow third-party cloud services to be used by downstream consumers. Ark propagates the DIFC labels from source data to the API endpoints, allowing for entirely private pipelines to be built from a mix of public and private data. This is useful when some local information needs to be mixed with a broader public baseline. ### Publishing Scientific results need to deliver artefacts to openly reproduce claims, and Ark can process the pipelines to generate standalone Docker images that include the subset of source data along with the pipeline source code to rebuild everything independently and (where possible with GPUs) deterministically. 
Since all the data is hashed upon ingestion the Docker layer caching works well (including for private data retrieval), and extensible to other content-addressed data distribution systems such as IPFS (Zhou et al., 2017) or recording on distributed ledgers such as Tezos (Zhou et al., 2017). These container images are also useful for spinning up developer environments (with VSCode (Kavli et al., 2017)) that permit local development. Since it is increasingly practical to perform interactive computation directly in the web browser (e.g. using wasm and WebGPU (Kavli et al., 2017)), we find this to be an important improvement in a simpler developer experience for non-CS-expert users that isn't tied to a single cloud platform. ## 4. A call for collective action We now discuss our vision for building a federated model of planetary computing that meets the requirements stated in SS1.2 and can survive and scale into the coming decades. As with the Internet, such an effort will need a multi-team, multi-disciplinary effort to address some critical open issues, discussed next. Primary users who are not CS expertsScientists and policy-makers are the _raison d'etre_ and primary users of planetary computing, not an afterthought. Their unique needs force us, as systems researchers, to rise to the challenge of designing a computing system that can be used to generate consequential scientific and policy outcomes. This will require use of state-of-the-art systems approaches to remove the burden of dealing with lower-level details, such as data alignment, compute scaling, and concurrency, from users. Reconciling privacy and transparencyExisting systems prioritise openness or transparency at the expense of privacy. However, being too open lets bad actors determine how best to game the system (Zhou et al., 2017). To mitigate this, as one possibility, we suggest the principle of "eventual openness" where data is initially embargoed and eventually made public (Zhou et al., 2017). Moreover, differential privacy (Zhou et al., 2017) and decentralized information flow control (Zhou et al., 2017), might permit some transparency while preserving data privacy even during the sensitive early period. Full query engines that respect the privacy constraints across multiple users are also an emerging area; e.g. the multi-verse database architecture (Zhou et al., 2017). It may also be useful to only partially reveal source data to avoid full disclosure and respect privacy, but allow subsequent auditing by third-parties who are granted access to the source information and can independently verify it, and use permissionless distributed ledgers for immutability (Zhou et al., 2017). A collective framework for planetary computingThe computer systems community has come together in the past to build federated testbeds for emerging technologies, such as PlanetLab (Zhou et al., 2017) in the early days of cloud computing, or Emulab (Zhou et al., 2017) for wireless deployments. Since the inception of the Internet, we have built up collective knowledge about how to foster and scale open source communities, curate open data collections, and drive shared governance mechanisms such as the IETF. We need a similarly ambitious drive to deliver planetary computing - one that eludes capture by any one organisation or provider, that enables the portable exchange of data and algorithms, and ultimately the source of positive and consequential actions as global remote sensing turbocharges our insights into the natural world. 
Such a system needs to build on the knowledge of scaling efficient large scale computer systems we have with global cloud providers, and integrate directly with the largest satellite data providers including national space agencies and private companies. It needs to be accessible to scientists and policymakers from the global south who act as conservators for close to two-thirds of our planet's terrestrial biodiversity, while also incorporating the latest systems research into privacy, traceability and provenance to restrict abuse from bad actors. We invite systems researchers to join us in this critical effort.
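As a concrete illustration of the ingestion and traceability requirements discussed above, the following minimal Python sketch shows content-addressed tiling of a layer together with label propagation into a derived-result manifest. It is a toy model in the spirit of Ark's hashing and DIFC labelling; all class and function names here are hypothetical and do not correspond to any released API.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    """An immutable, content-addressed tile of a larger geospatial layer (illustrative only)."""
    digest: str        # SHA-256 of the tile's bytes
    bounds: tuple      # (row0, row1, col0, col1) in pixel coordinates
    labels: frozenset  # information-flow labels, e.g. {"public"} or {"embargoed"}

def ingest(layer: bytes, rows: int, cols: int, tile: int, labels: set) -> list:
    """Split a row-major single-band layer into tiles and hash each one."""
    chunks = []
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            r1, c1 = min(r0 + tile, rows), min(c0 + tile, cols)
            data = b"".join(layer[r * cols + c0: r * cols + c1] for r in range(r0, r1))
            chunks.append(Chunk(hashlib.sha256(data).hexdigest(), (r0, r1, c0, c1), frozenset(labels)))
    return chunks

def derive(name: str, inputs: list) -> dict:
    """Record a derived result's provenance: input digests plus the union of their labels."""
    manifest = {
        "result": name,
        "inputs": sorted(c.digest for c in inputs),
        "labels": sorted(set().union(*(c.labels for c in inputs))),
    }
    manifest["digest"] = hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()
    return manifest

# Example: a 4x4 toy layer ingested as 2x2 tiles, then a derived product whose manifest
# carries both the input hashes (for reproducibility) and the inherited access labels.
layer = bytes(range(16))
tiles = ingest(layer, rows=4, cols=4, tile=2, labels={"embargoed"})
print(derive("mean_canopy_height", tiles))
```

Because tiles are immutable and addressed by hash, re-running a pipeline over unchanged inputs reproduces identical digests, which is what makes incremental recomputation and third-party verification cheap in this style of design.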
2301.05066
Branching symplectic monogenics using a Mickelsson--Zhelobenko algebra
In this paper we consider (polynomial) solution spaces for the symplectic Dirac operator (with a focus on $1$-homogeneous solutions). This space forms an infinite-dimensional representation space for the symplectic Lie algebra $\mathfrak{sp}(2m)$. Because $\mathfrak{so}(m)\subset \mathfrak{sp}(2m)$, this leads to a branching problem which generalises the classical Fischer decomposition in harmonic analysis. Due to the infinite nature of the solution spaces for the symplectic Dirac operators, this is a non-trivial question: both the summands appearing in the decomposition and their explicit embedding factors will be determined in terms of a suitable Mickelsson-Zhelobenko algebra.
David Eelbode, Guner Muarem
2023-01-12T15:03:09Z
http://arxiv.org/abs/2301.05066v1
# Branching symplectic monogenics using a Mickelsson-Zhelobenko algebra ###### Abstract In this paper we consider (polynomial) solution spaces for the symplectic Dirac operator (with a focus on \(1\)-homogeneous solutions). This space forms an infinite-dimensional representation space for the symplectic Lie algebra \(\mathfrak{sp}(2m)\). Because \(\mathfrak{so}(m)\subset\mathfrak{sp}(2m)\), this leads to a branching problem which generalises the classical Fischer decomposition in harmonic analysis. Due to the infinite nature of the solution spaces for the symplectic Dirac operators, this is a non-trivial question: both the summands appearing in the decomposition and their explicit embedding factors will be determined in terms of a suitable Mickelsson-Zhelobenko algebra. Branching, Symplectic Dirac operator, Mickelsson-Zhelobenko algebra, simplicial harmonics. ## 1 Introduction The Dirac operator is a first-order differential operator acting on spinor-valued functions which factorises the Laplace operator \(\Delta\) on \(\mathbb{R}^{m}\). It was originally introduced by Dirac in a famous attempt to factorise the wave operator, hence obtaining a relativistically invariant version of the Schrodinger equation. Since then, this operator has played a crucial role in mathematical domains such as representation theory and Clifford analysis. The latter is a multidimensional function theory which is often described as a refinement of harmonic analysis, and a generalisation of complex analysis. It is centred around a generalisation of the operator introduced by Dirac (his operator \(\not{\partial}\) is defined in \(4\) dimensions), and can be seen as a contraction between the generators \(e_{k}\) for a Clifford algebra (acting as endomorphisms on so-called spinors) and corresponding partial derivatives \(\partial_{x_{k}}\). To be more precise, introducing the Clifford algebra by means of the defining relations ###### Abstract We consider the \(k\)th-symmetric power of the fundamental vector representation (modelled by polynomials), and the symplectic spinor space \(\mathbb{S}_{0}^{\infty}\) (also referred to as the Segal-Shale-Weil representation). These spaces contain \(k\)-homogeneous \(\mathbb{S}_{0}^{\infty}\)-valued solutions for the symplectic Dirac operator. The behaviour of these spaces as representations for \(\mathfrak{sp}(2m)\) is known (see e.g. [1] and the references therein), but in this paper we will look at these spaces as _orthogonal_ representation spaces. This is motivated by the fact that \(\mathfrak{so}(m)\subset\mathfrak{sp}(2m)\), which means that we are dealing with a branching problem. ## 1 Introduction Let \(\mathfrak{g}\) be a \(k\)-dimensional vector space and \(\mathfrak{g}\) be a \(k\)-dimensional vector space. The \(k\)th-symmetric power of the fundamental vector representation (modelled by polynomials) is defined as \[\mathbb{S}_{k}^{\infty}=\mathcal{M}_{k}^{s}(\mathbb{R}^{2m},\mathbb{S}_{0}^{ \infty}):=\mathcal{P}_{k}(\mathbb{R}^{2m},\mathbb{C})\boxtimes\mathbb{S}_{0}^{ \infty}\ \ \ \ (k\in\mathbb{N}).\] Here \(\boxtimes\) denotes the Cartan product of the \(\mathfrak{sp}(2m)\)-representations \(\mathcal{P}_{k}(\mathbb{R}^{2m},\mathbb{C})\), the \(k\)th-symmetric power of the fundamental vector representation (modelled by polynomials), and the symplectic spinor space \(\mathbb{S}_{0}^{\infty}\) (also referred to as the Segal-Shale-Weil representation). 
These spaces contain \(k\)-homogeneous \(\mathbb{S}_{0}^{\infty}\)-valued solutions for the symplectic Dirac operator. The behaviour of these spaces as representations for \(\mathfrak{sp}(2m)\) is known (see e.g. [1] and the references therein), but in this paper we will look at these spaces as _orthogonal_ representation spaces. This is motivated by the fact that \(\mathfrak{so}(m)\subset\mathfrak{sp}(2m)\), which means that we are dealing with a branching problem. In general, a branching problem can be described as follows: given a representation \(\rho\) of a Lie algebra \(\mathfrak{g}\) and a subalgebra \(\mathfrak{h}\), we would like to understand how the representation \(\rho\) behaves as a \(\mathfrak{h}\)-representation. This restricted representation \(\rho_{|\mathfrak{h}}\) will no longer be irreducible, but will decompose into \(\mathfrak{h}\)-irreducible representations. A branching rule then describes the irreducible pieces which will occur, together with their multiplicities. For the symplectic spinors (i.e. for the space \(\mathbb{S}_{0}^{\infty}\)), this gives the Fischer decomposition in harmonic analysis, which means that the branching problem for \(\mathbb{S}_{k}^{\infty}\) leads to generalisations thereof. To describe the branching of the infinite-dimensional symplectic representation space \(\mathbb{S}_{k}^{\infty}\) under the inclusion \(\mathfrak{so}(m)\subset\mathfrak{sp}(2m)\), we will make use of a quadratic algebra which is known as a Mickelson-Zhelobenko algebra (see [9] for the general construction and properties). ## 2 The symplectic Dirac operator and monogenics We will work with the symplectic space \(\mathbb{R}^{2m}\) and coordinates \((\underline{x},\underline{y})\) equipped with the canonical symplectic form \(\omega_{0}=\sum_{j=1}^{m}dx_{j}\wedge dy_{j}\). The matrix representation of the symplectic form is given by \[\Omega_{0}=\begin{pmatrix}0&\mathrm{Id}_{m}\\ -\mathrm{Id}_{m}&0\end{pmatrix}.\] The group consisting of all invertible linear transformations preserving this non-degenerate skew-symmetric bilinear form is called the symplectic group and is formally defined as follows: \[\mathsf{Sp}(2m,\mathbb{R})=\{M\in\mathsf{GL}(2m,\mathbb{R})\mid M^{T}\Omega_ {0}M=\Omega_{0}\}.\] This is a non-compact group of dimension \(2m^{2}+m\). Its (real) Lie algebra will be denoted by \(\mathfrak{sp}(2m,\mathbb{R})\). In the orthogonal case, the spin group determined by the sequence \[1\to\mathbb{Z}_{2}\to\mathsf{Spin}(m)\to\mathsf{SO}(m)\to 1\] plays a crucial role concerning the invariance of the Dirac operator \(\underline{\partial}_{x}\) and the definition of the spinors \(\mathbb{S}\). In the symplectic case, this role is played by the metaplectic group \(\mathsf{Mp}(2m,\mathbb{R})\) fixed by the exact sequence \[1\to\mathbb{Z}_{2}\to\mathsf{Mp}(2m,\mathbb{R})\to\mathsf{Sp}(2m,\mathbb{R}) \to 1.\] Despite the analogies, there are some fundamental differences: * First of all, the group \(\mathsf{SO}(m)\) is compact, whereas \(\mathsf{Sp}(2m,\mathbb{R})\) is not. This has important consequences for the representation theory. As a matter of fact, the metaplectic group is not a matrix group and does not admit (faithful) finite-dimensional representations. * The orthogonal spinors \(\mathbb{S}\) can be realised as a maximal left ideal in the Clifford algebra, but this is not the case for the symplectic spinors. The latter are often modelled as smooth vectors in the infinite-dimensional Segal-Shale-Weil representation (see [7] and the references therein). 
One can also identify the symplectic spinor space \(\mathbb{S}_{0}^{\infty}\) with the space \(\mathcal{P}(\mathbb{R}^{m},\mathbb{C})\) of polynomials in the variables \((z_{1},\ldots,z_{m})\in\mathbb{R}^{m}\), which is the approach we will use in this paper. **Definition 2.1**.: Let \((V,\omega)\) be a symplectic vector space. The _symplectic Clifford algebra_\(\mathsf{Cl}_{s}(V,\omega)\) is defined as the quotient algebra of the tensor algebra \(T(V)\) of \(V\) by the two-sided ideal \(\underline{u},\underline{v}\in V\). In other words \(\mathsf{Cl}_{s}(V,\omega):=T(V)/\mathcal{I}_{\omega}\) is the algebra generated by \(V\) in terms of the relation \([\underline{v},\underline{u}]=-\omega(\underline{v},\underline{u})\), where we have omitted the tensor product symbols. Denote by \(\langle\underline{u},\underline{v}\rangle:=\sum_{k=1}^{m}u_{k}v_{k}\) the canonical inner product on \(\mathbb{R}^{m}\) (where we allow partial derivatives to appear as coefficients, see the operators below). We then define the following operators acting on polynomial functions in \(\mathcal{P}(\mathbb{R}^{3m},\mathbb{C})\): 1. The symplectic Dirac operator \(D_{s}=\langle\underline{z},\underline{\partial}_{y}\rangle-\langle\underline {\partial}_{x},\underline{\partial}_{z}\rangle\). 2. The adjoint operator \(X_{s}=\langle\underline{y},\underline{\partial}_{z}\rangle+\langle\underline {x},\underline{z}\rangle\) with respect to the symplectic Fischer product (see Section 5 of [2] for more details). 3. The Euler operator \(\mathbb{E}=\sum_{j=1}^{m}(x_{j}\partial_{x_{j}}+y_{j}\partial_{y_{j}})= \mathbb{E}_{x}+\mathbb{E}_{y}\) measuring the degree of homogeneity in the base variables \((\underline{x},\underline{y})\in\mathbb{R}^{2m}\). Note that some authors use the notation \(\langle\nabla_{x},\nabla_{y}\rangle\) for an expression such as \(\sum_{k}\partial_{x_{k}}\partial_{y_{k}}\), but we will use the Dirac operator symbol here instead of the nabla operator. The three operators \(X=\sqrt{2}D_{s}\), \(Y=\sqrt{2}X_{s}\) and their commutator \(H=[X,Y]=-2(\mathbb{E}_{x}+\mathbb{E}_{y}+m)\) give rise to a copy of the Lie algebra \(\mathfrak{sl}(2)\). One now easily sees that the symplectic Dirac operator is nothing more than the contraction between the Weyl algebra generators \((z_{k},\partial_{z_{k}})\) with the vector fields \((\partial_{x_{k}},\partial_{y_{k}})\) for \(k=1,\ldots,m\) using the canonical symplectic form \(\Omega_{0}\). The space of \(k\)-homogeneous symplectic monogenics is defined by \(\mathbb{S}_{k}^{\infty}:=\ker(D_{s})\cap\big{(}\mathcal{P}_{k}(\mathbb{R}^{2m },\mathbb{C})\otimes\mathcal{P}(\mathbb{R}^{m},\mathbb{C})\big{)}\), where the space \(\mathcal{P}(\mathbb{R}^{m},\mathbb{C})\) in the vector variable \(\underline{z}\in\mathbb{R}^{m}\) plays the role of the symplectic spinor space \(\mathbb{S}_{0}^{\infty}\). Note that as an \(\mathfrak{sp}(2m,\mathbb{R})\)-module, \(\mathbb{S}_{k}^{\infty}\) is reducible and decomposes into two irreducible parts: \(\mathbb{S}_{k}^{\infty}=\mathbb{S}_{k,+}^{\infty}\oplus\mathbb{S}_{k,-}^{\infty}\) with highest weights \[\mathbb{S}_{k,+}^{\infty}\longleftrightarrow\left(k-\frac{1}{2},-\frac{1}{2 },\ldots,-\frac{1}{2}\right)\quad\text{and}\quad\mathbb{S}_{k,+}^{\infty} \longleftrightarrow\left(k-\frac{1}{2},-\frac{1}{2},\ldots,-\frac{3}{2} \right).\] These weight entries are fixed by the Cartan algebra \(\mathfrak{h}=\mathsf{Alg}(X_{jj}:1\leq j\leq m)\), where the elements \(X_{jj}\) are defined in the lemma below. 
In this paper, we will omit the parity signs and work with \(\mathbb{S}_{k}^{\infty}\) as a notation which incorporates both the positive and negative spinors (in our model, this will correspond to even or odd in the variable \(\underline{z}\in\mathbb{R}^{m}\), see below, so it is always easy to 'decompose' into irreducible components when necessary). The three operators from Lemma 2.2 can be proven to be invariant under the action of the symplectic Lie algebra, in the sense that they commute with the following generators (see also Lemma 3.3 in [3]): **Lemma 2.5**.: _The symplectic Lie algebra \(\mathfrak{sp}(2m)\) has the following realisation on the space of symplectic spinor-valued polynomials \(\mathcal{P}(\mathbb{R}^{2m},\mathbb{C})\otimes\mathbb{S}_{0}^{\infty}\):_ \[\begin{cases}X_{jk}=x_{j}\partial_{x_{k}}-y_{k}\partial_{y_{j}}-(z_{k}\partial_{ z_{j}}+\frac{1}{2}\delta_{jk})&1\leq j,k\leq m\\ Y_{jk}=x_{j}\partial_{y_{k}}+x_{k}\partial_{y_{j}}-\partial_{z_{j}}\partial_{z _{k}}&1\leq j<k\leq m\\ Z_{jk}=y_{j}\partial_{x_{k}}+y_{k}\partial_{x_{j}}+z_{j}z_{k}&1\leq j<k\leq m\\ Y_{jj}=x_{j}\partial_{y_{j}}-\frac{1}{2}\partial_{z_{j}}^{2}&1\leq j\leq m\\ Z_{jj}=y_{j}\partial_{x_{j}}+\frac{1}{2}z_{j}^{2}&1\leq j\leq m\\ \end{cases} \tag{2.1}\] The branching rule for \(\mathbb{S}_{0}^{\infty}\), when considering it as a representation space for the orthogonal Lie algebra \(\mathfrak{so}(\mathfrak{m})\subset\mathfrak{sp}(2m)\), leads to the Fischer decomposition for \(\mathbb{C}\)-valued polynomials in the variable \(\underline{z}\in\mathbb{R}^{m}\) (see below). Note that \(\mathfrak{so}(m)\) is generated by the operators \(X_{jk}-X_{kj}\) for \(1\leq j<k\leq m\), giving rise to the well-known angular operators ubiquitous in quantum mechanics (often denoted by \(L_{ab}\) with \(1\leq a<b\leq m\)). In our previous paper [3], we therefore tackled the next case \(k=1\) as this is a natural generalisation of said Fischer decomposition. The main problem with our branching rule (Theorem 5.6 in [3]) is the fact that these \(\mathfrak{so}(m)\)-spaces appear with infinite multiplicities, which are not always easy to keep track of. Therefore the main goal of this paper is to show that one can organise these in an algebraic framework which extends to other values for \(k\) too, using a certain quadratic algebra. ## 3 Simplicial harmonics in three vector variables In this section we describe a generalisation of harmonic polynomials, in three vector variables. This will be done in terms of a solution space for a 'natural' collection of \(\mathfrak{so}(m)\)-invariant differential operators. The corresponding Howe dual pair will be useful for the branching problem addressed above. For the sake of completeness, we recall the following basic definition: **Definition 3.1**.: A function \(f(\underline{x})\) on \(\mathbb{R}^{m}\) is called _harmonic_ if \(\Delta f(\underline{x})=0\). The \(k\)-_homogeneous harmonics_ are defined as \(\mathcal{H}_{k}(\mathbb{R}^{m},\mathbb{C}):=\mathcal{P}_{k}(\mathbb{R}^{m}, \mathbb{C})\cap\ker(\Delta)\). These spaces define irreducible representations for \(\mathfrak{so}(m)\) with highest weight \((k,0,\ldots,0)\) for all \(k\in\mathbb{Z}^{+}\). It is well-known that the space of \(k\)-homogeneous polynomials \(\mathcal{P}_{k}(\mathbb{R}^{m},\mathbb{C})\) is reducible as an \(\mathfrak{so}(m)\)-module (see for example [4]) and decomposes into harmonic polynomials. 
In fact, the decomposition of the _full_ space of polynomials is known as the aforementioned _Fischer decomposition_, given by \[\mathcal{P}(\mathbb{R}^{m},\mathbb{C})=\bigoplus_{k=0}^{\infty}\mathcal{P}_{k} (\mathbb{R}^{m},\mathbb{C})=\bigoplus_{k=0}^{\infty}\bigoplus_{p=0}^{\infty}| \underline{z}|^{2p}\mathcal{H}_{k}(\mathbb{R}^{m},\mathbb{C}).\] This can all be generalised to the case of several vector variables (sometimes also called 'a matrix variable'): for any highest weight for \(\mathfrak{so}(m)\) there is a (polynomial) model in terms of simplicial harmonics (or monogenics for the half-integer representations). We refer to [8] for more details. In this paper, we will consider these spaces for \(\mathfrak{so}(m)\)-weights characterised by three integers \((a,b,c)\) where \(a\geq b\geq c\geq 0\). Also note that trailing zeros in the weight notation will be omitted from now on, so for instance \((k,0,\ldots,0)\) will be written as \((k)\). First of all, we consider homogeneous polynomials \(P_{a,b,c}(z;\underline{x},\underline{y})\) in three vector variables \((\underline{z};\underline{x},\underline{y})\in\mathbb{R}^{3m}\). Here we use the notation \((\underline{z};\underline{x},\underline{y})\) to stress the difference between the variable \(\underline{z}\) (the spinor variable, referring to an element in \(\mathbb{S}_{0}^{\infty}\)) from the other two variables \((\underline{x},\underline{y})\in\mathbb{R}^{2m}\), which are 'ordinary' variables. The parameters \((a,b,c)\) then refer to the degrees of homogeneity in \((\underline{z};\underline{x},\underline{y})\). These polynomials carry the regular representation of the orthogonal group (or the derived \(\mathfrak{so}(m)\)-action in terms of angular momentum operators \(L_{ab}\) from above). We further introduce the Weyl algebra in three vector variables as the algebra generated by the variables and their corresponding derivatives: \[\mathcal{W}(\mathbb{R}^{3m},\mathbb{C}):=\mathsf{Alg}(x_{\alpha},y_{\beta},z_{ \gamma},\partial_{x_{\delta}},\partial_{y_{\epsilon}},\partial_{z_{\zeta}}) \ \ \text{with}\ \alpha,\beta,\gamma,\delta,\varepsilon,\zeta\in\{1,\ldots,m\}\.\] Just like in the case of the classical Fischer decomposition, where the Lie algebra \(\mathfrak{sl}(2)\) appears as a Howe dual partner, there is a Lie algebra appearing here. To be precise, it is the Lie algebra \(\mathfrak{sp}(6)=\mathfrak{g}_{-2}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{+2}\), with parabolic subalgebra \(\mathfrak{p}:=\mathfrak{g}_{-2}\oplus\mathfrak{g}_{0}\) and Levi subalgebra \(\mathfrak{g}_{0}\cong\mathfrak{gl}(3)\). The subspaces \(\mathfrak{g}_{\pm 2}\) contain six 'pure' operators each (i.e. only variables, acting as a multiplication operator, or only derivatives). 
More specifically, the subspaces are spanned by the following \(\mathsf{SO}(m)\)-invariant operators: \[\mathfrak{g}_{-2}:= \operatorname{span}(\Delta_{x},\Delta_{y},\Delta_{z},\langle \underline{\partial}_{x},\underline{\partial}_{y}\rangle,\langle\underline{ \partial}_{y},\underline{\partial}_{z}\rangle,\langle\underline{\partial}_{x}, \underline{\partial}_{z}\rangle)\] \[\mathfrak{g}_{0}:= \operatorname{span}(\langle\underline{x},\underline{\partial}_{y} \rangle,\langle\underline{y},\underline{\partial}_{x}\rangle,\langle \underline{x},\underline{\partial}_{z}\rangle,\langle\underline{z}, \underline{\partial}_{x}\rangle,\langle\underline{y},\underline{\partial}_{z }\rangle,\langle\underline{z},\underline{\partial}_{y}\rangle,\mathbb{E}_{x}, \mathbb{E}_{y},\mathbb{E}_{z})\] \[\mathfrak{g}_{+2}:= \operatorname{span}(|\underline{x}|^{2},|\underline{y}|^{2},| \underline{z}|^{2},\langle\underline{x},\underline{y}\rangle,\langle \underline{y},\underline{z}\rangle,\langle\underline{x},\underline{z}\rangle)\] The space of _Howe harmonics_ of degree \((a,b,c)\) in the variables \((\underline{z},\underline{x},\underline{y})\) is defined as \(\mathcal{H}_{a,b,c}^{*}(\mathbb{R}^{3m},\mathbb{C}):=\mathcal{P}_{a,b,c}( \mathbb{R}^{3m},\mathbb{C})\cap\ker(\mathfrak{g}_{-2})\). In what follows the notation \(\ker(A_{1},\ldots,A_{n})\) stands for \(\ker(A_{1})\cap\ldots\cap\ker(A_{n})\), so \(\ker(\mathfrak{g}_{-2})\) means that simplicial harmonics are annihilated by all (pure differential) operators in \(\mathfrak{sp}(6)\). As a representation space for \(\mathfrak{so}(m)\), the spaces \(\mathcal{H}_{a,b,c}^{*}\) are _not_ irreducible. In order to obtain an irreducible (sub)space, we have to impose extra conditions. The vector space of _simplicial harmonics_ of degree \((a,b,c)\) in the variables \((\underline{z},\underline{x},\underline{y})\) is defined by means of \[\mathcal{H}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C}):=\mathcal{H}_{a,b,c}^{*}( \mathbb{R}^{3m},\mathbb{C})\cap\ker\left(\langle\underline{z},\underline{ \partial}_{x}\rangle,\langle\underline{z},\underline{\partial}_{y}\rangle, \langle\underline{x},\underline{\partial}_{y}\rangle\right)\.\] As was shown in [8], this defines an irreducible representation space for \(\mathfrak{so}(m)\) with highest weight \((a,b,c)\), where the dominant weight condition \(a\geq b\geq c\) must hold. This now leads to the following generalisation of the result above (the Fisher decompostion in three vector variables): **Theorem 3.4**.: _The space \(\mathcal{P}(\mathbb{R}^{3m},\mathbb{C})\) of complex-valued polynomials in three vector variables (in \(\mathbb{R}^{m}\)) has a multiplicity-free decomposition under the action of \(\mathfrak{sp}(6)\times\mathsf{SO}(m)\) by means of:_ \[\mathcal{P}(\mathbb{R}^{3m},\mathbb{C})\cong\bigoplus_{a\geq b\geq c}\mathbb{ V}_{a,b,c}^{\infty}\otimes\mathcal{H}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C}),\] _where we used the dominant weight condition in the summation. The notation \(\mathbb{V}_{a,b,c}^{\infty}\) hereby refers to a Verma module (see for example [6]) for \(\mathfrak{sp}(6)\)._ ## 4 The Mickelsson-Zhelobenko algebra (general setup) We have now introduced 21 differential operators giving rise to a realisation of the Lie algebra \(\mathfrak{sp}(6)\) inside the Weyl algebra (on 3 vector variables in \(\mathbb{R}^{m}\)). In this section we construct a related algebra, the so-called Mickelsson-Zhelobenko algebra (also called transvector or step algebra) \(\mathcal{Z}\). 
Let \(\mathfrak{g}\) be a Lie algebra and let \(\mathfrak{s}\subset\mathfrak{g}\) be a reductive subalgebra. We then have the decomposition \(\mathfrak{g}=\mathfrak{s}\oplus\mathfrak{t}\), where \(\mathfrak{t}\) carries an \(\mathfrak{s}\)-action for the commutator (i.e. \([\mathfrak{s},\mathfrak{t}]\subset\mathfrak{t}\)). For \(\mathfrak{s}\) we then fix a triangular decomposition \(\mathfrak{s}=\mathfrak{s}^{-}\oplus\mathfrak{h}\oplus\mathfrak{s}^{+}\), where \(\mathfrak{s}^{\pm}\) consists of the positive (resp. negative roots) with respect to the Cartan subalgebra \(\mathfrak{h}\subset\mathfrak{s}\). We then also define a left ideal \(J\subset\mathcal{U}(\mathfrak{g})\) in the universal enveloping algebra \(\mathcal{U}(\mathfrak{g})\) by means of \(\mathcal{U}(\mathfrak{g})\mathfrak{s}^{+}\). This allows us to define a certain subalgebra of \(\mathcal{U}(\mathfrak{g})\) which is known as the normaliser: \[\mathrm{Norm}(J):=\{u\in\mathcal{U}(\mathfrak{g})\mid Ju\subset J\}.\] The crucial point is that \(J\) is a two-sided ideal of \(\mathrm{Norm}(J)\), which allows us two define the quotient algebra \(\mathcal{S}(\mathfrak{g},\mathfrak{s})=\mathrm{Norm}(J)/J\) which is known as the _Mickelsson algebra_. In a last step of the construction, we consider an extension of \(\mathcal{U}(\mathfrak{g})\) to a suitable localisation \(\mathcal{U}^{\prime}(\mathfrak{g})\) given by \[\mathcal{U}^{\prime}(\mathfrak{g})=\mathcal{U}^{\prime}(\mathfrak{g})\otimes_ {\mathcal{U}(\mathfrak{h})}\mathrm{Frac}(\mathcal{U}(\mathfrak{h}))\,\] where \(\mathrm{Frac}(\mathcal{U}(\mathfrak{h}))\) is the field of fractions in the (universal enveloping algebra of the) Cartan algebra. The ideal \(J^{\prime}\) can be introduced for this extension too (in a completely similar way) and the corresponding quotient algebra \(\mathcal{Z}(\mathfrak{g},\mathfrak{s}):=\mathrm{Norm}(J^{\prime})/J^{\prime}\) is the _Mickelsson-Zhelobenko algebra_. These two algebras are naturally identified, since one has that \[\mathcal{Z}(\mathfrak{g},\mathfrak{s})=\mathcal{S}(\mathfrak{g},\mathfrak{s} )\otimes_{\mathcal{U}(\mathfrak{h})}\mathrm{Frac}(\mathcal{U}(\mathfrak{h}))\.\] Note that this algebra is sometimes referred to as a 'transvector algebra', which is what we will often use in what follows. ## 5 The Mickelsson-Zhelobenko algebra \(\mathcal{Z}(\mathfrak{sp}(6),\mathfrak{so}(4))\) We will now define a specific example of the construction from above, which will help us to understand how the branching of \(\mathbb{S}_{k}^{\infty}\) works. First of all, we note the following: **Lemma 5.1**.: _The three (orthogonally invariant) operators_ \[L:=\langle\underline{x},\underline{\partial}_{y}\rangle-\frac{1}{2}\Delta_{z} \qquad R:=\langle\underline{y},\underline{\partial}_{x}\rangle+\frac{1}{2}| \underline{z}|^{2}\qquad\mathcal{E}:=\mathbb{E}_{y}-\mathbb{E}_{x}+\mathbb{E}_ {z}+\frac{n}{2}\] _give rise to yet another copy of the Lie algebra \(\mathfrak{sl}(2)\). This Lie algebra commutes with the Lie algebra \(\mathfrak{sl}(2)\cong\operatorname{Alg}(D_{s},X_{s})\)._ This thus means that we have now obtained a specific realisation for the Lie algebra \(\mathfrak{so}(4)\cong\operatorname{Alg}(D_{s},X_{s})\oplus\operatorname{Alg}( L,R)\cong\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)\) which appears as a subalgebra of \(\mathfrak{sp}(6)\). This algebra will play the role of \(\mathfrak{s}\) from Section 4. 
Let us therefore consider the lowest weight vectors in \(\mathfrak{so}(4)\): \[Y_{1}=D_{s}=\langle\underline{z},\underline{\partial}_{y}\rangle-\langle \underline{\partial}_{z},\underline{\partial}_{x}\rangle\quad\text{and}\quad Y _{2}=L=\langle\underline{x},\underline{\partial}_{y}\rangle-\frac{1}{2}\Delta _{z}\.\] We will focus on the solutions of both lowest weight vectors, i.e. \(\ker(D_{s},L)\). Note that the operators in \(\mathfrak{sp}(6)\) do not necessarily act as endomorphisms on this space, but the transvector framework allows us to'replace' these operators by (related) transvector algebra generators which do act as endomorphisms. We start with proving the reductiveness of the algebra \(\mathfrak{so}(4)\) in \(\mathfrak{sp}(6)\). **Lemma 5.2**.: _The Lie algebra \(\mathfrak{so}(4)\) is reductive in \(\mathfrak{sp}(6)\)._ Proof.: We need to show that \(\mathfrak{sp}(6)\) decomposes as \(\mathfrak{so}(4)+\mathfrak{t}\), where the subspace \(\mathfrak{t}\) carries an action of \(\mathfrak{so}(4)\). For that purpose we introduce the following 15 (linearly independent) differential operators: \[\begin{array}{ccccc}\Delta_{x}&\langle\underline{z},\underline{\partial}_{x} \rangle&\langle\underline{y},\underline{\partial}_{x}\rangle-|\underline{z}|^{ 2}&\langle\underline{y},\underline{z}\rangle&|\underline{y}|^{2}\\ \langle\underline{\partial}_{x},\underline{\partial}_{y}\rangle&\langle \underline{z},\underline{\partial}_{y}\rangle+\langle\underline{\partial}_{z},\underline{\partial}_{x}\rangle&\mathbb{E}_{x}-\mathbb{E}_{y}+2\mathbb{E}_{z }+m&\langle\underline{x},\underline{z}\rangle-\langle\underline{y}, \underline{\partial}_{z}\rangle&\langle\underline{x},\underline{y}\rangle\\ \Delta_{y}&\langle\underline{\partial}_{y},\underline{\partial}_{z}\rangle& \langle\underline{x},\underline{\partial}_{y}\rangle+\Delta_{z}&\langle \underline{x},\underline{\partial}_{z}\rangle&|\underline{x}|^{2}\end{array}\] It is now a straightforward computation to check that for each of these operators the commutator with one of the operators in \(\mathfrak{so}(4)\) is again a linear combination of the operators above. In order to construct the generators for the algebra \(\mathcal{Z}(\mathfrak{g},\mathfrak{s})\) with \(\mathfrak{g}=\mathfrak{sp}(6)\) and \(\mathfrak{s}=\mathfrak{so}(4)\), we need the following: **Definition 5.3**.: The _extremal projector_ for the Lie algebra \(\mathfrak{sl}(2)=\operatorname{Alg}(X,Y,H)\) is the idempotent operator \(\pi\) given by the (formal) expression \[\pi:=1+\sum_{j=1}^{\infty}\frac{(-1)^{j}}{j!}\frac{\Gamma(H+2)}{\Gamma(H+2+j)}Y^ {j}X^{j}. \tag{5.1}\] This operator satisfies \(X\pi=\pi Y=0\) and \(\pi^{2}=\pi\). Note that this operator is defined on the extension \(\mathcal{U}^{\prime}(\mathfrak{sl}(2))\) of the universal enveloping algebra defined earlier, so that formal series containing the operator \(H\) in the denominator are well-defined (in practice it will always reduce to a _finite_ summation). **Lemma 5.4**.: _The extremal projector \(\pi_{\mathfrak{so}(4)}\) is given by the product of the extremal projectors for the Lie algebras \(\mathfrak{sl}(2)\), i.e. \(\pi_{\mathfrak{so}(4)}=\pi_{D_{s}}\pi_{L}=\pi_{L}\pi_{D_{s}}\) (the operator appearing as an index here refers to the realisation for \(\mathfrak{sl}(2)\) that was used)._ Proof.: This is due to the fact that the two copies of \(\mathfrak{sl}(2)\) commute. 
The operator \(\pi_{\mathfrak{so}(4)}\) is thus explicitly given by \[\left(1+\sum_{j=1}^{\infty}\frac{(-1)^{j}}{j!}\frac{\Gamma(\mathbb{E}+2)}{ \Gamma(\mathbb{E}+2+j)}X_{s}^{j}D_{s}^{j}\right)\left(1+\sum_{j=1}^{\infty} \frac{(-1)^{j}}{j!}\frac{\Gamma(\mathcal{E}+2)}{\Gamma(\mathcal{E}+2+j)}R^{j}L ^{j}\right)\] and satisfies \(D_{s}\pi_{\mathfrak{so}(4)}=L\pi_{\mathfrak{so}(4)}=0=\pi_{\mathfrak{so}(4)}X_ {s}=\pi_{\mathfrak{so}(4)}R\). This means that we now have a natural object that can be used to project polynomials on the intersection of the kernel of the operators \(D_{s}\) and \(L\). The 15 operators in \(\mathfrak{t}\subset\mathfrak{sp}(6)\) as such do not preserve this kernel space (as these operators do not necessarily commute with \(D_{s}\) and \(L\)), but their projections will belong to \(\operatorname{End}(\ker(D_{s},L))\). In what follows we will use the notation \(Q_{a,b}\), where \(a\in\{\pm 2,0\}\) and \(b\in\{\pm 4,\pm 2,0\}\), to denote the operators in \(\mathfrak{t}\) (see Lemma 5.2, and the scheme below). For each operator \(Q_{a,b}\) we then also define an associated operator \(\mathbb{P}_{a,b}:=\pi_{\mathfrak{so}(4)}Q_{a,b}\). For instance \(\mathbb{P}_{4,-2}=\pi_{\mathfrak{so}(4)}|\underline{y}|^{2}\). The \(\mathbb{P}\)-operators will then be used to define the generators for our transvector algebra. The diagram below should then be seen as the analogue of the 15 operators \(Q_{a,b}\) given above, grouped into a \(5\times 3\) rectangle, where each operator \(\alpha\in\mathfrak{t}\) carries a label. The meaning of the labels \((a,b)\) comes from the observation that \(\mathfrak{t}\cong\mathbb{V}_{4}\otimes\mathbb{V}_{2}\) as a representation for \(\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)\), with \(\mathbb{V}_{n}\) the standard notation for the irreducible representation of dimension \((n+1)\). Given an operator \(\alpha\in\mathfrak{t}\), the numbers \(a\) and \(b\) can thus be retrieved as eigenvalues for the commutator action of the Cartan elements in \(\mathfrak{so}(4)\). Note that the projection operator \(\mathfrak{so}(4)\) commutes with these Cartan elements (i.e. the operators \(Q_{a,b}\) and \(\mathbb{P}_{a,b}\) indeed carry the same labels). Despite the fact that \(\mathcal{Z}(\mathfrak{sp}(6),\mathfrak{so}(4))\) is _not_ a Lie algebra, we have organised these operators in such a way that the notions of 'positive' and 'negative' roots can be used. To be more precise: black dots (resp. grey dots) refer to negative (resp. positive) operators, and the white dot plays the role of a 'Cartan element' (this analogy will come in handy below). The 7 black dots (resp. 7 grey dots) will be referred to as operators in \(\rho^{-}\) (resp. in \(\rho^{+}\)). Together with the operator \(\mathbb{P}_{0,0}\) we then get the set \[\mathcal{G}_{\mathcal{Z}}=\{\mathbb{P}_{a,b}:a\in\{\pm 2,0\},b\in\{\pm 4,\pm 2,0\}\},\] containing all the generators for the transvector algebra \(\mathcal{Z}(\mathfrak{sp}(6),\mathfrak{so}(4))\). Due to a general result by Zhelobenko, these generators then satisfy _quadratic_ relations (i.e. different from the classical Lie brackets). In the next theorem, we will relate the spaces \(\mathcal{H}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C})\) introduced in Definition 3.3 to the space of polynomial solutions for the symplectic Dirac operator \(D_{s}\), the lowering operator \(L\) and the negative 'roots' \(\rho^{-}\) which we have just introduced (i.e. the operators \(\mathbb{P}_{a,b}\) corresponding to black dots). 
**Theorem 5.5**.: _The solutions for the operators \(D_{s}\) and \(L\) and the negative roots \(\rho^{-}\subset\mathcal{G}_{\mathcal{Z}}\) which are homogeneous of degree \((a,b,c)\) in the variables \((\underline{z},\underline{x},\underline{y})\) are precisely given by the simplicial harmonics \(\mathcal{H}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C})\). In other words, we have:_ \[\mathcal{P}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C})\cap\ker(D_{s},L,\rho^{-})= \mathcal{H}_{a,b,c}(\mathbb{R}^{3m},\mathbb{C}).\] Proof.: The idea behind this proof is a recursive argument, where the ordering on the black dots will be from left to right and from bottom to top in the rectangular scheme above (in terms of labels this means that \((2,-4)>(0,-4)>(2,-2)\), as an example). The reason for doing so is the following: the commutators \([L,Q_{a,b}]\) and \([D_{s},Q_{a,b}]\) give an operator situated below or to the left of the operator \(Q_{a,b}\) we started from. Up to a constant, these operators are equal to \(Q_{a+2,b}\) and \(Q_{a,b-2}\) respectively (or trivial whenever the parameters \(a\) and \(b\) are not in the correct range). This means that combinations of the form \(LQ_{a,b}\) and \(D_{s}Q_{ab}\) act trivially on functions \(H(\underline{z},\underline{x},\underline{y})\) in the kernel of \(L\) and \(D_{s}\), provided we know that also \(Q_{a+2,b}\) and \(Q_{a,b-2}\) act trivially. Given the fact that each operator \(\mathbb{P}_{a,b}\in\rho^{-}\) is of the form \[\mathbb{P}_{a,b}=\big{(}1+\mathcal{O}_{1}L\big{)}\big{(}1+\mathcal{O}_{2}D_{s }\big{)}Q_{a,b}\,\] where \(\mathcal{O}_{j}\) is a short-hand notation for the correction terms coming from the extremal projection operator (which, unless this operator reduces to the identity operator, always contains either an operator \(L\) or \(D_{s}\) at the right). The upshot of our recursive scheme is that once we know that \(Q_{a+2,b}\) and \(Q_{a,b-2}\) act trivially, this immediately tells us that \(\mathbb{P}_{a,b}H=0\Rightarrow Q_{a,b}H=0\). Because \(\mathbb{P}_{2,-4}H=0\) and \(\mathbb{P}_{2,-4}=Q_{-2,4}=\Delta_{y}\), we can immediately conclude that the following operators will then act trivially: \[\Delta_{y}\quad\langle\underline{\partial}_{x},\underline{\partial}_{y} \rangle\quad\Delta_{x}\quad\langle\underline{\partial}_{y},\underline{ \partial}_{z}\rangle\quad\langle\underline{z},\underline{\partial}_{y} \rangle+\langle\underline{\partial}_{x},\underline{\partial}_{z}\rangle\quad \langle\underline{z},\underline{\partial}_{x}\rangle\quad\langle\underline{x },\underline{\partial}_{y}\rangle+\Delta_{z}\.\] In order to be simplicial harmonic, \(H(\underline{z};\underline{x},\underline{y})\) should belong to the kernel of 9 operators in \(\mathfrak{sp}(6)\) (see Definition 3.3), but it is straightforward to see that one can reproduce these operators as commutators of the 7 operators on the previous line. For example: \(\Delta_{x}(\langle\underline{x},\underline{\partial}_{y}\rangle+\Delta_{z})H=0\) leads to \(\Delta_{z}H=0\), since \(\langle\underline{\partial}_{x},\underline{\partial}_{y}\rangle H=0\) (and so on). ## 6 Application: branching symplectic monogenics We will now use the operators \(\mathbb{P}_{a,b}\) to explicitly describe the branching of the \(k\)-homogeneous symplectic monogenics \(\mathbb{S}_{k}^{\infty}\). By this we mean that it will give us a systematic way to define the 'embedding factors' realising the isomorphic copy of those spaces in \(\mathbb{S}_{k}^{\infty}\). 
To do so, we will make an analogy again: one can consider the asssociative algebra \(\mathcal{U}(\mathcal{Z})\), the 'universal enveloping algebra' of \(\mathcal{Z}(\mathfrak{sp}(6),\mathfrak{so}(4))\). The meaning should be clear here: it is a tensor algebra \(\bigotimes V\) (with \(V\) the span of \(\mathcal{G}_{\mathcal{Z}}\)-generators as an underlying vector space) modulo the ideal spanned 'by the quadratic relations' in the transvector algebra. We will refer to elements in this algebra as 'words' in 'an alphabet' that can be ordered. This statement, which should thus be seen as an analogue of the Poincare-Birkhoff-Witt theorem (PBW theorem), requires a proof but we will not do this in the present paper. As a matter of fact, the general case \(k\in\mathbb{Z}^{+}\) will be treated in an upcoming (longer) paper, in the present article we will focus on the case \(k=1\) as a guiding example. The main idea is the following: imposing the lexicographic ordering on the labels \((a,b)\) will dictate the position of our letters in the alphabet (from left to right), with e.g. \((4,0)>(4,-2)>(2,2)\). Letting such a word acting as an operator on simplicial harmonics \(H_{a,b,c}(\underline{z};\underline{x},\underline{y})\), it should be clear (in view of the previous theorem) that only the 'letters' corresponding to grey dots in the scheme will play a role (the white dot acts as a constant, whereas the black dots act trivially). Considering the fact that the total degree of 'a word' in \(\underline{x}\) and \(\underline{y}\) should not exceed \(k=1\), we can only use the operators \(\mathbb{P}_{a,b}\) from the third and fourth column in our example. Note that once the operator \(\mathbb{P}_{ab}\) has been chosen (i.e. the 'word' in front of the simplicial harmonics), the degree \((a,b,c)\) of these polynomials \(H_{a,b,c}(\underline{z};\underline{x},\underline{y})\) is automatically fixed too: the total degree in \(\underline{z}\) and \((\underline{x},\underline{y})\) is then equal to \(k\) and \(1\) respectively. So, when the 'word' is homogeneous of degree one in \((\underline{x},\underline{y})\) we get contributions of the form \(\mathbb{P}_{0,0}\mathcal{H}_{a,1,0}\) and \(\mathbb{P}_{2,0}\mathcal{H}_{a,1,0}\). Whereas when the chosen 'word' is homogeneous of degree zero we get \(\mathbb{P}_{-2,2}\mathcal{H}_{a,0,0}\), \(\mathbb{P}_{0,2}\mathcal{H}_{a,0,0}\) and \(\mathbb{P}_{2,2}\mathcal{H}_{a,0,0}\). Finally, we note that we can still act with the raising operator \(R\in\mathfrak{sl}(2)\) on each of the polynomials from above (i.e. a suitable projection operator acting on a suitable space of simplicial harmonics) to arrive at a direct sum of Verma modules which can be embedded into \(\mathbb{S}_{1}^{\infty}\). This is based on the trivial albeit crucial observation that \([R,D_{s}]=0\), so that acting with \(R\) preserves symplectic monogenic solutions. This means that we have now resolved the branching problem for \(k=1\) in a completely different way. 
This results in the decomposition \[\mathbb{S}_{1}^{\infty}\bigg{|}_{\mathfrak{so}(m)}^{\mathfrak{sp}(2m)}\cong\bigoplus_{a\geq 1}\bigoplus_{\ell=0}^{\infty}R^{\ell}(\mathcal{H}_{a,1}\oplus\mathbb{P}_{2,0}\mathcal{H}_{a,1})\] \[\oplus\bigoplus_{a\geq 0}\bigoplus_{\ell=0}^{\infty}R^{\ell}(\mathbb{P}_{-2,2}\mathcal{H}_{a}\oplus\mathbb{P}_{-2,0}\mathcal{H}_{a}\oplus\mathbb{P}_{-2,-2}\mathcal{H}_{a}).\] Summarising the idea behind this decomposition, we thus claim that \(\mathbb{S}_{k}^{\infty}\) can be decomposed under the joint action of \[\mathfrak{so}(m)\times\mathfrak{sl}(2)\times\mathcal{Z}(\mathfrak{sp}(6),\mathfrak{so}(4)),\] whereby the final decomposition will contain summands of the form \[R^{p}\left(\mathcal{U}(\rho^{+})\mathcal{H}_{a,b,c}\right)\] for suitable 'words' in the algebra \(\mathcal{U}(\rho^{+})\) and suitable spaces of simplicial harmonics. **Acknowledgments** The author G.M. was supported by the FWO-EoS project G0H4518N.
2307.08814
Invertible disformal transformations with arbitrary higher-order derivatives
Invertible disformal transformations serve as a useful tool to explore ghost-free scalar-tensor theories. In this paper, we construct a generalization of invertible disformal transformations that involves arbitrary higher-order covariant derivatives of the scalar field. As a result, we obtain a more general class of ghost-free scalar-tensor theories than ever. Notably, our generalization is such that matter fields can be consistently coupled to these theories without introducing an unwanted extra degree of freedom in the unitary gauge.
Kazufumi Takahashi
2023-07-17T20:04:43Z
http://arxiv.org/abs/2307.08814v2
# Invertible disformal transformations with arbitrary higher-order derivatives ###### Abstract Invertible disformal transformations serve as a useful tool to explore ghost-free scalar-tensor theories. In this paper, we construct a generalization of invertible disformal transformations that involves arbitrary higher-order covariant derivatives of the scalar field. As a result, we obtain a more general class of ghost-free scalar-tensor theories than ever. Notably, our generalization is such that matter fields can be consistently coupled to these theories without introducing an unwanted extra degree of freedom in the unitary gauge. + Footnote †: preprint: YITP-23-91 ## I Introduction General relativity (GR) has passed various gravitational experiments as well as cosmological observations and is now commonly accepted as the standard model of gravitation and cosmology. Nevertheless, there are several motivations to study modifications/extensions of GR. For instance, GR is expected to be a low-energy effective theory and should be modified at high energies. Also, extended gravitational theories serve as good candidates that can be tested against GR [1; 2; 3]. In general, modified gravity models involve additional degrees of freedom (DOFs) on top of the metric, of which scalar-tensor theories (i.e., those involving a single scalar field besides the metric) have been studied extensively. The most general class of scalar-tensor theories with second-order Euler-Lagrange equations is now known as the Horndeski class [4; 5; 6]. Note that the second-order nature of the Euler-Lagrange equations guarantees the absence of Ostrogradsky ghosts, i.e., unstable extra DOFs associated with higher-order equations of motion [7]. A more general class of ghost-free scalar-tensor theories was constructed in Refs. [8; 9; 10] by imposing the degeneracy condition [8; 11; 12; 13; 14; 15] on the higher-derivative terms, and this class is called the degenerate higher-order scalar-tensor (DHOST) class. (See Refs. [16; 17] for reviews.) The DHOST class consists of many subclasses, and one of them can be obtained by the (conformal or) disformal transformation [18; 19; 20] of the Horndeski class, which we call the disformal Horndeski (DH) class. Note in passing that a ghost-free theory is mapped to another ghost-free theory by the disformal transformation since it is invertible in general [21; 22]. Interestingly, DHOST theories that lie outside the DH class are known to exhibit ghost/gradient instabilities (or otherwise the metric becomes nondynamical) on a cosmological background [23; 24; 25]. Therefore, when one applies the DHOST theories to phenomenology, one usually focuses on the DH class. It was then realized that the framework of ghost-free scalar-tensor theories can be further extended by requiring the degeneracy only in the unitary gauge where the time coordinate is chosen so that the scalar field is spatially uniform. Such an extension was dubbed the U-DHOST class [26], which is equivalent to spatially covariant gravity [27; 28; 29; 30] in the unitary gauge. Note that the scalar field profile has to be timelike in order to be consistent with the unitary gauge. Away from the unitary gauge, there is an apparent Ostrogradsky mode, but this mode does not propagate as it satisfies a three-dimensional elliptic differential equation on a spacelike hypersurface [31; 26; 32]. Such a mode is often called a shadowy mode and is harmless when the scalar field has a timelike profile. 
Although the (U-)DHOST theories form a general class of ghost-free scalar-tensor theories, one is interested in further generalizations as they would exhibit peculiar phenomena that could be tested with experiments/observations. A systematic approach to this issue was proposed in Refs. [33; 34]. The idea is to generalize the disformal transformation to incorporate higher-order derivatives of the scalar field, keeping the transformation invertible. This is indeed possible when the transformation involves (covariant) derivatives of the scalar field up to the second order [33]. (See also Ref. [35] for an earlier attempt in this direction and Refs. [36; 37] for a complementary class of invertible disformal transformations with higher derivatives.) Then, by performing the generalized disformal transformation on the Horndeski theories, one obtains a novel class of ghost-free scalar-tensor theories that goes beyond the conventional DHOST class. This novel class was dubbed the generalized disformal Horndeski (GDH) theories [34]. A further extension can be obtained by choosing the U-DHOST theories as the seed of the generalized disformal transformation, which is called the generalized unitary-degenerate (GDU) theories [38]. A possible problem with such generalized disformal theories is that an unwanted extra DOF can show up when matter fields are coupled [34; 39].1 Needless to say, matter fields should be taken into account when we construct gravitational theories that can be used for phenomenological purposes. In addition, the GDH/GDU theories can be distinguished from the seed Horndeski/U-DHOST theories only in the presence of matter since the matter fields define a special frame where they are minimally coupled to gravity (i.e., the Jordan frame). Therefore, one is interested in a subclass of the GDH/GDU class where matter fields can be consistently coupled without introducing an unwanted extra DOF. The consistency of matter coupling in generalized disformal theories has been studied extensively in Refs. [34; 39; 41; 42], and it was shown that there exists a nontrivial class of generalized disformal theories such that ordinary matter fields, including those in the standard model, can be consistently coupled in the unitary gauge. Along this line of thought, in the present paper, we construct invertible disformal transformations that involve _arbitrary_ higher-order covariant derivatives of the scalar field and hence extend the class of generalized disformal transformations obtained in Ref. [33]. These transformations can be employed to obtain a more general class of ghost-free scalar-tensor theories than ever. Our generalization is such that the resultant theories allow for consistent matter coupling in the unitary gauge. The rest of this paper is organized as follows. In §II, we briefly review the generalized disformal transformations with second-order covariant derivatives of the scalar field. We also discuss the conditions on the generalized disformal transformations under which matter fields can be consistently coupled to the GDH/GDU theories. In §III, we construct generalized disformal transformations with arbitrary higher-order derivatives of the scalar field in such a way that they respect the conditions for consistent matter coupling. Finally, we draw our conclusions in §IV. ## II Transformations with second-order derivatives Let us first provide a brief review of generalized disformal transformations with second-order (covariant) derivatives of the scalar field. As clarified in Ref. 
[33], although the inclusion of such higher-order derivatives spoils the invertibility of the transformation in general, one can systematically obtain a class of invertible transformations by focusing on a group structure under functional composition of disformal transformations. Then, by performing the invertible generalized disformal transformations on Horndeski theories, the authors of Ref. [34] constructed the class of GDH theories. Moreover, if one chooses U-DHOST theories as the seed of the generalized disformal transformation, one obtains the class of GDU theories [38]. The issue of matter coupling in GDH theories was investigated in Refs. [34; 39; 41; 42], showing that the generalized disformal transformation is subjected to severe constraints in order to avoid Ostrogradsky ghost in the presence of matter fields. We note that the analyses in Refs. [34; 39; 41] were performed in the unitary gauge, and hence the scalar field was assumed to have a timelike profile. Away from the unitary gauge, apparently an extra mode shows up, but it is actually a non-propagating shadowy mode [31; 26; 32]. The analysis of Ref. [42] shows that matter-coupled GDH theories always have an extra mode, which becomes a shadowy mode when the scalar field profile is timelike. However, when the scalar field profile is spacelike, the extra mode is nothing but an Ostrogradsky mode, which is problematic. Therefore, throughout the present paper, we assume that the scalar field has a timelike profile. We emphasize that there are many situations that allow for a timelike scalar profile, including not only cosmology but also black holes (e.g., Refs. [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]) and neutron stars (e.g., Refs. [56; 57; 58; 47]). The subclass of generalized disformal transformations allowing for consistent matter coupling is given by the following form: \[g_{\mu\nu}\quad\rightarrow\quad\bar{g}_{\mu\nu}[g,\phi]=f_{0}g_{\mu\nu}+f_{1} \phi_{\mu}\phi_{\nu}+2f_{2}\phi_{(\mu}\mathrm{D}_{\nu)}X, \tag{1}\] where \(\phi_{\mu}\coloneqq\partial_{\mu}\phi\), \(X\coloneqq\phi_{\alpha}\phi^{\alpha}\), and \(\mathrm{D}_{\mu}\) denotes the covariant derivative on a constant-\(\phi\) hypersurface, i.e., \[\mathrm{D}_{\mu}\Phi\coloneqq h_{\mu}^{\alpha}\nabla_{\alpha}\Phi,\qquad h_{ \mu}^{\alpha}\coloneqq\delta_{\mu}^{\alpha}-\frac{\phi_{\mu}\phi^{\alpha}}{X}, \tag{2}\] for any scalar quantity \(\Phi\). Also, \(f_{i}\)'s are functions of \((\phi,X,\mathcal{Z})\), with \[\mathcal{Z}\coloneqq\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}X. \tag{3}\] As pointed out in Ref. [33], a group structure under functional composition of disformal transformations is essential for the invertibility of the transformation: Once one finds the group structure, it is straightforward to construct the inverse transformation for a given disformal transformation as its inverse element in this group. In order for the set of transformations of the form (1) to have the group structure, the following set of conditions should be satisfied [41; 33]: \[f_{0}\neq 0,\qquad f_{0}(f_{0}+Xf_{1})-X\mathcal{Z}f_{2}^{2}\neq 0,\qquad \bar{X}=\bar{X}(\phi,X),\qquad\bar{X}_{X}\neq 0,\qquad\left(\frac{\mathcal{Z}}{f_{0}} \right)_{\mathcal{Z}}\neq 0. \tag{4}\] Here, we have defined \(\bar{X}\coloneqq\bar{g}^{\mu\nu}\phi_{\mu}\phi_{\nu}\), with \(\bar{g}^{\mu\nu}\) being the inverse disformal metric such that \(\bar{g}^{\mu\alpha}\bar{g}_{\alpha\nu}=\delta_{\nu}^{\mu}\). 
Written explicitly, \[\bar{g}^{\mu\nu}=\frac{1}{f_{0}}g^{\mu\nu}-\frac{1}{f_{0}(f_{0}+Xf_{1})-X \mathcal{Z}f_{2}^{2}}\left[\left(f_{1}-\frac{\mathcal{Z}f_{2}^{2}}{f_{0}} \right)\phi^{\mu}\phi^{\nu}+2f_{2}\phi^{(\mu}\mathrm{D}^{\nu)}X-\frac{Xf_{2}^{2 }}{f_{0}}\mathrm{D}^{\mu}X\mathrm{D}^{\nu}X\right], \tag{5}\] and hence \[\bar{X}=\frac{Xf_{0}}{f_{0}(f_{0}+Xf_{1})-X\mathcal{Z}f_{2}^{2}}, \tag{6}\] which is a function of \((\phi,X,\mathcal{Z})\) in general. The condition \(\bar{X}=\bar{X}(\phi,X)\) in Eq. (4) requires that this \(\bar{X}\) should be independent of \(\mathcal{Z}\). Actually, without this condition, a functional composition of disformal transformations yields higher-order derivatives that are not involved in the original transformation law, which spoils the group structure. It should be noted that the transformation (1) satisfies the following two properties: [A] All the higher-order derivatives of the scalar field appear only through D\({}_{\mu}\). [B] Projected onto a constant-\(\phi\) hypersurface, all the non-conformal terms vanish, i.e., \(h_{\mu}^{\alpha}h_{\nu}^{\beta}\bar{g}_{\alpha\beta}=f_{0}h_{\mu\nu}\). We note that Eq. (1) is the most general transformation law with these properties constructed out of \(g_{\mu\nu}\), \(\phi_{\mu}\), and \(\partial_{\mu}X\). In fact, the properties [A] and [B] ensure the consistency of matter coupling in GDH/GDU theories. To see this, let us consider matter fields that are minimally coupled to gravity, where the gravitational action is given by some invertible generalized disformal transformation of the following action: \[S_{\text{g}}[g,\phi]=\int\text{d}^{4}x\sqrt{-g}\,\mathcal{L}(g_{\mu\nu},\phi, \phi_{\mu},K_{\mu\nu},a_{\mu},\,{}^{(3)}\!R_{\mu\nu\lambda\sigma}), \tag{7}\] where \(K_{\mu\nu}\), \(a_{\mu}\), and \({}^{(3)}\!R_{\mu\nu\lambda\sigma}\) are the extrinsic curvature, the acceleration vector, and the three-dimensional Riemann tensor associated with a constant-\(\phi\) hypersurface, respectively. It should be noted that the action (7) does not yield any higher-order time derivatives at least under the unitary gauge and hence does not contain unwanted extra DOFs in itself. Therefore, the action (7) encompasses the Horndeski and (a subclass of) U-DHOST theories, and its generalized disformal transformation encompasses the GDH/GDU theories. When one considers matter fields that are minimally coupled to such generalized disformal theories, it is practically more useful to work in the frame where the gravitational action is described by Eq. (7). Schematically, we consider the following action: \[S[g,\phi,\Psi]=S_{\text{g}}[g,\phi]+S_{\text{m}}[\bar{g},\Psi], \tag{8}\] where \(S_{\text{m}}\) denotes the matter action and the matter fields are collectively denoted by \(\Psi\). In this frame, there are no higher-order time derivatives in the gravitational action, but the matter coupling could yield Ostrogradsky ghost(s) through the higher derivatives contained in \(\bar{g}_{\mu\nu}\). The reason why the properties [A] and [B] remove the ghost(s) can be understood by taking the unitary gauge and expressing the action in terms of the Arnowitt-Deser-Misner (ADM) variables, i.e., the lapse function \(N\), the shift vector \(N^{i}\), and the spatial metric \(h_{ij}\). 
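Before proceeding to the ADM decomposition, it may help to sanity-check the algebra of Eqs. (1), (5), and (6) numerically. The short NumPy sketch below is an editorial illustration rather than part of the original analysis: the metric, the scalar-field gradient, and the coefficients \(f_{0},f_{1},f_{2}\) are arbitrary choices, and \(\mathrm{D}_{\mu}X\) is represented by a generic covector orthogonal to \(\phi^{\mu}\), which is all that the verification requires.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Random nondegenerate metric g_{mu nu}; the check is purely algebraic,
# so the signature is irrelevant and a positive-definite g is convenient.
A = rng.normal(size=(dim, dim))
g = A @ A.T + dim * np.eye(dim)
g_inv = np.linalg.inv(g)

phi = rng.normal(size=dim)                                   # phi_mu
X = phi @ g_inv @ phi
proj = np.eye(dim) - np.outer(phi, g_inv @ phi) / X          # h_mu^alpha of Eq. (2)
zeta = proj @ rng.normal(size=dim)                           # stand-in for D_mu X (orthogonal to phi^mu)
Z = zeta @ g_inv @ zeta

f0, f1, f2 = 1.3, -0.4, 0.7                                  # arbitrary coefficient values
gbar = f0 * g + f1 * np.outer(phi, phi) + f2 * (np.outer(phi, zeta) + np.outer(zeta, phi))

F = f0 * (f0 + X * f1) - X * Z * f2**2
phi_up, zeta_up = g_inv @ phi, g_inv @ zeta
gbar_inv = (g_inv / f0
            - ((f1 - Z * f2**2 / f0) * np.outer(phi_up, phi_up)
               + f2 * (np.outer(phi_up, zeta_up) + np.outer(zeta_up, phi_up))
               - (X * f2**2 / f0) * np.outer(zeta_up, zeta_up)) / F)

assert np.allclose(gbar_inv @ gbar, np.eye(dim))             # Eq. (5) inverts Eq. (1)
assert np.isclose(phi @ gbar_inv @ phi, X * f0 / F)          # Eq. (6): bar X = X f0 / F
print("inverse disformal metric and bar X verified")
```

With \(f_{2}=0\) the same lines reduce to the ordinary conformal-plus-disformal case, so the check also exercises the familiar limit.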
Written explicitly, the spacetime metric is decomposed as \[g_{\mu\nu}\text{d}x^{\mu}\text{d}x^{\nu}=-N^{2}\text{d}t^{2}+h_{ij}(\text{d}x^{i}+N^{i}\text{d}t)(\text{d}x^{j}+N^{j}\text{d}t), \tag{9}\] and hence \(X=-\dot{\phi}^{2}/N^{2}\) in the unitary gauge, with a dot denoting the time derivative. Note that the gravitational action (7) does not involve the time derivative of \(N\) and \(N^{i}\), and therefore these variables are nondynamical when matter fields are absent. On the other hand, if one introduces matter fields that are coupled to the generalized disformal metric, \(\dot{N}\) can show up in the matter sector, which could make \(N\) dynamical. It is precisely the property [A] that allows us to avoid this problem: since \(\mathrm{D}_{\mu}\) is nothing but the spatial covariant derivative under the unitary gauge, the transformation law (1) involves only the spatial derivative of \(X\) and hence only the spatial derivative of \(N\). Therefore, for ordinary bosonic matter fields whose action does not involve derivatives of the metric, the property [A] ensures the consistency of matter coupling. For fermionic matter fields, the situation is more nontrivial as the covariant derivative acting on a fermionic field contains derivatives of the metric (or the tetrad). Nevertheless, as clarified in Ref. [41], the unwanted \(\dot{N}\) can be absorbed into a redefinition of the fermionic field when the spatial metric \(h_{ij}\) is transformed conformally, which is guaranteed by the property [B].*2 Thus, the properties [A] and [B] are crucial for the consistency of matter coupling in generalized disformal theories. (For a more detailed discussion, see Ref. [41].) ## III Transformations with arbitrary higher-order derivatives ### Transformation law Let us now extend the transformation law (1) to include third- or higher-order covariant derivatives of the scalar field, keeping the properties [A] and [B]. We assume that derivatives of the metric appear in the transformation law only through the Christoffel symbol contained in the covariant derivative of the scalar field as in Eq. (1). For instance, when we include third-order derivatives, we introduce a new building block \(\mathrm{D}_{\mu}\mathcal{Z}\) and consider the transformation law \[g_{\mu\nu}\quad\to\quad\bar{g}_{\mu\nu}[g,\phi]=f_{0}g_{\mu\nu}+f_{1}\phi_{\mu}\phi_{\nu}+2f_{2}\phi_{(\mu}\mathrm{D}_{\nu)}X+2f_{3}\phi_{(\mu}\mathrm{D}_{\nu)}\mathcal{Z}, \tag{10}\] where \(f_{i}\)'s are now functions of \((\phi,X,\mathcal{Z},\mathcal{U},\mathcal{V})\), with \[\mathcal{U}\coloneqq\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}\mathcal{Z},\qquad\mathcal{V}\coloneqq\mathrm{D}_{\alpha}\mathcal{Z}\mathrm{D}^{\alpha}\mathcal{Z}. \tag{11}\] This is the most general transformation law with the properties [A] and [B] constructed out of \(g_{\mu\nu}\), \(\phi_{\mu}\), and \(\partial_{\mu}\chi\), with \(\chi\in\{X,\mathcal{Z}\}\eqqcolon E_{2}\). Here, the set \(E_{2}\) consists of scalar quantities that contain up to \(\partial^{2}\phi\) and do not involve \(\dot{N}\) under the unitary gauge where \(\phi=\phi(t)\). Note that terms like \(\mathrm{D}_{(\mu}X\mathrm{D}_{\nu)}\mathcal{Z}\) spoil the property [B] and hence are not included in the transformation law. Also, we assume that the derivative operator \(\mathrm{D}_{\mu}\) always acts on a scalar quantity, as otherwise one cannot apply the general strategy to construct invertible disformal transformations developed in Ref. [33]. 
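As an aside (again editorial, not from the original), property [A] in the unitary gauge can be checked symbolically: with the ADM form (9) and \(\phi=\phi(t)\), one finds \(X=-\dot{\phi}^{2}/N^{2}\), and the projected derivative \(\mathrm{D}_{\mu}X\) contains spatial derivatives of the lapse only. The SymPy sketch below does this in \(1+1\) dimensions, which already exhibits the cancellation of \(\dot{N}\); the symbol names are ad hoc.

```python
import sympy as sp

t, x = sp.symbols('t x')
N  = sp.Function('N')(t, x)      # lapse
Nx = sp.Function('Nx')(t, x)     # shift
h  = sp.Function('h')(t, x)      # spatial metric component
phi = sp.Function('phi')(t)      # unitary gauge: phi = phi(t)

# ADM metric in 1+1 dimensions, cf. Eq. (9)
g = sp.Matrix([[-N**2 + h*Nx**2, h*Nx],
               [h*Nx,            h   ]])
g_inv = g.inv()

dphi = sp.Matrix([sp.diff(phi, t), sp.diff(phi, x)])         # phi_mu
X = sp.simplify((dphi.T * g_inv * dphi)[0])
assert sp.simplify(X + sp.diff(phi, t)**2 / N**2) == 0       # X = -phidot^2 / N^2

# D_mu X = (delta_mu^alpha - phi_mu phi^alpha / X) d_alpha X
phi_up = g_inv * dphi
dX = sp.Matrix([sp.diff(X, t), sp.diff(X, x)])
DX = (dX - dphi * (phi_up.T * dX)[0] / X).applyfunc(sp.simplify)

# no time derivative of the lapse survives in D_mu X
assert not any(expr.has(sp.diff(N, t)) for expr in DX)
print("X = -phidot^2/N^2 and D_mu X is free of dN/dt")
```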
Likewise, it is straightforward to construct the transformation law with \(\partial^{d}\phi\) for \(d\geq 3\) in an inductive manner. Suppose we know the set of scalar quantities \(E_{n}\) that contain up to \(\partial^{n}\phi\) and do not involve \(\dot{N}\) under the unitary gauge. Then, we have \[E_{n+1} =E_{n}\sqcup\left\{\mathrm{D}_{\alpha}\chi\mathrm{D}^{\alpha}\chi^{\prime}\mid\chi\in E_{n},\chi^{\prime}\in E_{n}\backslash E_{n-1}\right\}\] \[=\left\{X\right\}\sqcup\left\{\mathrm{D}_{\alpha}\chi\mathrm{D}^{\alpha}\chi^{\prime}\mid\chi,\chi^{\prime}\in E_{n}\right\}. \tag{12}\] Here, we define \(E_{0}\coloneqq\emptyset\) and \(E_{1}\coloneqq\left\{X\right\}\), so that we recover \(E_{2}=\left\{X,\mathcal{Z}\right\}\) and obtain \(E_{3}\) and \(E_{4}\) as \[E_{3} =E_{2}\sqcup\left\{\mathcal{U},\mathcal{V}\right\}, \tag{13}\] \[E_{4} =E_{3}\sqcup\left\{\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}\mathcal{U},\mathrm{D}_{\alpha}\mathcal{Z}\mathrm{D}^{\alpha}\mathcal{U},\mathrm{D}_{\alpha}\mathcal{U}\mathrm{D}^{\alpha}\mathcal{U},\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}\mathcal{V},\mathrm{D}_{\alpha}\mathcal{Z}\mathrm{D}^{\alpha}\mathcal{V},\mathrm{D}_{\alpha}\mathcal{U}\mathrm{D}^{\alpha}\mathcal{V},\mathrm{D}_{\alpha}\mathcal{V}\mathrm{D}^{\alpha}\mathcal{V}\right\}.\] The number of elements of \(E_{n}\) satisfies the following recurrence relation: \[|E_{0}|=0,\qquad|E_{n+1}|=\frac{|E_{n}|\left(|E_{n}|+1\right)}{2}+1\quad(n\geq 0). \tag{14}\] For instance, \[|E_{1}|=1,\qquad|E_{2}|=2,\qquad|E_{3}|=4,\qquad|E_{4}|=11,\qquad|E_{5}|=67,\qquad|E_{6}|=2279,\qquad\cdots. \tag{15}\] The large-\(n\) behavior of \(|E_{n}|\) is given by \(|E_{n}|\sim 2\times c^{2^{n}}\), with \(c\approx 1.116253032687330\). Let us now denote the elements of \(E_{n}\backslash E_{n-1}\) by \(\chi_{n}^{(A)}\), with \(A=1,2,\cdots,|E_{n}|-|E_{n-1}|\). Note that \(\chi_{n}^{(A)}\) contains only \(n\)th- and lower-order derivatives of \(\phi\). For instance, \[\chi_{1}^{(1)}=X,\qquad\chi_{2}^{(1)}=\mathcal{Z},\qquad\chi_{3}^{(1)}=\mathcal{U},\qquad\chi_{3}^{(2)}=\mathcal{V},\qquad\chi_{4}^{(1)}=\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}\mathcal{U},\qquad\chi_{4}^{(2)}=\mathrm{D}_{\alpha}\mathcal{Z}\mathrm{D}^{\alpha}\mathcal{U},\qquad\cdots. \tag{16}\] With this notation, one can write down the most general transformation law with the properties [A] and [B] constructed out of \(g_{\mu\nu}\), \(\phi_{\mu}\), and \(\partial_{\mu}\chi\), with \(\chi\in E_{d-1}\). As a result, we obtain the following generalized disformal transformation with \(\partial^{d}\phi\) that accommodates consistent matter coupling: \[g_{\mu\nu}\quad\to\quad\bar{g}_{\mu\nu}[g,\phi]=f_{0}g_{\mu\nu}+f_{1}\phi_{\mu}\phi_{\nu}+2\phi_{(\mu}\xi_{\nu)},\qquad\xi_{\mu}\coloneqq\sum_{n=1}^{d-1}\sum_{A}f_{n+1}^{(A)}\mathrm{D}_{\mu}\chi_{n}^{(A)}, \tag{17}\] where \(f_{0}\), \(f_{1}\), and \(f_{n+1}^{(A)}\) are now functions of the elements of \(E_{d}\) as well as \(\phi\). Here, the summation over \(A\) runs from \(1\) to \(|E_{n}|-|E_{n-1}|\) for each \(n\). Therefore, the number of coefficient functions is given by \(|E_{d-1}|+2\). Note that Eqs. (1) and (10) are recovered by setting \(d=2\) and \(d=3\), respectively, with the identifications \(f_{2}^{(1)}=f_{2}\) and \(f_{3}^{(1)}=f_{3}\). ### Invertibility condition In what follows, we clarify the condition under which the transformation (17) is invertible, i.e., Eq. (17) can be solved uniquely for the unbarred metric at least locally in the configuration space. 
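Before doing so, a quick aside on the counting in Eqs. (14)-(15) above: the following lines (an editorial illustration, not part of the original derivation) iterate the recurrence and show the quantity \((|E_{n}|/2)^{1/2^{n}}\) approaching the quoted constant \(c\approx 1.1163\).

```python
# |E_0| = 0,  |E_{n+1}| = |E_n|(|E_n| + 1)/2 + 1   -- Eq. (14)
sizes = [0]
for _ in range(7):
    e = sizes[-1]
    sizes.append(e * (e + 1) // 2 + 1)
print(sizes[1:7])                        # [1, 2, 4, 11, 67, 2279], matching Eq. (15)

# large-n behaviour |E_n| ~ 2 c^(2^n): the estimate of c stabilises quickly
for n, e in enumerate(sizes[1:], start=1):
    print(n, (e / 2) ** (2.0 ** (-n)))
```

We now return to the question of when the transformation (17) can be inverted.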
For this purpose, we first need to know the inverse metric associated with the disformal metric \(\bar{g}_{\mu\nu}\). As explained in Ref. [33] for the case of \(d=2\) [i.e., for transformations of the form (1)], it is straightforward to find the inverse metric \(\bar{g}^{\mu\nu}\) for arbitrary \(d\). In the present case, we obtain \[\bar{g}^{\mu\nu}=\frac{1}{f_{0}}\left(g^{\mu\nu}+\frac{f_{0}^{2}-\mathcal{F}}{ X\mathcal{F}}\phi^{\mu}\phi^{\nu}-\frac{2f_{0}}{\mathcal{F}}\phi^{(\mu}\xi^{ \nu)}+\frac{X}{\mathcal{F}}\xi^{\mu}\xi^{\nu}\right), \tag{18}\] where we have defined \[\mathcal{F}\coloneqq f_{0}(f_{0}+Xf_{1})-X\xi_{\alpha}\xi^{\alpha}. \tag{19}\] Indeed, the above \(\bar{g}^{\mu\nu}\) satisfies \(\bar{g}^{\mu\alpha}\bar{g}_{\alpha\nu}=\delta^{\mu}_{\nu}\). Note that we assume \(f_{0}\neq 0\) and \(\mathcal{F}\neq 0\). The inverse disformal metric can be used to construct the barred counterparts of scalar quantities \(\chi^{(A)}_{n}\). For instance, the barred counterparts of \(\chi^{(1)}_{1}=X\) and \(\chi^{(1)}_{2}=\mathcal{Z}\) are given by \[\bar{X}\coloneqq\bar{g}^{\alpha\beta}\phi_{\alpha}\phi_{\beta}=\frac{Xf_{0}}{ \mathcal{F}},\qquad\bar{\mathcal{Z}}\coloneqq\bar{g}^{\alpha\beta}\bar{ \mathrm{D}}_{\alpha}\bar{X}\bar{\mathrm{D}}_{\beta}\bar{X}, \tag{20}\] where \(\bar{\mathrm{D}}_{\mu}\) is related to the unbarred derivative operator \(\mathrm{D}_{\mu}\) by \[\bar{\mathrm{D}}_{\mu}\Phi\coloneqq\left(\delta^{\alpha}_{\mu}-\frac{\phi_{ \mu}\phi_{\nu}}{\bar{X}}\bar{g}^{\nu\alpha}\right)\partial_{\alpha}\Phi= \mathrm{D}_{\mu}\Phi+\frac{1}{f_{0}}\left(\xi^{\alpha}\mathrm{D}_{\alpha}\Phi \right)\phi_{\mu}, \tag{21}\] with \(\Phi\) being any scalar quantity. One can show that \[\bar{g}^{\alpha\beta}\bar{\mathrm{D}}_{\alpha}\Phi\bar{\mathrm{D}}_{\beta} \Phi^{\prime}=\frac{1}{f_{0}}\mathrm{D}_{\alpha}\Phi\mathrm{D}^{\alpha}\Phi^ {\prime}, \tag{22}\] for arbitrary scalar quantities \(\Phi\) and \(\Phi^{\prime}\). Note that \(\bar{X}\) defined in Eq. (20) is a function of the elements of \(E_{d}\) as well as \(\phi\) in general. This means that \(\bar{\mathcal{Z}}\) yields \(\partial^{d+1}\phi\), which is not contained in the original transformation law (17). Likewise, even higher-order derivatives of \(\phi\) show up in the barred counterparts of \(\chi^{(A)}_{n}\) in general, which makes it difficult to construct the inverse disformal transformation (see Ref. [33] for a more detailed discussion). In order for the transformation to be invertible, we require that \(\bar{\chi}^{(A)}_{n}\) (\(n=1,2,\cdots,d\)) is a function of \(\phi\) and the elements of \(E_{n}\) only, or equivalently, \[\frac{\partial\bar{\chi}^{(A)}_{n}}{\partial\chi^{(B)}_{m}}=0\quad(1\leq n<m \leq d). \tag{23}\] For instance, \(\bar{X}=\bar{X}(\phi,X)\) and \(\bar{\mathcal{Z}}=\bar{\mathcal{Z}}(\phi,X,\mathcal{Z})\) follow from the above condition with \(n=1\) and \(2\), respectively. We note that the functional form of \(\bar{\chi}^{(A)}_{n}\) is determined by the coefficient functions in the transformation law (17), and therefore the condition (23) imposes a set of constraints on those coefficient functions. Note also that this condition ensures the closedness of the functional composition of two disformal transformations, which allows us to construct the inverse disformal transformation systematically [33]. On top of Eq. (23), we require the following condition: \[\left|\frac{\partial\bar{\chi}^{(A)}_{n}}{\partial\chi^{(B)}_{n}}\right|\neq 0 \quad(n=1,2,\cdots,d). 
\tag{24}\] This condition allows us to express \(\chi^{(B)}_{n}\) as a function of \(\bar{\chi}^{(A)}_{m}\) with \(m\leq n\). For example, the above condition implies \(\bar{X}_{X}\neq 0\) and \(\bar{\mathcal{Z}}_{\mathcal{Z}}\neq 0\), and therefore we have \(X=X(\phi,\bar{X})\) and \(\mathcal{Z}=\mathcal{Z}(\phi,\bar{X},\bar{\mathcal{Z}})\). Surprisingly, one can show that the conditions (23) and (24) are equivalent to the following simpler condition: \[\bar{X}=\bar{X}(\phi,X),\qquad\bar{X}_{X}\neq 0,\qquad\bar{\mathcal{Z}}=\bar{\mathcal{Z}}(\phi,X,\mathcal{Z}),\qquad\bar{\mathcal{Z}}_{\mathcal{Z}}\neq 0, \tag{25}\] which is nothing but the \(n=1\) and \(n=2\) parts of Eqs. (23) and (24). In other words, the remaining parts of the conditions (23) and (24) (i.e., \(3\leq n\leq d\)) are redundant. This can be verified by explicit computation of \(\bar{\chi}_{n}^{(A)}\). For instance, under the condition (25), we obtain \[\bar{\mathcal{Z}}=\frac{\bar{X}_{X}^{2}}{f_{0}}\mathcal{Z},\qquad\bar{\mathcal{U}}=\frac{\bar{X}_{X}}{f_{0}}\left(\bar{\mathcal{Z}}_{\mathcal{Z}}\mathcal{U}+\bar{\mathcal{Z}}_{X}\mathcal{Z}\right),\qquad\bar{\mathcal{V}}=\frac{1}{f_{0}}\left(\bar{\mathcal{Z}}_{\mathcal{Z}}^{2}\mathcal{V}+2\bar{\mathcal{Z}}_{X}\bar{\mathcal{Z}}_{\mathcal{Z}}\mathcal{U}+\bar{\mathcal{Z}}_{X}^{2}\mathcal{Z}\right). \tag{26}\] The first equation together with Eq. (25) implies \(f_{0}=f_{0}(\phi,X,\mathcal{Z})\), so that we have \(\bar{\mathcal{U}}=\bar{\mathcal{U}}(\phi,X,\mathcal{Z},\mathcal{U})\), \(\bar{\mathcal{V}}=\bar{\mathcal{V}}(\phi,X,\mathcal{Z},\mathcal{U},\mathcal{V})\), and \[\left|\frac{\partial(\bar{\mathcal{U}},\bar{\mathcal{V}})}{\partial(\mathcal{U},\mathcal{V})}\right|=\frac{\bar{X}_{X}\bar{\mathcal{Z}}_{\mathcal{Z}}^{3}}{f_{0}^{2}}\neq 0. \tag{27}\] Therefore, we see that the conditions (23) and (24) are satisfied for \(n=3\) (recall that \(\chi_{3}^{(1)}=\mathcal{U}\) and \(\chi_{3}^{(2)}=\mathcal{V}\)). Likewise, one can straightforwardly recover the parts of Eqs. (23) and (24) with \(3\leq n\leq d\) from the condition (25). Let us now specify the independent functional DOFs that characterize the invertible subclass of the generalized disformal transformation (17). Equation (25) motivates us to regard \(\bar{X}=\bar{X}(\phi,X)\) and \(\bar{\mathcal{Z}}=\bar{\mathcal{Z}}(\phi,X,\mathcal{Z})\) (such that \(\bar{X}_{X}\neq 0\) and \(\bar{\mathcal{Z}}_{\mathcal{Z}}\neq 0\)) as given functions. Then, from \(\bar{X}=Xf_{0}/\mathcal{F}\) [with \(\mathcal{F}\) defined in Eq. (19)] and \(\bar{\mathcal{Z}}=\bar{X}_{X}^{2}\mathcal{Z}/f_{0}\), one can express \(f_{0}\) and \(f_{1}\) in terms of \(\bar{X}\) and \(\bar{\mathcal{Z}}\) as well as the other coefficient functions \(f_{n}^{(A)}\) (\(n=2,\cdots,d\)) as \[f_{0}=\frac{\bar{X}_{X}^{2}(\phi,X)\,\mathcal{Z}}{\bar{\mathcal{Z}}(\phi,X,\mathcal{Z})},\qquad f_{1}=\frac{1}{\bar{X}(\phi,X)}-\frac{f_{0}}{X}+\frac{1}{f_{0}}\xi_{\alpha}\xi^{\alpha}, \tag{28}\] where \(f_{n}^{(A)}\)'s are encapsulated in \(\xi_{\mu}\) [see Eq. (17)]. Note that we have \(\mathcal{F}=X\bar{X}_{X}^{2}\mathcal{Z}/(\bar{X}\bar{\mathcal{Z}})\). Thus, the independent functional DOFs are \(\bar{X}(\phi,X)\), \(\bar{\mathcal{Z}}(\phi,X,\mathcal{Z})\), and \(f_{n}^{(A)}\) (\(n\geq 2\)), with \(f_{n}^{(A)}\)'s being arbitrary functions of \(\phi\) and the elements of \(E_{d}\).7 Footnote 7: A simple example of invertible generalized disformal transformations of the form (17) can be obtained by putting \(\bar{X}=X\) and \(\bar{\mathcal{Z}}=\mathcal{Z}\). 
In this case, we have \(\bar{g}_{\mu\nu}=g_{\mu\nu}+(\xi_{\alpha}\xi^{\alpha})\phi_{\mu}\phi_{\nu}+2 \phi_{(\mu}\xi_{\nu)}\), for which \(\bar{\chi}_{n}^{(A)}=\chi_{n}^{(A)}\). Having obtained the invertibility condition, let us explicitly construct the inverse transformation for the generalized disformal transformation (17). To this end, one has to know how the building blocks of the transformation are related to barred quantities. Since \(\chi_{n}^{(A)}\) is a function of \(\bar{\chi}_{m}^{(B)}\) with \(m\leq n\), the derivative of \(\chi_{n}^{(A)}\) can be written in the form \[\mathrm{D}_{\mu}\chi_{n}^{(A)}=\sum_{m=1}^{n}\sum_{B}J_{nm}^{(AB)}\mathrm{D}_{ \mu}\bar{\chi}_{m}^{(B)},\qquad J_{nm}^{(AB)}\coloneqq\frac{\partial\chi_{n} ^{(A)}}{\partial\bar{\chi}_{m}^{(B)}}, \tag{29}\] which enables us to express \(\xi_{\mu}\) as \[\xi_{\mu} =\sum_{n=1}^{d-1}\sum_{m=1}^{n}\sum_{A,B}f_{n+1}^{(A)}J_{nm}^{(AB) }\mathrm{D}_{\mu}\bar{\chi}_{m}^{(B)}\] \[=\Xi\phi_{\mu}+\sum_{n=1}^{d-1}\sum_{m=1}^{n}\sum_{A,B}f_{n+1}^{ (A)}J_{nm}^{(AB)}\bar{\mathrm{D}}_{\mu}\bar{\chi}_{m}^{(B)}. \tag{30}\] Here, we have employed Eq. (21) and defined the following quantity: \[\Xi\coloneqq-\frac{1}{f_{0}}\sum_{n=1}^{d-1}\sum_{m=1}^{n}\sum_{A,B}f_{n+1}^{ (A)}J_{nm}^{(AB)}\xi^{\alpha}\mathrm{D}_{\alpha}\bar{\chi}_{m}^{(B)}, \tag{31}\] which is a function of \(\phi\) and the elements of \(E_{d}\). We are now ready to write down the unbarred metric in terms of barred quantities. Written explicitly, \[g_{\mu\nu} =\frac{1}{f_{0}}\left(\bar{g}_{\mu\nu}-f_{1}\phi_{\mu}\phi_{\nu}-2 \phi_{(\mu}\xi_{\nu)}\right)\] \[=\frac{1}{f_{0}}\left[\bar{g}_{\mu\nu}-\left(f_{1}+2\Xi\right)\phi _{\mu}\phi_{\nu}-2\sum_{n=1}^{d-1}\sum_{m=1}^{n}\sum_{A,B}f_{n+1}^{(A)}J_{nm}^ {(AB)}\phi_{(\mu}\bar{\mathrm{D}}_{\nu)}\bar{\chi}_{m}^{(B)}\right], \tag{32}\] where each \(\chi_{n}^{(A)}\) on the right-hand side should be regarded as a function of \(\bar{\chi}_{m}^{(B)}\) with \(m\leq n\). This provides the inverse disformal transformation associated with Eq. (17). ### Generalized disformal theories So far, we have clarified the invertibility condition for generalized disformal transformations of the form (17) that involve arbitrary higher-order covariant derivatives of the scalar field [see Eq. (25)]. By performing such an invertible transformation on Horndeski/U-DHOST theories, one can further extend the framework of GDH/GDU theories.4 Schematically, the action of Horndeski/U-DHOST theories has the following form: Footnote 4: If one starts from a theory where the scalar field is nondynamical (e.g., the cuscuton [59] or its extension [60; 61]), then the scalar field remains nondynamical in the resultant theory. \[S_{\text{g}}[g,\phi]=\int\mathrm{d}^{4}x\sqrt{-g}\,\mathcal{L}(g_{\mu\nu},R_{ \mu\nu\lambda\sigma},\phi,\phi_{\mu},\phi_{\mu\nu}), \tag{33}\] with \(R_{\mu\nu\lambda\sigma}\) being the four-dimensional Riemann tensor and \(\phi_{\mu\nu}\coloneqq\nabla_{\mu}\nabla_{\nu}\phi\). Note that Eq. (7) can always be recast in this form. The generalized disformal transformation of the action (33) is obtained by replacing \(g_{\mu\nu}\) with \(\bar{g}_{\mu\nu}[g,\phi]\). 
More concretely, we perform the following replacements: \[\begin{split}\sqrt{-g}&\to\sqrt{-g}\,f_{0} \mathcal{F}^{1/2},\\ R^{\mu}{}_{\nu\lambda\sigma}&\to R^{\mu}{}_{\nu \lambda\sigma}+2\nabla_{[\lambda}C^{\mu}{}_{\sigma]\nu}+2C^{\mu}{}_{\alpha[ \lambda}C^{\alpha}{}_{\sigma]\nu},\\ \phi_{\mu\nu}&\to\phi_{\mu\nu}-C^{\lambda}{}_{\mu \nu}\phi_{\lambda}.\end{split} \tag{34}\] Here, we have defined \[C^{\lambda}{}_{\mu\nu}\coloneqq\bar{g}^{\lambda\alpha}\left(\nabla_{(\mu}\bar{ g}_{\nu)\alpha}-\frac{1}{2}\nabla_{\alpha}\bar{g}_{\mu\nu}\right), \tag{35}\] which corresponds to the change of the Christoffel symbol under the generalized disformal transformation. Note that \(\phi\) and \(\phi_{\mu}\) remain unchanged under the transformation. We emphasize that our generalized disformal transformation (17) respects the properties [A] and [B], and hence the generalized disformal theories described by the action \(\tilde{S}_{\text{g}}[g,\phi]\coloneqq S_{\text{g}}[\bar{g},\phi]\) accommodate consistent matter coupling. Finally, let us briefly comment on the relation to the so-called effective field theory (EFT) of inflation/dark energy [62; 63; 64]. When one studies cosmology based on scalar-tensor theories, the EFT description provides a useful and robust framework for studying the dynamics of perturbations.5 In this context, one assumes that the background scalar field has a timelike profile so that one can choose the unitary gauge where the scalar DOF is eaten by the metric. Such a scalar field spontaneously breaks the time diffeomorphism and the residual spacetime symmetries are only the spatial diffeomorphisms. Therefore, in the unitary gauge, the action is written in terms of geometrical quantities that respect spatial covariance as well as those respecting full spacetime covariance. It is straightforward to write down the action of cosmological perturbations up to the leading order in the derivative expansion, and it accommodates the case of Horndeski theories, as it should. The EFT was extended to incorporate the DHOST theories in Ref. [23] and then further extended to incorporate the GDH/GDU theories in Ref. [38]. The idea of Ref. [38] is to start from the EFT action up to the leading order in the derivative expansion and then perform the generalized disformal transformation (1) that involves covariant derivatives of the scalar field up to the second order. In particular, it turned out that the effects of the GDH/GDU theories appear already at the level of linear perturbations (i.e., at the level of quadratic action). The same idea can be applied to the transformation (17) that involves arbitrary higher-order covariant derivatives of the scalar field. However, in this case, the effects of the new terms would show up only at the level of second- or higher-order perturbations (i.e., at the level of cubic- or higher-order action). For instance, the first two new building blocks of the transformation (17), \(\mathcal{U}=\mathrm{D}_{\alpha}\chi\mathrm{D}^{\alpha}\mathcal{Z}\) and \(\mathcal{V}=\mathrm{D}_{\alpha}\mathcal{Z}\mathrm{D}^{\alpha}\mathcal{Z}\), are at least cubic and quartic order in perturbations, respectively. (Note that \(\mathcal{Z}\) is defined by \(\mathcal{Z}=\mathrm{D}_{\alpha}X\mathrm{D}^{\alpha}X\) and \(\mathrm{D}_{\mu}\) corresponds to the spatially covariant derivative under the unitary gauge, and hence \(\mathcal{Z}\) starts at the quadratic order.) 
It should be noted that the above discussion assumes a homogeneous and isotropic cosmological background, and the situation could change if one considers an inhomogeneous background. Recently, the EFT of perturbations on an arbitrary background spacetime with a timelike scalar profile was formulated in Ref. [53] and then applied to black hole perturbations in Refs. [67; 54; 68]. It would be intriguing to extend this EFT to incorporate our generalized disformal theories, which we leave for future work. Footnote 5: The EFT was recently extended to accommodate vector-tensor theories [65] and solids/fluids [66], where the symmetry breaking pattern is different from the one in the case of scalar-tensor theories. ## IV Conclusions Invertible disformal transformations provide a useful tool to investigate ghost-free scalar-tensor theories. Recently, a general class of invertible disformal transformations with covariant derivatives of the scalar field up to the second order was constructed in Ref. [33], which enabled us to construct a novel class of ghost-free scalar-tensor theories, i.e., GDH/GDU theories. These generalized disformal theories lie outside the conventional DHOST class and hence would exhibit novel phenomena that can be tested with experiments/observations. The consistency of matter coupling was studied in Refs. [34; 39; 41; 42], and it was shown that there exists a nontrivial subclass of generalized disformal theories where matter fields can be coupled without introducing an unwanted extra DOF in the unitary gauge. As we have discussed in §II, there are two properties (i.e., the properties [A] and [B]) that ensure the consistency of matter coupling. Along this line of thought, in the present paper, we have constructed a class of invertible disformal transformations with arbitrary higher-order covariant derivatives of the scalar field, respecting the properties [A] and [B]. The explicit form of the transformation has been obtained in §III.1 [see Eq. (17)], and the invertibility condition has been studied in §III.2. We have shown that the invertibility condition can be written in the simple form of Eq. (25). Moreover, we have specified the independent functional DOFs that characterize the invertible subclass of the generalized disformal transformation [see the discussion around Eq. (28)]. As discussed in §III.3, these transformations can be used to further extend the framework of the GDH/GDU theories, keeping the consistency of matter coupling in the unitary gauge. There are several possible future directions. As mentioned in §III.3, it would be intriguing to extend the EFT of cosmological perturbations [62; 63; 64] or the EFT of perturbations on an arbitrary background spacetime with a timelike scalar profile [53] to incorporate generalized disformal theories with arbitrary higher-order derivatives. It is also interesting to study the screening mechanism. As shown in Refs. [69; 70; 71], the Vainshtein screening mechanism [72; 73] is built into the Horndeski theories. On the other hand, in the DH theories, the authors of Refs. [74; 75; 76; 77; 78; 79] showed that the Vainshtein screening is partially broken inside astrophysical bodies. Since our generalized disformal theories involve arbitrary higher-order spatial derivatives, their effect on small scales could be much more significant than in known theories. We leave these issues for future studies. ###### Acknowledgements. The author was supported by Japan Society for the Promotion of Science KAKENHI Grant Nos. 
JP22KJ1646 and JP23K13101.
2308.11518
EM for Mixture of Linear Regression with Clustered Data
Modern data-driven and distributed learning frameworks deal with diverse massive data generated by clients spread across heterogeneous environments. Indeed, data heterogeneity is a major bottleneck in scaling up many distributed learning paradigms. In many settings however, heterogeneous data may be generated in clusters with shared structures, as is the case in several applications such as federated learning where a common latent variable governs the distribution of all the samples generated by a client. It is therefore natural to ask how the underlying clustered structures in distributed data can be exploited to improve learning schemes. In this paper, we tackle this question in the special case of estimating $d$-dimensional parameters of a two-component mixture of linear regressions problem where each of $m$ nodes generates $n$ samples with a shared latent variable. We employ the well-known Expectation-Maximization (EM) method to estimate the maximum likelihood parameters from $m$ batches of dependent samples each containing $n$ measurements. Discarding the clustered structure in the mixture model, EM is known to require $O(\log(mn/d))$ iterations to reach the statistical accuracy of $O(\sqrt{d/(mn)})$. In contrast, we show that if initialized properly, EM on the structured data requires only $O(1)$ iterations to reach the same statistical accuracy, as long as $m$ grows up as $e^{o(n)}$. Our analysis establishes and combines novel asymptotic optimization and generalization guarantees for population and empirical EM with dependent samples, which may be of independent interest.
Amirhossein Reisizadeh, Khashayar Gatmiry, Asuman Ozdaglar
2023-08-22T15:47:58Z
http://arxiv.org/abs/2308.11518v1
# EM for Mixture of Linear Regression with Clustered Data ###### Abstract Modern data-driven and distributed learning frameworks deal with diverse massive data generated by clients spread across heterogeneous environments. Indeed, _data heterogeneity_ is a major bottleneck in scaling up many distributed learning paradigms. In many settings however, heterogeneous data may be generated in _clusters_ with shared structures, as is the case in several applications such as federated learning where a common latent variable governs the distribution of all the samples generated by a client. It is therefore natural to ask how the underlying clustered structures in distributed data can be exploited to improve learning schemes. In this paper, we tackle this question in the special case of estimating \(d\)-dimensional parameters of a two-component mixture of linear regressions problem where each of \(m\) nodes generates \(n\) samples with a _shared_ latent variable. We employ the well-known Expectation-Maximization (EM) method to estimate the maximum likelihood parameters from \(m\) batches of dependent samples each containing \(n\) measurements. Discarding the clustered structure in the mixture model, EM is known to require \(\mathcal{O}(\log(mn/d))\) iterations to reach the statistical accuracy of \(\mathcal{O}(\sqrt{d/(mn)})\). In contrast, we show that if initialized properly, EM on the structured data requires only \(\mathcal{O}(1)\) iterations to reach the same statistical accuracy, as long as \(m\) grows up as \(e^{o(n)}\). Our analysis establishes and combines novel asymptotic optimization and generalization guarantees for population and empirical EM with dependent samples, which may be of independent interest. ## 1 Introduction With the ever-growing applications of data-intensive and distributed learning paradigms, it becomes more critical to address new challenges associated with such frameworks. For instance, federated learning is a novel distributed learning architecture consisting a central parameter server and a network of clients (or nodes) each equipped with locally generated data. In general, the main premise of such distributed learning methods is to estimate the underlying ground truth model using the collective data samples across the clients. _Data heterogeneity_ (or non-i.i.d. data) is among the most significant challenges in scaling up distributed learning methods. Indeed, naive distributed and federated benchmarks such as FedAvg are known to diverge if deployed on highly heterogeneous settings, unless particularly tailored for non-i.i.d. data (Karimireddy et al., 2020). In this paper, we consider a _structured_ or _clustered_ data heterogeneity model which roots in an observation specific to modern data-driven distributed and federated learning applications. Under this structured heterogeneity model, an _identical_ and unobserved latent variable governs the distribution of _all_ the samples generated at any node (Pei et al., 2017; Hendrycks and Dietterich, 2019; Robey et al., 2020; Diamandis et al., 2021). Particularly in this paper, we zoom in on _mixture of linear regression_ model which is a classical approach to capture data heterogeneity (Jordan and Jacobs, 1994; Xu et al., 2016; Viele and Tong, 2002). To be more clear, in our setting each node observes not one but a potentially large number of linear measurements for all of which a common latent variable governs the true parameter. 
These latent variables are unknown, random, independent and identically distributed across the nodes. Throughout the paper, we refer to this model as _clustered mixture of linear regressions_, or C-MLR in short. Our goal in this work is to estimate the maximum likelihood parameters of the regression model in the above-described C-MLR heterogeneity model using the collection of _all_ the observations across all the devices. However, maximizing likelihood objectives is notoriously intractable in general, due to the non-convexity of the likelihood function (Yi et al., 2014). The most popular approach for computationally efficient inference in such models with latent variables is the Expectation-Maximization (EM) method (Dempster et al., 1977; Redner and Walker, 1984; Wu, 1983). We therefore aim to study optimization and generalization characteristics of the EM method in estimating the C-MLR models. To this end, we first characterize and analyse the so-called _population EM_ variant for which we establish an asymptotic, local and deterministic convergence guarantee. Next, we move to the empirical counterpart with finite number of observations known as the _empirical EM_ method and provide probabilistic generalization bounds on its estimation error. Both results are local and asymptotic. That is, our analysis relies on the assumption that the initial iterate of the EM method is suitable (as opposed to random). Moreover, we let the number of nodes and the number of samples per node grow while all the other parameters are assumed to be constant. To be more specific, let us precisely describe the C-MLR model in the following. ### Clustered MLR model As discussed above and motivated by distributed learning applications, we consider a collection of \(m\) nodes where each node \(j=1,\cdots,m\) observes \(n\) pairs of measurements denoted by \(\{(x_{i}^{j},y_{i}^{j})|i=1,\cdots,n\}\). Here, \(x_{i}^{j}\in\mathcal{X}\subseteq\mathbb{R}^{d}\) and \(y_{i}^{j}\in\mathcal{Y}\subseteq\mathbb{R}\) denote the covariate and response variables, respectively. These observations are linear measurements of a _clustered_ mixture of linear regressions (C-MLR) model described below \[y_{i}^{j}=\xi^{j}\langle x_{i}^{j},\theta^{*}\rangle+\epsilon_{i}^{j},\quad i=1,\cdots,n,\quad j=1,\cdots,m.\] (C-MLR) In this model, \(\xi^{j}\in\Xi\) denotes the hidden latent variable corresponding to node \(j\). In this paper, we focus on a symmetric two-component mixture of linear regressions with \(\Xi=\{-1,+1\}\), where \(\xi^{j}\) takes on values uniformly at random, denoted by \(\xi^{j}\sim\mathcal{U}\{\pm 1\}\). Note that this latent variable is _identical_ for _all_ the measurements of a given node; however, we assume that the latent variables are _independent_ across different nodes. Moreover, we let \(\theta^{*}\in\mathbb{R}^{d}\) denote the fixed and unknown ground truth regression vector and assume that covariates and noises are independent and Gaussian with \(x_{i}^{j}\sim\mathcal{N}(0,I_{d})\) and \(\epsilon_{i}^{j}\sim\mathcal{N}(0,\sigma^{2})\), respectively. This model clearly implies that the observations of any given node are _not_ independent due to the shared latent variable. In the remainder of the paper, we denote the signal-to-noise ratio (SNR) by \(\mathsf{snr}=\|\theta^{*}\|/\sigma\). _Remark 1_.: The C-MLR model in (1) captures the underlying node-dependent data heterogeneity through the latent variable \(\xi^{j}\) which is shared and identical for all the \(n\) samples measured by node \(j\). 
Therefore, C-MLR is a well-motivated abstract model to encapsulate the structured data heterogeneity observed in modern distributed learning applications, as discussed before (Diamandis et al., 2021). _Remark 2_.: We further clarify that in the C-MLR model described above, the term "clustered" refers to the fact that data samples are available in batches of size \(n\) where all the \(n\) samples in each batch share the same latent variable \(\xi\). It is worth noting, though, that the folklore two-component MLR model with independent latent variables partitions the samples into two clusters as well. However, we adopt the term "clustered" to particularly underscore the batched structure modeled in (1). _Remark 3_.: In our asymptotic analysis in this paper, we are interested in the regime where \(m\) and \(n\) grow while the other problem parameters, namely \(\|\theta^{*}\|\), \(\sigma\), and \(d\), remain constant. Our main goal in this paper is to answer the following question: _What is the iteration complexity of the sample-based EM algorithm to estimate the ground truth \(\theta^{*}\) from \(m\) batches of samples, each of size \(n\), generated by the C-MLR model described in (1)?_ We answer this question in this paper as follows. We assume that \(m\) batches of in total \(mn\) samples generated by the C-MLR model in (1) are available, where \(m\) grows at most up to \(e^{o(n)}\). We prove that, if initialized within a constant-size neighbourhood of the ground truth \(\theta^{*}\), then after \(T=\mathcal{O}(1)\) iterations of the sample-based (or empirical) EM algorithm, either (_i_) there exists an iterate \(0\leq t\leq T\) of the algorithm for which \(\|\theta_{t}-\theta^{*}\|\leq\mathcal{O}(\sqrt{d/(mn)})\); or (_ii_) \(\|\theta_{T}-\theta^{*}\|\leq\mathcal{O}(\sqrt{d/(mn)})\) with high probability. Our result is asymptotic, that is, it holds for sufficiently large \(n\). To highlight this result, it is worth noting that the underlying clustered structure in C-MLR is essential for a constant iteration complexity. Indeed, if this structure is discarded, the EM algorithm requires \(\mathcal{O}(\log(mn/d))\) iterates to reach the same statistical accuracy. **Contribution.** To summarize the above discussion, we consider a data heterogeneity structure observed in various distributed learning applications such as federated learning where a latent variable governs the distribution of all the samples generated on any node. In particular, we zoom in on a _clustered_ two-component mixture of linear regression model described in (1) where all the linear measurements of any node share their binary latent variable. We utilize the EM algorithm to estimate the maximum likelihood regressor and establish asymptotic and local optimization and generalization guarantees for both population and empirical EM updates. Lastly, we employ these two results and asymptotically characterize the iteration complexity of the sample-based EM algorithm to estimate the ground truth parameters of the C-MLR model. **Related work.** Studying convergence characteristics of Expectation-Maximization (EM) dates back to the seminal work of Wu (1983) in which asymptotic and local convergence of EM is established for general latent variable models. Balakrishnan et al. (2017) provides a general framework to analyze local convergence of the EM algorithm in several settings such as mixture of linear regressions (MLR) and Gaussian mixture model (GMM). 
Several follow-up works study GMM, MLR and Missing Covariate Regression (MCR) models including Yi and Caramanis (2015); Daskalakis et al. (2017); Li and Liang (2018); Klusowski et al. (2019); Ghosh and Kannan (2020); Yan et al. (2017). Although it is not the main focus of this paper, global convergence of the EM method (with random initialization) has been extensively studied for Gaussian mixture model (Chen et al., 2019) and mixture of linear regressions (Kwon et al., 2019; Wu and Zhou, 2019). Another interesting direction is establishing statistical lower bounds on the accuracy of the EM method for the MLR model (Kwon et al., 2021). Going beyond the two-component MLR model, Kwon and Caramanis (2020) proves that well-initialized EM converges to the true regression parameters of \(k\)-component MLR in certain SNR regimes. In the same setting, Chen et al. (2020) proposes an algorithm that is sub-exponential in \(k\). For the noiseless MLR model, Yi et al. (2014, 2016) were among the first works to establish convergence guarantees for EM. To tackle the computational complexity of EM in learning MLR models, Li and Liang (2018); Zhong et al. (2016) propose gradient descent-type methods with nearly optimal sample complexity. From a practical point of view, EM has demonstrated empirical success in MLR models (Jordan and Jacobs, 1994; De Veaux, 1989) and its simple implementation has made it a suitable choice in several applications (Chen and Li, 2009; Li et al., 2009). ## 2 Preliminaries In this section, we first review background on MLE and EM and then characterize the population and empirical EM updates for our C-MLR model, followed by an insightful benchmark. ### Maximum Likelihood Estimator and EM Algorithm **Population EM.** Let us focus on one node observing \(n\) samples \(\{(x_{i},y_{i})|i=1,\cdots,n\}\) where we adopt the shorthand notations \(x_{[n]}=(x_{1},\cdots,x_{n})\) and \(y_{[n]}=(y_{1},\cdots,y_{n})\). Furthermore, let \(\xi\) denote the latent variable of this node in the C-MLR model described in (1). To reiterate the underlying C-MLR model, we have that \[y_{i}=\xi\langle x_{i},\theta^{*}\rangle+\epsilon_{i},\quad i=1,\cdots,n. \tag{2}\] As discussed before, in our setting, only the variables \((x_{[n]},y_{[n]})\) are observed and the latent variable \(\xi\in\Xi\) remains hidden. Suppose that the tuple \((x_{[n]},y_{[n]},\xi)\) is generated by the joint distribution \(f_{\theta^{*}}\), a member of the parametric family \(\{f_{\theta}\,|\,\theta\in\Omega\}\), where \(\Omega\) is a non-empty compact convex set. As our main goal in this paper, we aim to estimate the ground-truth model \(\theta^{*}\) by maximizing the likelihood function, that is, finding \(\hat{\theta}\in\Omega\) that maximizes the following likelihood \[g_{\theta}(x_{[n]},y_{[n]})=\int_{\Xi}f_{\theta}(x_{[n]},y_{[n]},\xi)\mathrm{d}\xi.\] In many settings, it is computationally expensive to compute the likelihood function \(g_{\theta}(x_{[n]},y_{[n]})\), while computing the log-likelihood \(\log f_{\theta}(x_{[n]},y_{[n]},\xi)\) is relatively easy. The EM method is an iterative algorithm that aims to maximize a lower bound on the log-likelihood \(\log g_{\theta}(\cdot,\cdot)\). This lower bound, which is known as the \(Q\)-function, can be written as follows \[Q(\theta^{\prime}|\theta)=\int_{\mathcal{X}^{n}\times\mathcal{Y}^{n}}\bigg{(}\int_{\Xi}f_{\theta}(\xi|x_{[n]},y_{[n]})\log f_{\theta^{\prime}}(x_{[n]},y_{[n]},\xi)\mathrm{d}\xi\bigg{)}f_{\theta^{*}}(x_{[n]},y_{[n]})\mathrm{d}x_{[n]}\mathrm{d}y_{[n]}. 
\tag{3}\] At each iteration of EM (Algorithm 1) and given the current estimate of the true model \(\theta\), the next model is obtained by maximizing the above \(Q\)-function, that is, \(\theta\gets M(\theta)\) where \[M(\theta)\coloneqq\operatorname*{arg\,max}_{\theta^{\prime}\in\Omega}Q(\theta^{\prime}|\theta). \tag{4}\] Note that computing \(M(\cdot)\) requires having access to the joint distribution \(f_{\theta^{*}}\), or to put it differently, observed data from infinitely many nodes (\(m\to\infty\)) is required. We call this variant of the EM algorithm _population EM_ and discuss the _empirical_ variant with finitely many clients (finite \(m\)) in the following section. The next proposition characterizes the \(M\)-function and the population EM update. **Proposition 2.1** (Population EM).: _Consider \(n\) linear measurements from the C-MLR model in (2) with Gaussian features \(X_{i}\sim\mathcal{N}(0,I_{d})\) and noises \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\) with shared latent variable \(\xi\sim\mathcal{U}\{\pm 1\}\). Then, the \(M(\cdot)\) function of the population EM defined in (4) is as follows_ \[M(\theta)=\mathbb{E}\bigg{[}X_{1}Y_{1}\tanh\bigg{(}\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\langle X_{i},\theta\rangle Y_{i}\bigg{)}\bigg{]}. \tag{5}\] Proof.: We defer the proof to Appendix C.1. Note that equally likely \(\xi\in\{\pm 1\}\) makes the distribution of \(Y^{n}\) symmetric given \(X^{n}\). Moreover, \(\tanh(\cdot)\) is an odd function and therefore, the expectation in (5) can also be taken with respect to \(X_{i}\sim\mathcal{N}(0,I_{d})\) and \(Y_{i}|X_{i}\sim\mathcal{N}(\langle X_{i},\theta^{*}\rangle,\sigma^{2})\), _i.e._ no randomness in the latent variable \(\xi\). **Empirical EM.** For a finite number of nodes \(m\), the empirical EM algorithm updates the estimate of the true model using the empirical \(Q_{m}\)-function defined below \[Q_{m}(\theta^{\prime}|\theta)=\frac{1}{m}\sum_{j=1}^{m}\int_{\Xi}f_{\theta}(\xi|x_{[n]}^{j},y_{[n]}^{j})\log f_{\theta^{\prime}}(x_{[n]}^{j},y_{[n]}^{j},\xi)\mathrm{d}\xi, \tag{6}\] where samples are independent across different nodes. Similarly, in each iteration of the empirical EM algorithm (Algorithm 2), the current model estimate \(\theta\) is updated to \(\theta\gets M_{m}(\theta)\) where \[M_{m}(\theta)\coloneqq\operatorname*{arg\,max}_{\theta^{\prime}\in\Omega}Q_{m}(\theta^{\prime}|\theta). \tag{7}\] The next proposition characterizes the empirical \(M_{m}\)-function defined in (7). **Proposition 2.2** (Empirical EM).: _Consider \(m\) nodes each observing \(n\) linear measurements generated by the C-MLR model in (1) denoted by \(\{(x_{i}^{j},y_{i}^{j})|i=1,\cdots,n,\,j=1,\cdots,m\}\). Then, the \(M_{m}(\cdot)\) function of the empirical EM defined in (7) can be computed as follows_ \[M_{m}(\theta)=\widehat{\Sigma}^{-1}\frac{1}{mn}\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i}^{j}y_{i}^{j}\tanh\bigg{(}\frac{1}{\sigma^{2}}\sum_{i=1}^{n}\langle x_{i}^{j},\theta\rangle y_{i}^{j}\bigg{)},\ \text{where}\ \ \widehat{\Sigma}\coloneqq\frac{1}{mn}\sum_{j=1}^{m}\sum_{i=1}^{n}x_{i}^{j}x_{i}^{j}{}^{\top} \tag{8}\] _denotes the sample covariance matrix of the total \(mn\) observations._ Proof.: We defer the proof to Appendix C.2. Our goal in the remainder of the paper is to rigorously study the optimization and generalization performance of the population and empirical EM algorithms described above. Before that, let us elaborate on a simple and intuitive benchmark. 
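First, though, the update rules in Propositions 2.1 and 2.2 are straightforward to implement. The NumPy sketch below is an illustrative aside: the problem sizes, the noise level, and the initialization are arbitrary choices rather than prescriptions from the analysis; it draws data from the C-MLR model (1) and iterates the empirical map \(M_{m}\) of Eq. (8).

```python
import numpy as np

def sample_cmlr(m, n, theta_star, sigma, rng):
    """Draw m batches of n linear measurements from the C-MLR model (1)."""
    x = rng.normal(size=(m, n, theta_star.size))          # x_i^j ~ N(0, I_d)
    xi = rng.choice([-1.0, 1.0], size=(m, 1))             # one shared latent sign per node
    y = xi * (x @ theta_star) + sigma * rng.normal(size=(m, n))
    return x, y

def em_update(theta, x, y, sigma):
    """One empirical EM step, M_m(theta) of Eq. (8)."""
    m, n, d = x.shape
    w = np.tanh((x @ theta * y).sum(axis=1) / sigma**2)   # per-node weight tanh(sum_i <x_i^j,theta> y_i^j / sigma^2)
    Sigma_hat = np.einsum('jia,jib->ab', x, x) / (m * n)  # sample covariance of all mn covariates
    rhs = np.einsum('j,jia,ji->a', w, x, y) / (m * n)     # (1/mn) sum_j w_j sum_i x_i^j y_i^j
    return np.linalg.solve(Sigma_hat, rhs)

rng = np.random.default_rng(1)
d, m, n, sigma = 5, 200, 50, 1.0
theta_star = rng.normal(size=d)
x, y = sample_cmlr(m, n, theta_star, sigma, rng)

theta = theta_star + 0.1 * rng.normal(size=d)             # a "suitable" initialization near theta*
for t in range(5):
    theta = em_update(theta, x, y, sigma)
    print(t, np.linalg.norm(theta - theta_star))
```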
### A benchmark: EM with independent samples As we described in our C-MLR model in (1), the measurements observed on a given node share the same latent variable, making them dependent. In contrast, the well-established literature on EM is centered around the i.i.d. setting where each sample is generated through a latent variable independent of the ones for any other sample. To be more precise, consider the setting where \(N\) i.i.d. linear measurements \(\{(x_{i},y_{i})|i=1,\cdots,N\}\) generated by a mixture of two component linear regression model are available. That is, \(y_{i}=\xi_{i}\langle x_{i},\theta^{*}\rangle+\epsilon_{i}\) for all \(i=1,\cdots,N\) where \(\xi_{i}\sim\mathcal{U}\{\pm 1\}\), \(x_{i}\sim\mathcal{N}(0,I_{d})\) and \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\) are i.i.d. and mutually independent. In this setting, the population and empirical EM update rules are as follows \[M(\theta)=\mathbb{E}\Big{[}XY\tanh\Big{(}\frac{1}{\sigma^{2}}\langle X, \theta\rangle Y\Big{)}\Big{]},\ \text{and}\ \ M_{N}(\theta)=\widehat{\Sigma}^{-1}\frac{1}{N}\sum_{i=1}^{N}x_{i}y_{i} \tanh\Big{(}\frac{1}{\sigma^{2}}\langle x_{i},\theta\rangle y_{i}\Big{)}, \tag{9}\] where the expectation is over \(X\sim\mathcal{N}(0,I_{d})\), \(\xi\sim\mathcal{U}\{\pm 1\}\) and \(Y|X,\xi\sim\mathcal{N}(\xi\langle X,\theta^{*}\rangle,\sigma^{2})\). In above, \(\widehat{\Sigma}=1/N\sum_{i=1}^{N}x_{i}x_{i}^{\top}\) denotes the sample covariance matrix (Balakrishnan et al., 2017; Kwon et al., 2019). In particular, it was shown in Balakrishnan et al. (2017) that for any suitable initialization with \(\|\theta_{0}-\theta^{*}\|\leq\|\theta^{*}\|/32\), after \(T=\log(N/d\cdot\|\theta^{*}\|^{2}/(\|\theta^{*}\|^{2}+\sigma^{2}))\cdot \mathcal{O}(1)\) iterations of empirical EM with update rule \(M_{N}(\cdot)\) as above, the following sub-optimality is guaranteed with probability at least \(1-\delta\), \[\|\theta_{T}-\theta^{*}\|\leq\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\,\sqrt{ \frac{d+\log(1/\delta)}{N}}\,\log\bigg{(}\frac{N}{d}\cdot\frac{\|\theta^{*}\| ^{2}}{\|{\theta^{*}}\|^{2}{+}\sigma^{2}}\bigg{)}\cdot\mathcal{O}(1).\] Now, consider \(N=mn\) linear measurements generated by the C-MLR model in (1) which we also denote by the same notation \(\{(x_{i},y_{i})|i=1,\cdots,N\}\). Clearly, the EM update rules in (9) may not be employed in this setting as samples are not independent due to the shared latent variables. However, one could make such \(N\) samples independent by the following simple trick. For each sample \(i=1,\cdots,N\), let us denote \(\tilde{y}_{i}=\tilde{\xi}_{i}\cdot y_{i}\) where \(\tilde{\xi}_{i}\)s are independent Rademacher variables. In words, \(\tilde{y}_{i}=y_{i}\) or \(\tilde{y}_{i}=-y_{i}\) equally likely. It is straightforward to check that the new \(N\) samples \(\{(x_{i},\tilde{y}_{i})|i=1,\cdots,N\}\) are indeed independent. Therefore, one may employ the guarantee above and conclude that with a suitable initialization and after \(T\) iterations of EM (on the new samples), the final sub-optimality is with probability \(1-\delta\) bounded by \[\|\theta_{T}-\theta^{*}\|\leq\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\,\sqrt{ \frac{d+\log(1/\delta)}{mn}}\,\cdot\tilde{\mathcal{O}}(1),\ \ \text{where}\ \ T=\log\Big{(}\frac{mn}{d}\cdot\frac{\|\theta^{*}\|^{2}}{\|\theta^{*}\|^{2} {+}\sigma^{2}}\Big{)}\cdot\mathcal{O}(1).\] As mentioned before, we aim to characterize the complexity of the EM algorithm deployed on clustered samples per the C-MLR model described in (1). 
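For illustration only, the sign-flip trick above can be summarized in a few lines of NumPy; the function names and shapes are hypothetical, and the per-sample update simply instantiates \(M_{N}\) from (9).

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetrize(X, Y):
    """Decouple clustered C-MLR samples via random sign flips y_i -> xi_i * y_i.

    Assumed shapes: X is (m, n, d), Y is (m, n); returns flattened i.i.d.-style data.
    """
    m, n, d = X.shape
    signs = rng.choice([-1.0, 1.0], size=(m, n))      # independent Rademacher variables
    return X.reshape(m * n, d), (signs * Y).reshape(m * n)

def iid_em_update(theta, X, y, sigma):
    """Standard per-sample EM step M_N(theta) from (9) for i.i.d. two-component MLR."""
    w = np.tanh((X @ theta) * y / sigma**2)           # per-sample posterior weights
    b = (X * (w * y)[:, None]).mean(axis=0)           # (1/N) sum_i w_i x_i y_i
    Sigma_hat = (X.T @ X) / X.shape[0]                # sample covariance matrix
    return np.linalg.solve(Sigma_hat, b)
```

Iterating this per-sample update for the number of iterations quoted above recovers the benchmark guarantee, at the price of an iteration count that grows with \(\log(mn/d)\).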
Before laying out our formal analysis, it is worth highlighting our main result here and comparing it to the simple benchmark described above.

**Theorem** (Main, informal).: _Consider the empirical EM in Algorithm 2 with a constant \(\mathsf{snr}\geq 4\) and any tolerance probability \(\delta\in(0,1)\). Moreover, assume that \(mn\geq\mathcal{O}(d+\log(1/\delta))\) and \(n\geq\mathcal{O}(\log(m)+d+\log(1/\delta))\). Then, for a suitable initialization and sufficiently large \(n\), after \(T=\mathcal{O}(1)\) iterations of Algorithm 2, either_

1. _there exists an iterate_ \(0\leq t\leq T\) _such that_ \[\|\theta_{t}-\theta^{*}\|\leq\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}},\]
2. _or with probability at least_ \(1-\delta\)_,_ \[\|\theta_{T}-\theta^{*}\|\leq\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}\cdot\mathcal{O}(1).\]

Our result above demonstrates that, by incorporating the underlying clustered structure of the C-MLR model, EM requires only \(\mathcal{O}(1)\) iterations to reach the statistical accuracy \(\mathcal{O}(\sqrt{d/(mn)})\) under proper scaling assumptions. In contrast, and as illustrated above, discarding this structure forces the EM algorithm to run for \(\mathcal{O}(\log(mn/d))\) iterations to reach the same accuracy. In the following sections, we prove this result by laying out optimization and generalization guarantees for the EM algorithm on samples generated by the C-MLR model.

## 3 Analysis of Population and Empirical EM Updates

### Population EM update

In this section, we consider the population EM updates in Algorithm 1 with the \(M\) operator characterized in (5) and establish optimization guarantees for it. Let us recall the population EM scenario and the underlying C-MLR model. Denote by \(\{(x_{i},y_{i})|i=1,\cdots,n\}\) the \(n\) pairs of linear measurements generated according to the mixture model (2), that is, \(y_{i}=\xi\langle x_{i},\theta^{*}\rangle+\epsilon_{i}\) for all \(i=1,\cdots,n\). In Proposition 2.1, we characterized the population \(M\)-function, and in the following theorem, we establish its contraction property. Here and throughout the paper, we denote a Euclidean ball of radius \(r\) around the fixed point \(\theta^{*}\) by \(\mathbb{B}(r;\theta^{*})\coloneqq\{\theta\in\Omega\,|\,\|\theta-\theta^{*}\|\leq r\}\).

**Theorem 3.1**.: _Consider the population EM update rule \(M\) in (5) and assume that \(\theta\in\mathbb{B}(\alpha\|\theta^{*}\|;\theta^{*})\) for some constant \(0\leq\alpha<1\). If \(\|\theta-\theta^{*}\|\geq\varepsilon\), then there exist constants \(N_{0}(\alpha,\mathsf{snr})\) and \(C(\alpha,\mathsf{snr})\) depending on \(\alpha\) and \(\mathsf{snr}=\|\theta^{*}\|/\sigma\) such that for any \(n\geq N_{0}(\alpha,\mathsf{snr})\) we have_ \[\|M(\theta)-\theta^{*}\|\leq\kappa\|\theta-\theta^{*}\|,\quad\text{for}\quad\kappa=(\|\theta^{*}\|+\sigma)\Big{(}\mathsf{snr}+\frac{1}{n\varepsilon}\Big{)}\exp\left(-n\cdot C(\alpha,\mathsf{snr})\right).\]

Proof.: We defer the proof to Appendix A.

The result of this theorem reveals a number of insightful remarks as follows.

_Remark 4_.: First, for any constant accuracy lower bound \(\varepsilon\), as the number of samples per node \(n\) grows, the factor \(\kappa\) decreases, and there exists a constant \(N_{0}\) depending on the problem parameters such that for any \(n\geq N_{0}\), the \(M\)-operator is a contraction, that is, \(\kappa<1\).
Secondly, and more importantly, it shows that if initialized within a ball around the ground-truth model \(\theta^{*}\), the iterates of the population EM in Algorithm 1 converge _linearly_ in \(n\) until reaching the accuracy \(\varepsilon\). The following corollary provides an informal but insightful implication of this theorem.

**Corollary 3.1.1** (Informal).: _Suppose that the population EM in Algorithm 1 is initialized with \(\theta_{0}\) where \(\|\theta_{0}-\theta^{*}\|=\mathcal{O}(\|\theta^{*}\|)\). Then, for sufficiently large \(n\) and after \(T=\mathcal{O}(1+\log(n/d)/n)=\mathcal{O}(1)\) iterations, there exists an iterate \(0\leq t\leq T\) for which \(\|\theta_{t}-\theta^{*}\|=\mathcal{O}(\sqrt{d/n}\,\|\theta^{*}\|)\)._

While we provide the proof of Theorem 3.1 in Appendix A, it is worth elaborating on the proof technique as follows.

### Proof sketch

To establish optimization guarantees for the population EM iterates and Algorithm 1, we first adopt the _First-Order Stability_ (FOS) notion (Balakrishnan et al., 2017) as defined below.

**Definition 3.1** (First-Order Stability (FOS)).: _The functions \(\{Q(\cdot|\theta)|\theta\in\Omega\}\) satisfy condition FOS(\(\gamma\)) over \(\mathbb{B}(r;\theta^{*})\) if_ \[\|\nabla Q(M(\theta)|\theta^{*})-\nabla Q(M(\theta)|\theta)\|\leq\gamma\|\theta-\theta^{*}\|,\quad\text{for all }\theta\in\mathbb{B}(r;\theta^{*}).\]

This property of the \(Q\)-function helps in showing the contraction of the population EM operator \(M\). The following general theorem from Balakrishnan et al. (2017) characterizes the conditions under which the population EM operator \(M\) is contractive.

**Theorem 3.2** (Balakrishnan et al. (2017)).: _For some radius \(r>0\) and pair \((\gamma,\lambda)\) such that \(0\leq\gamma<\lambda\), suppose that the function \(Q(\cdot|\theta^{*})\) is \(\lambda\)-strongly concave, and that the FOS(\(\gamma\)) condition holds on the ball \(\mathbb{B}(r;\theta^{*})\). Then, the population EM operator \(M\) is contractive over \(\mathbb{B}(r;\theta^{*})\), in particular,_ \[\|M(\theta)-\theta^{*}\|\leq\frac{\gamma}{\lambda}\|\theta-\theta^{*}\|,\quad\text{for all }\theta\in\mathbb{B}(r;\theta^{*}).\]

For the EM function in (5), we prove the first-order stability property in Definition 3.1 for a fixed \(\theta\). More precisely, for any \(\theta\in\mathbb{B}(\alpha\|\theta^{*}\|;\theta^{*})\), we show that the FOS(\(\gamma\)) property holds true for the population \(Q\)-function (3) with \[\gamma=\frac{1}{\sigma^{2}}(\|\theta^{*}\|+\sigma)\Big{(}n\cdot\mathsf{snr}+\frac{1}{\varepsilon}\Big{)}\exp\left(-n\cdot C(\alpha,\mathsf{snr})\right),\] as long as \(\|\theta-\theta^{*}\|\geq\varepsilon\). On the other hand, it is straightforward to check that the population \(Q\)-function is \(\lambda\)-strongly concave with \(\lambda=n/\sigma^{2}\). This, together with the first-order stability property and Theorem 3.2, yields the contractive property of the population \(M\)-function in Theorem 3.1.

### Empirical EM update

Having set up the optimization guarantees for the population EM (Algorithm 1) in the previous section, we move to the sample-based setting and establish generalization characteristics of the empirical EM. Coupling these two results, we provide convergence guarantees of the (empirical) EM algorithm later in this section.
Let us recall the empirical setting of our interest where each node \(j=1,\cdots,m\) observes \(n\) linear measurements denoted by \(\{(x_{i}^{j},y_{i}^{j})|i=1,\cdots,n\}\) and generated by the C-MLR model in (1), that is, \(y_{i}^{j}=\xi^{j}\langle x_{i}^{j},\theta^{*}\rangle+\epsilon_{i}^{j}\). In the following, we establish a uniform generalization error bound for the empirical EM update with finitely many nodes \(m\) and samples per node \(n\).

**Theorem 3.3** (Generalization gap).: _Consider the C-MLR model in (1) with \(\mathsf{snr}\geq 4\), any tolerance probability \(\delta\in(0,1)\) and the empirical and population EM operators in (8) and (5) with \(mn\geq 192^{2}(d+\log(8/\delta))\) and \(n-64\log m\geq 104(2d+\log(4/\delta))\). Then, with probability at least \(1-\delta\),_ \[\sup_{\theta\in\mathsf{Sh}(\varepsilon,r;\theta^{*})}\|M_{m}(\theta)-M(\theta)\|\leq\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}\cdot\mathcal{O}(1+\kappa(\varepsilon)).\] _Here, the supremum is over the spherical shell \(\mathsf{Sh}(\varepsilon,r;\theta^{*})\coloneqq\{\theta\in\mathbb{R}^{d}:\varepsilon\leq\|\theta-\theta^{*}\|\leq r\}\) with \(r=\|\theta^{*}\|/14\), and \(\kappa(\varepsilon)\) is the contraction factor of the population EM update characterized in Theorem 3.1, i.e.,_ \[\kappa(\varepsilon)=(\|\theta^{*}\|{+}\sigma)\Big{(}\mathsf{snr}+\frac{1}{n\varepsilon}\Big{)}\exp\left(-n\cdot C(\mathsf{snr})\right).\]

Proof.: We defer the proof to Appendix B.

Let us provide a useful implication of Theorem 3.3. Assume that the signal-to-noise ratio is a constant larger than \(1\) and the total number of samples is at least \(mn=\Omega(d+\log(1/\delta))\). Moreover, suppose that the number of nodes is at most \(m=\exp(o(n))\); for instance, it grows at a rate polynomial in \(n\). Now take the accuracy \[\varepsilon_{\ell}=\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}},\] which is of particular interest in this paper. This choice of the accuracy lower bound ensures that for sufficiently large \(n\), the population EM update is contractive, i.e. \(\kappa(\varepsilon_{\ell})<1\). Now, we denote by \(\varepsilon_{\ell}^{\text{unif}}\) the smallest scalar for which \[\sup_{\theta\in\mathsf{Sh}(\varepsilon_{\ell},\frac{1}{14}\|\theta^{*}\|;\theta^{*})}\|M_{m}(\theta)-M(\theta)\|\leq\varepsilon_{\ell}^{\text{unif}}\] with probability at least \(1-\delta\). As a result of Theorem 3.3, we have with high probability that the supremum of the generalization gap \(\|M_{m}(\theta)-M(\theta)\|\) over the spherical shell \(\theta\in\mathsf{Sh}(\varepsilon_{\ell},\|\theta^{*}\|/14;\theta^{*})\) is at most \(\varepsilon_{\ell}^{\text{unif}}\leq C_{\varepsilon}\varepsilon_{\ell}\) for a constant \(C_{\varepsilon}\geq 1\). To put it differently, for any parameter \(\theta\) in a ball around \(\theta^{*}\) with \(\|\theta-\theta^{*}\|\leq\|\theta^{*}\|/14\), if \(\|\theta-\theta^{*}\|\leq\varepsilon_{\ell}\), then \(\theta\) is already a fairly accurate estimate of \(\theta^{*}\). Otherwise, Theorem 3.3 guarantees that the generalization error of the empirical EM update is, with high probability, bounded by a constant multiplicative factor of \(\varepsilon_{\ell}\).

## 4 Main Results on Sample-based EM Algorithm

Having laid out the two main components of our analysis in Theorems 3.1 and 3.3, we are ready to formally state the main result of the paper.
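Before stating the theorem, the following hedged sketch shows how the sample-based EM of Algorithm 2 can be driven end to end on synthetic C-MLR data. The data-generation routine, parameter values, and fixed iteration budget are illustrative assumptions, not the paper's implementation; the EM step repeats the update (8) sketched after Proposition 2.2.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_cmlr(theta_star, m, n, sigma):
    """Draw synthetic data from the C-MLR model (1): one shared latent sign per node."""
    d = theta_star.shape[0]
    X = rng.standard_normal((m, n, d))
    xi = rng.choice([-1.0, 1.0], size=(m, 1))            # latent variable xi^j, shared within node j
    Y = xi * (X @ theta_star) + sigma * rng.standard_normal((m, n))
    return X, Y

def em_step(theta, X, Y, sigma):
    """One empirical EM step M_m(theta) as in (8)."""
    m, n, d = X.shape
    w = np.tanh(np.einsum('jid,d,ji->j', X, theta, Y) / sigma**2)   # one weight per node
    b = np.einsum('j,jid,ji->d', w, X, Y) / (m * n)
    Sigma_hat = np.einsum('jid,jie->de', X, X) / (m * n)
    return np.linalg.solve(Sigma_hat, b)

# Illustrative run: snr = ||theta*||/sigma above 4, initialization in B(||theta*||/14; theta*)
d, m, n, sigma, T = 5, 200, 50, 1.0, 5
theta_star = 2.0 * np.ones(d)
X, Y = sample_cmlr(theta_star, m, n, sigma)
theta = theta_star + 0.05 * rng.standard_normal(d)
for _ in range(T):                                        # constant iteration budget (Algorithm 2)
    theta = em_step(theta, X, Y, sigma)
print(np.linalg.norm(theta - theta_star))                 # estimation error after T steps
```

Per the theorem below, a constant iteration budget is expected to suffice in this regime, in contrast to the logarithmically growing budget of the symmetrized benchmark of Section 2.2.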
**Theorem 4.1** (Main).: _Consider the empirical EM update (8) with \(\mathsf{snr}\geq 4\) and any tolerance probability \(\delta\in(0,1)\), and suppose that the initialization \(\theta_{0}\) is in \(\mathbb{B}(r;\theta^{*})\) for \(r=\|\theta^{*}\|/14\). Moreover, assume that \(mn\geq 192^{2}(d+\log(8/\delta))\) and \(n\geq 64\log(m)+104(2d+\log(4/\delta))\), while \(n\) is large enough that \(\kappa(\varepsilon_{\ell})\leq 1/2\), \(\kappa(\varepsilon_{\ell})\leq\exp(-C_{\kappa}n)\) for a constant \(C_{\kappa}\), and \(4C_{\varepsilon}\varepsilon_{\ell}\leq r/2\). Then, after_ \[T=1+\frac{1}{2C_{\kappa}n}\log\left(mn\cdot\frac{1}{28C_{\varepsilon}}\cdot\frac{\|\theta^{*}\|^{2}}{\|\theta^{*}\|^{2}{+}\sigma^{2}}\cdot\frac{1}{d+\log(1/\delta)}\right)\] _iterations of Algorithm 2, either_

* \(\|\theta_{t}-\theta^{*}\|\leq\varepsilon_{\ell}\) _for some iteration_ \(t=0,1,\cdots,T\)_, or_
* \(\|\theta_{T}-\theta^{*}\|\leq 4C_{\varepsilon}\varepsilon_{\ell}\) _with probability at least_ \(1-\delta\)_._

_Remark 5_.: The result of Theorem 4.1 has the following implications. Let the empirical EM (Algorithm 2) be initialized with \(\theta_{0}\) where \(\|\theta_{0}-\theta^{*}\|\leq\|\theta^{*}\|/14\). In addition, consider the C-MLR model in (1) with a constant SNR larger than \(4\) where \(m\) and \(n\) are such that \(mn\geq\Omega(d+\log(1/\delta))\) and \(n\geq\Omega(\log(m)+d+\log(1/\delta))\), that is, \(m\) grows at a rate no greater than \(e^{o(n)}\). Then, Theorem 4.1 implies that for sufficiently large \(n\) and after \[T=\mathcal{O}(1)+\frac{1}{n}\log\left(\frac{mn}{d}\cdot\frac{\|\theta^{*}\|^{2}}{\|\theta^{*}\|^{2}{+}\sigma^{2}}\right)\cdot\mathcal{O}(1)=\mathcal{O}(1)\] iterations, either \(\|\theta_{t}-\theta^{*}\|\leq\varepsilon_{\ell}\) for some iteration \(t=0,1,\cdots,T\); or otherwise, \[\|\theta_{T}-\theta^{*}\|\leq\mathcal{O}(\varepsilon_{\ell})=\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}\cdot\mathcal{O}(1),\] with probability at least \(1-\delta\). Note that since \(m\leq e^{o(n)}\), the iteration complexity is indeed bounded by a constant, that is, \(T=\mathcal{O}(1+1/n\cdot\log(mn/d))=\mathcal{O}(1)\).

_Remark 6_.: We would like to particularly highlight the fact that the implications of the above theorem are two-fold. Theorem 4.1 shows that if the EM method in Algorithm 2 is applied to the \(mn\) samples generated by the C-MLR model while honoring the underlying structure (i.e., a shared latent variable for the samples of each node), then after only a constant number of iterations, independent of the number of samples, the statistical accuracy \(\mathcal{O}(\sqrt{d/(mn)})\) is attained with high probability. On the one hand, regarding the iteration complexity, this is a significant improvement over the benchmark described in Section 2.2, where the iteration complexity grows logarithmically with the number of samples. On the other hand, Theorem 4.1 guarantees that the statistical accuracy \(\mathcal{O}(\sqrt{d/(mn)})\) is indeed achievable by the same EM algorithm.

### Proof of Theorem 4.1

As mentioned in the theorem's statement, suppose that Algorithm 2 is initialized with \(\theta_{0}\) such that \(\|\theta_{0}-\theta^{*}\|\leq r=\|\theta^{*}\|/14\) and consider any iteration \(t=0,1,\cdots\). We can write that \[\|\theta_{t+1}-\theta^{*}\|=\|M_{m}(\theta_{t})-\theta^{*}\|\leq\|M(\theta_{t})-\theta^{*}\|+\|M_{m}(\theta_{t})-M(\theta_{t})\|.
\tag{10}\] Assume that for all iterates \(0\leq k\leq t\) we have \(\|\theta_{k}-\theta^{*}\|>\varepsilon_{\ell}\), since otherwise the theorem's first claim is concluded. Then, from Theorem 3.1, for large enough \(n\), we have \[\|M(\theta_{t})-\theta^{*}\|\leq\kappa(\varepsilon_{\ell})\cdot\|\theta_{t}-\theta^{*}\|,\quad\text{for}\quad\kappa(\varepsilon_{\ell})=(\|\theta^{*}\|+\sigma)\Big{(}\mathsf{snr}+\frac{1}{n\varepsilon_{\ell}}\Big{)}\exp\left(-n\cdot C(\mathsf{snr})\right).\] In particular, note that \[\frac{1}{n\varepsilon_{\ell}}=\frac{1}{n}\bigg{(}\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}\,\bigg{)}^{-1}=\mathcal{O}\bigg{(}\sqrt{\frac{m}{n}}\,\bigg{)},\] and since \(m\) grows at a rate at most \(m=\exp(o(n))\), there exists a constant \(C_{\kappa}\) such that for large enough \(n\), we have \(\kappa(\varepsilon_{\ell})\leq\exp(-C_{\kappa}n)\) and \(\kappa(\varepsilon_{\ell})\leq 1/2\). In the course of the proof, we show by induction that the iterates remain in the \(r\)-neighborhood of \(\theta^{*}\). Assume that for all iterates \(0\leq k\leq t\) we have \(\|\theta_{k}-\theta^{*}\|\leq r\) and therefore, \(\|M_{m}(\theta_{t})-M(\theta_{t})\|\leq\varepsilon_{\ell}^{\rm unif}\) with probability at least \(1-\delta\). Plugging this into (10), we have that with probability at least \(1-\delta\) \[\|\theta_{t+1}-\theta^{*}\|\leq e^{-C_{\kappa}n}\|\theta_{t}-\theta^{*}\|{+}\varepsilon_{\ell}^{\rm unif} \tag{11}\] Note that the above inequality also implies that \(\|\theta_{t+1}-\theta^{*}\|\leq r/2+r/2=r\), where we used the fact that for large enough \(n\), we have \(\kappa(\varepsilon_{\ell})\leq 1/2\). This concludes the induction argument described before, that is, for any \(t\), if \(\|\theta_{k}-\theta^{*}\|>\varepsilon_{\ell}\) for all \(0\leq k\leq t\), then with probability at least \(1-\delta\), we have that \(\|\theta_{k}-\theta^{*}\|\leq r\) for all \(0\leq k\leq t\). Now, consider the last iterate \(T\) and assume that \(\|\theta_{t}-\theta^{*}\|>\varepsilon_{\ell}\) for all \(0\leq t\leq T\). We condition the rest of the analysis on the event \(\{\|M_{m}(\theta_{t})-M(\theta_{t})\|\leq\varepsilon_{\ell}^{\rm unif}\) for all \(t=0,\cdots,T-1\}\), which happens with probability at least \(1-\delta\). Repeating the argument leading to (11) implies that \[\|\theta_{T}-\theta^{*}\|\leq e^{-C_{\kappa}nT}\|\theta_{0}-\theta^{*}\|{+}\sum_{t=0}^{T}\Big{(}\frac{1}{2}\Big{)}^{t}\varepsilon_{\ell}^{\rm unif}\leq e^{-C_{\kappa}nT}\frac{\|\theta^{*}\|}{14}+2C_{\varepsilon}\varepsilon_{\ell}.\] Balancing the two terms above yields that after \(T\) iterations, for \[T=\frac{1}{C_{\kappa}n}\log\Big{(}\frac{\|\theta^{*}\|}{28C_{\varepsilon}\varepsilon_{\ell}}\Big{)}=\frac{1}{2C_{\kappa}n}\log\bigg{(}mn{\cdot}\frac{1}{28C_{\varepsilon}}{\cdot}\frac{\|\theta^{*}\|^{2}}{\|\theta^{*}\|^{2}{+}\sigma^{2}}{\cdot}\frac{1}{d+\log(1/\delta)}\bigg{)},\] we have with probability at least \(1-\delta\) that \[\|\theta_{T}-\theta^{*}\|\leq 4C_{\varepsilon}\varepsilon_{\ell}=4C_{\varepsilon}\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}=\sqrt{\|\theta^{*}\|^{2}{+}\sigma^{2}}\sqrt{\frac{d+\log(1/\delta)}{mn}}\cdot\mathcal{O}(1).\] Note that Algorithm 2 has to run for at least one iteration, and since \(m=e^{o(n)}\), we can write that \(T=\mathcal{O}(1+1/n\cdot\log(mn/d))=\mathcal{O}(1)\).

## 5 Conclusion

Data heterogeneity is a major challenge in scaling up distributed learning frameworks such as federated learning.
However, there exist underlying structures in the data generation model of such paradigms that can be exploited. In this paper, we focus on a particular two-component mixture of linear regressions model in which \(m\) batches of samples are available, each containing \(n\) samples that share the same latent variable. Expectation-Maximization is a popular method to estimate the parameters of models with latent variables, but its theoretical analysis is typically involved. We provide optimization and generalization guarantees for the EM algorithm on clustered samples, which enables us to characterize its iteration complexity for estimating the true parameters. An interesting follow-up of our work is to implement the EM algorithm in a distributed fashion, in line with modern applications such as federated learning. While new challenges such as consensus of local estimates arise, we believe that our techniques and analysis in this paper will be highly applicable.

## Acknowledgments

This work was supported, in part, by MIT-DSTA grant 031017-00016 and the MIT-IBM Watson AI Lab.